**arXiv:** 2309.10090
**Title:** Consequences of a Stabilizing Field's Self-Interactions for RS Cosmology
**Authors:** Rashmish K. Mishra, Lisa Randall
**Published:** 2023-09-18T19:04:08Z
**Link:** http://arxiv.org/abs/2309.10090v2
# Consequences of a Stabilizing Field's Self-Interactions for RS Cosmology ###### Abstract It has been argued that the Randall-Sundrum (RS) phase transition rate is suppressed when the holographic theory corresponds to a large \(N\) Yang-Mills and when the stabilizing field has a small mass. Here we argue that self-interactions can alleviate the latter suppression. We consider a cubic term in the bulk potential for the Goldberger-Wise (GW) scalar that is responsible for stabilizing the RS geometry. Adding a cubic term suffices to separate the two roles of the GW stabilization: generating a large hierarchy and triggering confinement. We study the resulting radion potential and the dynamics of the early universe phase transition. For a negative coefficient of the cubic term, the effect of the cubic becomes important in the infra-red, and the resulting radion potential is deeper, thereby increasing the radion mass while maintaining a large hierarchy. Staying within the radion effective field theory, we calculate the rate of bubble nucleation from the hot phase to the confined RS phase, both in thin and thick wall limits. The cubic term enhances the rate and allows relaxing the condition on the maximum number of colors \(N_{\rm max}\) of the dual theory for which the phase transition can be completed. Importantly, this reduces the amount of supercooling that the false vacuum undergoes, increases the peak frequency of the gravitational waves (GW) produced from bubble collisions, and reduces the strength of the GW signal. The reduced GW signal is however still within the reach of proposed space-based GW detectors. ## 1 Introduction The Randall-Sundrum (RS) framework based on warped extra-dimensional geometry is an elegant way to understand various hierarchies in the Standard Model (SM), and provides a rich framework for exploring several directions in both formal and phenomenological research [1]. The early cosmological history of these models is a confluence of many interesting phenomena. At low temperatures, the RS phase is the thermodynamically stable phase. At high temperatures, the RS phase is only metastable, while the stable phase is given by a black-brane geometry, which in the dual field theory corresponds to the deconfined phase. In minimal constructions, these two phases are separated by a barrier [2], and a transition between the two phases proceeds by bubble nucleation. Starting in the deconfined phase at high temperatures, the rate of transition to the confined phase is very suppressed in the minimal models, and they generically supercool past the critical temperature. The field configuration that allows tunneling from one phase to another is a gravitational instanton, the grammar for which is an active area of investigation. Early cosmological history of RS models is therefore tied to the physics of confinement, supercooling, and gravitational instantons. All these effects have direct phenomenological relevance--the confinement/deconfinement phase transition and its order is relevant for potential gravitational wave (GW) signals. Supercooling is important for estimating the peak frequency and abundance for the GW signal [3; 4], the present abundance of relics from that era, as well as when and whether the phase transition completes.1 The characteristic peak frequency and the frequency dependence of the GW abundance have an imprint of the energy scales involved and can be probed in present and proposed GW detectors [6; 7; 8; 9; 10; 11; 12; 13; 14]. 
Finally, the exploration of gravitational instantons is relevant for all of these since it affects the rate of transition. These considerations make an exploration of the theoretical and phenomenological aspects of phase transitions in RS framework a well-motivated direction to pursue. If the phase transition is first-order, the resulting GWs can provide access to the cosmological history of the universe [15; 16; 17; 18] and point towards yet to be discovered beyond the Standard Model (BSM) physics. Footnote 1: See ref. [5] for a general discussion of supercooling at both weak and strong coupling. In a realistic UV complete warped scenario, we generically expect significant IR modifications, since in the dual picture, the theory is close to confinement, and is very far away from a conformal field theory (CFT). With this motivation, in the present work we allow for an IR modification in the RS framework stabilized by a Goldberger-Wise (GW) scalar. We focus on how such a modification affects various cosmological features. More concretely, we consider a quadratic and a cubic term in the bulk potential for the stabilizing GW field, with both their coefficients negative, so that the GW profile grows in the IR, and the effect of the cubic becomes important in the IR as well. Generically one expects even higher-order terms in the potential, but for the present purposes, a cubic suffices to model the IR modification. A non-zero cubic allows splitting the two roles played by the GW mechanism--a logarithmic running which gives a large hierarchy, and the triggering of the IR brane. As we will see, a non-zero cubic term can change the shape of the radion potential while maintaining a large hierarchy. This translates to a modification of the free energy and the bounce action for the phase transition. In the present work, we will focus on the modifications on the RS phase only and stay in the regime where the backreaction is not important. A suppressed rate of phase transition in the minimal models can be tracked down to two parametric reasons: large \(N_{c}\) and small \(\delta\). Here \(N_{c}\) is the number of colors in the dual theory, and \(\delta\) characterizes CFT breaking in the IR where the phase transition takes place (and is a function of the parameters of the stabilization mechanism). For theoretical control, we need \(N_{c}\gg 1\). The minimal RS models, stabilized with a quadratic bulk GW potential, also have \(\delta\ll 1\). The present work can be understood as a way to enhance the rate by increasing \(\delta\), and is similar in spirit to other such attempts in the literature. Addressing the \(N_{c}\) suppression directly will require a better modeling of the IR dynamics, and including backreaction, which we will present in a future work. Cosmological aspects of RS models have been studied both in the minimal scenario [2], and with interesting variations [4; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36] that were aimed towards addressing the issues in the minimal scenario. The authors in ref. [2] pointed out that without a stabilization mechanism, if the RS model is in the deconfined phase at high temperatures, it never completes a transition to the confined phase. With a stabilization mechanism from a GW scalar, the rate for the phase transition is extremely suppressed, and is very constraining on what constitute viable physical parameters. Ref. [20] pointed out that by including back-reaction, the situation is less constraining than originally claimed. 
In subsequent work, ref. [21] used corrections to the radion potential coming from the QCD condensate, which reduced the barrier depth, and hence enhanced the rate for the phase transition. The authors in [25] further required the 4D CFT degrees of freedom to flow to another fixed point after QCD confinement, thereby gaining more theoretical control, and gave a geometrical picture for this situation where the mass of the GW scalar is tachyonic [27]. The authors in [28; 32] used similar ingredients where they modeled the IR flow to another fixed point by a special form of bulk potential for the GW field. In an orthogonal direction, the authors of ref. [33] used finite temperature corrections to the combined potential of radion and other light fields to address the issue. Ref. [34] used a relevant UV deformation to model the CFT breaking in the IR, which resulted in a deeper radion potential. See also [35; 36] for other interesting alternatives. Compared to the previous approaches, the present work is different in the following ways: the modified bulk potential we consider is generic, and is expected in a realistic UV completion. We stay within the radion EFT parameter space. We use a combination of analytical and numerical methods to handle the computations. We consider both thin and thick wall limits, for completeness. Our approach is a first step towards systematically including strongly coupled IR effects. In some respects our results mimic those of [28; 32], who explicitly separated the two roles of the radion by having one CFT to establish the hierarchy and the second to establish the mass. Though perhaps under less control, our model is more generic and reproduces the second feature organically. Once the GW field is sufficiently big, playing a role in triggering the IR phase transition, it also contributes to the radion mass and eases the phase transition completion. The outline of the paper is the following. In sec. 2 we set up the notation and obtain the GW scalar profile for both phases. In sec. 3 we derive the radion potential, and in sec. 4 we derive the free energy of the two phases as a function of temperature. In sec. 5 we calculate the rate of phase transition, first in thin-wall approximation and then away from it, focusing on the role of the IR effects. Sec. 6 contains the results, and a conclusion follows in sec. 7. Technical details of the calculations are presented in app. A, B, C. ## 2 5D action and scalar profiles To set the stage we first fix some notation. We consider 5D spacetime with locally constant negative cosmological constant. The general solution can be parameterized as \[\mathrm{d}s^{2}=-e^{-2r}\left(1-e^{4(r-r_{h})}\right)\,\mathrm{d}t^{2}+e^{-2r}\mathrm{d}\vec{x}^{2}+\frac{\mathrm{d}r^{2}}{1-e^{4(r-r_{h})}}\;, \tag{1}\] which is the metric for Anti-de Sitter Schwarzschild (AdSS) space, with the asymptotic boundary of AdS space at \(r\to-\infty\) and a horizon \(r=r_{h}\), extended in the transverse directions (a black brane). We have set the AdS scale \(\ell_{\mathrm{AdS}}=1\) here, and in what follows. In the limit of \(r_{h}\to\infty\), this reduces to the Poincare patch of AdS space. A UV brane with appropriate (positive) tension, whose location can be chosen to be at \(r_{\mathrm{uv}}=0\), cuts off the UV region of the spacetime and ensures a normalizable 4d graviton. For the \(r_{h}=\infty\) case, the spacetime will be truncated at the IR brane located at \(r=r_{\mathrm{ir}}\), again with an appropriately chosen (negative) tension.
We therefore have two spacetime metrics to consider, which we refer to as the BB (Black Brane) and RS (Randall Sundrum) spacetimes respectively: \[\mathrm{RS}: \mathrm{d}s^{2}=-e^{-2r}\,\mathrm{d}t^{2}+e^{-2r}\mathrm{d}\vec{ x}^{2}+\mathrm{d}r^{2}\;, 0\leq r\leq r_{\mathrm{ir}}\;,\] \[\mathrm{BB}: \mathrm{d}s^{2}=-e^{-2r}\left(1-e^{4(r-r_{h})}\right)\,\mathrm{d }t^{2}+e^{-2r}\mathrm{d}\vec{x}^{2}+\frac{\mathrm{d}r^{2}}{1-e^{4(r-r_{h})}} \;,\;\;0\leq r\leq r_{h}\;. \tag{2}\] The RS and BB spacetimes are dual to the confined and the deconfined phases respectively in the field theory. The Hawking radiation from the horizon in the BB geometry gives a temperature to the black hole, which is a function of the location of the horizon. The effect of a finite temperature \(T\) can be studied in the Euclidean version of the spacetime, with the Euclidean time \(t_{E}\) identified with a period \(\beta=1/T\). For \(r_{h}\neq 0\), the Euclidean continuation of eq. (1) with a periodic \(t_{E}\) is smooth only when \[\beta=\pi\exp(r_{h})\;, \tag{3}\] whereas any \(\beta\) is fine for \(r_{h}=\infty\) (RS background). Since we need to consider the dynamics at finite temperature, we will work in Euclidean compactified time. The location of the IR brane is a modulus in the RS geometry, and needs a stabilization mechanism to have a fixed value. Such a stabilization can be provided by a 5D GW scalar \(\chi\) with appropriately chosen parameters such that it gets a profile along the extra dimension, and generates a potential for the field \(r_{\mathrm{ir}}(x)\). In this work we consider more general potentials and boundary conditions than the original proposal [37; 38], and argue that these modifications are well-motivated and geared towards modeling the IR dynamics appropriately. Specifically, we consider a 5D scalar \(\chi\), with the action \[S_{\chi}=\int d^{5}x\sqrt{g}\left(-\frac{1}{2}\left(\partial\,\chi\right)^{2} -V_{B}(\chi)\right)-\sum_{i}\int d^{4}x\,\sqrt{g_{i}}\,V_{i}(\chi)\;, \tag{4}\] with a bulk potential \(V_{B}(\chi)\) and boundary potential(s) \(V_{i}(\chi)\) which set the boundary conditions. Note that \(i\) takes the value \((uv,ir)\) for the RS case, and only \((uv)\) for the BB case. We would like to solve for the profile of \(\chi\) in the two backgrounds, and in the limit of small back-reaction. The choice of the bulk and brane localized potentials is governed by the dynamics one wants to model. In the dual interpretation, the bulk GW potential can be mapped to the renormalization flow of a deformation. For the effect to become more important in the IR, the deformation should be relevant, and higher-order terms in the beta function should become important as the deformation grows. A constant mass for the GW scalar corresponds to a constant beta function. Higher order terms in the beta function correspond to higher order terms in the GW potential [39; 40]. We add a cubic term in the bulk GW potential, in addition to the mass term, to model the higher-order terms in the beta function. With this motivation, we choose the bulk potential to be \[V_{B}(\chi)=2\epsilon_{2}\chi^{2}+\frac{4}{3}\epsilon_{3}\chi^{3}\;, \tag{5}\] with \(\epsilon_{2}<0,\epsilon_{3}<0\). The first sign, \(\epsilon_{2}<0\), corresponds to a relevant deformation of the dual theory, and ensures a logarithmic running for small \(|\epsilon_{2}|\). The second sign, \(\epsilon_{3}<0\), ensures the deformation gets larger in the IR. 
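To make the dual statement above concrete, note that the same first-order equation used in appendix A (the full equation of motion with the \(\chi^{\prime\prime}\) term dropped, valid where the profile varies slowly) directly gives the holographic beta function; the identification \(\mu\sim e^{-r}\) of the radial coordinate with the RG scale is the usual, schematic one: \[\chi^{\prime\prime}-4\chi^{\prime}-\frac{\mathrm{d}V_{B}}{\mathrm{d}\chi}=0\;\;\longrightarrow\;\;\frac{\mathrm{d}\chi}{\mathrm{d}r}\simeq-\frac{1}{4}\frac{\mathrm{d}V_{B}}{\mathrm{d}\chi}=-\epsilon_{2}\chi-\epsilon_{3}\chi^{2}\,,\qquad\beta(\chi)\equiv\frac{\mathrm{d}\chi}{\mathrm{d}\log\mu}\simeq\epsilon_{2}\chi+\epsilon_{3}\chi^{2}\,.\] For \(\epsilon_{2}<0\) and small, \(\chi\) is a nearly marginal, relevant deformation that grows slowly towards the IR; once \(\chi\sim|\epsilon_{2}/\epsilon_{3}|\), the quadratic piece of the beta function takes over, which is precisely the IR effect the cubic term is meant to model.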
We have kept to a cubic term in the potential, which is sufficient to capture the effect of strong coupling as we show later, although there can in general be quartic and higher-order terms as well.2 Footnote 2: Ref. [32] used a GW potential with up to quartic terms to model a specific flow in the dual theory, from a UV fixed point to an IR fixed point. The dynamics we want to model in the present work is different. The mass term and the self-interaction term in the GW potential model different aspects of the dynamics. A small mass allows a large running, whereas the self-interaction term, which for small \(\epsilon_{3}\) is important only in the IR, models the existence of a more complicated radion potential in the IR after confinement. It is important to separate these two effects, and study the resulting effect on radion potential and the phase transition, which is the motivation for this work. For simplicity we choose a UV brane potential that fixes the value of \(\chi\) at the UV brane. The IR brane potential is chosen to allow the scalar to adjust its value, and again for simplicity we fix its derivative. These features can be modeled by the potentials \[V_{\rm uv}(\chi) =\beta_{\rm uv}(\chi-v_{\rm uv})^{2}\;,\;\;\beta_{\rm uv}\to\infty\;,\] \[V_{\rm ir}(\chi) =2\alpha_{\rm ir}\chi\,. \tag{6}\] Note that these are simplifications, and a more complete analysis should allow for mixed boundary conditions at both the UV and the IR. Also note that the IR boundary condition is not relevant for the BB case--the GW profile is required to have a "regular" behavior at the horizon. For a truly holographic interpretation, we would not necessarily have a boundary condition imposed directly in the IR but we follow the original analysis and do this for simplicity. Given the bulk and boundary potentials, the GW field develops a profile \(\chi(r)\) along the radial direction, which is different for the RS and BB backgrounds because the IR boundary condition is different for the two cases. While an exact solution can be obtained numerically, it is possible to get approximate analytical solutions by dividing the bulk into different parts where different terms in the equation dominate. Leaving the details to appendix A, the leading order solutions are given as \[\chi_{\rm RS}(r) =-\frac{\alpha_{\rm ir}}{4}e^{4(r-r_{\rm ir})}+\frac{v_{\rm uv}e^{-\epsilon_{2}r}}{1+v_{\rm uv}\epsilon_{3}\left(\frac{1-e^{-\epsilon_{2}r}}{\epsilon_{2}}\right)}\,,\ 0\leq r\leq r_{\rm ir}\,,\] \[\chi_{\rm BB}(r) =\frac{v_{\rm uv}e^{-\epsilon_{2}r}}{1+v_{\rm uv}\epsilon_{3}\left(\frac{1-e^{-\epsilon_{2}r}}{\epsilon_{2}}\right)}\,,\ 0\leq r\leq r_{h}\,. \tag{7}\] ## 3 Radion potential Given the profile for \(\chi(r)\), we can calculate the resultant radion potential. The basic idea is that as one varies \(r_{\rm ir}\), the energy contained in the potential for \(\chi(r)\) changes and for a choice of parameters, a suitable minimum can be obtained. The purely gravitational part of the action gives a kinetic term for \(r_{\rm ir}\). Evaluating the \(\chi\) action on the approximate solution for \(\chi(r)\) in eq. (7), one obtains the potential for \(r_{\rm ir}\).
In terms of the field \(\varphi=e^{-r_{\rm ir}}\), the 4D action is given as \[S =\int d^{4}x\sqrt{g}\left(-12M_{5}^{3}\left(\partial\varphi\right) ^{2}-V(\varphi)\right)\,,\] \[V(\varphi) =24M_{5}^{3}\kappa^{4}\,\varphi^{4}\left(1+\frac{a_{2}}{24M_{5} ^{3}\kappa^{4}}\frac{\lambda\varphi^{\epsilon_{2}}}{1-\lambda\varphi^{ \epsilon_{2}}}-\frac{a_{3}}{24M_{5}^{3}\kappa^{4}}\,\log(1-\lambda\varphi^{ \epsilon_{2}})\right),\] \[\lambda =\frac{v_{\rm uv}\epsilon_{32}}{1+v_{\rm uv}\epsilon_{32}},\, \epsilon_{32}=\frac{\epsilon_{3}}{\epsilon_{2}},\,a_{2}=-\frac{1}{32} \epsilon_{2}\alpha_{\rm ir}^{2}-\frac{\epsilon_{2}}{\epsilon_{3}}\alpha_{\rm ir }+2\alpha_{\rm ir},\,a_{3}=\frac{1}{2}\frac{\epsilon_{2}}{\epsilon_{3}} \alpha_{\rm ir}\,. \tag{11}\] The details of the computation for \(V(\varphi)\) are given in app. B. Note that \(\varphi\) is not canonically normalized in our notation. We have pulled an overall factor of \(M_{5}^{3}\) outside from the potential, and the parameter \(\kappa\lesssim 1\) for small back-reaction.3 In the limit of \(\lambda\varphi^{\epsilon_{2}}\ll 1\) (note that for this we need \(\lambda\sim\mathcal{O}(v_{\rm uv})\ll 1\), since \(\varphi^{\epsilon_{2}}\) can be \(\mathcal{O}(1)\), for \(\epsilon_{2}<0\)), the potential can be expanded in a power series: Footnote 3: We are working in the glueball normalization where the quartic scales as \(N_{c}^{2}\sim M_{5}^{3}\). \[V(\varphi)=\varphi^{4}\left(b_{0}+b_{1}\lambda\varphi^{\epsilon_{2}}+b_{2} \lambda^{2}\varphi^{2\epsilon_{2}}+b_{3}\lambda^{3}\varphi^{3\epsilon_{2}}+ \cdots\right)\,. \tag{12}\] The coefficients \(b_{i}\) are readily calculable given the explicit form of the potential, and are functions of \(\alpha_{\rm ir},v_{\rm uv},\epsilon_{2}\) and \(\epsilon_{3}\). In the limit of \(\epsilon_{3}\to 0\), only \(b_{0}\) and \(b_{1}\) are non-zero, and the potential simplifies to the familiar racetrack form. Note that in this limit, \(\lambda\to 0\) but it is balanced by the \(\epsilon_{3}\) in the denominator of \(a_{2}\) and \(a_{3}\) in eq. (11). \[V(\varphi) \underset{\epsilon_{3}\to 0}{=} 24M_{5}^{3}\kappa^{4}\,\varphi^{4}\left(1-\frac{1}{48M_{5}^{3} \kappa^{4}}\alpha_{\rm ir}v_{\rm uv}\varphi^{\epsilon_{2}}\right) \tag{13}\] \[=24M_{5}^{3}\kappa^{4}\,\varphi^{4}\left(1-\frac{1}{1+\epsilon/4 }\left(\frac{\varphi}{\varphi_{\rm min}}\right)^{\epsilon_{2}}\right)\,.\] The \(b_{2}\) and higher order terms in eq. (12) mimic the effects of strong dynamics--as the running coupling grows to be big, the higher order terms become important [39]. In fact, we will see that for a relevant choice of parameters, we have to include several terms in the expansion. The effect of higher order terms is that for a small \(\varphi_{\rm min}\) (large hierarchy), various combinations of terms can balance each other and give a minimum in the radion potential in which case the second derivative of the potential at the minimum can be enhanced. This is the usual expectation that for a strong breaking in the IR, the radion is not parametrically light anymore. To understand this enhancement, let's first consider the generic form of the radion potential \[V(\varphi)=b_{0}\,\varphi^{4}\,P(\varphi^{\epsilon_{2}})\,, \tag{14}\] where \(P(x)\) is a polynomial in \(x\) of some given order, with the first term 1 (since we have factored out an overall constant in eq. (10)). Expanding eq. 
(11), some of the terms in \(P(x)\) are explicitly given as \[b_{0}\,P(x) =b_{0}+\sum_{i\geq 1}b_{i}x^{i}\,,\qquad b_{0}=24M_{5}^{3}\kappa^{4}\,,\] \[b_{1} =-v_{\rm uv}\left(\frac{1}{2}\alpha_{\rm ir}-2\alpha_{\rm ir} \frac{\epsilon_{3}}{\epsilon_{2}}+\frac{1}{32}\alpha_{\rm ir}^{2}\epsilon_{3} \right)\left(1+\frac{v_{\rm uv}\epsilon_{3}}{\epsilon_{2}}\right)^{-1}\,,\] \[b_{2} =-v_{\rm uv}\left(\frac{v_{\rm uv}\epsilon_{3}}{\epsilon_{2}} \right)\left(\frac{3}{4}\alpha_{\rm ir}-2\alpha_{\rm ir}\frac{\epsilon_{3}}{ \epsilon_{2}}+\frac{1}{32}\alpha_{\rm ir}^{2}\epsilon_{3}\right)\left(1+ \frac{v_{\rm uv}\epsilon_{3}}{\epsilon_{2}}\right)^{-2}\,,\] \[b_{3} =-v_{\rm uv}\left(\frac{v_{\rm uv}\epsilon_{3}}{\epsilon_{2}} \right)^{2}\left(\frac{3}{4}\alpha_{\rm ir}-2\alpha_{\rm ir}\frac{\epsilon_{3 }}{\epsilon_{2}}+\frac{1}{32}\alpha_{\rm ir}^{2}\epsilon_{3}\right)\left(1+ \frac{v_{\rm uv}\epsilon_{3}}{\epsilon_{2}}\right)^{-2}\,,\] \[\vdots \tag{12}\] For \(v_{\rm uv}\lesssim 1\) and \(\epsilon_{3}/\epsilon_{2}\lesssim 1\), higher order terms are successively smaller. However, near the minimum, \(x=\varphi^{\epsilon_{2}}\gtrsim 1\) so that higher powers of \(x\) are bigger. Therefore, near the minimum, terms in \(P(x)\) are products of successively increasing and successively decreasing factors. For certain choice of parameters, a combination of terms balance each other. Since \(x\gtrsim 1\), we still get a large hierarchy. This discussion also makes it clear that for a small \(\epsilon_{3}\), the higher order terms can contribute without changing the hierarchy too much when \(\alpha_{\rm ir}\) and \(v_{\rm uv}\) take larger values. At such values, \(P\) is enhanced and so is the second derivative of the potential. In summary, a non-zero \(\epsilon_{3}\), in conjunction with other parameters of the GW sector, can increase the radion mass while generating a similar hierarchy. To illustrate this, we will work with four benchmark parameters **A,B,C,D** in table 1, for which \(\varphi_{\rm min}\sim 10^{-16}\).4 These parameters are chosen with certain self-consistency conditions in mind. Requiring to stay in the radion EFT, we need to ensure that the radion mass is at most or slightly smaller than the Kaluza-Klein (KK) scale which sets the mass of other KK modes. This also ensures that the back-reaction on the geometry from the GW scalar can be ignored. Another requirement is to have \(T_{c}/\varphi_{\rm min}\lesssim 1\) so that temperature corrections to the potential can be ignored, at least in the vicinity of the minimum. \(T_{c}\) is set by the value of the potential at the minimum \(V(\varphi_{\rm min})\) (as discussed in the next section). Footnote 4: In table 1, \(\varphi_{\rm min}\) is calculated for \(M_{5}^{3}=N_{c}^{2}/16\pi^{2},N_{c}=1\). For a different \(N_{c}\), \(\alpha_{\rm ir}\) and \(v_{\rm uv}\) have to be adjusted to keep \(\varphi_{\rm min}\) fixed. The mass of the physical radion and \(T_{c}\) (defined in eq. (11)) do not change with \(N_{c}\). Fig. 2a shows the radion potential for these parameters. For **B,C,D**, with \(\epsilon_{3}\neq 0\), a deeper potential at the minimum can be clearly seen. Fig. 2b shows the value of various terms in the series expansion of the derivative of the potential near the minimum. It is clear that for **A**, the first and the second terms balance, for **B**, the first and the third terms balance, for **C**, the second and the third terms balance, and for **D**, the second term balances the third and fourth terms together. 
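To see the balancing at work numerically, one can evaluate the potential of eq. (11) directly and locate its minimum. The short script below is only an illustrative cross-check (the variable names are ours); it uses the glueball normalization \(M_{5}^{3}=N_{c}^{2}/16\pi^{2}\) with \(N_{c}=1\) of footnote 4, and the relation \(-V(\varphi_{\rm min})=2\pi^{4}M_{5}^{3}T_{c}^{4}\) derived in the next section.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Benchmark B of table 1; glueball normalization M5^3 = Nc^2/(16 pi^2) with Nc = 1 (footnote 4).
M5cubed = 1.0 / (16.0 * np.pi**2)
kappa, eps2, eps3, alpha_ir, v_uv = 10**(-0.25), -1/25, -1/100, 5/2, 1/14

# Quantities entering eq. (11)
eps32 = eps3 / eps2
lam = v_uv * eps32 / (1.0 + v_uv * eps32)
a2 = -eps2 * alpha_ir**2 / 32.0 - (eps2 / eps3) * alpha_ir + 2.0 * alpha_ir
a3 = 0.5 * (eps2 / eps3) * alpha_ir
b0 = 24.0 * M5cubed * kappa**4

def V(phi):
    """Radion potential of eq. (11); valid for lam * phi**eps2 < 1."""
    x = lam * phi**eps2
    return b0 * phi**4 * (1.0 + (a2 / b0) * x / (1.0 - x) - (a3 / b0) * np.log(1.0 - x))

# Minimize over log10(phi) so the large hierarchy poses no numerical problem.
res = minimize_scalar(lambda u: V(10.0**u), bounds=(-20.0, -10.0), method="bounded")
phi_min = 10.0**res.x

h = 1e-3 * phi_min                                      # central finite difference for V''
Vpp = (V(phi_min + h) - 2.0 * V(phi_min) + V(phi_min - h)) / h**2
Tc = (-V(phi_min) / (2.0 * np.pi**4 * M5cubed))**0.25   # from -V(phi_min) = 2 pi^4 M5^3 Tc^4

print(f"phi_min            ~ {phi_min:.2e}")             # table 1 quotes 1.09e-16 for B
print(f"V''/phi_min^2      ~ {Vpp / phi_min**2:.3f}")    # table 1 quotes 0.005
print(f"-V(phi_min)/phi^4  ~ {-V(phi_min) / phi_min**4:.1e}")  # table 1 quotes 3e-4
print(f"Tc/phi_min         ~ {Tc / phi_min:.2f}")        # should satisfy Tc/phi_min < 1
```

Repeating the same evaluation with the other rows of table 1 shows the pattern discussed in the text: a non-zero \(\epsilon_{3}\) deepens the potential and raises \(V^{\prime\prime}(\varphi_{\rm min})\) while the hierarchy remains of order \(10^{-16}\).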
For **C,D**, \(v_{\rm uv}\) was increased to make sure the higher order terms are enhanced. Further, the magnitude of the dominant term is largest in **D**, which is correlated with the largest value of the second derivative at the minimum. One can estimate the second derivative at the minimum once we know which terms balance which. Starting with the \(m^{\rm th}\) term in the potential, \(V_{m}=b_{m}\,\varphi^{4+me_{2}}\), the derivative and the second derivative are \[V_{m}^{\prime}=b_{m}\left(4+m\epsilon_{2}\right)\varphi^{3+m\epsilon_{2}}\,, \qquad V_{m}^{\prime\prime}=b_{m}\left(4+m\epsilon_{2}\right)(3+m\epsilon_{2} )\,\varphi^{2+m\epsilon_{2}}\,. \tag{10}\] The \(m^{\rm th}\) and \(n^{\rm th}\) term in \(V^{\prime}\) can balance near the minimum if \[b_{m}\left(4+m\epsilon_{2}\right)\varphi_{\rm min}^{3+m\epsilon_{2}}\sim-b_{n }\left(4+n\epsilon_{2}\right)\varphi_{\rm min}^{3+n\epsilon_{2}}\,. \tag{11}\] Considering these two terms, and using the above, the second derivative at the minimum is \[V^{\prime\prime}/\varphi_{\rm min}^{2}\sim b_{m}\left(4+m\epsilon_{2}\right) (m-n)\epsilon_{2}\,\varphi^{m\epsilon_{2}}\sim b_{n}\left(4+n\epsilon_{2} \right)(n-m)\epsilon_{2}\,\varphi^{n\epsilon_{2}}\,. \tag{12}\] Using \(b_{0}=24M_{5}^{3}\kappa^{4}\), \(b_{m>0}\sim v_{\rm uv}\left(v_{\rm uv}\epsilon_{3}/\epsilon_{2}\right)^{m-1} \alpha_{\rm ir}\) and \(\varphi^{\epsilon_{2}}\sim(10^{-16})^{-1/25}\sim 4.3\) near the minimum, the above estimate for \(V^{\prime\prime}/\varphi_{\rm min}^{2}\) matches the numbers in table 1 obtained numerically. For parameters **C,D**, the dominant term at the minimum scales as \(1/\epsilon_{2}\) (i.e. \(m=2\)), and this parametrically cancels the \(\epsilon_{2}\) factor in the numerator in eq. (12). A word of caution for the obtained radion potential: for \(\epsilon_{2}<0,\epsilon_{3}<0\), which is the case at hand, the potential has a singularity at \(\varphi_{s}=(1/\lambda)^{1/\epsilon_{2}}\). The singularity is coming from the analytical solution for \(\chi(r)\) breaking down, and since we used this analytical solution to obtain the radion potential we see it in the potential too. In a more complete but numerically tedious calculation, this singularity would not appear. The potential in (10) therefore cannot be trusted for large \(r_{\rm ir}\), or equivalently for \(\varphi\to 0\). This is to be kept in mind when we discuss the thick wall results in sec. 5 (and the computational details in app. C), which probe small values of \(\varphi\) as the temperature becomes small. At a finite temperature \(T\), the potential is expected to be modified for \(\varphi\lesssim T\). As we will explain later, we use the potential only in the \(\varphi\gtrsim T\) region. The expression for the radion potential is therefore useful as long as \(\varphi_{s}\ll T\). This also means we cannot extend the analysis to arbitrarily low temperatures. As we will show, before running into this issue a significant reduction in bounce action can be achieved. ## 4 Free energy of the phases For both RS and BB phases, the free energy gets contributions from the gravitational and the GW sectors. 
Since we are not including back-reaction, the gravitational calculation is the same as reported in the literature [2]. \begin{table} \begin{tabular}{||c|c|c|c|c|c||c|c|c||} \hline & \(\kappa\) & \(\epsilon_{2}\) & \(\epsilon_{3}\) & \(\alpha_{\rm ir}\) & \(v_{\rm uv}\) & \(\varphi_{\rm min}\times 10^{16}\) & \(V^{\prime\prime}(\varphi_{\rm min})/\varphi_{\rm min}^{2}\) & \(-V(\varphi_{\rm min})/\varphi_{\rm min}^{4}\) \\ \hline \hline **A** & \(10^{-1/4}\) & -1/25 & 0 & 1/10 & 1/14 & 1.47 & 0.002 & \(10^{-4}\) \\ **B** & \(10^{-1/4}\) & -1/25 & -1/100 & 5/2 & 1/14 & 1.09 & 0.005 & \(3\times 10^{-4}\) \\ **C** & \(10^{-1/4}\) & -1/25 & -1/90 & 5/2 & 1/5 & 0.86 & 0.032 & \(2\times 10^{-3}\) \\ **D** & \(10^{-1/4}\) & -1/25 & -1/81 & 5/2 & 1/3 & 0.59 & 0.135 & \(8\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 1: Benchmark choice of parameters to show the effect of self-interaction. The free energy is UV sensitive, so it is more sensible to talk about the difference in the free energies between the phases. At the minimum, the difference in free energies coming purely from the gravitational part of the action is given by: \[\left(F_{\text{GR, BB}}-F_{\text{GR, RS}}\right)_{\text{min}}=-2\pi^{4}M_{5}^{3}T^{4}+C=-\frac{\pi^{2}}{8}\,N_{c}^{2}\,T^{4}+C\:, \tag{19}\] where \(C\) is a finite constant to be determined later, and in the last equality we have used the relation between \(M_{5}\) and the number of colors \(N_{c}\) of the dual theory: \(16\pi^{2}M_{5}^{3}=N_{c}^{2}\). Figure 2: Radion potentials, for the parameter choices in table 1. As discussed in ref. [2], in this computation the location of the horizon \(r_{h}\), parameterized by a temperature \(T_{h}=e^{-r_{h}}/\pi\), is made a dynamical variable. For generic values of \(T_{h}\) (in the Euclidean picture), there is a conical singularity at the horizon. Regulating the singularity one gets a contribution to the free energy for \(T_{h}\neq T\). At the minimum, \(T_{h}=T\), the conical singularity disappears, and \(T\) is related to \(r_{h}\) by eq. (3). In deriving eq. (19), \(\beta\) in the RS phase is adjusted to match the geometry at the UV cutoff (see refs. [2; 41] for details). We can effectively define eq. (19) to be the free energy of the BB phase at the minimum. Taking \(C\equiv 2\pi^{4}M_{5}^{3}T_{c}^{4}\), the free energy of the BB phase at the minimum can be written as \[F_{\text{BB, min}}=2\pi^{4}M_{5}^{3}\left(T_{c}^{4}-T^{4}\right)\,. \tag{20}\] At the moment, \(T_{c}\) is a parameter and the free energy of the BB phase is positive or negative depending on whether \(T<T_{c}\) or \(T>T_{c}\). In the presence of a GW scalar, the free energy of the BB phase gets an additional contribution. This can be computed by first solving for the scalar profile in the BB background, which is then used to calculate the free energy. The scalar contribution is subleading compared to the purely gravitational contribution, and we will not include it here. This is self-consistent with not including back-reaction from the scalar on the BB geometry. Similarly, the free energy of the RS phase gets an additional contribution in the presence of a GW scalar. Staying within the radion EFT, and at low enough temperatures, the free energy is given by the radion potential itself. The radion potential is normalized so that it vanishes at the minimum. This amounts to tuning the cosmological constant in the 4D EFT to zero. We therefore have \[F_{\text{GW, RS}}=V(\varphi)-V(\varphi_{\text{min}})\,,\,\,\,F_{\text{GW, RS, min}}=0\,.
\tag{21}\] We further require that the free energies of the two phases are equal in the \(\varphi\to 0\) and \(T\to 0\) limits, which gives \[-V(\varphi_{\text{min}})=2\pi^{4}M_{5}^{3}T_{c}^{4}\,. \tag{22}\] This fixes \(T_{c}\) in terms of the radion potential. Putting everything together, we have \[F_{\text{BB, min}} =2\pi^{4}M_{5}^{3}\left(T_{c}^{4}-T^{4}\right)\,, \tag{23}\] \[F_{\text{RS}} =V(\varphi)-V(\varphi_{\text{min}})\,, \tag{24}\] \[F_{\text{RS, min}} =0\,, \tag{25}\] \[F_{\text{BB, min}}-F_{\text{RS, min}} =2\pi^{4}M_{5}^{3}\left(T_{c}^{4}-T^{4}\right)\,. \tag{26}\] The meaning of \(T_{c}\) is made clear by eq. (26): At the critical temperature \(T=T_{c}\), the difference in the free energy of the two phases is zero. For \(T>T_{c}\), the BB phase has a lower free energy than the RS phase and is the preferred phase thermodynamically. As the temperature drops below \(T_{c}\), the RS phase has a lower free energy and becomes the thermodynamically preferred phase. ## 5 Dynamics of the phase transition We assume that after the end of inflation and reheating, the RS model is in the BB phase, which is the thermodynamically stable phase at very high temperatures. As the temperature drops, the RS phase becomes thermodynamically favorable. Since both these phases are local minima of free energy, they are separated by some kind of barrier in the field space, and the phase transition is first order, proceeding by bubbles of true vacuum nucleating inside the false vacuum. The standard prescription to calculate the rate of such a phase transition [42; 43] can be applied, but with important modifications pointed out in ref. [2], which we mention next. To calculate the tunneling rate across a barrier, one has to identify the relevant fields that interpolate between the true and the false vacuum. In the present case, staying within the radion EFT, the relevant field for the tunneling rate calculation on the RS side is the radion itself. The radion is a composite, and a different field has to be identified in the BB phase. Ref. [2] assumed the relevant degree of freedom in the BB phase to be \(T_{h}\) (temperature of the horizon), for which a potential can be written, but the kinetic term is not known (see however ref. [31] for a discussion of this aspect). The way out was provided by noticing that the path in the field space that interpolates between the false and the true vacua has some features in the phenomenological cases of interest. One can split it into three regions: \(i)\) the BB region, \(ii)\) the \(\varphi\lesssim T\) region, and \(iii)\) the \(T\lesssim\varphi\lesssim\varphi_{\rm min}\) region. For \(T\lesssim T_{c}\) (i.e. in the thin wall limit), the calculable contribution from region \(iii)\) is parametrically larger than the combined contribution from \(i)\) and \(ii)\), so that it provides a useful estimate for the full action. As \(T/T_{c}\) reduces, the contribution from region \(iii)\) also reduces. This means that if at a given \(T/T_{c}\), the contribution from region \(iii)\) is too small, one cannot ignore the other two contributions. Generically, we can take the other regions to contribute \({\cal O}(1)\) amount to the action, so that the actually computable action, from region \(iii)\), can only be trusted for values of \(T/T_{c}\) such that it is at least \({\cal O}(1)\). All in all, this means the above approach cannot be extended to very small values of \(T/T_{c}\).
Also, note that the radion potential has a singularity at \(\varphi_{s}=(1/\lambda)^{1/\epsilon_{2}}\ll\varphi_{\rm min}\) coming from a breakdown of the approximations that were used to derive it. Since \(\varphi\gtrsim T\) in region \(iii)\), we do not need to worry about this as long as \(T\) is not too small. With these considerations in mind, we now calculate the bounce action in both the thin and thick wall limits. We leave the technical details of the bounce action calculation for thin and thick wall cases to app. C, and give the final expressions here. For the thin wall case, the bounce action is \[S_{b} = \frac{4}{3\pi^{7}M_{5}^{6}}\left(\frac{S_{1}}{T_{c}^{3}}\right)^{3}\frac{T_{c}/T}{(1-T^{4}/T_{c}^{4})^{2}}\;,\] \[S_{1} = \sqrt{48M_{5}^{3}}\int_{T}^{\varphi_{\rm min}}{\rm d}\varphi\sqrt{V(\varphi)-V(\varphi_{\rm min})}\,. \tag{5.1}\] For the thick wall case, the approach is numerical. We have to minimize the action \[S_{b}=\int{\rm d}^{4}x\left(12M_{5}^{3}(\partial\varphi)^{2}+V(\varphi)+2\pi^{4}M_{5}^{3}T^{4}\right)\;, \tag{5.2}\] subject to the boundary conditions \(\varphi^{\prime}(0)=0\) and \(\varphi^{\prime}(\varphi\approx 0)=-(\pi^{2}/\sqrt{6})T^{2}\). The second condition comes from equating the energy across the bubble boundary [32]. Working with rescaled quantities \[\frac{1}{24M_{5}^{3}}\widetilde{V}(\varphi)\equiv\kappa^{4}\,\varphi^{4}\,v(\varphi)\;,\;y=\kappa\,\varphi\,T^{-1}\;,\;\;x=\kappa\,r\,T\;, \tag{5.3}\] the thick wall bounce action looks like \[S_{b}=\frac{96\pi M_{5}^{3}}{\kappa^{3}}\int\mathrm{d}x\,x^{2}\left(\frac{1}{2}\left(\frac{\mathrm{d}y}{\mathrm{d}x}\right)^{2}+y^{4}\,v(Ty/\kappa)+\frac{\pi^{4}}{12}\right)\;. \tag{5.4}\] The overall factor of \(M_{5}^{3}\sim N_{c}^{2}\) makes the \(N_{c}^{2}\) dependence of the bounce action manifest. For the \(O(3)\) symmetric bubbles with bounce action \(S_{b}\), the tunnelling probability per unit time per unit 3-volume, at temperature \(T\) is given by \[\Gamma(T)=T^{4}e^{-S_{b}(T)}\,, \tag{5.5}\] where we have ignored constant \(\mathcal{O}(1)\) multiplicative factors in the above. For the phase transition to complete, we need the probability in a Hubble volume to be at least of order 1, which translates to \[\Gamma(T)\gtrsim H^{4}\,, \tag{5.6}\] where \(H\) is the Hubble constant in the BB phase, and is fixed by the Friedmann equations \[H^{2}=\frac{\rho_{\text{total}}}{3M_{\text{pl}}^{2}}\;. \tag{5.7}\] The energy density \(\rho_{\text{total}}\) gets contributions from the vacuum energy and from the radiation (since the false vacuum is at a temperature \(T\)). Recall that the vacuum energy is tuned to zero at the RS minimum, so that when the BB phase is meta-stable, it has higher free energy than the RS phase, and therefore has a positive vacuum energy. This is given as \[\rho_{\text{vac, BB}}=2\pi^{4}M_{5}^{3}T_{c}^{4}\;. \tag{5.8}\] The energy density from radiation scales as \(T^{4}\) and quickly becomes subdominant as \(T\lesssim T_{c}\), which is necessary for the RS phase to become stable. The condition for the phase transition to complete becomes \[T^{4}e^{-S_{b}(T)}>\frac{4\pi^{8}}{9}\frac{M_{5}^{6}T_{c}^{8}}{M_{\text{Pl}}^{4}}\;, \tag{5.9}\] or equivalently \[S_{b}(T)<-\log\left(\frac{4\pi^{8}}{9}\frac{M_{5}^{6}T_{c}^{8}}{T^{4}M_{\text{Pl}}^{4}}\right)\equiv S_{b}^{\text{max}}(T/T_{c})\;. \tag{5.10}\] Defining the nucleation temperature \(T_{n}\) as the temperature at which \(\Gamma(T_{n})=H^{4}\), we get \[S_{b}(T_{n})=S_{b}^{\text{max}}(T_{n}/T_{c})\;.
\tag{5.11}\] Ignoring order 1 factors, for \(T_{c}\) around the TeV scale, \(S_{b}^{\rm max}\sim 140\) and changes slowly as a function of \(T/T_{c}\). Using \(M_{\rm pl}^{2}=M_{5}^{3}\) (\(\ell_{\rm AdS}=1\)), \(S_{b}^{\rm max}\) is independent of the number of colors \(N_{c}\). The bounce action \(S_{b}(T)\), on the other hand, scales as \(N_{c}^{2}\). Using this we can effectively obtain a bound on the maximum \(N_{c}\) that allows a phase transition to complete, at a given \(T_{n}/T_{c}\) (i.e. the amount of supercooling): \[N_{\rm max}(T_{n}/T_{c})=\sqrt{\frac{S_{b}^{\rm max}(T_{n}/T_{c})}{S_{b,N_{c}=1}(T_{n})}}\;. \tag{5.12}\] In the next section, for the parameter choices in table 1, we present the results for \(S_{b}\) and \(S_{b}^{\rm max}\), both in the thin wall limit and away from it. We find that the bounce action is reduced for some of the parameter choices, and is correlated with increasing the radion mass. ## 6 Results ### Bounce action and maximum \(N\) The left panel of fig. 3a shows the bounce action (normalized by \(16\pi^{2}M_{5}^{3}=N_{c}^{2}\)) in the thin wall limit (solid lines) and away from it (dashed lines), for the parameter choices in table 1. Also shown is the maximum bounce action (dotted) beyond which the rate is too small to compete with the Hubble expansion. We note a couple of things in fig. 3a. The thin wall curves are only applicable for \(T/T_{c}\lesssim 1\), and for all the parameter choices, they are above the \(S_{b}^{\rm max}\) line, so that the phase transition rate is too small even for \(N_{c}=1\), in the thin-wall limit. The phase transition completes in the thick-wall case, at a temperature \(T_{n}\) where the \(S_{b}\) and \(S_{b}^{\rm max}\) curves intersect (which in turn depends on \(N_{c}\) because \(S_{b}\) scales as \(N_{c}^{2}\)). For a non-zero \(\epsilon_{3}\), the thick wall bounce action is smaller than the \(\epsilon_{3}=0\) case (e.g. red vs blue curves in fig. 3a), and is a slowly varying function of \(T/T_{c}\). Further, the variation in the thick wall bounce action as a function of \(T/T_{c}\) is larger in the presence of a non-zero \(\epsilon_{3}\), which has consequences for the GW signal, as we explain later. For convenience, we also show the corresponding radion potential in the right panel of fig. 3a. We can notice a correlation between the second derivative of the radion potential at the minimum and the bounce action. When the second derivative is larger, i.e. the physical radion is heavier, the potential is deeper and the bounce action is lower. For a given amount of supercooling, say \(T/T_{c}=10^{-3}\), the bounce action can be lowered approximately by a factor of 30 due to a non-zero \(\epsilon_{3}\) (e.g. comparing the red and the blue curves in fig. 3a). Using eq. (5.12), we can calculate the maximum number of colors \(N_{\rm max}\) in the dual theory for which the phase transition can complete for a given amount of supercooling. Figure 3b shows \(N_{\rm max}\) as a function of \(T_{n}/T_{c}\). For some amount of supercooling, one can have \(N_{\rm max}\sim 10\) (e.g. the orange and red curves in fig. 3b). Such values of \(N_{\rm max}\) are obtained at \(T_{n}/T_{c}\sim 10^{-6}\), where the bounce action is becoming smaller than \(\mathcal{O}(1)\) (e.g. the orange and red curves in fig. 3a). \(N_{\rm max}\sim 10\) is therefore the most one can hope for in the present analysis, because we have only computed a part of the bounce action that was supposed to be dominant.
Once this calculable part is reduced beyond \(\mathcal{O}(1)\) values, one cannot use this as a complete answer. Taking the lowest bounce action to be \(\mathcal{O}(1)\), eq. (5.12) gives \(N_{\rm max}\sim\sqrt{S_{b}^{\rm max}}\sim 12\) for \(S_{b}^{\rm max}\sim 140\). The amount of supercooling that the metastable phase experiences has important phenomenological implications. During this period of supercooling, the universe inflates, and this dilutes any matter abundances generated before the phase transition. If the framework is to address dark matter abundance and baryon asymmetry with sufficient supercooling, they must be generated after the phase transition completes. Supercooling also has implications for primordial black hole generation mechanisms, cosmological aspects of axion model building, topological defects and so on (e.g. see [27; 44] and references therein). A key, more generic, observation is that bubble collision at the end of the phase transition can generate gravitational waves, whose frequency and abundance are dependent on the amount of supercooling. For all these applications, but especially with an eye to the latter, we estimate the peak frequency and the GW abundance from bubble collisions in the next subsection. Figure 3: Bounce action, potential, and the maximum number of colors for the phase transition to complete, for the parameters in table 1. ### Gravitational wave signal First-order phase transitions can give rise to stochastic gravitational wave signals, which can potentially be detected in ground and space-based GW detectors, according to their characteristic frequency and abundance (see refs. [45; 46] for a review). There are three sources of GW production in a first-order phase transition--bubble collision, sound waves, and turbulence in the plasma. The last two sources require detailed numerical analysis; here we focus on the bubble collision as a source of stochastic GW production. The signal in GW can be characterized by the peak frequency \(f_{p}\), and the frequency dependence of the fractional abundance \(\Omega_{\rm GW}\,h^{2}\). For the GWs generated by bubble collision, these two quantities are given by [45] \[f_{p} =0.037\ {\rm mHz}\ \left(\frac{\beta}{H}\right)\left(\frac{T_{*}}{\rm TeV}\right)\left(\frac{g_{*}}{100}\right)^{1/6},\] \[\Omega_{\rm GW}\,h^{2}(f) =1.3\times 10^{-6}\ \left(\frac{H}{\beta}\right)^{2}\left(\frac{100}{g_{*}}\right)^{1/3}\frac{3.8(f/f_{p})^{2.8}}{1+2.8(f/f_{p})^{3.8}}\,. \tag{6.1}\] Here we have assumed that when the transferred latent heat is large compared to the energy of the surrounding plasma, all the latent heat is transferred to the bubble wall, and the bubble wall velocity is ultra-relativistic. \(H\) here is the Hubble scale during the phase transition, \(T_{*}\) is the temperature of the radiation bath right after the phase transition, and \(g_{*}\) is the number of relativistic degrees of freedom in the plasma during the phase transition. The parameter \(\beta/H\) is related to the duration of the phase transition [47], and can be calculated from the bounce action as: \[\frac{\beta}{H}=\left.-\frac{{\rm d}\log\Gamma}{{\rm d}\log T}\right|_{T=T_{n}}\approx-4+\left.\frac{{\rm d}S_{b}}{{\rm d}\log T}\right|_{T=T_{n}}\,, \tag{6.2}\] where we have used eq. (5.5) in the last equality above. Note that \(\beta/H\) is a function of \(T_{n}\), the temperature at which the phase transition can proceed, which in turn depends on \(N_{c}\).
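Since eq. (6.1) is just a broken power law fixed by \(\beta/H\), \(T_{*}\) and \(g_{*}\), it is straightforward to tabulate the expected signal. The snippet below (function and variable names are ours) evaluates the peak frequency and peak abundance for two representative values of \(\beta/H\) that roughly bracket the \(\epsilon_{3}=0\) and \(\epsilon_{3}\neq 0\) benchmarks discussed next, taking \(T_{*}=1\) TeV and \(g_{*}=100\).

```python
def gw_bubble_signal(f_mHz, beta_over_H, T_star_TeV=1.0, g_star=100.0):
    """Bubble-collision GW signal of eq. (6.1): returns (f_peak in mHz, Omega_GW h^2 at f_mHz)."""
    f_p = 0.037 * beta_over_H * T_star_TeV * (g_star / 100.0) ** (1.0 / 6.0)
    shape = 3.8 * (f_mHz / f_p) ** 2.8 / (1.0 + 2.8 * (f_mHz / f_p) ** 3.8)
    omega_h2 = 1.3e-6 * beta_over_H ** (-2) * (100.0 / g_star) ** (1.0 / 3.0) * shape
    return f_p, omega_h2

# beta/H ~ 20 and ~ 350 roughly bracket the eps3 = 0 and eps3 = -1/81 cases at N_c = 2 (see below).
for beta_over_H in (20, 350):
    f_p, _ = gw_bubble_signal(1.0, beta_over_H)
    _, omega_peak = gw_bubble_signal(f_p, beta_over_H)   # spectrum evaluated at its own peak
    print(f"beta/H = {beta_over_H:3d}: f_peak = {f_p:6.2f} mHz, Omega_GW h^2 at peak = {omega_peak:.1e}")
```

Larger \(\beta/H\) (a shorter transition) pushes the peak to higher frequencies while suppressing the peak abundance as \((H/\beta)^{2}\), which is the trade-off visible in fig. 5.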
A small \(\beta/H\) decreases the peak frequency, but crucially increases the abundance. Figure 4 shows \(\beta/H\) as a function of \(T_{n}/T_{c}\) for the parameter choices in table 1. On each curve, \(N_{c}\) varies, which changes \(T_{n}\). Values of \(N_{c}=2,5,10\) are shown on the individual curves. The behavior of \(\beta/H\) with and without \(\epsilon_{3}\) is very different, as seen for example by the blue and the red curves in fig. 4. When \(\epsilon_{3}=0\), \(\beta/H\) is small and decreases as \(T_{n}/T_{c}\) decreases, unlike the \(\epsilon_{3}\neq 0\) case. When the deformation in the IR is very small so that one is close to a CFT, the bounce action is weakly dependent on temperature [32], and \(\beta/H\) is close to zero. This is different from the behavior seen for \(\epsilon_{3}\neq 0\). The increased peak frequency and reduced GW signal from a non-zero \(\epsilon_{3}\) is still within the reach of proposed space-based GW interferometers such as LISA [6; 7], DECIGO [8; 9; 10; 11], and BBO [12; 13; 14]. In fig. 5 we show the expected GW abundance as a function of frequency, for some values of \(\beta/H\), and compare to the projected reach of the experiments (taking \(g_{*}=100\) and \(T_{*}=\) TeV in eq. (6.1)). For a given value of \(N_{c}\), the values of \(\beta/H\) are different for the four parameter choices in table 1, and are at different values of \(T_{n}/T_{c}\). The left panel of fig. 5 shows the GW signal for \(\beta/H=20,50,200,350\) (corresponding to \(N_{c}=2\), shown by "x" in fig. 4). The value of \(\beta/H\) changes significantly for a non-zero \(\epsilon_{3}\): from 20 (\(\epsilon_{3}=0\), \(T_{n}/T_{c}=0.002\), parameter \(\mathbf{A}\)) to 350 (\(\epsilon_{3}=-1/81\), \(T_{n}/T_{c}=0.7\), parameter \(\mathbf{D}\)). Such different values of \(\beta/H\) correspond to very different peak frequencies and fractional abundances (e.g. the red and blue curves in the left panel of fig. 5). For comparison, we also show the GW signal for \(\beta/H=10,30,120\) in the right panel of fig. 5 (corresponding to \(N_{c}=5\), shown by "o" in fig. 4). For both choices of \(N_{c}\) we see that while the signal strength is reduced and peak frequency is increased due to a non-zero \(\epsilon_{3}\), there is still a possibility of discovery. Figure 4: The parameter \(\beta/H\) as a function of \(T_{n}/T_{c}\) for the parameter choices in table 1. On each curve, \(N\) varies as a parameter. Points with \(N_{c}=2,5,10\) are shown by markers. ## 7 General comments In this work we have argued for including self-interaction terms in the bulk potential of the stabilizing scalar. We considered a bulk potential with a quadratic and a cubic term, with signs chosen such that both terms grow in the IR. We stayed in the limit of small back-reaction and within the radion EFT. In the presence of a cubic term, and for a large hierarchy, the defining features of the radion potential are changed, and the radion mass is increased. The same effect also reduced the bounce action and thereby increased the rate for a transition from the hot phase to the RS phase. Equivalently, this reduced the amount of supercooling needed before completing the phase transition and increased the maximum number of colors \(N_{\rm max}\) for which the phase transition can complete, for a given amount of supercooling. For a choice of parameters, we were able to have \(N_{\rm max}\sim 10\).
We also discussed the resulting GW signals from bubble collisions, and showed that in the presence of self-interactions, the parameter \(\beta/H\) that characterizes the frequency and abundance of the GWs does not get too small, unlike the case when the CFT breaking in the IR is small and \(\beta/H\) is close to zero. In fig. 6 we show \(N_{\rm max}\) and \(\beta/H\) for a moderate amount of supercooling \(T_{n}/T_{c}=10^{-4}\), as a function of the mass squared of the physical radion. The figure summarizes the main point of the paper--the presence of a self-interaction term increases the mass of the physical radion and the same effect also reduces the bounce action to increase \(N_{\rm max}\), while disfavoring a small \(\beta/H\). As the radion mass gets close to the KK scale, we cannot trust the result entirely, and a complete 5D calculation would be necessary. Our conclusion is similar to [34], even though the underlying dynamics driving the radion mass up is different. Figure 5: GW abundance as a function of frequency for some values of \(\beta/H\): values in the left panel correspond to \(N_{c}=2\) (indicated in fig. 4 by a “x”), values in the right panel correspond to \(N_{c}=5\) (indicated in fig. 4 by a “o”). Also shown is the projected reach from LISA [6; 7], DECIGO [8; 9; 10; 11], and BBO [12; 13; 14]. We have taken \(g_{*}=100\) and \(T_{*}=\rm TeV\). Figure 6: The effect on \(N_{\rm max}\) (left) and \(\beta/H\) (right) as the radion mass squared increases due to a non-zero \(\epsilon_{3}\). The curves are shown for a fixed \(T_{n}/T_{c}=10^{-4}\). Our work is a first step towards systematically including the effects of strong coupling in the IR, in the dynamics of the phase transition. In this work we have chosen parameters such that the radion is lighter than the other KK masses, allowing us to use the radion EFT. A more generic situation is when the radion mass is of the same order as other KK masses, which would drive the calculation out of the radion EFT and a full 5D calculation would be needed. The back-reaction would be important in the IR and it would change the free energy of both the RS and the BB phase. The stability of the phases can change the order of the phase transition, and a full 5D gravitational instanton computation would be needed to address the question properly. Rather than modeling choices, these are general effects. We will address some of these issues in a more general model in a future work. ## Acknowledgements We would like to thank P. Creminelli, P. Du and A. Pomarol for comments and discussions at various stages. We would also like to thank N. DePorzio, J. Lodman and W. L. Xu for collaborations at an early stage. The work of RKM and LR is supported by the National Science Foundation under Grant Nos. PHY-1620806, PHY-1748958 and PHY-1915071, the Chau Foundation HS Chau postdoc award, the Kavli Foundation grant "Kavli Dream Team," and the Moore Foundation Award 8342. Part of this work was completed at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452. ## Appendix A Numerical and approximately analytical method In this appendix we briefly outline the procedure to obtain the approximate analytical solutions for the scalar \(\chi\), for both BB and RS backgrounds. We also describe the numerical method to obtain the profile for \(\chi\) in the BB background. The equation of motion for \(\chi\) in the background of eq.
(1) is \[\left(1-e^{4(r-r_{h})}\right)\frac{\mathrm{d}^{2}\chi}{\mathrm{d}r^{2}}-4 \frac{\mathrm{d}\chi}{\mathrm{d}r}-\frac{\mathrm{d}V_{B}(\chi)}{\mathrm{d} \chi}-\sum_{i}\delta(r-r_{i})\frac{\mathrm{d}V_{i}(\chi)}{\mathrm{d}\chi}=0\:, \tag{10}\] where \(i\) runs over \((uv,ir)\) for the RS case and only over \((uv)\) for the BB case. The range of \(r\) is \(0\leq r\leq r_{\mathrm{ir}}\) for the RS case and \(0\leq r\leq r_{h}\) for the BB case. We will work with a rescaled coordinate \(y=r/r_{\mathrm{ir}}\) for the RS case, and \(y=r/r_{h}\) for the BB case, so that \(0\leq y\leq 1\) for both backgrounds. Given the bulk and brane potentials in eq. (5), (6), the equation and the boundary conditions for the RS and BB backgrounds are \[\text{RS:}\qquad\frac{1}{r_{\mathrm{ir}}^{2}}\frac{\mathrm{d}^{2 }\chi}{\mathrm{d}y^{2}}-\frac{4}{r_{\mathrm{ir}}}\frac{\mathrm{d}\chi}{ \mathrm{d}y}-4\epsilon_{2}\chi-4\epsilon_{3}\chi^{2}=0,\ \chi(0)=v_{\mathrm{uv}},\ \chi^{ \prime}(1)=-r_{\mathrm{ir}}\alpha_{\mathrm{ir}}\:, \tag{11}\] \[\text{BB:}\qquad\left(\frac{1-e^{4r_{h}(y-1)}}{r_{h}^{2}}\right) \frac{\mathrm{d}^{2}\chi}{\mathrm{d}y^{2}}-\frac{4}{r_{h}}\frac{\mathrm{d}\chi }{\mathrm{d}y}-4\epsilon_{2}\chi-4\epsilon_{3}\chi^{2}=0,\ \chi(0)=v_{\mathrm{uv}}\:. \tag{12}\] ### RS background In the limit \(r_{\mathrm{ir}}\gg 1,|\epsilon_{2}|r_{\mathrm{ir}}\lesssim 1,|\epsilon_{3}|r_{ \mathrm{ir}}\lesssim 1\), eq (11) can be solved approximately by singular perturbation theory and boundary layer analysis [48]. Close to \(r=y=0\), \(\chi\) varies slowly and the derivatives are small. To leading order in \(1/r_{\rm ir}\), one can drop the second derivative in eq. (101), which results in a first-order differential equation, readily solved. Applying the boundary condition \(\chi(0)=v_{\rm uv}\) to the first-order differential equation resulting from dropping the second derivative term in eq. (101), we get \[\chi_{\rm left}(y)=\frac{v_{\rm uv}e^{-\epsilon_{2}r_{\rm ir}y}}{1+v_{\rm uv} \epsilon_{3}\left(\frac{1-e^{-\epsilon_{2}r_{\rm ir}y}}{\epsilon_{2}}\right)}\,, \tag{102}\] where the subscript "left" refers to the fact that the solution is valid only on the left side of the interval. As \(y\to 1\), \(\chi\) has to change fast to match the boundary condition at \(y=1\). In a small region near \(y=1\), of width \(\mathcal{O}(1/r_{\rm ir})\), \(\chi\) itself is small, but changes fast so that the derivatives are large. In this region (referred to as "right") one only keeps the first and the second derivative terms in eq. (101), which gives \[\chi_{\rm right}(y)=-\frac{\alpha_{\rm ir}}{4}e^{4r_{\rm ir}(y-1)}+C\,, \tag{103}\] where we have applied the boundary condition \(\chi^{\prime}(1)=-r_{\rm ir}\alpha_{\rm ir}\), and \(C\) is an undetermined constant at the moment. Using asymptotic matching in a region where both the solutions are valid, and requiring the same functional form, the constant \(C\) can be fixed. The final solution is given as (now switching to \(r=r_{\rm ir}\,y\)) \[\chi_{\rm RS}(r)=-\frac{\alpha_{\rm ir}}{4}e^{4(r-r_{\rm ir})}+\frac{v_{\rm uv }e^{-\epsilon_{2}r}}{1+v_{\rm uv}\epsilon_{3}\left(\frac{1-e^{-\epsilon_{2}r }}{\epsilon_{2}}\right)}\,. \tag{104}\] For consistency, we can check if the equations of motion and the boundary conditions are satisfied. In the limit of \(r_{\rm ir}\gg 1,v_{\rm uv}\ll 1,|\epsilon_{2}|\ll 1,|\epsilon_{3}|\ll 1\), the errors on the boundary and in the bulk are small, and under control. Note that for \(\epsilon_{2}<0\), the denominator of eq. 
(102) can vanish at some \(r\). Clearly that is outside the validity of the approximation since the second derivative would not be small there anymore. To be consistent, we therefore need \(-\epsilon_{2}r_{\rm ir}<\log(1+\epsilon_{2}/v_{\rm uv}\epsilon_{3})\). For \(\epsilon_{2}<0,\epsilon_{3}<0\), in the limit of small \(\epsilon_{2}\), this condition simplifies to \(r_{\rm ir}<1/v_{\rm uv}|\epsilon_{3}|\), which can be satisfied for small \(v_{\rm uv}\). We can understand intuitively the role of \(\epsilon_{3}\) as follows. For small enough \(r\) one can expand the exponential in the denominator in the second term of eq. (104), and approximate it as follows: \[\frac{v_{\rm uv}e^{-\epsilon_{2}r}}{1+v_{\rm uv}\epsilon_{3}(\frac{1-e^{- \epsilon_{2}r}}{\epsilon_{2}})}\approx\frac{v_{\rm uv}e^{-\epsilon_{2}r}}{1+v_ {\rm uv}\epsilon_{3}r}\approx v_{\rm uv}e^{-(\epsilon_{2}+v_{\rm uv}\epsilon _{3})r}\,. \tag{105}\] This makes it clear that for small \(r\), \(\epsilon_{3}\) effectively increases \(\epsilon_{2}\). This effect compounds as \(r\) increases. ### BB background Similar to the RS case, eq. (100) can be solved in the limit of large \(r_{h}\). For \(y=r/r_{h}\ll 1\), the exponential term in the coefficient of \(\chi^{\prime\prime}\) can be dropped, and we have the same solution as eq. (102). Note that the UV boundary condition is the same for both RS and BB cases. As \(y\to 1\), _unlike_ the RS case, the coefficient of \(\chi^{\prime\prime}\) becomes small, so that one can again drop the second derivative term. Since the only condition on \(\chi(r)\) is to be regular at \(y=1\), the leading order solution is \[\chi_{\rm BB}(r)=\frac{v_{\rm uv}e^{-\epsilon_{2}r}}{1+v_{\rm uv}\epsilon_{3}( \frac{1-e^{-\epsilon_{2}r}}{\epsilon_{2}})}\,. \tag{100}\] To be more precise, we can do a Taylor series expansion around \(y=1\) to solve the equation near \(y=1\) and match it to \(\chi_{\rm left}(r)\) at some intermediate value. This procedure gives the same result as above, to leading order. We would like to check the approximate solution (100) with the numerical solution for the profile. Without an explicit boundary condition to be applied at \(y=1\), it is not clear a priori how to numerically solve for \(\chi\) in the BB geometry. For this we apply a method that is akin to matching in an intermediate region. We solve the full equation numerically with given \(\chi(0)\), and for some value of \(\chi^{\prime}(0)\), varying \(\chi^{\prime}(0)\). We also expand the equation in a Taylor series expansion around \(y=1\) which analytically fixes \(\chi^{\prime}(1)\) in terms of \(\chi(1)\). As we vary \(\chi^{\prime}(0)\) in the numerical solution, we check whether the analytical relation between \(\chi(1)\) and \(\chi^{\prime}(1)\) is satisfied, which uniquely determines \(\chi^{\prime}(0)\), and hence the numerically regular solution. In the special case of \(\epsilon_{3}=0\), the solution is a linear combination of hypergeometric functions [2], and the correct linear combination regular at the horizon is easily identified. Figure 1 shows a comparison between the solutions: for \(\epsilon_{3}\neq 0\), between the approximate solutions obtained in this appendix and the numerical solution obtained by the method discussed in the previous paragraph, and for \(\epsilon_{3}=0\), between the exact solution and the approximate solution. 
## Appendix B Radion potential Starting with the purely gravitational 5D action \[S=\int d^{5}x\sqrt{g}\left(-2M_{5}^{3}\mathcal{R}[g]-\Lambda_{5}\right)-\sum_ {i=uv,ir}\int d^{4}x\sqrt{g_{i}}\,T_{i}\,, \tag{101}\] where \(-\Lambda_{5}=T_{\rm uv}=-T_{\rm ir}=24M_{5}^{3}\) (setting \(\ell_{\rm AdS}=1\)), and plugging back the metric \[ds^{2}=-e^{-2r}dx^{2}+dr^{2}\;,\qquad 0\leq r\leq r_{\rm ir}\;, \tag{102}\] but making \(r_{\rm ir}\) a 4D field \(r_{\rm ir}(x)\), one generates a kinetic term for \(r_{\rm ir}(x)\). In terms of \(\varphi=\exp(-r_{\rm ir})\), the 4D action looks like \[S=-12M_{5}^{3}\int d^{4}x(\partial\varphi)^{2}\;. \tag{103}\] At this point, there is no potential for \(\varphi\) and it is a modulus. To generate a potential to stabilize the geometry, we add a GW scalar \(\chi\) with the action defined in eq. (4) and solve for the background value of \(\chi\) which is a function of \(r\) due to the choice of boundary and bulk potentials. Evaluating \(S_{\chi}\) on \(\chi(r)\) gives the potential for \(\varphi\), for which we outline the steps now. Plugging \(\chi(r)\) in \(S_{\chi}\), we get \[S_{\chi}=\int\mathrm{d}^{4}x\int_{0}^{r_{\mathrm{ir}}}\mathrm{d}r\,e^{-4r}\left( -\frac{1}{2}\left(\frac{\mathrm{d}\chi}{\mathrm{d}r}\right)^{2}-V_{B}(\chi) \right)-\sum_{i=uv,ir}e^{-4r_{i}}V_{i}(\chi(r_{i}))\:. \tag{100}\] Using \(\chi\) equations of motion \(\chi^{\prime\prime}-4\chi^{\prime}-\partial_{\chi}V_{B}(\chi)=0\) and an integration by parts, the bulk term of \(S_{\chi}\) evaluates to \[\int\mathrm{d}^{4}x\left(-\frac{1}{2}\left.\left(e^{-4r}\chi\frac{\mathrm{d} \chi}{\mathrm{d}r}\right)\right|_{0}^{r_{\mathrm{ir}}}-\int_{0}^{r_{\mathrm{ir }}}\mathrm{d}r\,e^{-4r}\left(V_{B}(\chi)-\frac{1}{2}\chi\frac{\mathrm{d}V_{B}} {\mathrm{d}\chi}\right)\right)\:. \tag{101}\] Adding the contribution from the \(uv,ir\) localized terms (in eq. (100)) to the bulk contribution in eq. (101), the potential is given as \[V(r_{\mathrm{ir}})=\frac{1}{2}\left.\left(e^{-4r}\chi\frac{ \mathrm{d}\chi}{\mathrm{d}r}\right)\right|_{0}^{r_{\mathrm{ir}}}+\int_{0}^{r_ {\mathrm{ir}}}\mathrm{d}r\,e^{-4r}\left(V_{B}(\chi)-\frac{1}{2}\chi\frac{ \mathrm{d}V_{B}}{\mathrm{d}\chi}\right)+V_{\mathrm{uv}}(\chi(0))+e^{-4r_{ \mathrm{ir}}}V_{\mathrm{ir}}(\chi(r_{\mathrm{ir}}))\:. \tag{102}\] For Dirichlet boundary condition in UV, \(V_{\mathrm{uv}}(\chi)=0\). Given the bulk potential in eq. (5), \(V_{B}-(1/2)\chi\partial_{\chi}V_{B}=-(2/3)\epsilon_{3}\chi^{3}\). Using \(\chi(r)\) from eq. (7), we need to evaluate the integral \[-\frac{2}{3}\epsilon_{3}\,\int_{0}^{r_{\mathrm{ir}}}\mathrm{d}r\,e^{-4r}\left( -\frac{\alpha_{\mathrm{ir}}}{4}e^{4(r-r_{\mathrm{ir}})}+\frac{v_{\mathrm{uv}} e^{-\epsilon_{2}r}}{1+v_{\mathrm{uv}}\epsilon_{3}\left(\frac{1-e^{-\epsilon_{2}r}}{ \epsilon_{2}}\right)}\right)^{3}\:. \tag{103}\] To proceed, we need some identities related to hypergeometric functions. First note that each term in the above is an integral of a general form, with a closed-form answer expressed in terms of hypergeometrics: \[\int\mathrm{d}r\frac{e^{ar}}{(1+be^{cr})^{n}}=\frac{e^{ar}}{(1+be^{cr})^{n-1} }\,{}_{2}F_{1}(1,\,1-n+\frac{a}{c},\,1+\frac{a}{c},\,be^{cr})\:. \tag{104}\] Two other useful identities are [49]: \[{}_{2}F_{1}(a,b,c;z) =(1-z)^{c-a-b}{}_{2}F_{1}(c-a,c-b,c;z)\:,\] \[{}_{2}F_{1}(a+\delta\lambda,b,c+\lambda) \approx(1-\delta z)^{-b}\:\:,\qquad|\lambda|\gg 1\:,|\delta|\leq 1\:. 
\tag{105}\] To simplify the further expressions, we define \[\epsilon_{32}=\frac{\epsilon_{3}}{\epsilon_{2}}\:,\:\:\lambda=\frac{v_{ \mathrm{uv}}\epsilon_{32}}{1+v_{\mathrm{uv}}\epsilon_{32}}\:,\:\:Y(r)=\frac{ \lambda e^{-\epsilon_{2}r}}{1-\lambda e^{-\epsilon_{2}r}}\:, \tag{106}\] in terms of which \(\chi(r)\) can be written as \[\chi(r)=-\frac{\alpha_{\mathrm{ir}}}{4}e^{4(r-r_{\mathrm{ir}})}+\frac{1}{ \epsilon_{32}}Y(r)\:. \tag{107}\] We now expand the integrand in eq. (B.7), use eqs. (B.8), (B.9) for simplification and the notation of eq. (B.10). Calling the four terms from expanding eq. (B.7) as \(t_{1},t_{2},t_{3},t_{4}\), we have \[t_{1} =-\frac{2}{3}\epsilon_{3}\,\int_{0}^{r_{\rm ir}}{\rm d}r\,e^{-4r} \left(-\frac{\alpha_{\rm ir}}{4}e^{4(r-r_{\rm ir})}\right)^{3}=\frac{1}{96} \epsilon_{3}\alpha_{\rm ir}^{3}e^{-12r_{\rm ir}}\int_{0}^{r_{\rm ir}}{\rm d}r \epsilon^{8r}\] \[=\frac{1}{768}\epsilon_{3}\alpha_{\rm ir}^{3}\left(\sigma^{4}- \varphi^{12}\right)\,.\] (B.12) \[t_{2} =-\frac{2}{3}\epsilon_{3}\,\int_{0}^{r_{\rm ir}}{\rm d}r\,e^{-4r }\,3\,\left(-\frac{\alpha_{\rm ir}}{4}e^{4(r-r_{\rm ir})}\right)\left(\frac{1 }{\epsilon_{32}}Y(r)\right)^{2}\] \[=\frac{1}{2}\frac{\epsilon_{2}^{2}}{\epsilon_{3}}\alpha_{\rm ir }\,e^{-4r_{\rm ir}}\,\int_{0}^{r_{\rm ir}}{\rm d}r\frac{\lambda^{2}e^{-2 \epsilon_{2}r}}{(1-\lambda e^{-\epsilon_{2}r})^{2}}=\frac{1}{2}\frac{ \epsilon_{2}}{\epsilon_{3}}\alpha_{\rm ir}\,e^{-4r_{\rm ir}}\,\left(-Y(r)+ \log(1+Y(r))\right)\biggr{|}_{0}^{r_{\rm ir}}\] \[=\frac{1}{2}\frac{\epsilon_{2}}{\epsilon_{3}}\alpha_{\rm ir} \varphi^{4}\left(-Y(r_{\rm ir})+\log(1+Y(r_{\rm ir}))+\frac{\lambda}{1-\lambda }+\log(1-\lambda)\right)\,.\] (B.13) \[t_{3} =-\frac{2}{3}\epsilon_{3}\,\int_{0}^{r_{\rm ir}}{\rm d}r\,e^{-4r }\,3\,\left(-\frac{\alpha_{\rm ir}}{4}e^{4(r-r_{\rm ir})}\right)^{2}\left( \frac{1}{\epsilon_{32}}Y(r)\right)\] \[=-\frac{1}{8}\epsilon_{2}\alpha_{\rm ir}^{2}e^{-8r_{\rm ir}}\int _{0}^{r_{\rm ir}}{\rm d}r\frac{\lambda e^{(4-\epsilon_{2})r}}{(1-\lambda e^{- \epsilon_{2}r})}=\frac{1}{32}\epsilon_{2}\alpha_{\rm ir}^{2}e^{-8r_{\rm ir}} \,\left(e^{4r}{}_{2}F_{1}(1,\frac{4}{\epsilon_{2}},1+\frac{4}{\epsilon_{2}}; \lambda^{-1}e^{\epsilon_{2}r})\right)\biggr{|}_{0}^{r_{\rm ir}}\] \[=\frac{1}{32}\epsilon_{2}\alpha_{\rm ir}^{2}e^{-8r_{\rm ir}}\, \left(-e^{4r}Y(r)\right)\biggr{|}_{0}^{r_{\rm ir}}=\frac{1}{32}\epsilon_{2} \alpha_{\rm ir}^{2}\left(-\varphi^{4}Y(r_{\rm ir})+\varphi^{8}\left(\frac{ \lambda}{1-\lambda}\right)\right)\,.\] (B.14) \[t_{4} =-\frac{2}{3}\epsilon_{3}\,\int_{0}^{r_{\rm ir}}{\rm d}r\,e^{-4r }\left(\frac{1}{\epsilon_{32}}Y(r)\right)^{3}=-\frac{2}{3}\frac{\epsilon_{2} ^{3}}{\epsilon_{3}^{2}}\int_{0}^{r_{\rm ir}}{\rm d}r\frac{\lambda^{3}e^{-(4+3 \epsilon_{2})r}}{(1-\lambda e^{-\epsilon_{2}r})^{3}}\] \[=-\frac{1}{6}\frac{\epsilon_{2}^{3}}{\epsilon_{3}^{2}}\left(e^{-4 r}{}_{2}F_{1}(3,-\frac{4}{\epsilon_{2}},1-\frac{4}{\epsilon_{2}};\lambda^{-1}e^{ \epsilon_{2}r})\right)\biggr{|}_{0}^{r_{\rm ir}}=-\frac{1}{6}\frac{\epsilon_{ 2}^{3}}{\epsilon_{3}^{2}}\,\left(-\,e^{-4r}Y^{3}(r)\right)\biggr{|}_{0}^{r_{ \rm ir}}\] \[=\frac{1}{6}\frac{\epsilon_{2}^{3}}{\epsilon_{3}^{2}}\left(\varphi ^{4}Y^{3}(r_{\rm ir})-\left(\frac{\lambda}{1-\lambda}\right)^{3}\right)\,.\] (B.15) We have used \(|\epsilon_{2}|\ll 1\) in the above to simplify the hypergeometric functions. Collecting everything together, keeping to leading order in \(\varphi\), and dropping overall constants, the integral in eq. 
(B.7) is given as: \[\varphi^{4}\left(A+BY+CY^{3}+D\log(1+Y)\right)\,,\,\,Y=\frac{ \lambda\varphi^{\epsilon_{2}}}{1-\lambda\varphi^{\epsilon_{2}}}\] \[A=\frac{1}{768}\epsilon_{3}\alpha_{\rm ir}^{2}-\frac{\lambda}{1- \lambda}+\log(1-\lambda)\,,\,\,B=-\frac{1}{32}\epsilon_{2}\alpha_{\rm ir}^{2}- \frac{1}{2}\frac{\epsilon_{2}}{\epsilon_{3}}\alpha_{\rm ir}\,,\,\,C=\frac{1}{6 }\frac{\epsilon_{2}^{3}}{\epsilon_{3}^{2}}\,,\,\,D=\frac{1}{2}\frac{\epsilon_{ 2}}{\epsilon_{3}}\alpha_{\rm ir}\,.\] (B.16) In addition to all the terms in eq. (B.6), there is a potential generated by the detuning of the IR brane tension, and is given by \[V_{\rm detune}(\varphi)=\widetilde{\tau}\,\varphi^{4}\,,\] (B.17) where \(\widetilde{\tau}\) is a free parameter at this point. Together with above, including all the contributions in eq. (B.6), and keeping to linear order in \(\epsilon_{2}\), the radion potential is given as \[V(\varphi)=\varphi^{4}\left(a_{1}+a_{2}Y+a_{3}\log(1+Y)\right)\,,\,\,Y =\frac{\lambda\varphi^{\epsilon_{2}}}{1-\lambda\varphi^{\epsilon_{2}}}\] \[a_{1}=\tau\,,\,\,\,a_{2}=-\frac{1}{32}\epsilon_{2}\alpha_{\rm ir }^{2}-\frac{\epsilon_{2}}{\epsilon_{3}}\alpha_{\rm ir}+2\alpha_{\rm ir}\,,\, \,\,a_{3}=\frac{1}{2}\frac{\epsilon_{2}}{\epsilon_{3}}\alpha_{\rm ir}\,, \tag{111}\] where we have absorbed terms to define an overall \(\tau\) and used exact values for \(\chi(0)\) and \(\chi^{\prime}(r_{\rm ir})\) (these are the specified boundary conditions and are only approximately satisfied by the approximate solution). The \(\epsilon_{3}\to 0\) limit is finite, and in this limit \(a_{2}Y+a_{3}\log(1+Y)\) reduces to \(-(1/2)\alpha_{\rm ir}v_{\rm uv}\varphi^{\epsilon_{2}}\). Note that for \(Y\) to stay finite, we need \(r_{\rm ir}<(1/\epsilon_{2})\log\lambda\). To make it clear what are the reasonable values for parameters, and to keep the \(N\) dependence of the dual theory clear, we define \(\tau\equiv 24M_{5}^{3}\kappa^{4}\). Since \(M_{5}^{3}\sim\frac{N^{2}}{16\pi^{2}}\), this makes it clear that \(\tau\sim\frac{N^{2}}{16\pi^{2}}\kappa^{4}\) (which is the glueball normalization) and we need \(\kappa\lesssim 1\) for small back-reaction. The radion potential can be rewritten as \[V(\varphi)=24M_{5}^{3}\,\kappa^{4}\,\varphi^{4}\,v(\varphi/\varphi_{\rm min}; \epsilon_{2},\epsilon_{3},\alpha_{\rm ir},v_{\rm uv})\,, \tag{112}\] where the function \(v\) does not have an analytical expression (since \(\varphi_{\rm min}\) does not have an analytical expression), but is easily obtained numerically. The dimensionless function \(v\) encodes the information about the breaking of the CFT, and plays a crucial role when calculating the bounce action. Putting everything together, the radion action is \[S=24M_{5}^{3}\int{\rm d}^{4}x\left(-\frac{1}{2}(\partial\varphi)^{2}-\kappa^{ 4}\,\varphi^{4}\,v(\varphi/\varphi_{\rm min};\epsilon_{2},\epsilon_{3},\alpha _{\rm ir},v_{\rm uv})\right)\,. \tag{113}\] ## Appendix C Bounce action in the thin and thick wall limits Starting with the action \[S=\int{\rm d}^{4}x\left(-12M_{5}^{2}(\partial\varphi)^{2}-V(\varphi)\right)\,, \tag{114}\] the bounce action is obtained by looking for a solution to \(\varphi\) that minimizes the Euclidean action \[S_{b}=\int{\rm d}^{4}x\left(12M_{5}^{2}(\partial\varphi)^{2}+\widetilde{V}( \varphi)\right)\,,\qquad\widetilde{V}(\varphi)=V(\varphi)-C\,, \tag{115}\] and evaluating the action on the solution. 
The constant \(C\) is chosen to subtract the contribution from the false vacuum, and is given by \(-2\pi^{4}M_{5}^{3}T_{c}^{4}=V(\varphi_{\rm min})\) for the thin wall case, and \(-2\pi^{4}M_{5}^{3}T^{4}\) for the thick wall case. At zero temperature, \(\varphi\) is assumed to be a function of the combination \(\rho^{2}=\vec{x}\cdot\vec{x}+t_{E}^{2}\) and this is referred as the \(O(4)\) symmetric solution. At finite temperature, there is another saddle that can dominate. For inverse temperature \(\beta=1/T\), the Euclidean time is made periodic with period \(\beta\). The field \(\varphi\) is assumed to be a function of the combination \(r^{2}=\vec{x}\cdot\vec{x}\) and this is referred to as the \(O(3)\) solution. In rest of the discussion we focus on the \(O(3)\) solution only. For \(O(3)\) symmetric solutions, the action is given more explicitly as (ignoring temperature corrections to the potential) \[S_{b}=\int_{0}^{\beta}\mathrm{d}t_{E}\int\mathrm{d}^{3}x\left(12M_{5}^{2}(\partial \varphi)^{2}+\widetilde{V}(\varphi)\right)=\frac{4\pi}{T}\int\mathrm{d}r\,r^{2 }\left(12M_{5}^{2}\left(\frac{\mathrm{d}\varphi}{\mathrm{d}r}\right)^{2}+ \widetilde{V}(\varphi)\right)\,. \tag{100}\] The equations of motion are \[24M_{5}^{3}\left(\frac{\mathrm{d}^{2}\varphi}{\mathrm{d}r^{2}}+\frac{2}{r} \frac{\mathrm{d}\varphi}{\mathrm{d}r}\right)=\frac{\mathrm{d}\widetilde{V}( \varphi,T)}{\mathrm{d}\varphi}\approx\frac{\mathrm{d}\widetilde{V}(\varphi)}{ \mathrm{d}\varphi}\;. \tag{101}\] One of the boundary conditions is \(\varphi^{\prime}(r=0)=0\) and we will have more to say about the second boundary condition later. ### Thin wall case In the limit of the two vacua being very degenerate, the solution for \(\varphi\) is such that it has a large region where it is constant (and equal to its value in the true vacuum) and then changes quickly to the value in the false vacuum. In this limit, the wall is small compared to the size of the bubble, hence this is called the thin-wall solution. In this limit, one can ignore the first derivative term in the equations of motion. Using the identity \(2\mathrm{d}^{2}y/\mathrm{d}x^{2}=\mathrm{d}/\mathrm{d}y(\mathrm{d}y/\mathrm{d }x)^{2}\), the equations can be solved to give \[12M_{5}^{3}\left(\frac{\mathrm{d}\varphi}{\mathrm{d}r}\right)^{2}=\widetilde{ V}(\varphi)\;, \tag{102}\] where we used the boundary condition \(\varphi^{\prime}(r=0)=0\) and the fact that at \(r=0\), \(\varphi=\varphi_{\mathrm{min}}\) and \(\widetilde{V}(\varphi_{\mathrm{min}})=0\). To evaluate the action on this solution, a somewhat indirect approach is more intuitive. First note that for a bubble of size \(R\), the field is mostly constant for \(r\lesssim R\), changes quickly in the vicinity of \(r=R\), and is again constant afterwards. We can split the action into these three regions. In the region \(r\lesssim R\), we can drop the derivative, \(\widetilde{V}(\varphi)\) is a constant, and the integrand is proportional to \(r^{2}\). For \(r\sim R\), the factor of \(r^{2}\) can be approximated to be \(R^{2}\), and we have to keep both the derivatives and the potential inside the integral. For \(r\gtrsim R\) there is no contribution. We therefore have \[S_{b}=\frac{4\pi}{T}\left(\widetilde{V}(\varphi)\int_{0}^{R}\mathrm{d}r\,r^{2 }+R^{2}\int_{r\sim R}\mathrm{d}r\left(12M_{5}^{3}\left(\frac{\mathrm{d}\varphi }{\mathrm{d}r}\right)^{2}+\widetilde{V}(\varphi)\right)\right)\;. 
\tag{103}\] Using \(\widetilde{V}(\varphi)=\Delta F=F_{\mathrm{FV}}-F_{\mathrm{TV}}>0\) as the difference in the free energies between the two vacua, we get \[S_{b}=\frac{4\pi}{T}\left(\frac{1}{3}\Delta FR^{3}+R^{2}S_{1}\right)\,,\;\;S_{ 1}=\int_{r\sim R}dr\left(12M_{5}^{3}\left(\frac{\mathrm{d}\varphi}{\mathrm{d }r}\right)^{2}+\widetilde{V}(\varphi)\right)\;. \tag{104}\] Changing variables from \(r\) to \(\varphi\), \(S_{1}\) can be written as \[S_{1}=\sqrt{48M_{5}^{3}}\int_{0}^{\varphi_{\mathrm{min}}}\mathrm{d}\varphi \sqrt{\widetilde{V}(\varphi)}\,, \tag{105}\] and is independent of \(R\). Since \(\varphi\) changes from \(0\) to \(\varphi_{\rm min}\) in the \(r\sim R\) region, this fixes the limits of the integration in eq. (102).5 Equation (102) gives \(S_{b}\) as a function of \(R\), which is minimized at \(R=-2S_{1}/\Delta F\), at which the bounce action is given as Footnote 5: Technically the lower limit is \(\varphi\sim T\), since we are estimating the action from region \(iii)\) (see discussion in main text). Taking the lower limit to be approximately zero does not change the estimate. \[S_{b}=\frac{16\pi}{3T}\frac{S_{1}^{3}}{(\Delta F)^{2}}\,. \tag{103}\] The free energy difference \(\Delta F\) can be written in terms of the critical temperature \(T_{c}\) as \[\Delta F=F_{\rm FV}-F_{\rm TV}=2\pi^{4}M_{5}^{3}\left(T^{4}-T_{c}^{4}\right)\;, \tag{104}\] using which, the bounce action becomes \[S_{b}=\frac{4}{3\pi^{7}M_{5}^{6}}\left(\frac{S_{1}}{T_{c}^{3}}\right)^{3} \frac{T_{c}/T}{\left(1-T^{4}/T_{c}^{4}\right)^{2}}\;. \tag{105}\] One can also calculate the \(\varphi\) profile for thin-wall case, using eq. (102) as \[-\sqrt{12M_{5}^{3}}\int_{\varphi_{\rm min}}^{\varphi}\frac{{\rm d}\varphi}{ \sqrt{\widetilde{V}(\varphi)}}=r\;, \tag{106}\] where we have chosen the negative sign of the square root, and used the boundary condition \({\rm d}\varphi/{\rm d}r=0\) at \(r=0\), which by eq. (102) is at \(\varphi=\varphi_{\rm min}\). ### Thick wall case In the thick wall case, we minimize the action \[S_{b}=\int{\rm d}^{4}x\left(12M_{5}^{2}(\partial\varphi)^{2}+V(\varphi)+2\pi^{ 4}M_{5}^{3}T^{4}\right)\;, \tag{107}\] One of the boundary conditions to be satisfied is the standard one: \(\varphi^{\prime}(0)=0\) (\(r\) being a radial coordinate). The second boundary condition is more subtle. The usual second boundary condition is to require \(\varphi(r\to\infty)=\varphi_{\rm false\ vacuum}\). In the case at hand, \(\varphi\) is not a dynamical variable in the other phase. Inside the bubble, close to the boundary, we have \(\varphi\sim T\approx 0\), and the energy is only in the gradient. Outside the bubble and close to the boundary, the energy is proportional to \(T^{4}\). Requiring the energies to match at the bubble boundary we get [28] \[12M_{5}^{3}\left.\left(\frac{{\rm d}\varphi}{{\rm d}r}\right)^{2}\right|_{ \varphi\approx 0}=2\pi^{4}M_{5}^{3}T^{4}\;. \tag{108}\] We choose the negative sign in the square root because \(\varphi\) starts near \(\varphi_{\rm min}\) at the center of the bubble and decreases to zero at the boundary, thereby the derivative is negative when \(\varphi\to 0\). Putting everything together, the equations and the boundary conditions are \[24M_{5}^{3}\left(\frac{{\rm d}^{2}\varphi}{{\rm d}r^{2}}+\frac{2}{r}\frac{{\rm d }\varphi}{{\rm d}r}\right)=\frac{{\rm d}\widetilde{V}(\varphi)}{{\rm d} \varphi}\;,\qquad\left.\frac{{\rm d}\varphi}{{\rm d}r}\right|_{r=0}=0,\ \left.\frac{{\rm d}\varphi}{{\rm d}r}\right|_{\varphi\approx 0 }=-\frac{\pi^{2}}{\sqrt{6}}T^{2}\;. 
\tag{109}\] Since the location at which \(\varphi\approx 0\) is not known a priori, one has to approach the problem indirectly. The strategy is to solve the differential equation numerically with \(\varphi(0)=\varphi_{0},\varphi^{\prime}(0)=0\), calculate \(r_{*}\) such that \(\varphi(r_{*})\approx 0\), and adjust \(\varphi_{0}\) till \(\varphi^{\prime}(r_{*})\) has the appropriate value. For numerical convenience we define \[\frac{1}{24M_{5}^{3}}\widetilde{V}(\varphi)\equiv\kappa^{4}\, \varphi^{4}\,v(\varphi)\,,\] \[y=\kappa\,\varphi\,T^{-1}\,,\,\,x=\kappa\,r\,T\,. \tag{113}\] With this rescaling, the action in eq. (104) looks like \[S_{b}=\frac{96\pi M_{5}^{3}}{\kappa^{3}}\int\mathrm{d}x\,x^{2}\left(\frac{1}{2 }\left(\frac{\mathrm{d}y}{\mathrm{d}x}\right)^{2}+y^{4}\,v(Ty/\kappa)+\frac{ \pi^{4}}{12}\right)\,. \tag{114}\] The differential equation and the boundary conditions in the rescaled coordinates are \[\frac{\mathrm{d}^{2}y}{\mathrm{d}x^{2}}+\frac{2}{x}\frac{\mathrm{d}y}{\mathrm{ d}x}=\frac{\mathrm{d}}{\mathrm{d}y}y^{4}v(Ty/\kappa)\,,\,\,\left.\frac{ \mathrm{d}y}{\mathrm{d}x}\right|_{x=0}=0\,,\,\,\left.\frac{\mathrm{d}y}{ \mathrm{d}x}\right|_{y\approx 0}=-\frac{\pi^{2}}{\sqrt{6}}\,. \tag{115}\] As discussed before, since the value of \(x\) at which \(y\approx 0\) is not known before solving the equation, we trade that boundary condition with \(y(0)=y_{0}\), calculate \(x_{*}\) such that \(|y(x_{*})|<\delta=10^{-1}\) and adjust \(y_{0}\) till \(y^{\prime}(x_{*})+\pi^{2}/\sqrt{6}=0\). Note that eq. (115) depends on \(T/T_{c}\) and one has to solve it for different values of \(T/T_{c}\) to get the temperature dependence of \(S_{b}\). Figure 7 compares the results for the thin and thick wall cases for a generic choice of parameters: left shows the bounce action as a function of \(T/T_{c}\) for the thin and the thick wall cases, right shows the scalar profile for several values of \(T/T_{c}\) for the thick wall case, and for the thin wall case where \(T\approx T_{c}\) (the two vacua being almost degenerate). As the thin-wall limit is approached (i.e. the bubble radius gets bigger), the results for the thick-wall calculation converge to the thin-wall results. When computing the bounce action and the bubble profiles for \(\epsilon_{3}\neq 0\), one technical difficulty is to be kept in mind. As discussed earlier, the radion potential \(V(\varphi)\) has a singularity at a finite but non-zero \(\varphi_{s}=(1/\lambda)^{1/\epsilon_{2}}\), where \(\lambda=v_{\mathrm{uv}}(\epsilon_{3}/\epsilon_{2})/(1+v_{\mathrm{uv}}(\epsilon _{3}/\epsilon_{2}))\). Figure 7: Thin vs thick wall for \(\tau=2,\alpha_{\mathrm{ir}}=5,v_{\mathrm{uv}}=1/5,\epsilon_{2}=-1/10, \epsilon_{3}=0\). Left shows the bounce action and right shows the bubble profile. Given that \(T_{c}\lesssim\varphi_{\min}\), if the thick wall profile is computed at \(T/T_{c}\ll 1\), the relevant \(\varphi\) probed is of the order \((T/T_{c})\varphi_{\min}\ll\varphi_{\min}\), and for small enough \(T\), one can be sensitive to the singular point \(\varphi=\varphi_{s}\). Since \(\varphi_{s}\ll\varphi_{\min}\), one can calculate for intermediate temperatures without running into this issue. In this work we consider only values of \(T/T_{c}\) such that we are not sensitive to the singularity. Further, note that the boundary condition requires imposing a condition when \(y\approx 0\) or equivalently \(\varphi\approx 0\). Numerically we require \(|y(x_{*})|\leq\delta=10^{-1}\) or equivalently \(|\varphi|\leq T(\delta/\kappa)\sim 10^{-1}T\). 
As long as \(T\) is not too small, we do not probe the singular region of the potential.
2309.08807
Optimal Ensemble Control of Matter-Wave Splitting in Bose-Einstein Condensates
We present a framework for designing optimal optical pulses for the matter-wave splitting of a Bose-Einstein Condensate (BEC) under the influence of experimental inhomogeneities, so that the sample is transferred from an initial rest position into a singular higher diffraction order. To represent the evolution of the population of atoms, the Schroedinger's equation is reinterpreted as a parameterized ensemble of dynamical units that are disparately impacted by the beam light-shift potential in a continuous manner. The derived infinite-dimensional coupled Raman-Nath equations are truncated to a finite system of diffraction levels, and we suppose that the parameter that defines the inhomogeneity in the control applied to the ensemble system is restricted to a compact interval. We first design baseline square pulse sequences for the excitation of BEC beam-splitter states following a previous study, subject to dynamic constraints for either a nominal system assuming no inhomogeneity or for several samples of the uncertain parameter. We then approximate the continuum state-space of the ensemble of dynamics using a spectral approach based on Legendre moments, which is truncated at a finite order. Control functions that steer the BEC system from an equivalent rest position to a desired final excitation are designed using a constrained optimal control approach developed for handling nonlinear dynamics. This representation results in a minimal dimension of the computational problem and is shown to be highly robust to inhomogeneity in comparison to the baseline approach. Our method accomplishes the BEC-splitting state transfer for each subsystem in the ensemble, and is promising for precise excitation in experimental settings where robustness to environmental and intrinsic noise is paramount.
Andre Luiz P. de Lima, Andrew K. Harter, Michael J. Martin, Anatoly Zlotnik
2023-09-15T23:02:37Z
http://arxiv.org/abs/2309.08807v2
# Optimal Ensemble Control of Matter-Wave Splitting ###### Abstract We present a framework for designing optimal optical pulses for the matter-wave splitting of a Bose-Einstein Condensate (BEC) under the influence of experimental inhomogeneities, so that the sample is transferred from an initial rest position into a singular higher diffraction order. To represent the evolution of the population of atoms, the Schrodinger's equation is reinterpreted as a parameterized ensemble of dynamical units that are disparately impacted by the beam light-shift potential in a continuous manner. The derived infinite-dimensional coupled Raman-Nath equations are truncated to a finite system of diffraction levels, and we suppose that the parameter that defines the inhomogeneity in the control applied to the ensemble system is restricted to a compact interval. We first design baseline square pulse sequences for the excitation of BEC beam-splitter states following a previous study, subject to dynamic constraints for either a nominal system assuming no inhomogeneity or for several samples of the uncertain parameter. We then approximate the continuum state-space of the ensemble of dynamics using a spectral approach based on Legendre moments, which is truncated at a finite order. Control functions that steer the BEC system from an equivalent rest position to a desired final excitation are designed using a constrained optimal control approach developed for handling nonlinear dynamics. This representation results in a minimal dimension of the computational problem and is shown to be highly robust to inhomogeneity in comparison to the baseline approach. Our method accomplishes the BEC-splitting state transfer for each subsystem in the ensemble, and is promising for precise excitation in experimental settings where robustness to environmental and intrinsic noise is paramount. ## I Introduction Numerous challenges in quantum science and technology involve design, observation, or control of bilinear dynamical systems [1]. A compelling application in metrology involves cold atom interferometry, in which optical standing waves are used to split and recombine matter waves [2]. In this setting, a time-varying laser pulse is applied to steer a quantum system between states of interest. This experimental approach has demonstrated significant sensitivity to accelerations and rotations, and has the potential to provide real improvement in fundamental physics experiments [3]. However, experimental uncertainties are inherent in these systems, and this reduces the precision in the splitting of matter waves, which can lead to inconsistent diffraction fidelity [4]. Robustness to experimental inhomogeneities, such as laser intensity, has been investigated in atom interferometry applications. Previous approaches have attempted to minimize or nullify the influence of these dynamical disturbances, for example adiabatic transfer techniques, which do not depend on exact laser intensity [5]. Other methods seek to optimally design light pulses to improve tolerance to systematic inhomogeneities by using specific control profile shapes, such as composite pulses [6], or large-scale optimization, such as GRadient Ascent Pulse Engineering (GRAPE) [7]. A practical means to represent inhomogeneities is through the light amplitude parameterized description of the system time-evolution dynamics. 
This mathematical description enables the state evolution of the sample to be represented by a collection of ordinary differential equations, each of which describes the possible evolution taken by an individual atom in the experiment subject to the inexact effect of the applied laser pulse. Ensemble control theory was developed to address the need to understand the evolution of systems comprised by a very large (potentially infinite) number of similar dynamical units, such as molecular, atomic and quantum systems, when manipulated by a single universally applied control field [8]. This area of mathematical systems theory focuses on defining controllability conditions [9], measurement processing [10], and control design approaches [11] for parameterized dynamical systems. Ensemble control theory has been used to address challenging control tasks for large-scale dynamical problems, including brain dynamics [12] and Magnetic Resonance Imaging (MRI) [13]. Furthermore, ensemble control was shown to be effective in pulse design for Nuclear Magnetic Resonance (NMR) accounting for the influence of systematic disturbances [14, 13]. The mathematical setting used for control synthesis for bilinear ensemble systems [13] can be applied to develop laser pulses for the atom interferometry procedure, and indeed the simplification of the Raman-Nath equations truncated to a two-level system results in the Bloch equations that appear in NMR [15, 16]. Ensemble systems are challenging because of infinite-dimensionality that results from a parameter that varies continuously on a compact interval. Whereas the state of each individual ensemble element is finite-dimensional, the state of an ensemble system lies on a Hilbert space. In this context, methods designed for unified representations have been studied with the goal of reducing computational complexity. Moment representations have been shown to be effective in significantly reducing the size of ensemble system dynamics [17]. Moment dynamics are essentially a spectral approximation of the ensemble in Hilbert space, with the spectral accuracy property that ensures exponential convergence to the continuum state with increasing approximation order. This representation enables design of control sequences for ensemble systems in a space of greatly reduced dimension without significant loss of precision [18]. In this paper, we revisit the problem of matter-wave splitting involving a Bose-Einstein condensate (BEC) in a standing light wave potential [15]. We consider optical standing-wave beam splitters for which the effect of an optical pulse is subject to experimental inhomogeneities, which we represent by an uncertain factor applied to the light wave amplitude envelope. This results in an ensemble system, whose evolution we describe using the Legendre moment dynamics. In this reduced space, we design pulse sequences that transfer the state of the ensemble to a desired set-point, and evaluate the improved fidelity of the BEC momentum transfer within the quantum state space. This project is organized as follows. In Section II, we derive the mathematical representation of the BEC-splitting system from the Schrodinger's equation, and characterize the source of inhomogeneity. In Section III, we then reproduce a state-of-the-art method of using square pulse sequences for optical splitting of matter-waves without compensating for inherent inhomogeneity [2]. 
In Section IV, we describe the Legendre moment representation of the BEC splitting dynamics, our proposed method for light pulse design within the moment space, and the resulting optimal control formulation. We present the computed optimal controls and simulations of the resulting momentum transfers in Section V for several diffraction orders, which show a significant improvement in fidelity with respect to the benchmark method. Finally, we conclude in Section VI. ## II Ensemble System for BEC Splitting We consider a dilute BEC composed of an initially stationary population of atoms, which is exposed to a standing-wave light field that excites the cluster of particles with similar, yet distinct intensities. The governing dynamics for this system are described by a one-dimensional Schrodinger's equation for a single atom. Inhomogeneity in the light beam is represented by a multiplicative parameter \(\varepsilon\) that scales the light shift potential amplitude \(\Omega(t)\), which leads to a wave function \[i\dot{\psi}(x,t,\varepsilon)\!=\!\left(-\frac{\hbar}{2m}\frac{d^{2}}{dx^{2}} \!+\!\varepsilon\Omega(t)cos\left(2k_{0}x\right)\right)\psi(x,t,\varepsilon). \tag{1}\] The time-dependent parameter \(\Omega(t)\) serves as the control input to the system available to the experimenter, and \(k_{0}\) is the wave vector of the light field. We suppose that \(\varepsilon\in K\) is the physical representation of the unequal influence of the control pulse on individual atoms in the BEC sample, where \(K\equiv[1-\delta,1+\delta]\) is a compact interval. Each atomic particle can be viewed as corresponding to a unique \(\varepsilon\), which permits us to use this parameter as an identifier to index the individual systems in the sample. Equation (1) can be further developed into a sequence of coupled Raman-Nath equations by describing the wave function within the Bloch basis. The resulting differential equations dictate the time evolution of the beam-splitter state populations (represented by ground state \(C_{0}\), and symmetric and anti-symmetric superpositions of momentum states \(C_{2n}^{+}\) and \(C_{2n}^{-}\), for \(n=0,1,2,\ldots\)) and the manner in which these interact. This infinite-dimensional system of ordinary differential equations (ODEs) is [15] \[i\dot{C_{2n}}(k,t)=\frac{\hbar}{2m}(2nk_{0}+k)^{2}C_{2n}(k,t)\\ +\frac{\varepsilon\Omega(t)}{2}[C_{2n-2}(k,t)+C_{2n+2}(k,t)]. \tag{2}\] We make several assumptions in order to further simplify these infinite-dimensional Raman-Nath equations: * The dependence on parameter \(k\) is dropped by assuming the system to be narrowly distributed around the value of \(k=0\). * The coupling of the states \(C_{0}\) and \(C_{2n}^{+}\) with their anti-symmetric counterparts \(C_{2n}^{-}\) is considered negligible by assuming that \(k\ll k_{0}\). * A value \(N^{+}\) is defined such that all levels above \(C_{2N^{+}}^{+}\) are permanently unpopulated. This is achievable by assuming that all atoms are at rest at the beginning and that \(\Omega(t)/2\ll(2N^{+})^{2}\hbar k_{0}^{2}/2m=(2N^{+})^{2}\omega_{r}\), where \(\omega_{r}\) is the photon recoil frequency. This mathematical setting has been developed in a variety of previous studies on matter-wave splitting techniques [19, 15, 2]. 
The resulting state-space system is described by \[\frac{d}{dt}\begin{bmatrix}C_{0}(t,\varepsilon)\\ C_{2}^{+}(t,\varepsilon)\\ \vdots\\ C_{2N}^{+}(t,\varepsilon)\end{bmatrix}=-i\omega_{r}A(\varepsilon,\Omega(t)) \begin{bmatrix}C_{0}(t,\varepsilon)\\ C_{2}^{+}(t,\varepsilon)\\ \vdots\\ C_{2N}^{+}(t,\varepsilon)\end{bmatrix}, \tag{3}\] where \(A(t,\varepsilon)\) is a real \((N^{+}+1\times N^{+}+1)\) symmetric matrix defined by \[A(\varepsilon,\Omega(t))=\begin{bmatrix}0&0&0&\ldots&0\\ 0&4&0&\ldots&0\\ 0&0&16&\ldots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\ldots&(2N^{+})^{2}\end{bmatrix}\\ +\frac{\varepsilon\Omega(t)}{2\omega_{r}}\begin{bmatrix}0&\sqrt{2}&0&0& \ldots&0\\ \sqrt{2}&0&1&0&\ldots&0\\ 0&1&0&1&\ldots&0\\ 0&0&1&0&\ddots&\vdots\\ \vdots&\vdots&\vdots&\ddots&\ddots&1\\ 0&0&0&\ldots&1&0\end{bmatrix}. \tag{4}\] For the system in Equations (3)-(4), we define the state vector \(C^{+}=[C_{0},C_{2}^{+},...,C_{2N^{+}}^{+}]\in\mathbb{C}^{N^{+}+1}\). A goal of this study is to enable constrained optimization approaches for pulse design, for which the complex dynamics in Equations (3)-(4) must be converted to a real-valued system. We thus define \(C_{\mathbb{R}}^{+}\in\mathbb{R}^{2(N^{+}+1)}\) such that \(C_{\mathbb{R}}^{+}=[C_{0,Re},C_{0,Im},C_{2,Re}^{+},C_{2,Im}^{+},...,C_{2N^{+}, Re}^{+},C_{2N^{+},Im}^{+}]\), in terms of which the time evolution dynamics are \[\frac{d}{dt}C_{\mathbb{R}}^{+}=\omega_{r}A(\varepsilon,\Omega(t))\otimes \begin{bmatrix}0&-1\\ 1&0\end{bmatrix}C_{\mathbb{R}}^{+}, \tag{5}\] where \(\otimes\) is the Kronecker product. In the above representation, the momentum dynamics for each atom are truncated to a finite-dimensional quantum system of \(N^{+}+1\) levels. Nonetheless, due to the presence of parameter \(\varepsilon\), there is a collection of infinitely many systems that must be steered simultaneously by a common control input. Next, we examine methods to design control pulses for momentum transfer of the system (5) that are invariant to a continuum of values of the uncertain parameter \(\varepsilon\). ## III Splitting matter using square pulses A previously proposed approach for splitting matter waves uses a sequence of two square pulses for the envelope of an optical input that is applied to a dilute BEC in a standing light wave potential. The approach has been shown to have a high degree of precision in simulation, and can be applied in practice [15, 20]. The drawback of this method is its use of a single nominal system to define the dynamic response of each atom in the BEC to the optical pulse, so there is no compensation for inherent inhomogeneities. We describe the square-pulse design procedure below, and extend the original methodology with pulses of equal amplitude to the design of sequences of pulses with different amplitudes, which has been shown to enable higher fidelity [2]. ### _Square optical pulse design for a single system_ The square pulse optimal design problem involves the optimization of five parameters, which are illustrated in Figure 1. The decision variables are the time durations \(\tau_{1}\) and \(\tau_{3}\) of the pulses, the time interval \(\tau_{2}\) between the pulses, and the amplitudes \(\Omega_{1}\) and \(\Omega_{2}\) of the first and second square pulses, respectively. These parameters determine the final state of the system after the pulse is employed, which can be described by continuously applying the variation of parameters formula to Equation (3). 
The state at the terminal time \(\tau_{ps}=\tau_{1}+\tau_{2}+\tau_{3}\) of the pulse sequence is \[\begin{split} C^{+}(\tau_{ps},\varepsilon)&=e^{-i \omega_{r}\tau_{3}A_{3}}e^{-i\omega_{r}\tau_{2}A_{2}}e^{-i\omega_{r}\tau_{1}A_ {1}}C^{+}(0,\varepsilon)\\ &\text{s.t.}\;\;A_{1}=A(\varepsilon,\Omega_{1}),\;A_{2}=A( \varepsilon,0),\\ A_{3}&=A(\varepsilon,\Omega_{2}),\;\tau_{ps}=\tau_{1}+ \tau_{2}+\tau_{3}.\end{split} \tag{6}\] In the baseline scenario, we suppose that \(\varepsilon=1\). The control design goal is to transfer the momentum state of the sample from an initial rest state \(C^{+}(0)=[1,0,0,\ldots,0]^{T}\) to a final desired state \(C_{f}^{+}\). This desired state is typically defined with respect to the \(L^{2}\)-norm (in quantum momentum space), in which an energy level \(2n\) is reached at the end of the sequence of pulses, so that \(\|C_{f,2n}^{+}\|_{2}=\|C_{f}^{+}\|_{2}=1\). For this control task, the optimization problem can be formulated as \[\begin{split}\min_{\Omega_{1},\Omega_{2},\tau_{1},\tau_{2},\tau_{ 3}}&\|C_{f}^{+}|-|C^{+}(\tau_{ps},1)|\|_{2}\text{ as in Eq.}\eqref{eq:C_f}\\ s.t.&\text{Dynamics in Equations (4)-(5)}.\end{split} \tag{7}\] The procedure described above follows the original methodology for the design of square pulse sequences for matter wave splitting. Because a nominal parameter \(\varepsilon\) is used, the approach does not account for non-uniform influence of the pulse on the dynamical behavior of individual atoms in the BEC, and we expect that control performance is not robust to values of \(\varepsilon\) away from unity. We extend the formulation below by adding dynamic constraints for samples of \(\varepsilon\subset K\) as an attempt to account for the dynamics of an ensemble of atoms that are affected inhomogeneously by the applied pulse sequence. ### _Square optical pulse design for ensemble system_ We extend the optimization problem (7) defined in the previous section to account for multiple values of \(\varepsilon\). Because it is intractable to define dynamic constraints for each system in an infinite-dimensional ensemble, we define an objective function as a summation of minimum \(L_{2}\) norm objectives for \(m\) values of \(\varepsilon\) sampled directly from the parameter domain \(K\). The optimization is conducted subject to dynamic constraints for each of the same \(m\) values of \(\varepsilon\). We define the finite collection of parameters \(\varepsilon_{1:m}=\{\varepsilon_{1},\varepsilon_{2},\ldots,\varepsilon_{m}\} \subset K\), such that \(\varepsilon_{1}=\arg\min_{K}\varepsilon\), \(\varepsilon_{m}=\arg\max_{K}\varepsilon\), with intermediate samples uniformly spaced on \([\varepsilon_{1},\varepsilon_{m}]\). The simultaneous optimization for all systems indexed by \(i\in\{1,2,\ldots,m\}\) is formulated as \[\begin{split}\min_{\Omega_{1},\Omega_{2},\tau_{1},\tau_{2},\tau_{ 3}}&\sum_{i=1}^{m}\||C_{f}^{+}|-|C^{+}(\tau_{ps},\varepsilon_{i})| \|_{2}\\ s.t.&\text{Dynamics in Equations (4)-(5)}\; \varepsilon=\varepsilon_{i},\\ &\qquad\forall i=1,\ldots,m.\end{split} \tag{8}\] The control pulse sequences that are obtained by solving the optimization problems (7) and (8) will be used as benchmarks in Section V to demonstrate the performance improvement gained by way of the proposed moment dynamics method. Fig. 1: Illustration of a square pulse sequence control \(\Omega(t)\) and parameters. 
## IV Constrained Ensemble Problem Using Moment Dynamics We present a novel approach to improve light pulse design methods for matter-wave splitting by compensating for the inherent inhomogeneities that are present in physical systems. In this section, we derive a polynomial moment based representation of the ensemble dynamics of the matter wave splitting process, and formulate an ensemble control problem for optimal state transfer of a BEC sample in momentum space. The polynomial moment ensemble approach has been applied recently to design open-loop controls to transfer ensemble systems between initial and target states in Hilbert space, demonstrating low terminal state error [17, 21, 9]. The method enables a control problem for an infinite-dimensional continuum to be reduced to a finite-dimensional control problem by transforming the state space using a polynomial approximation that can then be truncated at a sufficiently high order. The transformation facilitates the design of a single control input that steers the moments to a desired representation in the spectral moment space, which is equivalent to steering each individual element of the ensemble to its target in the state space [14]. The moment-based approach for ensemble control systems can be defined using any basis from a diverse collection of orthogonal polynomials. For this project, we have chosen to use Legendre polynomials as the basis of the moment system, because this particular basis has desirable properties for approximation of continuous functions on compact intervals [18]. This representation has also been used previously for controlling the Bloch equations that represent a two-level quantum system in the context of NMR [14]. ### _Moment Systems using Legendre Polynomials_ The concept of moment space representation begins with a function defined within a separable Hilbert space \(\mathcal{H}\). If a basis \(\{|\psi_{k}\rangle:k\in\mathbb{N}^{d}\}\) can be defined (for simplicity, we will use \(d=1\)), then the moment of order \(k\) for a function \(x(t)\) can be defined by \[m_{k}(t)=\langle x(t)|\psi_{k}\rangle. \tag{9}\] For an ensemble of systems indexed by a parameter varying in the compact interval \(K=[-1,1]\), it is possible to obtain the equivalent Legendre moments by using the respective orthogonal polynomials \(P_{k}(\varepsilon)\), obtained using the recursive relation given, after normalization, by \[\varepsilon P_{k}=c_{k-1}P_{k-1}+c_{k}P_{k+1}, \tag{10}\] where \(P_{0}(\varepsilon)=1/\sqrt{2}\), \(P_{1}(\varepsilon)=\sqrt{3/2}\varepsilon\), \(\ldots\), are the normalized Legendre polynomials and \(c_{k}=(k+1)/\sqrt{(2k+1)(2k+3)}\). In this setting, the Legendre moment of order \(k\) for the vector of quantum states \(C^{+}(t,\varepsilon)\) is defined by \[m_{k}(t)=\langle C^{+}(t),P_{k}\rangle=\int_{-1}^{1}C^{+}(t, \varepsilon)P_{k}(\varepsilon)d\varepsilon. \tag{11}\] Taking the time derivative of Equation (11) yields the equivalent moment dynamics of the ensemble system. The use of the Legendre polynomial basis to define moment spaces was shown to possess a variety of advantages for representing ensemble systems. For instance, the Legendre polynomials form an orthogonal basis on the Hilbert space \(\mathcal{H}\), which induces an isometry between the moment space and the quantum ensemble system. The preservation of metrics through this transformation has advantages for defining tractable optimization formulations to represent optimal control problems for ensemble systems. 
The recursive relation in Equation (10) also enables the description of moment dynamics defined by bounded operators. Finally, the control profile is preserved with transformation between the moment space and state space, so that practical control design is possible using the moment dynamics method. ### _BEC Splitting in the Moment Space_ We apply the transformation in Equation (11) to the time-evolution model for the quantum system described by Equations (4)-(5) to obtain the dynamics of the corresponding Legendre moments. The quantum ensemble system is defined for the ensemble parameter \(\varepsilon\in K=[1-\delta,1+\delta]\), such that \(\delta\in(0,1)\), which is a different compact interval than the domain of the Legendre polynomial basis. Therefore, we define a parameter \(\varepsilon^{*}\) such that \(\varepsilon=1+\delta\varepsilon^{*}\), meaning that \(\varepsilon^{*}\in[-1,1]\), and the moments are calculated using the Hilbert space related to parameter \(\varepsilon^{*}\). In this setting, the moment dynamics for \(m(t)=[m_{0}(t),m_{1}(t),\ldots,m_{N-1}(t)]\) are described by \[\dot{m(t)} =\omega_{r}\left[I(N)\otimes A(1,\Omega(t))\otimes\begin{bmatrix} 0&-1\\ 1&0\end{bmatrix}\right] \tag{12}\] \[+\mathcal{C}(N)\otimes(A(\delta,\Omega(t))-A(\delta,0))\otimes \begin{bmatrix}0&-1\\ 1&0\end{bmatrix}]\cdot m(t),\] where \(I(N)\) is an identity matrix of dimension \(N\times N\) and \(\mathcal{C}(N)\) is defined by \[\mathcal{C}(N)=\begin{bmatrix}0&c_{0}\\ c_{0}&0&c_{1}\\ &c_{1}&\ddots&\ddots\\ &&\ddots&0&c_{N-2}\\ &&&c_{N-2}&0\end{bmatrix}. \tag{13}\] With the Legendre moment dynamics as defined in Equation (12), we can state an optimization problem that approximates the optimal control problem for the ensemble in state-space. The orthogonality property of the Legendre polynomial basis defines an isometry between the two studied spaces, which enables a metric to be used to specify an objective for the optimization problem. Moreover, it is also possible to include inequality constraints in the optimization problem, which we will apply in the control design. We are interested specifically in enforcing limits on the absolute amplitude and the rate of change of the control function. Specifying the final state in moment space requires an additional reformulation. In the state-space, the optimization objective is defined in terms of the absolute value of the final state, as in Equation (7). To express an objective function that depends on the state of an ensemble system, we formulate the objective in moment space. To that end, we minimize the magnitude of the undesired energy levels by aiming to nullify their states, i.e., \(\|C_{f}^{+}\circ(\mathbf{1}_{2(N^{+}+1)}-e_{n+1}\otimes\mathbf{1}_{2})\|_{2}=0\). The resulting optimization problem in moment space is \[\min_{\Omega(t)} \|m(T)\circ[\mathbf{1}_{N}\otimes(\mathbf{1}_{2(N^{+}+1)}-e_{n+ 1}\otimes\mathbf{1}_{2})]\|_{2}\] \[s.t. \text{ Dynamics in Equations (\ref{eq:11})-(\ref{eq:11})},\] \[\Omega_{min}\leq\Omega(t)\leq\Omega_{max},\] \[\Delta\Omega_{min}\leq\dot{\Omega}(t)\leq\Delta\Omega_{max},\] where \(T\) is a predefined time horizon, similar to the parameter \(\tau_{ps}\) used in problems (7) and (8). ## V Control Synthesis Results The Legendre moments method for matter-wave splitting ensemble control design is evaluated by comparing its performance with that of the square pulse sequence method. We first compare the Legendre moments approach with the controls obtained by solving the problem in Equation (7). 
A similar comparison is then done with respect to controls obtained by solving the problem in Equation (8), which seeks to account for variation in \(\varepsilon\) by sampling values directly in the ensemble space \(K\). To compare the performance of the various control designs in achieving the goal of transferring the ensemble to the target in Hilbert space on \(K\), we define an index \(I_{e}\) based on the objective function for the pulse design in the quantum state space. This performance index, which we seek to minimize, is defined as \[I_{e}=\int_{1-\delta}^{1+\delta}\||C_{f}^{+}|-|C^{+}(T,\varepsilon)|\|_{2}d\varepsilon. \tag{15}\] ### _Computational Approach_ We solve the optimal control problem (14) using a general iterative scheme for computing optimal control inputs for nonlinear systems [22]. For a general continuous-time nonlinear control system of the form \(\dot{x}(t)=f(x(t),u(t))\), where \(x(t)\) and \(u(t)\) are state and control vectors, respectively, a zero-order hold assumption is used to define the evolution of the system using a piece-wise constant control input. Beyond using a traditional first-order discretization and linearization, we use an approach that takes advantage of a higher-order Taylor series expansion to accurately represent linearization of nonlinear dynamics and their Jacobian at each iteration [23]. Beginning with an initial guess of the control function, the algorithm solves an approximation of the optimization problem in (14) to yield the optimal vector \(\Delta U\) of time-discretized variations in the control input that improves the objective value. This results in a sequence of quadratic programs, which, though each can be efficiently solved, require highly complex algebraic formulation of the dynamics in Equations (12)-(13) when truncated at a high momentum level \(N^{+}\) and sufficiently high polynomial moment order to represent the quantum and ensemble dynamics in their respective Hilbert spaces. In contrast to previous applications of this approach to the control of quantum ensemble systems [18], the size and complexity of the dynamics examined here requires the iterative use of symbolic algebra to compute the Taylor expansion of the nonlinear dynamics, which becomes extremely complex. Once the moment dynamics and the respective symbolic expressions are obtained, the pulse is designed by the iterative application of quadratic programming to solve the optimization problem in Equation (14). To apply the methodology defined in [23], the time domain \([0,T]\) is finely discretized by using a small time step. We use a nominal value of \(\Delta t=0.001\) over a total time horizon of \(T=3\), and this results in a total of 3000 variables and 12000 constraints for optimizing the variation \(\Delta U\in\mathbb{R}^{3000}\) that is added to the control function at each iteration. The same momentum level truncation at \(N^{+}=9\) and moment order \(N=20\) are used in all of the computations. 
``` 0:\(n\), Initial pulse guess \(U\), Moment dynamics \(M(t,m(t),u(t))\), number of time steps \(N_{steps}\), \(\lambda\), \(f(m(t))=m(t)\circ[\mathbf{1}_{N}\otimes(\mathbf{1}_{2(N^{+}+1)}-e_{n+1} \otimes\mathbf{1}_{2})]\) 0: Cost Function in Equation 14\(\leq\) Tolerance 1:while\(\|f(m(T))\|\geq\) Tolerance do 2:\(m(0),m(\Delta t),\ldots,m(T)\gets F(t,m(t),U)\) 3:\(i\gets 1\) 4:while\(i\leq N_{steps}\)do 5:\(A_{i}\leftarrow\frac{\partial}{\partial m}F(i)\Delta t,m((i)\Delta t),U_{i+1})\) 6:\(B_{i}\leftarrow\frac{\partial}{\partial U}F(i)\Delta t,m((i)\Delta t),U_{i+1})\) 7:\(i\gets i+1\) 8:endwhile 9:\(H\leftarrow[A_{N_{steps}}\ldots A_{2}B_{1}|A_{N_{steps-1}}\ldots A_{3}B_{2}|\ldots\) \(|A_{N_{steps}}B_{N_{steps-1}}|B_{N_{steps}}]\) 10:\(\Delta U\leftarrow\min_{\Delta U}\Delta U^{T}(H^{T}H+\lambda I)\Delta U\) \(+f(m(T))^{T}H\Delta U\) 11:\(U\gets U+\Delta U\) 12:endwhile ``` **Algorithm 1** Iterative Moment Pulse Design Algorithm The computational procedure used to synthesize ensemble controls is described in Algorithm 1, in which a cost function is iteratively reduced until a desired error tolerance is reached. The methodology can be described as consisting of three stages: (1) propagation of moment states using the current control profile; (2) linearization of time-localized dynamics; (3) and an update of the control profile. Both the propagation of moment states and respective linearizations are performed using a Taylor series expansion of higher order over the symbolic expression that represents the dynamics in Equation (12). This approach is adopted for improved precision in the linearized approximation defined by the set of matrices \(A_{k}\) and \(B_{k}\), which are coupled in a matrix \(H\), used to estimate the variation of the moment state at the final time step. The optimization problem can be represented using a quadratic program with a quadratic objective function and linear constraints, which we solve using a quadratic programming solver. This function has a penalty term scaled using a parameter \(\lambda\), which is used in order to avoid high variations in the defined control variation \(\Delta U\), which would invalidate previously obtained linearized transformations. The related code is developed and executed on MATLAB(tm), version R2023a. In the following computational studies, the constraint bound values defined in Equation (14) are set to \(\Omega_{max}=100\), \(\Delta\Omega_{min}=-500s^{-1}\), and \(\Delta\Omega_{min}=500s^{-1}\). We examine control designs for various target states in momentum space with real controls where \(\Omega_{min}=-100\), as well as with strictly non-negative real-valued control functions where \(\Omega_{min}=0\). Our computational results are described in the following section. Table I contains a list of control synthesis problems specified by different values of parameters \(n\) and \(\Delta\Omega_{min}\), and the resulting time required to compute the ensemble controls. We observe that the computation time is affected by the increase in complexity required to achieve a higher momentum level \(n\). All simulations were performed on a 2023 MacBook Pro with Apple(R) M2 Pro Processor at 3.4 GHz and 16GB of memory. ### _Moment Ensemble and Square Pulse Design_ To demonstrate the performance improvement of the Legendre moment dynamics method with respect to the square pulse sequence obtained by solving problem (7), we compare controls computed for an ensemble defined by \(\delta=0.1\). 
The square pulses are designed using a maximum momentum level \(N^{+}=24\), and the parameters that define the optimal pulses for target momentum states \(n=1,2,3,4\) are given in Table II. For control synthesis using the proposed moment ensemble method, we use a moment order \(N=20\) and truncate the momentum level at \(N^{+}=9\). The resulting controls are validated by applying them to simulations of the entire ensemble with \(\varepsilon\in[1-\delta,1+\delta]\) with the momentum levels truncated above \(N^{+}=24\). The real-valued and positive ensemble controls and terminal state error functions for the ensemble controls and square pulse sequences are compared in Figure 2 for target momentum levels \(n=1\), \(2\), \(3\), and \(4\). The square pulse sequence performs as expected in each case, with error that is negligible at the nominal value of \(\varepsilon=1\), but which quickly increases as \(\varepsilon\) diverges from unity. In contrast, we see that the terminal state error for the simulation in which the controls obtained by solving problem (14) are applied to the ensemble remains quite low for all values of the ensemble parameter \(\varepsilon\in[0.9,1.1]\) for target momentum levels \(n=1\), \(2\), \(3\), and still shows a significant performance advantage in the case of a target state \(C_{f}^{+}=C_{f,2n}^{+}\) with \(n=4\). The proposed method shows clear improvement with respect to all tested square pulse sequences in overall performance for momentum transfer of the ensemble of systems for positively-constrained controls as well as real-valued ones. Observe that the amplitude and variation of the controls obtained for \(n=4\) are greater than for the lower valued momentum state targets, which can be explained by the need to manipulate higher frequency dynamics. Indeed, the constraints on amplitude and variation for control function are binding at many times during the optimization horizon, which necessarily limits the degree to which the algorithm can meet the optimization objective. There are inherent trade-offs between the constraint bound values, the optimization horizon, and the complexity of the control task, which we have illustrated here. The performance achieved by the proposed method is quantified in Table III, in which the index \(I_{e}\) defined in Equation (15) is given for the square pulse controls and the positive and real valued ensemble controls as shown in Figure 2, for which the index values are denoted by \(I_{e,sp}\), \(I_{e,+}\) and \(I_{e,\mathbb{R}}\), respectively. As seen in the top row of Figure 2, the results in Table III show that the positive valued ensemble controls result in a significant decrease in terminal error with respect to the square pulse sequence, and the real-valued ensemble controls are even better. We note that in practice, the control represents the amplitude envelope of an optical pulse, so \(\Omega(t)\) must remain positive in the present setting. Reformulation of the Raman-Nath equations and adjustment of the experimental setting may enable real-valued control inputs, however. ### _Moment Ensemble and Ensemble Square Pulse Design_ We also compare the ensemble control method to the square pulse controls obtained by solving the optimization problem in Equation (8). The comparison is made for three control design cases. First, the target state is \(n=1\) with \(\delta=0.1\); the second case has target state \(n=4\) with \(\delta=0.1\); and the third aims for \(n=1\) with \(\delta=0.4\). 
The parameter values obtained by solving the square pulse design problem (8) using multiple \(\varepsilon_{i}\) samples for \(i=1,\ldots,m\) are given in Table IV. The results for these scenarios are shown in Figure 3, and the values of the performance index \(I_{e}\) for the compared controls are shown in Table V. The table includes values \(I_{e,sp}\), \(I_{e,+}\), \(I_{e,\mathbb{R}}\), \(I_{e,3}\), and \(I_{e,10}\) for the nominal parameter square pulse, the positive ensemble control, the real-valued ensemble control, the square pulse with \(m=3\) samples, and the square pulse with \(m=10\) samples, respectively, for all three control design cases described above. The results show the limitations of the square pulse design, even when multiple samples are used over the ensemble parameter domain. For the first case, shown at left in Figure 3, the ensemble square pulse approach (Problem (8)) provides little improvement in relation to the nominal square pulse (Problem (7)). The few degrees of freedom of the method and the small domain \(K\) limit the improvement gained by sampling in the ensemble space. In the second and third cases, however, the improvement over using only one nominal value of \(\varepsilon\) is noticeable. Nonetheless, the ensemble moment method exhibits significant improvements over the square pulse designs based on sampling. 

\begin{table} \begin{tabular}{||c|c|c|c|c|c||} \hline \(n\) & \(\Omega_{1}/\omega_{r}\) & \(\Omega_{2}/\omega_{r}\) & \(\omega_{r}\tau_{1}\) & \(\omega_{r}\tau_{2}\) & \(\omega_{r}\tau_{3}\) \\ \hline \hline 1 & 3.9865 & 2.2849 & 0.4744 & 0.9427 & 0.4181 \\ \hline 2 & 13.0036 & 9.5440 & 1.000 & 0.7190 & 1.000 \\ \hline 3 & 32.4012 & 34.7591 & 0.1905 & 0.5523 & 0.1913 \\ \hline 4 & 41.4215 & 41.4263 & 1.2653 & 0.8002 & 1.9952 \\ \hline \end{tabular} \end{table} TABLE II: Optimal square pulse design parameters. 

\begin{table} \begin{tabular}{||c|c|c||} \hline \(n\) in \(C_{f}^{+}\) & \(\Omega_{min}\) & Computation time (s) \\ \hline \hline 1 & 0 & 866.4169 \\ \hline 2 & 0 & 827.9643 \\ \hline 3 & 0 & 1079.8969 \\ \hline 4 & 0 & 1039.5165 \\ \hline 1 & -100 & 700.5696 \\ \hline 2 & -100 & 845.3960 \\ \hline 3 & -100 & 1001.5283 \\ \hline 4 & -100 & 1088.8991 \\ \hline \end{tabular} \end{table} TABLE I: Computation times for moment pulse design. 

TABLE III: Performance results for moment ensemble controls for \(\delta=0.1\). 
\begin{table} \begin{tabular}{||c c c|c c c c c||} \hline \(n\) & \(\delta\) & \(m\) & \(\Omega_{1}/\omega_{r}\) & \(\Omega_{2}/\omega_{r}\) & \(\omega_{r}\tau_{1}\) & \(\omega_{r}\tau_{2}\) & \(\omega_{r}\tau_{3}\) \\ \hline \hline 1 & 0.1 & 3 & 4.6404 & 4.2278 & 0.1943 & 0.9618 & 0.5286 \\ \hline 1 & 0.4 & 3 & 10.6354 & 6.6525 & 1.2038 & 3.5000 & 0.5420 \\ \hline 4 & 0.1 & 3 & 64.3286 & 27.5580 & 0.0924 & 3.4724 & 0.6647 \\ \hline 1 & 0.1 & 10 & 4.0446 & 4.2140 & 0.3769 & 0.6386 & 0.6767 \\ \hline 1 & 0.4 & 10 & 60.0000 & 4.6034 & 0.0159 & 2.6005 & 0.5322 \\ \hline 4 & 0.1 & 10 & 60.8008 & 30.2755 & 0.0967 & 3.4694 & 0.6798 \\ \hline \end{tabular} \end{table} TABLE IV: Ensemble Square pulse design parameters. Fig. 3: Comparison of control performance, including the expanded square pulse design method for a collection of systems. The plots represent the design parameters (from left to right): \(\delta=0.1\) and \(n=1\); \(\delta=0.1\) and \(n=4\); and\(\delta=0.4\) and \(n=1\). Fig. 2: Comparison of the performance of optimal square pulse sequences with moment ensemble controls. For all target states, we design for \(\delta=0.1\), and use a time horizon of \(T=3\). The desired momentum levels for these computations are (from left to right) \(C_{2}^{+}\), \(C_{4}^{+}\), \(C_{6}^{+}\), and \(C_{8}^{+}\). The plots show (from top to bottom): the final error achieved for the square pulse and the moment ensemble method as a function of the ensemble parameter \(\varepsilon\); the optimal control function for strictly non-negative values of \(\Omega(t)\); and the optimal control when allowing real-valued inputs. Note that the skew-rate limit is binding in the control solutions computed for \(n=4\) in the right most column, and this is what limits the terminal state fidelity. We conclude that directly sampling the ensemble parameter space \(K\) for the optimal control problem is not an effective means to compensate for inhomogeneities in the experimental setting. In contrast, the proposed ensemble control method promises to achieve improved and homogeneous performance for an entire BEC ensemble in a matter wave interferometry experiment. ## VI Conclusion We present a method for designing constrained optical pulses for optimal matter-wave splitting of a Bose-Einstein condensate in the presence of experimental inhomogeneities, which induce a parameter uncertainty within the momentum evolution of the collection of atoms. An ensemble of systems is parameterized by the factor that modulates the optical pulse, and Legendre moments are used to represent the optimal control problem for an infinite dimensional system as an optimization problem in moment space. The results demonstrate that the fidelity achieved by the proposed method has advantages over recently developed state-of-the-art optical pulse sequences. In particular, the method achieves precise momentum transfer of the BEC to high diffraction orders with inherent robustness to inhomogeneity in the effect of the optical pulse on atoms in the sample, and yields continuously-varying pulses that can be tuned to maximize the effectiveness of equipment in experimental settings. We expect that future research will extend our results to more detailed models of BEC-splitting, as well as control protocols for additional experimental steps for state preparation and measurement. The moment method can be extended to incorporate more parameters to improve robustness to additional sources of uncertainty and inhomogeneity. 
Our computational approach could be improved and further analyzed to characterize the trade-offs in this setting between truncations in the moment and quantum Hilbert spaces, constraint bound values, pulse duration, and control performance. Based on the successful validation of our approach in simulation, we expect that optimal pulses can be tested in experimental settings to confirm anticipated improvements in performance of matter-wave diffraction metrology.
2309.12877
FairComp: Workshop on Fairness and Robustness in Machine Learning for Ubiquitous Computing
How can we ensure that Ubiquitous Computing (UbiComp) research outcomes are both ethical and fair? While fairness in machine learning (ML) has gained traction in recent years, fairness in UbiComp remains unexplored. This workshop aims to discuss fairness in UbiComp research and its social, technical, and legal implications. From a social perspective, we will examine the relationship between fairness and UbiComp research and identify pathways to ensure that ubiquitous technologies do not cause harm or infringe on individual rights. From a technical perspective, we will initiate a discussion on data practices to develop bias mitigation approaches tailored to UbiComp research. From a legal perspective, we will examine how new policies shape our community's work and future research. We aim to foster a vibrant community centered around the topic of responsible UbiComp, while also charting a clear path for future research endeavours in this field.
Sofia Yfantidou, Dimitris Spathis, Marios Constantinides, Tong Xia, Niels van Berkel
2023-09-22T14:04:51Z
http://arxiv.org/abs/2309.12877v1
# FairComp: Workshop on Fairness and Robustness in Machine Learning for Ubiquitous Computing 

###### Abstract.

How can we ensure that Ubiquitous Computing (UbiComp) research outcomes are both ethical and fair? While fairness in machine learning (ML) has gained traction in recent years, fairness in UbiComp remains unexplored. This workshop aims to discuss fairness in UbiComp research and its social, technical, and legal implications. From a _social perspective_, we will examine the relationship between fairness and UbiComp research and identify pathways to ensure that ubiquitous technologies do not cause harm or infringe on individual rights. From a _technical perspective_, we will initiate a discussion on data practices to develop bias mitigation approaches tailored to UbiComp research. From a _legal perspective_, we will examine how new policies shape our community's work and future research. We aim to foster a vibrant community centered around the topic of responsible UbiComp, while also charting a clear path for future research endeavours in this field. 

fairness, bias, discrimination, responsible AI, ethical AI 

The UbiComp community needs to stay vigilant, ensuring that technological advancements are designed and deployed in a responsible and ethical manner. To this end, we aim to spark a discussion about the ethical, social, technical, and legal issues relevant to fair and ethical UbiComp research. From a social perspective, we look into how fairness research can be translated into this domain and identify pathways for ensuring that ubiquitous technologies do not cause any harm or infringe on any individual rights (Bahdan et al., 2015). From a technical perspective, we intend to take a closer look at the community's data collection, processing, and modeling practices to ideate fairness enhancement and bias mitigation targeted at UbiComp work. From a regulatory perspective, we set out to understand how proposed policies, such as the European AI Act, will frame the work of the community and drive future research. This balance between performance and fairness is envisaged as a viable way forward--an ideal compromise. These perspectives raise numerous challenges and questions that we seek to address in this workshop. How can we leverage existing fairness research and adapt it to the UbiComp domain? How can we define and quantify fairness in prevalent data (e.g., time-series) and model (e.g., regression, multi-class classification) modalities? How can data and labels be acquired ethically? How can we generate fair synthetic data or recruit representative real-world samples? How do we incorporate fairness into our technology development processes and deployment monitoring by design? Essentially, how do we better equip our community to deal with unfairness? 
Besides these questions, which hold significance for the UbiComp community, several challenges extend to other disciplines, including Philosophy, Sociology, Law, Psychology, or any of the broad range of subjects contributing to this area. Which ethical challenges arise when technologies are interwoven into everyday life until they are indistinguishable from it? What are the historical and systemic biases that frame the domain's research? How can ubiquitous applications be regulated without stifling innovation? 

## 2. FairComp Workshop 

We aim for FairComp to be an interdisciplinary forum that goes beyond soliciting publications and brings together academia and industry. Notably, we seek to bring together researchers and practitioners whose work lies within the ACM SIGCHI domains (e.g., UbiComp, HCI, CSCW), as well as FAccT, ML & AI, Social sciences, Philosophy, Law, Psychology, and others. Workshop organizers are actively engaged in the aforementioned themes and will encourage their network of colleagues and students to participate in the workshop. In particular, the goal of this workshop is to collaboratively: 

* _Assess_ the evolving socio-technical themes and concerns in relation to fairness across ubiquitous technologies, ranging from health, behavioural, and emotion sensing to human-activity recognition, mobility, and navigation. 
* _Map_ the space of ethical risks and possibilities regarding technological interventions (e.g., input modalities, learning paradigms, design choices). 
* _Envision_ new sensing and data-acquisition paradigms to fairly and accurately gather ubiquitous physical, physiological, and experiential qualities. 
* _Explore_ novel methods for bias mitigation and investigate their suitability for diverse ubiquitous case studies. 
* More generally, _start_ a discourse around the future of "ubiquitous fairness" and co-create research agenda(s) for meaningfully addressing it. 
* Finally, _consolidate_ an international network of researchers to further develop these research agendas through funding proposals and through steering future funding instruments. 

**Relevance and Impact to UbiComp.** With its strong community engaged in several themes (e.g., sensing, HCI, AI/ML), and its synergistic coalitions across varied domains (Sociology, Philosophy, Health Informatics, Law), UbiComp has a crucial role in paving the way for responsible, robust, and fair technological advancements. Coupled with the unique characteristics of ubiquitous technology, these advancements demand the development of distinct definitions, metrics, and methodologies to counteract the effects of bias. This calls for the creation of a subcommunity focused on fairness issues in the domain's technology. By raising awareness and advocating decentralized work, we should encourage every member of the community to integrate fairness considerations in their research. Collaborating with other disciplines, this workshop aims to promote scientific exchange and jointly create a comprehensive and effective framework for ensuring fairness in UbiComp technology. **Long-term Objectives.** This workshop will contribute to a deeper understanding of ethical challenges and opportunities surrounding the robust and fair use of ubiquitous technology. Under such efforts, we plan to build an active and long-lasting community around the workshop's theme. 
Finally, we intend to use the workshop's momentum, as well as the developed research agendas and their collaborative follow-ups, to prepare a special issue of a journal (e.g., IEEE Pervasive) after the conclusion of the workshop. We plan to make an open call for this issue but will especially invite workshop participants to submit their work. Furthermore, we will consolidate our workshop's insights (including discussion) into an article. ## 3. Workshop Structure **Workshop Topics.** The workshop aims to provide a platform for exchanging ideas that can shape the future of ubiquitous computing fairness and beyond and to rethink the role of UbiComp as an enabler of pervasive experiences free from biases. The main topics of interest include, but are not limited to: * New definitions, metrics, and criteria of fairness and robustness, tailored for ubiquitous computing * Indirect notions of fairness on devices (e.g., unfair resource allocation, energy, connectivity) * New methods for bias identification and mitigation * Bias, discrimination, and measurement errors in data, labels, and under-represented input modalities * New benchmark datasets for fairness and robustness evaluation (e.g., sensor data with protected attributes) * Geographical equity across datasets and applications (e.g., WEIRD research, Global South) * New user study methodologies beyond conventional protocols (e.g., Fairness-by-Design) * Robustness (e.g., out-of-distribution generalization, uncertainty quantification) of ML models in high-stake and real-world applications * Investigation of fairness trade-offs (e.g., fairness vs. accuracy, privacy, resource efficiency) * Implications of regulatory frameworks for UbiComp **Workshop Format.** We plan for an open, full-day workshop with 2 invited keynotes and 10 accepted papers that include completed and ongoing original empirical works, case studies, reviews, as well as position papers (subject to # of submissions). All papers will be presented as talks, including Q&A to allow researchers to engage in discussion with the workshop attendees. To further engage workshop participants, FairComp will include two interactive activities: a collaborative ideation session, where participants will be split into small groups to discuss the ethical, social, technical, and legal perspectives of UbiComp fairness under the guidance of invited experts and driven by questions provided by the organizers; and an interactive panel on _"Ethical & Responsible UbiComp: A Case for Fairness and Robustness"_ with keynote speakers and industry experts to further discuss reflections on their work with an open Q&A session with the audience, moderated by one of the co-organizers. The workshop will take place on-site, with accommodation for exceptional cases' remote attendance via Zoom ([https://zoom.us/](https://zoom.us/)). Additionally, Slack ([https://slack.com/](https://slack.com/)) will be used for facilitating social interaction. The entire workshop is estimated to last around 8 hours as illustrated in Table 1. Session I will begin with the first keynote. We contacted Prof. Flora Salim (University of New South Wales, Australia), who kindly agreed to share lessons about fairness and robustness on UbiComp. The first half will continue with paper presentations. It will conclude with an interactive activity aimed at sparking a discussion around UbiComp fairness perspectives among the participants while building future collaborations. Session II will start with the second keynote. Prof. 
Ricardo Baeza-Yates (Institute for Experiential AI of Northeastern University, USA) agreed to talk about computational fairness and human-centric computing. The remaining paper presentations will follow, along with a panel discussion (Dr Akhil Mathur from Meta AI agreed to be the industry expert pansellist) on the topic to consolidate ideas into an executable research agenda. The workshop will wrap up with the best paper award. **Estimated Number of Participants.** The growing interest in the workshop's theme is demonstrated by the proliferation of similar workshops at other prestigious conferences, including FairUMAP at UMAP, Trustworthy and Socially Responsible ML (TSRML) at NeurIPS, and Trustworthy ML in Healthcare at ICLR. Yet, none of these workshops focused on the particularities of ubiquitous technologies, nor did they target the UbiComp community. We consider 30-40 attendees an appropriate size for our workshop, allowing us to shape a comprehensive future research agenda, build collaborations, and consolidate an active community around the workshop theme. **Paper Selection and Publication.** Submissions will be reviewed by at least two or three reviewers (including organizers and external reviewers) and will be published by ACM. Our acceptance criteria will be a mixture of relevance, novelty, provocativeness, and research quality. Given the timely theme of this workshop, we are confident of attracting a large number of paper submissions which will enable us to organise a high-quality workshop. **Important Dates.** * Call for Papers (CfP): 18 April 2023 * Paper submission: 23 June 2023 * Notification to authors: 8 July 2023 * Camera-ready deadline: 31 July 2023 * Workshop Day: 9 October 2023 **Pre-workshop Activities.** We will disseminate the call for papers (CfP) through diverse channels, including mailing lists, our social and professional networks, local ACM chapters, the dedicated workshop website, and our respective institutional communication channels. The website ([https://faircomp-workshop.github.io/2023/](https://faircomp-workshop.github.io/2023/)) will be a key platform to disseminate information, including the CfP, crucial deadlines, profiles of the co-organizers, Technical Program Committee (TPC), workshop schedule, and activities. Moreover, the website will also serve as an archive of the workshop outcomes, containing the workshop's summary and other outputs. Upon acceptance, we will reach out to experts from academia and industry to compose a TPC to review and select author contributions and facilitate preparing a diverse and thematically-rich program. In constituting the TPC, we will aim to find a balance among the themes relevant to the workshop. We will invite submissions of different kinds, ranging from technical papers, work-in-progress, and reviews to position papers, provocations, and case studies. The submissions will be 4-6 pages long, including references, and organizers will provide a template on the website. \begin{table} \begin{tabular}{l l l} \hline \hline & **Time** & **Activity** \\ \hline \multirow{4}{*}{**UbiComp**} & 09:00–09:15 & **Welcome:** Introduce organizers, participants, workshop objectives, and schedule. \\ & 09:15–10:15 & **Keynote \#1:** Presentation by an invited expert \\ & (4:5-min talk followed by 15-min Q&A). \\ & 10:15–10:30 & **Paper presentations \#1:** 2 paper presentations \\ & (5-min talk followed by 2-min Q&A). 
\\ & 10:30–10:45 & **Short Break** \\ & 10:45–12:00 & **Interactive Activity:** Collaborative ideation session in small groups about the ethical, social, technical, and legal perspectives of UbiComp fairness. \\ \hline \multirow{4}{*}{**UbiComp**} & 12:00–13:00 & Lunch Break \\ & 13:00–14:00 & **Keynote \#2:** Presentation by an invited expert \\ & (45-min talk followed by 15-min Q&A). \\ \cline{1-1} & 14:00–14:30 & **Paper presentations \#2:** 3 paper presentations \\ \cline{1-1} & (5-min talk followed by 2-min Q&A). \\ \cline{1-1} & 14:30–14:45 & **Short Break** \\ \cline{1-1} & 14:45–15:30 & **Paper presentations \#3:** 5 paper presentations (5-min talk followed by 2-min Q&A). \\ \cline{1-1} & 15:30–15:45 & **Short Break** \\ \cline{1-1} & 15:45–16:30 & **Panel Discussion:** Keynote speakers and invited industry experts discuss the topic of “Ethical \& Responsible UbiComp: a case for fairness and robustness”. \\ \cline{1-1} & 16:30–17:00 & **Wrap Up:** Closing remarks and best paper award. \\ \hline \hline \end{tabular} \end{table} Table 1. Proposed schedule for the FairComp workshop. 

**Post-workshop Activities.** During and following the workshop, the results and outcomes will be blogged on the workshop website and disseminated in ACM Interactions. Drawing on the workshop submissions and interactive activities, we will propose a journal special issue (e.g., IEEE Pervasive) and encourage participants to collaborate on submissions around the developed research agendas. **Diversity Statement and Accessibility.** Our workshop is committed to creating a welcoming and inclusive environment for all attendees, regardless of race, gender, sexual orientation, religion, or ability. This belief will be implemented by ensuring the diverse selection of organizers, TPC members, and speakers, disseminating the CfP in mailing lists targeting under-represented communities in computing, promoting inclusive language, and disseminating the conference's code of conduct. We aim to make our workshop inclusive to diverse participants, including access to materials. We plan on ensuring accessibility throughout the workshop. Prior to submission, the authors will be asked to adhere to UbiComp's Accessible Submission Guidelines.1 Finally, in the weeks leading up to the workshop, we will conduct a survey with attendees to identify the accessibility needs for in-person participation to accommodate during the workshop in collaboration with the JEDI Chairs. Footnote 1: [https://www.ubicomp.org/ubicomp-iwe-2023/accessibilityAccessibility-guide.diaser/](https://www.ubicomp.org/ubicomp-iwe-2023/accessibilityAccessibility-guide.diaser/) 

## 4. Organizers 

Sofia Yfantidou is an early-stage researcher at the Aristotle University of Thessaloniki, and a Marie Sklodowska-Curie fellow at the Innovative Training Network "Real-time Analytics for the Internet of Sports". She works at the intersection of UbiComp and ML fairness. Her current research focuses on defining, quantifying, and mitigating biases in data and ML models for health and well-being. She is a Heidelberg Laureate Forum alumna and a Grace Hopper scholar. Website: [https://www.linkedin.com/in/sofiayfantidot/](https://www.linkedin.com/in/sofiayfantidot/) Dimitris Spathis is a research scientist at Nokia Bell Labs, Cambridge (UK) and a visiting researcher at the University of Cambridge. His work enables AI to make the most out of real-world multimodal and sequential data through label-efficient and robust ML. 
He previously worked at Microsoft Research, Telefonica Research, Oeado, and Questudio. His experience as an organizer of scientific meetings includes WellComp at Ubicomp '22, CHIL '23, the Federated Sensing tutorial at MobiCom '21, and ML4H co-located with NeurIPS '22. Website: [https://www.cl.cam.ac.uk/~ds806/](https://www.cl.cam.ac.uk/~ds806/) Marios Constantinides is a Senior Research Scientist at Nokia Bell Labs, Cambridge (UK) and a visiting researcher at the University of Cambridge. He works in the areas of human-computer interaction, UbiComp, and responsible AI. His current research focuses on building AI-based technologies that augment people's interactions and communication, with a particular focus on the workplace. He has been a member of the organizing committee of the Sensi-Blend workshop at UbiComp '21, and co-organized two Special Interest Groups (SIGs) at CHI '23 on the future of work and responsible AI. Website: [https://comarios.com/](https://comarios.com/) Tong Xia is a third-year PhD candidate at the University of Cambridge. Her research interests lie in data mining, ML, and UbiComp for public health and human well-being. Particularly, she is keen to develop data-efficient, high-performance, uncertainty-aware, and privacy-preserving mobile health systems. She previously worked at Recent, and she has been a committee member of the UK-Tsinghua association. She also served as the Posters&Demos session chair in UbiComp '22. Website: [https://xtxiatong.github.io](https://xtxiatong.github.io) Niels van Berkel is an Associate Professor at Aalborg University. His work focuses on the design and evaluation of intelligent computing systems, particularly in real-world contexts, publishing in HCI, Social Computing, and Ubiquitous Computing. He has previously served as an organizer of workshops at UbiComp (UbiTention '20, Mobile Human Contributions '18, Sensors & Behaviour '18) and CHI (2VT '21, Emergent Interaction '21), and served on the editorial board for IJHCS (2019-present) and the ACM TiiS Special Issue on Human-Centered Explainable AI. Website: [https://www.nielsvanberkel.com/](https://www.nielsvanberkel.com/)
2303.17954
DARKSIDE: A Heterogeneous RISC-V Compute Cluster for Extreme-Edge On-Chip DNN Inference and Training
On-chip DNN inference and training at the Extreme-Edge (TinyML) impose strict latency, throughput, accuracy and flexibility requirements. Heterogeneous clusters are promising solutions to meet the challenge, combining the flexibility of DSP-enhanced cores with the performance and energy boost of dedicated accelerators. We present DARKSIDE, a System-on-Chip with a heterogeneous cluster of 8 RISC-V cores enhanced with 2-b to 32-b mixed-precision integer arithmetic. To boost performance and efficiency on key compute-intensive Deep Neural Network (DNN) kernels, the cluster is enriched with three digital accelerators: a specialized engine for low-data-reuse depthwise convolution kernels (up to 30 MAC/cycle); a minimal overhead datamover to marshal 1-b to 32-b data on-the-fly; a 16-b floating point Tensor Product Engine (TPE) for tiled matrix-multiplication acceleration. DARKSIDE is implemented in 65nm CMOS technology. The cluster achieves a peak integer performance of 65 GOPS and a peak efficiency of 835 GOPS/W when working on 2-b integer DNN kernels. When targeting floating-point tensor operations, the TPE provides up to 18.2 GFLOPS of performance or 300 GFLOPS/W of efficiency - enough to enable on-chip floating-point training at competitive speed coupled with ultra-low power quantized inference.
Angelo Garofalo, Yvan Tortorella, Matteo Perotti, Luca Valente, Alessandro Nadalini, Luca Benini, Davide Rossi, Francesco Conti
2023-03-31T10:33:49Z
http://arxiv.org/abs/2303.17954v1
Darkside: A Heterogeneous RISC-V Compute Cluster for Extreme-Edge On-Chip DNN Inference and Training ###### Abstract On-chip DNN inference and training at the Extreme-Edge (_TinyML_) impose strict latency, throughput, accuracy and flexibility requirements. Heterogeneous clusters are promising solutions to meet the challenge, combining the flexibility of DSP-enhanced cores with the performance and energy boost of dedicated accelerators. We present Darkside, a System-on-Chip with a heterogeneous cluster of 8 RISC-V cores enhanced with 2-b to 32-b mixed-precision integer arithmetic. To boost performance and efficiency on key compute-intensive Deep Neural Network (DNN) kernels, the cluster is enriched with three digital accelerators: a specialized engine for low-data-reuse depthwise convolution kernels (up to 30 MAC/cycle); a minimal overhead datamover to marshal 1-b to 32-b data on-the-fly; a 16-b floating point Tensor Product Engine (TPE) for tiled matrix-multiplication acceleration. Darkside is implemented in 65nm CMOS technology. The cluster achieves a peak integer performance of 65 GOPS and a peak efficiency of 835 GOPS/W when working on 2-b integer DNN kernels. When targeting floating-point tensor operations, the TPE provides up to 18.2 GFLOPS of performance or 300 GFLOPS/W of efficiency - enough to enable on-chip floating-point training at competitive speed coupled with ultra-low power quantized inference. Heterogeneous Cluster; Tensor Product Engine; Ultra-Low-Power AI ## I Introduction The recent mega-trend aiming at deploying Machine Learning (ML) and Deep Learning (DL) at the extreme edge of the Internet-of-Things (IoT), usually referred to as Tiny Machine Learning (_TinyML_), has reached outstanding results. For example, MobileNets [1] have rapidly become state-of-the-art compute workloads used for classification and object detection inference tasks, but also as a flexible template for tasks not related to vision [2, 3, 4]. Next-generation TinyML IoT devices, however, will likely require also the capability to adapt the deployed DL model to new data directly in the field. Re-training the model on data centers with data collected on-field from the distributed IoT end-nodes might be expensive in terms of latency and power and inconvenient from the privacy and security viewpoints. Therefore, a common direction of TinyML is to rethink the deployed Deep Neural Network (DNN) as a dynamic model that can adapt by learning from newly sensed data directly on the device. Recent progress in this research area concerns DNN model tuning, partial on-chip training [5] or unsupervised continual learning [6], which have been applied successfully to many IoT applications, such as anomaly detection tasks [7]. Satisfying both the needs of TinyML inference and on-device adaptation requires devices that are highly flexible and efficient simultaneously on these two very different tasks. Inference in TinyML devices typically adopts low-bitwidth integer arithmetic, relying on well-established Quantization-aware training [8] and post-training quantization techniques [9]. Mixed-precision approaches [10, 11], where the activations and the weights of all DNN layers can be quantized with different precisions, are State-of-the-Art (SoA) solutions to reduce the accuracy drop compared to full-precision models (e.g., within a 3 to 6% range in ImageNet Top-1), while cutting the model footprint by a significant factor (\(\sim\)7\(\times\) on MobileNets [10]). 
Specialized digital accelerators like [12, 13, 14, 15] achieve outstanding performance (1-50 TOPS) and energy efficiency (10-100 TOPS/W) on DNN kernels by exploiting low-bitwidth integer arithmetic. Recently this approach has also been adopted in analog-digital mixed-signal solutions [16, 17], boosting energy efficiency up to hundreds of TOPS/W. However, these hardware units are highly specialized in terms of supported functionality and numerical precision and lack the flexibility needed to adapt to rapidly evolving TinyML models. A different solution, exploiting clusters of parallel, fully programmable architectures, would ensure the highest flexibility while still achieving competitive efficiency by leveraging instruction extensions that support multiple data precisions, and their combinations, in arithmetic instructions. Garofalo et al. [18] propose parallel RISC-V cores with SIMD sum-of-dot-product instructions and custom mac-load operations to achieve ASIC-like efficiency on symmetric DNN convolutions. To reduce the overhead of instruction decoding for multiple precision combinations, Ottavi et al. [19] proposed lightweight status-based mixed-precision computing support for a RISC-V processor, showing two orders of magnitude better efficiency than existing commercial microcontroller solutions. Supporting multiple and mixed precisions is not the only flexibility challenge. Unlike the previous generation of TinyML DNN models, SoA MobileNets and derived networks feature more heterogeneous workloads, with standard convolutions combined with point-wise and depth-wise kernels. Although they have lower computational complexity and a smaller memory footprint, depth-wise layers are characterized by low intrinsic data reuse [20]. For this reason, they are harder to accelerate with massive arrays of processing elements. As a result, in the DNN processing pipeline, Amdahl's effect moves the acceleration bottleneck toward depth-wise kernels. Likewise, data marshalling operations (e.g., low-bitwidth transpositions) commonly used in DNNs heavily rely on sub-byte swap operations, which also contribute to reducing the utilization of the arithmetic units. Introducing on-device training to the picture imposes yet different constraints in terms of performance and footprint, as training has stricter requirements on the data representation: integer arithmetic cannot be used due to its limited dynamic range. To enable novel learning algorithms at the extreme edge, a decisive effort is underway to adapt learning algorithms to lower-precision formats such as FP16 and FP8 [21, 22]. Despite this, the _TinyML_ on-chip training workload is still 10-100\(\times\) larger than inference [5], and the performance requirements remain very high. Accelerating these workloads with general-purpose processors would require massive cores, blowing up the SoC's area and power consumption unacceptably. Hence, fixed-function custom designs are still the most suitable solutions to deliver high performance within a TinyML-compatible area and power budget. We argue that a single _catch-it-all_ solution is infeasible with all these competing requirements. 
Instead, boosting end-to-end AI-enhanced applications will require _heterogeneous systems_ combining different acceleration engines for different kernels, coping with strict power and cost constraints [23]: multiple programmable cores provide flexible and efficient execution for generic parallel kernels, while specialized hardware accelerators provide an extra performance and efficiency boost on essential kernels that dominate the computational workload. In this work, we present Darkside, a Parallel Ultra-Low-Power (PULP)-based [23, 24] heterogeneous computing System on Chip (SoC) that targets emerging TinyML inference and on-chip training applications. We introduce four main innovations in Darkside: _1)_ RISC-V cores with advanced low-bitwidth mixed-precision integer computing capabilities; _2)_ a Depth-Wise Convolution Engine (DWE); _3)_ a low-overhead DataMover for marshalling operations; and _4)_ a low-power Tensor Product Engine (TPE) for efficient FP16 matrix multiplications. The cores and accelerators are tightly integrated into a shared-L1 cluster to enable advanced hardware/software cooperation. The chip has been fabricated in TSMC 65nm technology and achieves a peak integer performance (2-bit) of 65 GOPS at 1.2V with an efficiency of 835 GOPS/W at 0.75V. On TPE-accelerated FP16 workloads, it achieves up to 18.2 GFLOPS at 1.2V and a peak efficiency of 300 GFLOPS/W and 2.6 GFLOPS at 0.75V, achieving peak performance and efficiency similar to 8-bit integer operations. 

## II Darkside SoC Architecture 

Fig. 1 shows the architecture of the Darkside cluster. It is built around eight 81kGE RISC-V-based 32-bit processors (_RVNN_ cores), described in detail in Sec. II-B, and three specialized digital accelerators, the _TPE_, the _DWE_ and the _DataMover_. The heterogeneous cluster can be used to support complex ML models, such as those depicted in Fig. 2, through cooperation among its hardware compute units. Fig. 1: Overview of the Darkside cluster architecture, featuring 8 RVNN cores, the TPE, the DWE and the DataMover accelerators. Fig. 2: Darkside heterogeneous operation. The figure shows a DNN with 3 mixed-precision quantized layers and a final fully-connected layer using all the cluster blocks and communicating through software-managed buffers allocated on the shared L1 memory. To achieve high computing efficiency on a wide range of workloads, the key goal is to minimize the area and power impact of the specialized accelerators integrated into Darkside's cluster and improve their efficiency on data movements. To save area, we design the three accelerators to have small internal buffers, the minimum necessary to guarantee a datapath utilization close to 100%, while they use the 128 kB scratch-pad multi-banked L1 Tightly-Coupled Data Memory (TCDM) of the cluster as the primary data buffer. Moreover, to minimize their power consumption, especially when the accelerators are not used, we added clock gating cells and operand isolation gates. This strategy cuts the dynamic power consumption of the _idle_ accelerators with minimal additional logic. To improve the performance and the energy efficiency of the accelerators in data movement operations, and to ease their integration into the cluster, each of the three accelerators is incorporated as a Hardware Processing Engine (HWPE), using a standardized interface. Such an interface exposes to the rest of the cluster a wide data 
transfer port (typically much wider than 32-bit) to optimize the access to the primary data buffer and a control port that allows the RISC-V cores to program the accelerator through memory-mapped control registers, as visible in Fig. 1. In each HWPE, specialized internal _Streamers_ move data between the accelerators and the L1 TCDM memory through the data port, converting the memory accesses into data streams to feed the accelerator's datapath. The TCDM is divided into 32 4-kB SRAM banks, capable of serving 32 requests in parallel, and it is shared among the three specialized accelerators and the 8 RVNN general-purpose cores. The memory requests of all the cluster's compute units are routed through a one-cycle-latency hierarchical Heterogeneous Cluster Interconnect (HCI) (see Section II-A), which leverages a request/grant protocol and a word-level interleaving scheme to evenly distribute the requests, minimizing the access contentions toward the SRAM banks. The cluster also features a two-level hierarchical instruction cache (I$), implemented with latch-based SCM to improve the energy efficiency over energy-expensive SRAM cuts. It includes 8 private 512-B per-core caches plus 4 kB of two-cycle-latency shared cache to maximize efficiency on data-parallel code. A dedicated DMA controller, featuring a size similar to that of the cores (\(\sim\)84 kGE), efficiently manages the data transfers between the L2 (off-the-cluster) and L1 memory. The DMA supports 2-D data transfers and up to 16 outstanding transactions, hiding the latency of L2-L1 data transfers on data-intensive kernels [25], while saving energy compared to cache-based systems. The cluster also integrates a small Hardware Synchronization Unit (\(\sim\)30 kGE), which manages fine-grained parallel thread dispatching and clock-gating of idle cores waiting for synchronization, enabling low-overhead, fine-grained parallelism and thus high energy efficiency. The cluster resides in a dedicated power and clock domain. It is surrounded by other IPs integrated into a different power and clock domain, namely the _Fabric_ domain. The latter includes a controlling RISC-V processor, FLLs for clock generation, a standard set of I/O peripherals and 256 kB of L2 memory containing the code executed by both the compute cluster and the controlling RISC-V core. In the context of this work, the _Fabric_ domain serves as a programmable testbench for the cluster, which is the main architectural contribution. The communication bus between the _Fabric_ and the cluster domain is AXI4-based, and dual-clock first-in-first-out (FIFO) buffers are used for clock domain crossing. 

### _Heterogeneous Cluster Interconnect (HCI)_ 

To reduce area and simplify the arbitration scheme, the HCI is organized hierarchically in three different levels. At the first level, the TPE and the DWE are _statically multiplexed_ to share the same physical HWPE 288-bit data port, which is sized to meet the bandwidth requirements of the two accelerators. Since in our computing model, reported in Fig. 2, the DWE and the TPE are never used concurrently, the static multiplexing strategy is not a concern from a performance perspective. On the contrary, it allows exposing the accelerators' data interface toward the higher levels of the HCI with limited area and power costs. The second level of the HCI is organized in two branches, _logarithmic_ and _shallow_, as shown in Fig. 
1: the cores, the cluster DMA and the DataMover access the L1 banks from the _logarithmic_ branch through 9 32-bit initiator ports. This branch allows all-to-all single-cycle access from the initiator ports to each word-interleaved memory bank. Conflicts are handled by granting one initiator per bank at a time through a round-robin scheme. Instead, the 288-bit muxed data port is connected to the dedicated _shallow_ branch, routed to 9 adjacent memory banks without arbitration. Considering a total of \(N\) TCDM banks, routing works by splitting the address of the 288-bit wide word into an _index_ (bits \(2\) through \(\log_{2}(N)+2\)) and an _offset_ part (upper bits). The index is used to select which TCDM banks are targeted, while the offset is used to compute the bank-level address, considering the possibility that the wide word "rolls over" the set of banks (if the index corresponds to one of the last banks). The third level of the HCI is at the memory side, where the TCDM banks are connected to the two HCI branches via multiplexers, granting access to one branch or the other according to a configurable-latency, starvation-free rotation scheme. Ports from the logarithmic branch are stalled individually, whereas those from the shallow branch are stalled collectively (a single collision will result in no grant for the whole branch) to reflect the fact that they are actually a single access. Priority is given to one branch, configurable via a memory-mapped register, and switched for one access after a configurable number of cycles. Fig. 3 showcases this mechanism in an example. Fig. 3: Simplified example of HCI _shallow_ routing and arbitration between _shallow_ and _logarithmic_ branches, considering \(N\) TCDM banks. The example shows a _shallow_ 128-bit (4-port) wide access starting on bank 1 and two 32-bit accesses from the _logarithmic_ side. The heterogeneous organization of the interconnect serves two purposes. On the one hand, the HCI can be configured in software (by writing a memory-mapped register) to prioritize either the _shallow_ or the _logarithmic_ branch and guarantee a minimum quality of service (in terms of consecutive stall cycles) to the non-priority branch (by setting a register with the maximum number of stalls that the lower-priority branch can tolerate). This makes it possible to control and tune the interconnect's performance at a fine granularity. For example, setting priority to the _shallow_ branch and a maximum stall of 10 cycles in the _logarithmic_ branch means that after 10 collisions the priority is switched to the _logarithmic_ side for one cycle, hence guaranteeing a 9.1% collision rate and delivering up to 20.9 GB/s at 290 MHz in the configuration used in Darkside (i.e., 288-bit wide shallow branch and 9 32-bit initiator ports in the logarithmic branch), even on data-intensive kernels. On the other hand, the scalability of the _logarithmic_ interconnect is limited: attaching the accelerators to a non-hierarchical interconnect would result in a much more complex, larger, and power-hungry interconnect circuit, leading to poor cluster-level performance per area and per power. The HCI occupies 7.3% (\(\sim\)220 kGE) of the total cluster area; our synthesis trials have shown that the overall complexity of the interconnect is reduced by 15% with respect to a purely logarithmic interconnect, which, combined with easier timing closure and extended functionality, led to the choice of this design. 
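The index/offset routing of the _shallow_ branch can be illustrated with a minimal Python model. The bank count and port width below match the configuration described above (32 banks, 9 32-bit ports), but the function and its naming are illustrative assumptions rather than a description of the hardware.

```python
def shallow_route(byte_addr, n_banks=32, ports=9):
    """Return the (bank, row) pair addressed by each lane of one wide access.

    Word-level interleaving: 32-bit word w lives in bank (w mod n_banks),
    at row (w div n_banks). A wide access spans `ports` consecutive banks
    and "rolls over" past the last bank, in which case the wrapped lanes
    address the next row.
    """
    word = byte_addr >> 2                 # 32-bit word address
    index = word % n_banks                # starting bank (the index field)
    offset = word // n_banks              # bank-level row (the offset field)
    lanes = []
    for k in range(ports):
        bank = (index + k) % n_banks
        row = offset + (index + k) // n_banks  # +1 for lanes that roll over
        lanes.append((bank, row))
    return lanes

# Example: an access whose first word maps to bank 30 spans banks 30, 31, 0, ..., 6,
# with the wrapped lanes (banks 0-6) addressing the following row.
```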
### _Dynamic Bit-Scalable Fused Mac-Load SIMD Operations_ The Darkside cluster's core, namely RVNN, is a 4-stage in-order single-issue pipeline, depicted in Fig. 4a, that implements the RV32IMCF RISC-V Instruction Set Architecture (ISA), plus custom mixed-precision SIMD instructions operating on vector elements with power-of-two precision formats from 2-bit to 32-bit and all their possible permutations, supported through a dynamic bit-scalable execution [19]: the instruction encoded into the ISA identifies only the type of SIMD operations to be performed (denoted as _Virtual Instruction_), while its format (i.e. the precision of the operands) is specified at run-time by reading the content of a specific _Control&Status Register_ (CSR) of the core, which is writable by the programmer to set the desired precision, including mixed-formats. The SIMD instructions include _dot-product_ (_dotp_) based operations relevant to speed-up low-bitwidth compute-intensive kernels like Matrix-Matrix and Matrix-Vector multiplications. The micro-architecture of RVNN is built on the baseline of the RISCY [26] core and is reported in Fig. 4a: we extend the ALU and the _Dot-product_ Unit to process 2-bit and 4-bit SIMD operations which are not supported by RISCY, we add extra CSR registers to store the instruction formats' information and we integrate the _Mixed-Precision Controller_ (MPC) into the ID-STAGE of the pipeline. When a mixed-precision _dotp_ SIMD operation is performed, the decoder issues the _Virtual Instruction_ to select the specific compute unit to be used in the EX-STAGE of the pipeline, the format of the operands is specified by the CSR, while other control signals required for the execution are provided by the MPC. The _Dot-Product_ Unit, as shown in Fig. 4a, is preceeded by a _Slicer and Router_ network, controlled by the MPC, which slices the registers according to the format (FMT) specified by the MPC; it selects the sub-portion of the vector RS2 to be used in the current operation and sign-extends (or zero-extends) it to match the size of the vector in RS1; afterwards, the network routes the operands to the appropriate set of multipliers. To minimize the logic necessary to implement the new extensions, the first operand of the mixed-precision operations (RS1) is designated to be always the highest precision operand, without loss of generality given the commutative property of add and multiply operators. The extended pipeline entails 17% area overhead and 3% power overhead compared to RI5CY, but it improves the performance on sub-byte and mixed-precision kernels by a significant factor (up to 7.7\(\times\)). The key enhancement of RVNN is a fused MAC-load (M&L) operation that applies to any mixed-precision SIMD _dotp_ instruction supported. The design of the M&L collapses the SIMD MAC and the load operations into a single one-cycle latency instruction since the datapath activated by the MAC operation would not interfere with the Load-Store Unit of the processor, and the two units can run in parallel. Fig. 4a shows the micro-architectural modifications to the cores' datapath to enable the M&L. When the M&L is executed, the two operands for the _dotp_ operation are fetched from a dedicated register file, namely the Neural Network Register File (NN-RF), and routed to the _Dotp_-Unit through a multiplexer controlled by the MPC. At the same time, the accumulators reside in the GP-RF. 
The NN-RF consists of 6 32-bit registers and is sized to maximize the innermost loops performance of the PULP-NN [27] convolution routines, dedicating 4 out of 6 registers to layer's weights and 2 out of 6 registers to input activations. As visible in Fig. 4a, this choice constraints the activations of the convolution layers always to feature higher precision than the weights in mixed-precision Fig. 4: a). Pipeline extension to the RI5CY core to support mixed-precision and M&L instructions. b). Example of a _MatMul_ kernel using M&L instruction, compared to the same kernel implemented without M&L instruction. Thanks to the M&L operating on the dedicated NN-RF, we can implement larger layouts of _MatMul_ kernels (right-sided) with a significant gain in terms of throughput. operations, which however is the common case in current state-of-the-art software solutions to deploy DNN models at the extreme-edge of the IoT [28]. Since the M&L operates on the NN-RF, the occupancy of the 32 32-bit registers of the General-Purpose Register File (GP-RF) is reduced by a significant factor, since it would only host the accumulators of the _dotp_ operations and the addresses for the memory accesses. As a consequence, we can implement compute kernels with a higher amount of data reuse without incurring overheads to move data back and forth from the stack in the innermost hot loop (Fig. 4, right-sided kernel). This solution guarantees up to 1.7\(\times\) performance improvements over the execution of the same kernel without M&L, with an extra area overhead of only 5%, necessary to integrate the NN-RF in the EX-STAGE of the core pipeline. When a M&L instruction is executed, one of the two source operands from the Neural Network Register File (NN-RF) can eventually be updated with new data fetched from memory by the LSU, extended to operate on the NN-RF with negligible area overhead. However, the data stored in the NN-RF registers can be kept until necessary to allow a higher degree of flexibility for data reuse strategies: a second M&L instruction encoded into the ISA performs only the _dotp_ branch, with no register update. From an instruction count perspective, the M&L brings significant advantages, as shown in Fig.4b. After out-of-the-loop initialization of the dedicated Neural NN-RF, we perform 16 SIMD _dotp_-like operations and only a single explicit load (with no concurrent MAC) instruction. Therefore, we reduce the number of pure load instructions in the innermost loop of the kernel by a factor of 6, at the same time doubling the throughput, with an overall dot-product/cycle improvement of 57% compared to the same core not featuring the M&L. On the contrary, the impact of the M&L on the Performance, Power, and Area (PPA) metrics of the RVNN core is minimal. Overall, the M&L implies a gate count increase of just 8.3%, without deteriorating the critical path of the core and with negligible power overhead. On the other hand, the core enhanced with the M&L achieves up to 94% dot-product unit utilization, compared with 58% of the RI5CY baseline. ### _Tensor Product Engine_ The Tensor Product Engine (TPE) [29] accelerates matrix multiplications (MatMuls) of the kind \(Z~{}=X\cdot W\). It is designed to use the IEEE 754 binary-16 representation (FP16 in the following) since it is understood that FP16 can be used to train Neural Networks without significant accuracy loss, but reducing the power consumption and time to computation [30]. Fig. 5 shows the implementation of the TPE. 
The datapath consists of 32 FP16 Fused Multiply-Add (FMA) units [31] organized in 8 rows, each of 4 columns. This configuration guarantees at the same time a good speed-up (from \(9\times\) up to \(22\times\)) with respect to the software parallel execution, and an area overhead bounded to 44.8% of the area occupied by all the RV-NN cores. The FMAs along each row are cascaded; each FMA passes the intermediate result as input to the unit to its right. To internally handle the computation of matrices larger than the array size, and to avoid intermediate store operations, the FMAs in each row are closed in a feedback loop so that the right-most FMA feeds back the computed partial product as accumulation input to the left-most FMA of the same row. Using this approach, the TPE can exploit maximum reuse of both the **X**-matrix elements and the intermediate product, so that it stores the computed sub-blocks of the **Z**-matrix to the memory only at the very end of their computation. To match the critical path of the cores, each FMA features three internal pipeline registers. To maximize throughput, the **X**-matrix elements of each FMA are held steady for the number of cycles necessary to the FMAs of each row to compute the partial results. On the other hand, **W**-matrix operands are streamed-in at each cycle and broadcasted to all the FMAs of the same column. The memory accesses are scheduled so the load and store phases do not introduce overhead during the computation. This way, the TPE can reach an overall 98.8% utilization of the internal FMAs with a near-to-ideal performance (31.6 out of 32 MAC/cycle). The computed sub-blocks of the **Z**-matrix are stored in the memory only at the end of their computation, maximizing internal data reuse. The TPE, as well as the other accelerators integrated in Darkside, features a non-blocking event-based execution mode: the cores of the cluster, after programming the accelerator and starting its execution, can either go in sleep mode or resume software code execution. This mechanism enables complex execution models where the accelerator can be used in parallel with the general-purpose cores to boost the performance of the target kernel. This scenario is made possible also thanks to the dynamic arbitration mechanism provided by the HCI, which allows the requests to the memory by the TPE and the cores to be served simultaneously if there are no bank conflicts. ### _Depth-Wise Convolution Engine_ The Depth-Wise Convolution Engine (DWE) can process the low-reuse depth-wise component of the depth-wise + point-wise kernels often used in recent neural network models Fig. 5: Architecture of the TPE, with a focus on the interconnection of the FMAs within the datapath. for mobile applications, leaving the much better-parallelizable point-wise kernels to the M&L-accelerated software. The DWE processes 8-bit signed input and weight tensors stored in L1 memory using the Height-Width-Channel (HWC) layout, the same used by the cores to execute the point-wise kernels and therefore requiring no time-consuming intermediate on-the-fly marshalling operations. The 8-bit output tensors are generated after applying re-quantization steps. The architecture is shown in Fig. 5(a). It employs a weight-stationary data flow to maximize the data reuse and targets 3\(\times\)3 depth-wise layers, the one most commonly encountered in DNNs. 
Although its datapath is optimized for 3\(\times\)3 depth-wise convolutions, the DWE can be used to run kernels with different sizes using the same approach presented in [32], at the cost of additional data manipulation on the intermediate results, hence less computation efficiency. The execution flow is depicted in Fig. 5(b). The weights from 16 3\(\times\)3 filters are loaded into the _Weights Buffer_1, before the execution starts. The weights are kept in the buffer until they have been used to scan the whole input tensor. The input image is filtered through a vertical sliding window on the spatial dimensions, using a _window buffer_ of 4\(\times\)3\(\times\)16 registers. The first three rows are loaded at the beginning of the iteration 2 and consumed in 4 cycles by the datapath of the DWE consisting of 36 MAC units. The intermediate results are accumulated over 16 32-bit buffers, accessed 4 at a time in the 4-cycle operation loop 3a. Afterwards, non-linear activation functions and ancillary operations such as shifting and clipping are applied to re-quantize the results to 8-bit precision 4. After 4 cycles of operation, the 16-channel 8-bit pixels stored in the _output buffer_ are streamed out of the accelerator 5. Meanwhile, the streamer uses three cycles (overlapped with the computation) to fill the fourth row of the _window buffer_3b, needed to implement the sliding window mechanism. The DWE is designed to keep the datapath always active, in all stages and to fully exploit the memory bandwidth of 36B per cycle available on the cluster, achieving the overall performance of 30 MAC/cycle, more than 10\(\times\) better than a software execution of the depth-wise kernels, on the 8 RVNN cores. ### _DataMover_ On-the-fly data marshalling operations can dramatically reduce the performance of DNN workloads, during both inference and training tasks. To perform efficiently on-the-fly data transposition, the Darkside's cluster is enhanced with a DataMover unit, exposed to the HCI with an additional master port on the _logarithmic_ branch. The architecture is depicted in Fig. 7. It consists of a tiny accelerator of only 54 kGE, capable of transposing DNN 3-dimensional tensors stored in the L1 memory, with 1.5-100\(\times\) less time than eight RVNN cores and increased energy efficiency up to 50\(\times\) (the lower the precision of chunks to transpose the more significant the advantages). The accelerator works on data with configurable precision, \(d\), in the range from 32-bit down to 1-bit. It splits incoming data streams from memory into chunks of size \(d\), internally buffered into the _Shuffle Buffer_ which features 32\(\times\) 32-bit register with the transposed output format; only 32/\(d\) of them are used, depending on the data size configuration \(d\). After 32/\(d\) input stream transactions, the transposed words are streamed out to the L1 memory and the accelerator continues the operations with new input streams. Fig. 8: Chip micrograph and specifications. Area breakdown of the cluster. Fig. 6: (a) Architecture overview of the Depth-wise digital accelerator, enclosed in the HWPE. (b) Execution flow of the depth-wise operation. Fig. 7: Overview of the DataMover architecture and execution flow. ## III Chip Implementation and Measurements Fig. 8 shows the chip micrograph of Darkside. The SoC is implemented with TSMC 65nm technology, targeting a clock frequency of 250 MHz in worst-case operating conditions. 
The die area, including the _Fabric_ domain, is 12 mm\({}^{2}\), while the cluster area is 4.84 mm\({}^{2}\), partitioned as shown in Fig. 8. The majority of the cluster's area is occupied by cores, hierarchical IS and 128kB of L1 memory, while the accelerators account only for 15.3% of the total area. The measurements of the Darkside's cluster are performed using an Advantest SoC hp9300 integrated circuit testing device, which precisely regulates the supply voltages delivered to the SoC and allows accurate current measurements of the SoC's cluster power domain. Fig. 9 reports the maximum operating frequency and the power consumption of the cluster over the 0.75V to 1.2V voltage range. The operating frequency increases linearly with the supply voltage up to 290 MHz at 1.2V. The power is measured on the silicon prototype, running integer and floating-point compute-intensive kernels (MatMuls). Offloading FP16 MatMuls on the TPE saves 30% of the power compared to the execution on the 8 cores. Fig. 10 shows the cluster's performance and efficiency on integer and floating-point MatMuls, sweeping all the main supported data formats. The measurements are taken at the maximum operating frequency, sweeping the supply voltage from 0.75V to 1.2V. On ML workloads (i.e., integer 8-4-/2-bits) deployed on RV-NN cores, Darkside delivers up to 65 GOPS with a peak efficiency of 835 GOPS/W. The TPE boosts the performance and the efficiency of FP16 MatMuls to 18.2 GFLOPS and 300 GFLOPS/W, respectively, 17.7\(\times\) and 21.8\(\times\) higher compared to a software scalar execution on the 4 FPUs of the Darkside cluster. ## IV Benchmarking To demonstrate the capabilities of Darkside on SoA DNN workloads, we benchmark mixed-precision convolution kernels, end-to-end inference of the MobileNetV2 and one TinyML training use-case. ### _Single DNN Kernels_ To highlight the features of the RVNN cores, we show improvements over the baseline, benchmarking several convolution kernels. To measure the computing performance, the layers operate on data stored in the L1 memory, with 64 3\(\times\)3\(\times\)32 filters applied on a 32\(\times\)16\(\times\)16 input feature map, spanning different integer data formats (from 8- down to 2-bit) for the inputs and the weights, including mixed-precision cases. Results are reported in Fig. 11 in terms of normalized execution cycles (with respect to the RVNN core with M&L). As shown, the M&L instruction improves the performance by up to 1.7\(\times\) compared the execution with baseline mixed-precision SIMD _dtop_ and load instructions (_Mixed_ in the figure). Overall, ISA and micro-architecture design of RVNN leads to a cumulative performance improvement of up to 13\(\times\) with respect to RI5CY, which supports only 8-bit SIMD operations and no M&L mechanisms. Analyzing the depth-wise kernels, in Fig. 11 we show that this workload achieves at least 5.8\(\times\) better performance by offloading it to the dedicated digital accelerator presented in this work, the Depth-wise engine (_DWE_), instead of running it on 8 RVNN cores. This conclusion is also strengthened in Sec. IV-B on a real-life _Bottleneck_ layer use-case. Furthermore, we show that the DataMover can reduce by more than 3.7\(\times\) execution cycles on 8-bit data marshalling operations, compared to the same task offloaded to the 8 cores. On 16-bit floating-point (FP16) matrix-multiplication workloads, the TPE boosts the performance by up to 10.3\(\times\) with respect to the software execution of the same kernels Fig. 
11: Left: Normalized single-core performance of mixed-precision convolutions. RVNN core (_Mixed w M&L_) is compared against a core featuring only mixed-precision _dtop_ operations but no M&L (_Mixed_) and against _RI5CY_, which features no extensions for quantized neural networks. Right: performance improvement of execution on cluster-coupled accelerators over software (8-cores). Fig. 10: Performance and Energy Efficiency of the Darkside Cluster. The measurements are performed at the max frequency, sweeping the supply voltage between 0.75V and 1.2V. Fig. 9: Voltage sweep vs. max frequency vs. power consumption. exploiting the FP16 SIMD instructions available on the 4 floating-point units (FPUs) present in the cluster [31]. ### _End-to-End MobileNetV2_ First, we present the results of benchmarking the _Bottleneck_ layer, the core building block of the MobileNetV2. We demonstrate our improvements incrementally by comparing our architectural solutions over a reference cluster that features 8 RISCY cores (without the mixed-precision SIMD operations and the M&L custom ISA extensions proposed in this work) and no dedicated accelerator. To implement the software to execute the _Bottleneck_ we use the PULP-NN library (which we use as-is to benchmark the reference cluster), extended to include additional kernels to exploit the new ISA instructions implemented in the RV-NN cores and a set of hardware-abstraction-layer (HAL) functions to program and start the accelerators that the programmer can easily insert into the C code. We adopt the 8-bit signed integer representation for all the tensors of the _Bottleneck_. The results, in terms of execution cycles and energy efficiency, are reported in Fig. 12. The M&L improves the execution of point-wise and depth-wise layers on 8 RVNN cores by 1.31\(\times\) compared to the execution on 8 RISCY cores. Additional 1.13\(\times\) improvements are given by the data transposition (i.e. HWC to CHW data marshalling) performed by the DataMover, instead of transposing data via software. Finally, the DWE allows to speed-up the execution of depth-wise convolution by 4.4\(\times\) compared to the execution on 8 RV-NN cores, with a final performance improvement of 1.85\(\times\) on the whole _Bottleneck_ layer compared to the RI5CY baseline. To put the previous results in perspective, we benchmark Darside on the end-to-end inference task of the MobileNetV2 model. We employed the standard MobileNetV2 with depth multiplier 1.0 and input size 224\(\times\)224, composed of 16 stacked _Bottleneck_ layers. The input and weight tensors feature 8-bit precision for all the depth-wise and Conv2d layers. The point-wise layers feature 8-bit input tensors, while the weights are represented with reduced precision, i.e. 4-bits. This mixed-precision configuration of the model achieves similar Top-1 accuracy (only a 2.5% drop compared to the 8-bit version) while ensuring almost 2\(\times\) less memory footprint (1.07 MB) for the network weights [11]. To enable the computation on the cluster, both weights and activations of the model must be divided into tiles that fit 128 kB of the L1 SRAM. Therefore, we assume the weights and the feature maps for all the network layers to be stored in the off-the-cluster L2 memory, and we adopt the data and execution flow presented in Dory [25]. Dory is used to calculating the data tiling solutions fitting the L1 memory constraints and to schedule the data transfers from L2 to L1 and vice-versa, performed through the cluster DMA in double-buffering. 
The described software pipeline is represented in Fig. 13. For cases where the execution is not memory-bound, data movements overlap with the computation, with negligible overhead (\(\leq 5\%\)) to the execution latency. However, since Darside's _Fabric_ domain has the only purpose of acting as a programmable testbench for the cluster, it features a small L2 memory which is insufficient to host the entire MobileNetV2. Therefore, to benchmark the computing capabilities of the cluster on real-life end-to-end DNN models, we exploit our previous experience on explicit memory management, data tiling techniques [25] and on the deployment of real-sized DNN models on application chips such as Vega [23] to build a model of the system, with larger L2 memory, on which we run the experiments. The hardware-oriented description of the SoC is integrated into our open-source 2 event-based emulator, called GVSOC [33]; to run the experiments, the following measurements and considerations are taken: Footnote 2: [https://github.com/pulp-platform/gysoc](https://github.com/pulp-platform/gysoc) 1. We assume to have a L2 memory of 2MB, necessary to host the entire MobileNetV2 model and to store the program code; 2. We analyze the traffic between L2 and L1 memories by running end-to-end simulations of the MobileNetV2 on Fig. 12: Comparison of the _Bottleneck_ layer execution, off-loaded to different hardware compute units. a) reports the execution latency (in terms of compute cycles), b) reports the energy efficiency (in GOPS/W). The cluster is running at the best performance operating point: \(f_{CLU}=290\,MHz\), \(V_{DD}=1.2\,V\). Fig. 13: DNN tiling software pipeline. The figure shows the concurrent execution of weights and activations transfer (from L2 to L1 memory, indicated with _IN_), the computation of the kernel (indicated with _C_) and the copy of produced output tiles from L1 to L2 memory (indicated with _O_). the GVSOC; as expected, during the execution of the inference task we are never memory-bound; therefore, the contribution of the L2 to L1 (and vice-versa) data movements is relevant only for the total energy consumption; 3. We conduct silicon measurements, in terms of latency and energy, on all the L2 to L1 data transfers (and vice-versa) necessary to compute each tile and determined by the GVSOC simulations; we then include the measurements in the model; 4. We conduct silicon measurements, in terms of latency and the energy, on all the kernels necessary to compute each tile generated by the Dory framework; we then include the measurements in the model. The layer-wise compute time and energy of the inference task are shown in Figure 14. Darkside can perform the entire end-to-end task with a performance of more than 20 frame/s, with an energy budget of 11mJ. The performance is 2\(\times\) better than the one achieved on the Vega's cluster running at 250 MHz [23], thanks to our architectural contributions, the M&L extensions that accelerate the point-wise kernels and the dedicated DWE that boosts the execution of depth-wise convolutions. Despite Vega being implemented in the 22nm technology node, our end-to-end energy consumption of 11 mJ remains still comparable, in the same order of magnitude. ### _TinyML On-Chip Training_ The TPE enhances the Darkside cluster to support efficient FP matrix-matrix multiplications, enabling de-facto on-chip TinyML training workloads. 
To benchmark the SoC in terms of execution latency and energy on real-sized problems, we execute the Autoencoder (AE) DNN model [7], commonly used within the TinyML scenario for unsupervised anomaly detection tasks. The TinyML AE consists of Encoder and Decoder layers (made by 128 unit Fully Connected layers with BatchNorm and ReLu activation functions) and a latent space layer of size 8. The input and the output size is 640. We benchmark the whole training stage (forward and backward steps within one training _epoch_), adopting a batch size of 16, which is a reasonable trade-off between performance and memory occupation for IoT multi-core microcontroller-class devices. The tensors are represented with the FP16 format, and we adopt the same data flow explained above, which uses tilings and double-buffering. To highlight the boost given by the TPE and the DataMover, we first implement the AE on the 8 general-purpose RVNN cores (we call this configuration _SW_), which share 4 floating-point units supporting FP16 formats, using a software library optimized for on-chip training [34]. Then, we implement the AE offloading the matrix-multiplication workload to the TPE (_TPE_ configuration), still performing on the cores control tasks (e.g. programming the DMA for double buffering, programming the TPE control units) and matrix transpositions. As a third execution mapping, we use the DataMover to speed-up also the matrix transpositions (_TPE + DataMover_). The results are reported in Fig. 15 in terms of execution latency and energy consumption. As expected, the TPE delivers at least 10\(\times\) speed-up with respect to a pure SW execution on all the layers of the AE except for the latent space layers, where the performance improvement is reduced to 4-7 \(\times\) due to lower arithmetic intensity of those layers. The matrix transposition performed with the tiny DataMover accelerator contributes to an additional 1-2 \(\times\) of speed-up. Overall, combining the TPE and the DataMover, the entire training epoch runs in 1.8 ms with an energy consumption of \(345\,\mu J\), 13 \(\times\) faster than the _SW_ execution of the AE on the 8 RV-NN cores, with 14\(\times\) lower energy consumption. ## V Comparison with the State-of-the-Art Tab. I compares Darkside with a wide range of programmable embedded computing platforms that exploit either parallelism or heterogeneity to address the computing requirements of emerging TinyML applications. Compared to a traditional low-power programmable IoT system such as [35], representative of a wide range of low Fig. 14: Layer-wise execution latency and energy of the MobileNetV2 on Darkside’s cluster running at \(f_{clk}=290\,MHz\) and \(V_{DD}=1.2\,V\). Fig. 15: a) Layer-wise execution time of the AutoEncoder (AE) TinyML model. b) Execution Energy of the AE, including the energy for L2 to L1 (and vice-versa) data movements. Cluster running at 290MHz and 1.2. cost microcontrollers embedding CortexM0, Darkside delivers several orders of magnitude better integer (8-bit) peak performance and also 1.9\(\times\) better energy efficiency, despite _SleepRunner_[35] is implemented in a more scaled technology node (28nm FD-SOI). Contrarily to Darkside's cluster, the implementation strategy of SleepRunner is highly optimized to operate at very low voltage (i.e. down to 0.4V). 
Its architecture features a simple memory hierarchy and interconnects scheme, which consumes very low power but poses severe limitations during the execution of complex near-sensor data analytic applications, which are efficiently sustained on Darkside. With respect to hardware-accelerated IoT end-nodes such as _SamuraiA1_[36], implemented in 28nm FD-SOI technology, our SoC achieves similar energy efficiency on DNN workloads (only 1.2\(\times\) less efficient despite the less scaled technology node used to implement Darkside, 65nm) but with a significant gain of 10\(\times\) in terms of peak performance. This gain is primarily due to the custom extensions of RV-NN cores and the parallel computing cluster over the sequential solution presented in [36]. Finally, we compare Darkside with two SoCs that exploit a similar architectural template: Dustin [37] and Vega [23] implement a multi-core RISC-V compute cluster in 65nm and 22nm, respectively. Compared to Vega [23], Darkside delivers better performance on 8-bit integer workloads thanks to the M&L instruction. Contrarily to Vega, Darkside can support also mixed and lower-precision (than 8-bit) integer workloads thanks to the enhanced mixed-precision ISA, enabling the computation of emerging DNN models that employ asymmetric quantization schemes [10]. On 32-bit FP workloads, Vega surpasses our solution in performance and energy efficiency due to the higher frequency operating mode and the much more scaled technology node. However, despite the previously mentioned advantages of Vega, the TPE of Darkside ensures 2.32\(\times\) better energy efficiency on FP16 workloads, with a considerable performance gain of up to 5.6\(\times\). Compared to Dustin, featuring a cluster with 16 processors with mixed-precision extensions implemented in the same technology node, the proposed cluster shows slightly less energy efficiency due to the power reduction achieved by Dustin, thanks to the Vector Lockstep Execution Mode (VLEM)3. However, Darkside still achieves 1.13\(\times\) better performance with half of the cores, thanks to the M&L extension. Footnote 3: The VLEM is not compatible with the cores unused in Darkside (RVNN) and the two optimizations could be eventually combined to improve computing energy efficiency. ## VI Conclusion We presented Darkside, a low-power heterogeneous compute cluster for _TinyML_ DNN inference and on-chip training. The cluster features 8 RISC-V cores, enhanced with 2-bit to 32-bit mixed-precision integer SIMD instructions and fused mac-load operations. It also features specialized accelerators to boost the performance of integer depth-wise convolutions, reduce the latency of data marshalling operations, and enhance the performance and efficiency of FP16 kernels. The proposed SoC, implemented in TSMC 65nm technology, can achieve up to 65 GOPS peak performance on ML workloads, with 835 GOPS/W of energy efficiency. On FP16 kernels offloaded to the TPU, the SoC achieves 18.2 GFLOPS with 300 GFLOPS/W, surpassing the efficiency and performance of state-of-the-art SoCs implemented in much more scaled and expensive technology nodes.
2301.13393
Probably Anytime-Safe Stochastic Combinatorial Semi-Bandits
Motivated by concerns about making online decisions that incur undue amount of risk at each time step, in this paper, we formulate the probably anytime-safe stochastic combinatorial semi-bandits problem. In this problem, the agent is given the option to select a subset of size at most $K$ from a set of $L$ ground items. Each item is associated to a certain mean reward as well as a variance that represents its risk. To mitigate the risk that the agent incurs, we require that with probability at least $1-\delta$, over the entire horizon of time $T$, each of the choices that the agent makes should contain items whose sum of variances does not exceed a certain variance budget. We call this probably anytime-safe constraint. Under this constraint, we design and analyze an algorithm {\sc PASCombUCB} that minimizes the regret over the horizon of time $T$. By developing accompanying information-theoretic lower bounds, we show that under both the problem-dependent and problem-independent paradigms, {\sc PASCombUCB} is almost asymptotically optimal. Experiments are conducted to corroborate our theoretical findings. Our problem setup, the proposed {\sc PASCombUCB} algorithm, and novel analyses are applicable to domains such as recommendation systems and transportation in which an agent is allowed to choose multiple items at a single time step and wishes to control the risk over the whole time horizon.
Yunlong Hou, Vincent Y. F. Tan, Zixin Zhong
2023-01-31T03:49:00Z
http://arxiv.org/abs/2301.13393v2
# Probably Anytime-Safe Stochastic Combinatorial Semi-Bandits ###### Abstract Motivated by concerns about making online decisions that incur undue amount of risk at each time step, in this paper, we formulate the probably anytime-safe stochastic combinatorial semi-bandits problem. In this problem, the agent is given the option to select a subset of size at most \(K\) from a set of \(L\) ground items. Each item is associated to a certain mean reward as well as a variance that represents its risk. To mitigate the risk that the agent incurs, we require that with probability at least \(1-\delta\), over the entire horizon of time \(T\), each of the choices that the agent makes should contain items whose sum of variances does not exceed a certain variance budget. We call this probably anytime-safe constraint. Under this constraint, we design and analyze an algorithm PASCombUCB that minimizes the regret over the horizon of time \(T\). By developing accompanying information-theoretic lower bounds, we show under both the problem-dependent and problem-independent paradigms, PASCombUCB is almost asymptotically optimal. Our problem setup, the proposed PASCombUCB algorithm, and novel analyses are applicable to domains such as recommendation systems and transportation in which an agent is allowed to choose multiple items at a single time step and wishes to control the risk over the whole time horizon. Machine Learning, ICML ## 1 Introduction Audrey, a burgeoning social media influencer, makes profits by posting advertisements (ads) under her account. The advertiser pays her only if an ad is clicked. Having taken a class in online optimization, Audrey aims to leverage the theory of bandit algorithms to design an exploration-exploitation strategy to ensure that the expected number of clicks of the ads she has posted is maximized. Since the platform is space-limited, Audrey can only post no more than \(K\) out of \(L\) available ads everyday. Some of these ads, however, include an innocuous-looking lottery or voucher that asks the viewer of the social media platform to provide personal information that may lead to fraud or information leakage. If a user clicks it and becomes a victim of fraud, this may damage Audrey's reputation. Audrey thus has to be circumspect in which and how many ads she posts. On the one hand, Audrey wants to post as many ads with what she believes have high click-through rates as possible; the expected reward she obtains is then the sum of expected rewards of the individual ads. On the other hand, she should balance this with the total risk of the ads that are posted over a period of time; similarly, the risk of a set of ads posted is modeled as the sum of the risks of the individual ads. How should Audrey plan the posts of her ads over a period of time to learn their individual expected rewards and risks to ensure that her total expected reward is maximized and, at the same time, with high probability, the risk incurred _at any point in time_ in her exploration-exploitation strategy is bounded by some fixed permissible threshold? In addition to influencers like Audrey, online platforms that make profits by advertising such as YouTube and TikTok also encounter similar problems. We are therefore motivated to formulate the _probably anytime-safe stochastic combinatorial semi-bandits_ problem which is a _regret minimization_ problem with an anytime safety constraint. 
More precisely, we aim to design and analyze the performance of an algorithm that, with high probability, ensures that the risk (as measured by the variance) _at any time_ step is below a given threshold and whose regret is minimized. **Literature review.** There is a large body of works that take risk into account while conducting the exploration and/or exploitation of the unknown reward distributions in the stochastic multi-armed bandits (MABs) literature. Under the risk-constrained pure exploration framework, Hou et al. (2023) and David et al. (2018) attempted to identify the optimal arm within those low-risk (based on their variances or \(\alpha\)-quantiles) arms with probability at least \(1-\delta\). Under the _risk-aware_ regret minimization setup, Sani et al. (2012), Vakili and Zhao (2016) and Zhu and Tan (2020) consider the mean-variance as the measure to be minimized over a fixed time horizon. Cassel et al. (2018) provided a general and systematic instruction to analyzing risk-aware MABs, i.e., the risk was incorporated in the _Empirical Distribution Performance Measure_ and the U-UCB algorithm is adopted to perform "proxy regret minimization". While these risk-aware algorithms reduce the overall risk during the exploration and exploitation process, the risk is not strictly enforced to be below a prescribed threshold; rather the risk measure is penalized within the objective function, similarly to a Lagrangian. Another setup similar to the risk-aware setup is the _constrained_ bandits regret minimization. Mahdavi et al. (2012) required that the number of times the constraint can only be violated is at most sublinear in the horizon \(T\). Kagrecha et al. (2023) proposed a CVaR constraint and performed exploration on the feasible arm, followed by exploration among the feasible arm set. Unlike our formulation, these algorithm are permitted to sample risky arms during exploration. A more stringent constraint can be found in the literature on _conservative bandits_(Wu et al., 2016), which requires the cumulative return at any time step to be above a constant fraction of the return resulting from repeatedly sampling the base arm. Kazerouni et al. (2017) extended this setup to conservative contextual linear bandits and this was further improved by Garcelon et al. (2020). A similar problem is _bandits with knapsacks_(Badanidiyuru et al., 2018), which imposes a budget on the cumulative consumed resources and the algorithm stops when the budget is depleted. The most stringent constraint can be found in the _safe bandits_ problem. Khezeli and Bitar (2020) and Moradipari et al. (2020) presented the SEGE, SCLUCB, and SCLTS algorithms to tackle this problem. This problem demands that the expected reward of the pulled arm at each time step to be greater than a prescribed threshold with high probability, also known as the _"stagewise safety constraint"_. The authors utilized the convexity (and continuity) of the arm set and performed exploration around the explored arms, starting from a baseline arm. This correlation among the arms generally does not hold under the combinatorial semi-bandits setup. For the (unconstrained) combinatorial semi-bandits (CSB) setup, Chen et al. (2013) presented a UCB-type algorithm ComUCB1 to balance the trade-off between exploration and exploitation. Kveton et al. (2015b) improved the analysis of ComUCB1 and achieved a tight upper bound (within a specific set of instances). Kveton et al. 
(2014) introduced matroid structure to CSB and leveraged the matroid structure to design and analyze a greedy algorithm OMM. The risk-aware CSB problem is less studied by the community. Ayyagari and Dukkipati (2021) utilized CVaR as the risk-aware measure within the CSB problem, where the risk constraint was not explicitly specified. We observe that the existing literature mentioned above are not directly applicable to Audrey, while our setting (described formally below) dovetails neatly with her problem. Audrey can utilize our algorithm to sequentially and adaptively select different sets of ads everyday and almost always (i.e., with high probability) avoids sets of ads with unacceptably high risks. Beyond any specific applications, we believe that this problem is of fundamental theoretical importance in the broad context of regret minimization in combinatorial multi-armed bandits. **Main Contributions.** In probably anytime-safe stochastic combinatorial semi-bandits, there are \(L\) items with different reward distributions. At each time step, a random reward is generated from each item's distribution. Based on the previous observations, the learning agent selects a _solution_ at each time step. A solution consists of at most \(K\) items. The expected return (variance) of a solution is the summation of the reward (variance) of its constituents. Given \(T\in\mathbb{N}\), the agent aims to maximize the cumulative return over \(T\) time steps and ensure that with probability \(1-\delta\) the variance of all selected solutions are below a given threshold. The key challenge of regret minimization under the probably anytime-safe stochastic combinatorial semi-bandits lies in handling two distinct tasks--we seek optimality in the mean and safeness in the variance of each chosen solution. Our first contribution is design and analysis of the Probably Anytime-Safe Combinatorial UCB (or PASCombUCB) algorithm. We also derive a problem-dependent upper bound on its regret, which involves a _hardness_ parameter \(H(\Delta(\Lambda))\). We see that \(H(\Delta(\Lambda))\) characterizes the effectiveness of ascertaining the safety of potential solutions in the regret. To assess the optimality of PASCombUCB, we prove an accompanying problem-dependent lower bound on the regret of any variance-constrained consistent algorithm. The upper and lower problem-dependent bounds match in almost all the parameters (except in \(K\)). Additionally, we show that if \(\delta_{T}\) decays exponentially fast in \(T\), the problem-dependent regret cannot be logarithmic in \(T\). We further present a problem-independent upper bound on the regret of PASCombUCB and a lower bound for any algorithm. Just as the problem-dependent bounds, these bounds also match in almost all the parameters. In summary, this paper is the first to explore the regret minimization problem in the combinatorial bandits with an _anytime_ constraint on the variance. When \(\delta\to 1\) and \(\bar{\sigma}^{2}\) is large (so that the optimal safe solution is the one with the highest mean regardless of safety considerations), our problem reduces to the standard combinatorial semi-bandits (Kveton et al., 2015), and the regret incurred by the safety constraint vanishes, resulting in the same upper bound as the unconstrained case. 
Furthermore, the framework and analysis of PASCombUCB can be extended to other risk measures as long as there are appropriate concentration bounds, e.g., Bhat and Prashanth (2019) or Chang and Tan (2022) enables us to use CVaR or certain continuous functions as risk measures within the generic PASCombUCB framework. ## 2 Problem Setup Given a positive integer \(m\), we let \([m]:=\{1,2,\ldots,m\}\). An instance of a _variance-constrained stochastic combinatorial semi-bandit_ is a tuple \(\Lambda=(E,\mathcal{A}_{K},\nu,\bar{\sigma}^{2})\). We describe the four elements of \(\Lambda\) in the following. Firstly, the finite set \(E=[L]\) is known as the _ground set_ in which each \(i\in E\) is known as an _item_. Secondly, the family \(\mathcal{A}_{K}\subset\{S\in 2^{E}:|S|\leq K\}\) is a collection of subsets of \(E\) with cardinality at most \(K\). Each element \(S\in\mathcal{A}_{K}\) is known as a _solution_ and \(\mathcal{A}_{K}\) satisfies the condition that all subsets of \(S\in\mathcal{A}_{K}\) remain solutions, i.e., \(\mathcal{A}_{K}\) is downward-closed. Thirdly, the vector of probability distributions \(\nu=(\nu_{1},\nu_{2},\ldots,\nu_{L})\) contains \(\sigma^{2}\)-sub-Gaussian distributions \(\{\nu_{i}\}_{i\in E}\) with means \(\{\mu_{i}\}_{i\in E}\) and variances \(\{\sigma_{i}^{2}\}_{i\in E}\). The final element of an instance \(\bar{\sigma}^{2}>0\) denotes the permissible upper bound on the variance. To avoid trivialities, we assume that \(\bar{\sigma}^{2}>\sigma^{2}\) and \(K\geq 2\). The _return_ of item \(i\in E\) is the random variable \(W_{i}\) with distribution \(\nu_{i}\). The _(stochastic) return_ of a solution \(S\in\mathcal{A}_{K}\) is \(\sum_{i\in S}W_{i}\) where \(W\sim\nu\). The _expected return_ and _variance_ of \(S\in\mathcal{A}_{K}\) are \[\mu_{S}:=\sum_{i\in S}\mu_{i}\quad\text{and}\quad\sigma_{S}^{2}:=\sum_{i\in S }\sigma_{i}^{2}\] respectively. We further assume that every instance \(\Lambda\) satisfies \(\sigma_{S}^{2}\neq\bar{\sigma}^{2}\) for all \(S\in\mathcal{A}_{K}\) and each distribution \(\nu_{i}\) is supported in the interval \([0,1]\). We define \(\mathcal{S}:=\{S\in\mathcal{A}_{K}:\sigma_{S}^{2}<\bar{\sigma}^{2}\}\) to be the _safe set_ which contains all the _safe_ solutions. Let the complement of \(\mathcal{S}\) be the _unsafe set_\(\mathcal{S}^{c}\). Denote the _optimal safe solution_ as \(S^{\star}:=\arg\max\{\mu_{S}:S\in\mathcal{S}\}\) with return \(\mu^{\star}\). For simplicity, we assume that \(S^{\star}\) is unique. Denote the _suboptimal set_\(\mathcal{B}:=\{S\in\mathcal{A}_{K}:\mu_{S}<\mu^{\star}\}\) and the _risky set_\(\mathcal{R}:=\{S\in\mathcal{A}_{K}:\mu_{S}\geq\mu^{\star},S\neq S^{\star}\}\). For a solution \(S\), let the mean gap \(\Delta_{S}:=\mu^{\star}-\mu_{S}\) and the variance gap \(\Delta_{S}^{\star}:=|\sigma_{S}^{2}-\bar{\sigma}^{2}|\). An instance \(\Lambda\), time horizon \(T\in\mathbb{N}\) and confidence parameter \(\delta\in(0,1)\) are specified. An agent, who knows \(E,\mathcal{A}_{K}\) and \(\bar{\sigma}^{2}\) but not the vector of probability distributions \(\nu\), interacts adaptively with the instance over \(T\) time steps as follows. At time step \(t\in[T]\), the agent uses a stochastic function \(\pi_{t}\) that selects a solution \(S_{t}\in\mathcal{A}_{K}\) based on the observation history \(\mathcal{H}_{t-1}:=((S_{s},\{W_{i}(s)\}_{i\in S_{s}}))_{s\in[t-1]}\). In other words, \(S_{t}=\pi_{t}(\mathcal{H}_{t-1})\) is a stochastic function of the history \(\mathcal{H}_{t-1}\). 
The agent receives the random return \(\sum_{i\in S_{t}}W_{i}(t)\), where \(\{W(s)=\{W_{i}(s)\}_{i\in E}\}_{s\in[T]}\) are i.i.d. according to \(\nu\) across time. The weights of the selected items \(\{W_{i}(t):i\in S_{t}\}\) are observed by the agent at each time \(t\in[T]\). The collection of stochastic functions \(\pi=\{\pi_{t}\}_{t\in[T]}\) is known as the agent's _policy_. The goal of the agent is to minimize _the expected cumulative regret_ (or simply _regret_) \(\operatorname{Reg}(T)\) over the horizon \(T\), subject to a certain risk constraint. More precisely, the _regret_ suffered by a policy \(\pi\) employed by the agent is defined as \[\operatorname{Reg}^{\pi}(T):=\mathbb{E}_{\pi}\left[\sum_{t=1}^{T}\left(\sum_{i \in S^{\star}}W_{i}(t)-\sum_{i\in S_{t}}W_{i}(t)\right)\right]\] The policy \(\pi\) should satisfy the condition that all the solutions chosen \(\{S_{t}^{\pi}\}_{t\in[T]}\subset\mathcal{A}_{K}\) are safe with probability at least \(1-\delta\), i.e., \[\mathbb{P}_{\pi}\big{[}\forall\,t\in[T],S_{t}^{\pi}\in\mathcal{S}\big{]}\geq 1 -\delta. \tag{1}\] This is referred to as the _probably anytime-safe_ constraint. In the problem-dependent lower bounds, we will refer to a certain class of "good" policies that operate as the time horizon \(T\to\infty\) and the probability of being safe in the sense of (1) tends to \(1\). This is formalized in the following. **Definition 2.1**.: _Fix an instance \(\nu\) and a vanishing sequence \(\{\delta_{T}\}_{T=1}^{\infty}\subset(0,1)\). An policy \(\pi=\{\pi_{t}\}_{t=1}^{\infty}\) is said to be a \(\{\delta_{T}\}_{T=1}^{\infty}\)-variance-constrained consistent algorithm if_ * \(\operatorname{Reg}^{\pi}(T)=o(T^{a})\) _for all_ \(a>0\) _and_ * \(\operatorname{\mathbb{P}}_{\pi}\bigl{[}\forall\,t\in[T],S_{t}^{\pi}\in \mathcal{S}\bigr{]}\geq 1-\delta_{T}\)_._ We often omit the superscripts \(\pi\) in \(\operatorname{Reg}^{\pi},S_{t}^{\pi}\) (or \(A_{t}^{\pi}\) and \(A_{t,r}^{\pi}\) in PASCombUCB) and the subscripts \(\pi\) in the probabilities and expectations if there is no risk of confusion. ## 3 Our Algorithm: PASCombUCB Our algorithm Probably Anytime-Safe Combinatorial UCB (or PASCombUCB) is presented in Algorithm 1. PASCombUCBis delicately designed to satisfy the probably anytime-safe constraint. In particular, we apply (and analyze) the Greedy-Split subroutine in Line \(11\); this subroutine has not been involved in an algorithm designed for standard combinatorial semi-bandits such as CombUCB1 (Chen et al., 2013). ``` 1:Input: An instance \(\Lambda\) (with unknown \(\nu\)), the horizon \(T\) and the confidence parameter \(\delta\in(0,1)\). 2:Set phase counter \(p=1\) and time step counter \(t=1\). 3:while\(\exists\,i\in E\) such that \(T_{i}(p-1)<2\)do 4: Pull \(A_{p}\!=\!\arg\max_{S:|S|\leq q}|\{i\!\in\!S:T_{i}(p-1)\!<\!2\}|\). 5:\(p\gets p+1\), \(t\gets t+1\). 6:endwhile 7:Update the sample mean, sample variance and confidence bounds according to (4). 8:Update the empirically safe set \(\mathcal{S}_{p}\) and possibly safe set \(\bar{\mathcal{S}}_{p}\) according to (5) and (6) respectively. 9:while\(t<T\)do 10: Find a solution \(A_{p}\!=\!\arg\max_{A\in\bar{\mathcal{S}}_{p-1}}U_{A}^{\mu}(p\!-\!1)\). 11: Invoke Greedy-Split to split the solution \(A_{p}\) into \(n_{p}\) sub-solutions \(\{A_{p,1},\ldots,A_{p,n_{p}}\}\subset\mathcal{S}_{p-1}\). 12: Set \(n_{p}\leftarrow\min\{n_{p},T-\text{count}\}\). 13: Choose solution \(\{A_{p,1},\ldots,A_{p,n_{p}}\}\). 14: Update the statistics of all solutions based on (4). 
15: Update the empirical sets based on (5) and (6). 16: Set \(t=t+n_{p}\) and \(p=p+1\), 17:endwhile ``` **Algorithm 1**PASCombUCB **Statistics.** Since each item \(i\in E\) is \(\sigma^{2}\)-sub-Gaussian, any solution that contains at most \(q:=\lfloor\frac{\hat{\sigma}^{2}}{\sigma^{2}}\rfloor\) items is safe with probability (w.p.) \(1\). We call such a solution _absolutely safe_. Algorithm 1 (PASCombUCB) is conducted in _phases_, where each phase consists of multiple time steps and each item can be pulled at most once during each phase. Thus we adopt a different notation "\(A\)" to denote the solution in our algorithm. Define \(T_{i}(p):=\sum_{s=1}^{p}\mathbbm{1}\left\{i\in A_{p}\right\}\) as the number of times item \(i\) is pulled up to and including phase \(p\). Denote the sample mean and sample variance of item \(i\) at phase \(p\) as \[\hat{\mu}_{i}(p) :=\frac{1}{T_{i}(p)}\sum_{s=1}^{p}W_{i}(s)\cdot\mathbbm{1}\left\{ i\in A_{s}\right\},\quad\text{ and}\] \[\hat{\sigma}_{i}^{2}(p) :=\frac{1}{T_{i}(p)}\sum_{s=1}^{p}\left(W_{i}(s)-\hat{\mu}_{i}(p) \right)^{2}\cdot\mathbbm{1}\left\{i\in A_{s}\right\}.\] The bound based on the Law of Iterated Logarithms (LIL) is used to construct the confidence radii. For a fixed \(\epsilon\in(0,1)\), define \(\operatorname{ilil}(t,\rho):=(1+\sqrt{\epsilon})\left(\frac{1+\epsilon}{2t} \ln\left(\frac{\ln((1+\epsilon)t)}{\rho}\right)\right)^{1/2}\) and denote the confidence radius for the mean as \[\alpha(t):=\operatorname{ilil}(t,\omega_{\mu}), \tag{2}\] where \(\omega_{\mu}\) is a parameter to be chosen. The confidence radii for the variance are asymmetric about the empirical variance and are parameterized by \(\omega_{\nu}\) and \(\omega_{\nu}^{\prime}\) that may not necessarily be the same. They are defined as \[\beta_{\mu}(t):=3\cdot\operatorname{il}(t,\omega_{\nu})\quad\text{and}\quad \beta_{l}(t):=3\cdot\operatorname{il}(t,\omega_{\nu}^{\prime}). \tag{3}\] We denote the _upper_ and _lower confidence bounds_ (UCB and LCB) for the mean of item \(i\) as \[U_{i}^{\mu}(p) :=\hat{\mu}_{i}(p)+\alpha(T_{i}(p))\quad\text{and}\] \[L_{i}^{\mu}(p) :=\hat{\mu}_{i}(p)-\alpha(T_{i}(p))\] respectively. The UCB and LCB for the variance of item \(i\) are defined as \[U_{i}^{\text{v}}(p) :=\min\{\hat{\sigma}_{i}^{2}(p)+\beta_{\text{u}}(T_{i}(p)),\sigma^ {2}\}\quad\text{and}\] \[L_{i}^{\text{v}}(p) :=\max\{\hat{\sigma}_{i}^{2}(p)-\beta_{\text{l}}(T_{i}(p)),0\}\] respectively. With the sample mean, sample variance, and confidence bounds for the items, we define the following statistics for all solution \(S\in\mathcal{A}_{K}\): \[\hat{\mu}_{S}(p) =\sum_{i\in S}\hat{\mu}_{i}(p),\quad\hat{\sigma}_{S}^{2}(p)=\sum _{i\in S}\hat{\sigma}_{i}^{2}(p),\] \[U_{S}^{\mu}(p) =\sum_{i\in S}U_{i}^{\mu}(p),\quad L_{S}^{\mu}(p)=\sum_{i\in S}L_ {i}^{\mu}(p), \tag{4}\] \[U_{S}^{\text{v}}(p) =\sum_{i\in S}U_{i}^{\text{v}}(p),\quad L_{S}^{\text{v}}(p)=\sum _{i\in s}L_{i}^{\text{v}}(p).\] Denote the _empirically safe set_ as \[\mathcal{S}_{p}:=\{S\in\mathcal{A}_{K}:U_{S}^{\text{v}}(p)<\bar{ \sigma}^{2}\} \tag{5}\] and the _possibly safe set_ as \[\bar{\mathcal{S}}_{p}:=\{S\in\mathcal{A}_{K}:L_{S}^{\text{v}}(p)< \bar{\sigma}^{2}\}. \tag{6}\] The solutions in \(\mathcal{S}_{t}\) and \(\bar{\mathcal{S}}_{t}\) are called _empirically safe_ and _possibly safe_ solutions respectively. **Dynamics.** In the _initialization stage_ (lines \(3\) to \(6\)), PASCombUCB greedily pulls the absolutely safe solutions. When each item has been pulled at least twice, this stage is terminated. 
After initialization, during phase \(p\), PASCombUCB **firstly** identifies a solution \(A_{p}=\arg\max_{A\in\bar{\mathcal{S}}_{p}}U_{A}^{\mu}(p-1)\) via an _optimization oracle_ (Line \(10\)). It **then** calls a subroutine GreedySplit to greedily partition the solution \(A_{p}\) into empirically safe sub-solutions (Line \(11\), see Figure 1 for illustration). **Subsequently**, these solutions are chosen and the stochastic rewards from the corresponding items are observed (Line \(13\)). **Lastly**, the empirical estimates, the confidence bounds, and the empirical sets are updated (Lines \(14\) and \(15\)). ``` 1:Input: A solution \(A_{p}\) and the upper confidence bound on the variance \(U^{\text{v}}(p-1)\) at phase \(p-1\). 2:Set \(n_{p}=1,s=1\) and \(A_{p,1}=\emptyset\). 3:Index the items in \(A_{p}\) by \(i_{1},\ldots,i_{|A_{p}|}\). 4:while\(s\leq|A_{p}|\)do 5:if\(U_{A_{p,n_{p}}}^{\text{v}}(p-1)+U_{i_{s}}^{\text{v}}(p-1)\leq\bar{\sigma}^{2}\)then 6: Set \(A_{p,n_{p}}\gets A_{p,n_{p}}\cup\{i_{s}\}\). 7:else 8:\(n_{p}\gets n_{p}+1\) and \(A_{p,n_{p}}=\{i_{s}\}\). 9:endif 10:\(s\gets s+1\). 11:endwhile 12:return\(\{A_{p,1},\ldots,A_{p,n_{p}}\}\). ``` **Algorithm 2** GreedySplit **Illustration.** Figures 2 and 3 illustrate the regret accumulated during phase \(p\) and over the whole \(T\) horizon respectively. As shown in Figure 2, the regret accumulated during phase \(p\) can be decomposed into two parts \[\sum_{r=1}^{n_{p}}(\mu^{\star}-\mu_{A_{p,r}})=\Delta_{A_{p}}+\mu^{\star}(n_{p}-1)\] where \(\Delta_{A_{p}}\) is _the phase-wise (instantaneous) regret due to suboptimality_ and \(\mu^{\star}(n_{p}-1)\) is _the regret due to safeness-checking_; the latter term results from the safeness constraint. At the beginning, since the upper confidence bounds of the variances of all solutions are large, each solution will be split into up to \(2Q\) sub-solutions, where \(Q:=\lceil\frac{K}{q}\rceil\), and hence the regret due to safeness checking can be large. As the algorithm progresses, we obtain more observations of items and get more confident about their variances (\(U_{t}^{\nu}(p)\) decreases). Hence, during some later phase, it suffices to split some solutions into fewer sub-solutions and the regret due to safeness-checking reduces. Furthermore, when most items are sampled sufficiently many times, the unsafe solutions are excluded from the possibly safe set \(\bar{\mathcal{S}}_{p}\), and the only contribution to the regret is via the suboptimality of the solution \(A_{p}\). **Remark 3.1**.: * _The confidence parameter_ \(\omega_{\mathrm{v}}^{\prime}\) _is solely a parameter of_ PASCombUC_; its choice does not rely on the confidence parameter_ \(\delta\) _and only affects_ \(L_{\mathrm{v}}^{\prime}(p)\)_, the lower confidence bound of the variance, which determines when we ascertain a solution to be unsafe. The choice of_ \(\omega_{\mathrm{v}}\) _depends on_ \(\delta\) _and it influences_ \(U_{\mathrm{v}}^{\prime}(p)\)_, the upper confidence bound of the variance, which guides_ PASCombUC_ _to split the solution to satisfy the probably anytime-safe constraint. The other parameters_ \(\omega_{\mathrm{v}}\) _and_ \(\omega_{\mathrm{v}}^{\prime}\) _determine the confidence radii of variances and do not necessarily have to be the same._ * _Indexing the items in Line_ \(3\) _of_ Greedy-Split _can be done arbitrarily, i.e., it does not require any specific order of the items. As such,_ Greedy-Split _is an efficient greedy algorithm. 
We note that finding the optimal order that leads to the minimum number of sub-solutions_ \(n_{p}\) _is a combinatorial problem which is generally hard to solve._ ## 4 Problem-dependent Bounds For simplicity, when a time horizon \(T\) and a confidence parameter \(\delta=\delta_{T}\) are given, we set the confidence parameters \(\omega_{\mu}=\omega_{\mathrm{v}}^{\prime}=\frac{1}{T^{2}}\) and \(\omega_{\mathrm{v}}=\frac{\delta_{T}}{T^{2}}\). We introduce various suboptimality gaps that contribute to the regret due to the suboptimality. Figure 1: A diagram of a split to a solution \(A\) containing \(5\) items. Figure 2: Solution \(A_{p}\) is split into \(n_{p}=3\) sub-solutions, the instantaneous regret at phase \(p\) can be divided into the instantaneous regret due to suboptimality and the instantaneous regret due to safeness checking. * for \(i\in E\setminus S^{\star}\), let the _minimum safe-suboptimal gap_ be \[\Delta_{i,\mathcal{S}\cap\mathcal{B},\min}:=\min_{S\ni i,S\in\mathcal{S}\cap \mathcal{B}}\Delta_{S};\] * for \(i\in E\), let the _minimum unsafe-suboptimal gap_ be \[\Delta_{i,\mathcal{S}^{c}\cap\mathcal{B},\min}:=\min_{S\ni i,\;S\in\mathcal{S} ^{c}\cap\mathcal{B}}\Delta_{S};\] and let the _tension parameter between the mean gap \(\Delta_{S}\) and variance gap \(\Delta_{S}^{\mathrm{v}}\)_ be \[c_{i}:=\max_{S\ni i,\;S\in\mathcal{S}^{c}\cap\mathcal{B}}\left(\frac{\Delta_{ S}}{\max\{\Delta_{S},\Delta_{S}^{\mathrm{v}}/3\}}\right)^{2}.\] We also define following safeness gaps that induce the conservative sampling strategy to guarantee the probably anytime-safe constraint. For \(i\in E\), and * for the risky set \(\mathcal{R}\), define _the minimum unsafeness gap_: \(\Delta_{i,\mathcal{R}}^{\mathrm{v}}:=\min_{S\ni i,S\in\mathcal{R}}\Delta_{S}^ {\mathrm{v}}\). * for the safe and suboptimal set \(\mathcal{S}\cap\mathcal{B}\), let \[\Psi_{i,\mathcal{S}\cap\mathcal{B}}:=\max_{S\ni i,\;S\in\mathcal{S}\cap \mathcal{B}}\min\left\{\frac{\ln T}{\Delta_{S}^{2}},\frac{9\ln(T/\delta_{T})}{ (\Delta_{S}^{\mathrm{v}})^{2}}\right\}\] which characterizes the order of the number of times this item \(i\) needs to be sampled in order to identify the suboptimality of all safe and suboptimal solutions \(A\ni i\) while satisfying the safeness constraint. We further define a variant of \(\Psi_{i,S\cap\mathcal{B}}\) as \[\Psi_{i,\mathcal{S}\cap\mathcal{B}}^{\prime}:=\max_{S\ni i,S\in\mathcal{S}\cap \mathcal{B}}\min\left\{\frac{\ln T}{\Delta_{S}^{2}},\frac{9\ln(1/\delta_{T})}{ (\Delta_{S}^{\mathrm{v}})^{2}}\right\}\] which will be used to characterize the lower bound. * for the unsafe and suboptimal set \(\mathcal{S}^{c}\cap\mathcal{B}\), let \[\Phi_{i,\mathcal{S}^{c}\cap\mathcal{B}}:=\max_{S\ni i,\;S\in\mathcal{S}^{c} \cap\mathcal{B}}\min\left\{\frac{\ln T}{\Delta_{S}^{2}},\frac{9\ln T}{(\Delta _{S}^{\mathrm{v}})^{2}}\right\}\] which characterizes the hardness of identifying the unsafeness of suboptimality of all unsafe and suboptimal solutions that contain item \(i\). Define \(\xi(\omega):=\frac{2+\varepsilon}{\varepsilon}\big{(}\frac{\omega}{\ln(1+ \varepsilon)}\big{)}^{1+\varepsilon}\), where \(\varepsilon\in(0,1)\) is fixed. Figure 3: An illustration of the instantaneous regret yielded by PASCOMBUCB. As the variances of the items are more determined, less regret due to safeness-checking is generated. 
### Problem-dependent Upper Bound **Theorem 4.1** (Problem-dependent upper bound).: _Let \(\Lambda=(E,\mathcal{A}_{K},\nu,\bar{\sigma}^{2})\) be an instance and let \(\{\delta_{T}\}_{T=1}^{\infty}\in o(1)\) be a sequence that satisfies \(\ln(1/\delta_{T})=o(T^{b})\) for all \(b>0\) (i.e., \(\{\delta_{T}\}\) is not exponentially decaying). Then, PASCombUCB is a \(\{\delta_{T}\}_{T=1}^{\infty}\)-variance-constrained consistent algorithm. More precisely, given a time budget \(T\), the probably anytime-safe constraint is satisfied and the regret of PASCombUCB \(\operatorname{Reg}(T)\) is upper bounded by_ \[\min\left\{T\mu^{\star},\operatorname{Reg}_{1}(T)+\operatorname{Reg}_{2}(T) \right\}+\operatorname{Reg}_{3}(T),\] _where_ \[\operatorname{Reg}_{1}(T) =O\bigg{(}\sum_{i\in E\setminus S^{\star}}\frac{K\ln T}{\Delta_ {i,S\cap\mathcal{B},\min}}+\sum_{i\in E}\frac{c_{i}K\ln T}{\Delta_{i,S^{c} \cap\mathcal{B},\min}}\bigg{)}\] \[\operatorname{Reg}_{2}(T) =2\mu^{\star}H\left(\Delta(\Lambda)\right),\quad\operatorname{ Reg}_{3}(T)=2\mu^{\star}(L+1)\] _where \(\Delta(\Lambda)=\{\Delta_{S^{\star}}^{\mathrm{v}}\}\cup\{\Delta_{i,\mathcal{R }}^{\mathrm{v}},\Psi_{i,S\cap\mathcal{B}},\Phi_{i,S^{c}\cap\mathcal{B}}\}_{i \in E}\) and \(H\left(\Delta(\Lambda)\right):=H(1,\Lambda)\) is defined in (26) in App. B.4._ **Remark 4.2**.: _If the gaps in \(\Delta(\Lambda)\) are sufficiently small and \(\delta_{T}=T^{-\lambda}\) for a fixed \(\lambda>0\),_ \[H\left(\Delta(\Lambda)\right)=O\bigg{(}\frac{(\lambda+1)K^{2}\ln T}{(\Delta_{ S^{\star}}^{\mathrm{v}})^{2}}+K\sum_{i\in E}\Big{(}\frac{\ln T}{(\Delta_{i, \mathcal{R}}^{\mathrm{v}})^{2}}+\max_{\stackrel{{ S\oplus i \Rightarrow\mathcal{B}}}{{\underset{S\in\mathcal{B}}{\underset{S\in \mathcal{B}}{\underset{S\in\mathcal{B}}{\underset{S\in\mathcal{B}}{\underset{S \in\mathcal{B}}{\underset{S\in\mathcal{B}}{\underset{S\cap\mathcal{B}}{ \underset{S}}{\underset{S\in\mathcal{B}}{\underset{S\cap\mathcal{B}}{ \underset{S}}{\underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S\cap \mathcal{B}}{\underset{S}}{\underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S \cap\mathcal{B}}{\underset{S}}{\underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S \cap\mathcal{B}}{\underset{S}}{\underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S \cap\mathcal{B}}{\underset{S}}{\underset{S}}{\underset{S\cap\mathcal{B}}{ \underset{S}}{\underset{S}}{\underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S \cap\mathcal{B}}{\underset{S}}{\underset{S}}{\underset{S\cap\mathcal{B}}{ \underset{S}}{\underset{S}}{\underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S}}{ \underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S \cap\mathcal{B}}{\underset{S}}{\underset{S}}{\underset{S\cap\mathcal{B}}{ \underset{S}}{\underset{S}}{\underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S}}{ \underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S \cap\mathcal{B}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S\cap\mathcal{B}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ 
\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ \underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{\underset{S}}{ **Corollary 4.4** (Tightness of problem-dependent bounds).: _Let \(\delta_{T}=T^{-\lambda}\) with a fixed \(\lambda>0\), the regret_ \[\mathrm{Reg}(T)\in\Omega\bigg{(}\sum_{i\in E}\frac{\ln T}{\Delta_{i,\mathcal{S 
}\cap\mathcal{B},\min}}+\frac{\mu^{\star}}{K^{2}}H\left(\Delta(\Lambda)\right) \bigg{)}\cap O\bigg{(}\sum_{i\in E}\frac{K\ln T}{\Delta_{i,\mathcal{S}\cap \mathcal{B},\min}}+\mu^{\star}H\left(\Delta(\Lambda)\right)\bigg{)}\bigg{)}\] _where \(H\left(\Delta(\Lambda)\right)\) is defined in Remark 4.2. The upper bound above is achieved by PASCombUCB._ Under different rates of decay of \(\{\delta_{T}\}_{T\in\mathbb{N}}\) (see App. D for the cases where \(\ln(1/\delta_{T})=\omega(\ln T)\) and \(o(\ln T)\)), the upper bound of the regret due to suboptimality \(\mathrm{Reg}_{1}(T)\) (the first term in the total regret) and the upper bound of the regret due to safeness-checking \(\mathrm{Reg}_{2}(T)\) (the latter term) match their corresponding lower bounds up to factors of \(K\) and \(K^{2}\) respectively; this gap is acceptable as \(K\) (e.g., number of ads displayed) is usually small relative to \(L\) (total number of ads). We consider general instances where all the items are independent and the gap in \(\mathrm{Reg}_{1}(T)\) can be closed when that the items are correlated, as in the lower bound for the unconstrained combinatorial bandits in Kveton et al. (2015a). This assumption also allows us to remove a factor of \(K\) from the gap of \(\mathrm{Reg}_{2}(T)\). One may naturally wonder whether we can tolerate a much more stringent probably anytime-safe constraint. The following theorem (with \(b=1\)) indicates no algorithm is \(\{\delta_{T}\}_{T\in\mathbb{N}}\)-variance-constrained consistent if \(\delta_{T}\) decays _exponentially fast_ in \(T\). **Theorem 4.5** (Impossibility result).: _Let \(\{\delta_{T}\}_{T=1}^{\infty}\in o(1)\) be a sequence that satisfies the following condition. There exists \(b>0\) such that \(\ln(1/\delta_{T})=\Omega(T^{b})\). For instance \(\Lambda\), the regret of any algorithm is lower bounded by \(\Omega(T^{b})\)._ ## 5 Problem-independent Bounds We can derive a problem-independent upper bound on the regret of PASCombUCB from the problem-dependent one in Theorem 4.1 with some delicate calculations. **Theorem 5.1** (Problem-independent Upper Bound).: _Let \(\{\delta_{T}\}_{T=1}^{\infty}\in o(1)\) be a sequence that satisfies \(\ln(1/\delta_{T})=o(T^{b})\) for all \(b>0\). If \(T>L\), for any instance \(\Lambda\) with variance gaps lower bounded by \(\Delta^{\mathrm{v}}\leq\min_{S\in\mathcal{A}_{K}}\Delta^{\mathrm{v}}_{S}\), the regret of PASCombUCB is upper bounded by_ \[O\bigg{(}\sqrt{KLT\ln T}+\frac{LK^{2}}{(\Delta^{\mathrm{v}})^{2}}\ln\Big{(} \frac{1}{\delta_{T}}\Big{)}\bigg{)}.\] **Theorem 5.2** (Problem-independent lower bound).: _Let the minimum variance gap be \(\Delta^{\mathrm{v}}:=\min_{S\in\mathcal{A}_{K}}\Delta^{\mathrm{v}}_{S}\). When \(K^{3}\geq L^{2}\), we have_ \[\mathrm{Reg}(T)=\Omega\bigg{(}\sqrt{KLT}+\min\Big{\{}\frac{L}{(\Delta^{ \mathrm{v}})^{2}}\ln\Big{(}\frac{1}{\delta_{T}}\Big{)},T\Big{\}}\bigg{)}.\] **Remark 5.3**.: _The assumption that the variance gaps of all solutions are lower bounded by \(\Delta^{\mathrm{v}}\) is needed to achieve a non-vacuous problem-independent bound, hence, somewhat unconventionally, it appears in our "problem-independent" bounds. Given any algorithm and time budget \(T\), the variance gap of \(S^{\star}\) can be arbitrarily small if \(\Delta^{\mathrm{v}}\) is not bounded away from zero, so the \(\min\) in Theorem 5.2 will be dominated by the linear term \(T\), and hence, no algorithm can attain sublinear regret._ The above results allow us to investigate the tightness of problem-independent bounds. 
**Corollary 5.4** (Tightness of problem-independent bounds).: _Let \(K^{3}\leq L^{2}\), and \(\{\delta_{T}\}_{T=1}^{\infty}\in o(1)\) satisfies \(\ln(1/\delta_{T})=o(T^{b})\) for all \(b>0\). We have_ \[\mathrm{Reg}(T)\in\Omega\bigg{(}\sqrt{KLT}+\frac{L}{(\Delta^{ \mathrm{v}})^{2}}\ \ln\Big{(}\frac{1}{\delta_{T}}\Big{)}\bigg{)}\cap O\bigg{(}\sqrt{KLT\ln T}+ \frac{LK^{2}}{(\Delta^{\mathrm{v}})^{2}}\ln\Big{(}\frac{1}{\delta_{T}}\Big{)} \bigg{)}.\] _The upper bound is achieved by PASCombUCB._ We observe that the gap between the upper and lower bounds is manifested on \(\sqrt{\ln T}\) and \(K^{2}\). The presence of \(\sqrt{\ln T}\) is not unexpected as it is also involved in the gap between the bounds on the regret for the (unconstrained) combinatorial bandits (Kveton et al., 2015a). Besides, the term \(K^{2}\) is induced by the design of PASCombUCB. During each phase, we select and sample solutions which are disjoint subsets of \(A_{p}\), and hence one item is sample at most once during one phase. However, we believe that it is possible to sample some items more than once during one phase, which will help reduce the regret but requires more delicate analysis. We view this as a promising venue for future work. ## 6 Proof Sketch of the Problem-Dependent Upper Bound (Theorem 4.1) Assume that PASCombUCBhas processed \(T^{\prime}\) phases with \(T\) time steps, we have \(\mathbb{P}[T^{\prime}\leq T]=1\) since each phase is composed by multiple time steps. Denote the expected regret of PASCombUCBwith \(p\) phases as \(\mathbb{E}[\mathrm{R}(p)]\). The expected regret of PASCombUCBafter \(T\) time steps is \[\mathbb{E}[\mathrm{R}(T^{\prime})]:=\mathbb{E}\bigg{[}\sum_{p=1}^{T^{\prime}} \sum_{r=1}^{n_{p}}(\mu^{\star}-\mu_{A_{p,r}})\bigg{]}.\] In the proof of Theorem 4.1, we first show a regret decomposition lemma (Lemma 6.1) that separates the total regret into _the regret due to suboptimality \(\mathbb{E}[\mathrm{R}_{1}(T^{\prime})]\)_, _the regret due to safeness-checking \(\mathbb{E}[\mathrm{R}_{2}(T^{\prime})]\)_ and the regret due to the failure of the "good" event and the initialization. Then we upper bound \(\mathrm{R}_{1}(T^{\prime})\) and \(\mathrm{R}_{2}(T^{\prime})\) separately. To elucidate the dependence of the regret on the confidence parameters \(\omega_{\mu},\omega_{\mathrm{v}}\) and \(\omega_{\mathrm{v}}^{\prime}\), we retain these notations henceforth. For \(p\in[T],i\in E\), define the "good" events that the sample mean and the sample variance are near their ground truths: \(\mathcal{E}_{i,T_{i}(p)}^{\mu}:=\{\hat{\mu}_{i}(p)-\alpha(T_{i}(p))\leq\mu_{i }\leq\hat{\mu}_{i}(p)+\alpha(T_{i}(p))\}\) and \(\mathcal{E}_{i,T_{i}(p)}^{\nu}(\rho):=\{\hat{\sigma}_{i}^{2}(p)-3\cdot\mathrm{ ill}(T_{i}(p),\rho)\leq\sigma_{i}^{2}\leq\hat{\sigma}_{i}^{2}(p)+3\cdot\mathrm{ ill}(T_{i}(p),\rho)\}\) and \[\mathcal{E}_{i,T_{i}(p)} :=\mathcal{E}_{i,T_{i}(p)}^{\mu}\cap\mathcal{E}_{i,T_{i}(p)}^{ \nu}(\omega_{\mathrm{v}})\cap\mathcal{E}_{i,T_{i}(p)}^{\nu}(\omega_{\mathrm{v }}^{\prime})\] \[\mathcal{E} :=\bigcap_{i\in E}\bigcap_{p\in[T^{\prime}]}\mathcal{E}_{i,T_{i} (p-1)}\] For \(r\in[Q-1]\), define \(\mathcal{U}_{p}(r):=\{U_{A_{p}}^{\nu}(p-1)>r\bar{\sigma}^{2}\}\). When event \(\mathcal{U}_{p}(r)\) occurs at phase \(p\), it indicates at least \(r+1\) sub-solutions are needed in order to sample the items in \(A_{p}\) and guarantee the safeness constraint. 
**Lemma 6.1**.: _Assume that PASCombUCB has processed \(T^{\prime}\) phases within \(T\) time steps; then the expected regret of PASCombUCB can be decomposed into three parts as follows_ \[\mathbb{E}[\mathrm{R}(T^{\prime})]\leq\mathbb{E}[\mathrm{R}_{1}(T^{\prime})|\mathcal{E}]+\mathbb{E}[\mathrm{R}_{2}(T^{\prime})|\mathcal{E}]+\mathrm{R}_{3}(T)\] _where_ \[\mathrm{R}_{1}(T^{\prime}) :=\sum_{p=1}^{T^{\prime}}\mathbbm{1}\left\{A_{p}\in\mathcal{B}\right\}\Delta_{A_{p}}\] \[\mathrm{R}_{2}(T^{\prime}) :=\mu^{\star}\sum_{p=1}^{T^{\prime}}\left[2\sum_{r=1}^{Q-1}\mathbbm{1}\left\{\mathcal{U}_{p}(r)\right\}\right]\] \[\mathrm{R}_{3}(T) :=2\mu^{\star}L\big{(}1+T\big{(}\xi(\omega_{\mu})+2\xi(\omega_{v})+2\xi(\omega_{v}^{\prime})\big{)}\big{)}\] In Lemma 6.1, the first term \(\mathrm{R}_{1}(T^{\prime})\) is the _(high-probability) regret due to suboptimality_, in the sense that only the mean gaps of the suboptimal solutions contribute to \(\mathrm{R}_{1}(T^{\prime})\). The second term \(\mathrm{R}_{2}(T^{\prime})\) is called _the (high-probability) regret due to safeness-checking_, since it depends on the variance gaps and goes to \(0\) if \(\bar{\sigma}^{2}\) is sufficiently large. The last term \(\mathrm{R}_{3}(T)\) contains the regret from the initialization stage and the regret resulting from the failure of the "good" event \(\mathcal{E}\). The regret due to suboptimality can be bounded in terms of the minimum safe/unsafe-suboptimal gaps as follows. **Lemma 6.2**.: _Conditioned on event \(\mathcal{E}\), the regret due to suboptimality \(\mathrm{R}_{1}(T^{\prime})\) can be bounded by_ \[O\bigg{(}\sum_{i\in E\setminus\mathcal{S}^{\star}}\frac{K}{\Delta_{i,\mathcal{S}\cap\mathcal{B},\min}}\ln\frac{1}{\omega_{\mu}}+\sum_{i\in E}\frac{c_{i}K}{\Delta_{i,\mathcal{S}^{\star}\cap\mathcal{B},\min}}\ln\frac{1}{\omega_{\mathrm{v}}^{\prime}}\bigg{)}.\] The regret due to safeness-checking involves more critical parameters of the instance, and we encode them in \(T^{\prime}_{r^{\prime}}\) and \(H(r^{\prime},\Lambda)\) for \(r^{\prime}\in[Q]\) (see Figure 5), which are defined formally in (25) and (26) respectively. **Lemma 6.3**.: _On the event \(\mathcal{E}\), if \(T^{\prime}\in[T^{\prime}_{r^{\prime}},T^{\prime}_{r^{\prime}-1})\) then_ \[\mathrm{R}_{2}(T^{\prime})\leq 2\mu^{\star}\big{[}T^{\prime}(r^{\prime}-1)+H(r^{\prime},\Lambda)\big{]}\leq 2\mu^{\star}H(1,\Lambda)\] To obtain the upper bound for \(\mathrm{R}_{2}(T^{\prime})\), we assume the algorithm will sample those solutions with large \(U_{A}^{\nu}(p)\) in \(\bar{\mathcal{S}}_{p}\), which will be split into the largest number of sub-solutions (see Figure 4). Furthermore, for \(r^{\prime}=Q-1,Q-2,\ldots,1\), we derive an upper bound for the number of phases in which event \(\mathcal{U}_{p}(r^{\prime})\cap(\mathcal{U}_{p}(r^{\prime}+1))^{c}\) occurs (at most \(2r^{\prime}+1\) sub-solutions are being pulled in these phases). To be more specific (see Figure 5), for \(r^{\prime}=Q-1\), we compute the maximum number of phases \(T^{\prime}_{Q-1}\) in which at most \(2Q-1\) sub-solutions are sampled. Then for \(r^{\prime}=Q-2\), we compute the maximum number of phases \(T^{\prime}_{Q-2}-T^{\prime}_{Q-1}\) in which at most \(2Q-3\) sub-solutions are sampled. We continue in this way until the time budget runs out. As \(T^{\prime}\) increases, \(r^{\prime}\) decreases and \(H(r^{\prime},\Lambda)\) increases. When \(r^{\prime}=1\), i.e.
\(T^{\prime}\geq T^{\prime}_{1}\), \(H(1,\Lambda)\) is an upper bound for the total number of sub-solutions being pulled (up to a constant) for the safeness-checking. It can also be regarded as the price for the probably anytime-safe constraint and the upper bound for the regret due to safeness-checking remains an instance-dependent constant \(2\mu^{*}H(1,\Lambda)\) when \(T^{\prime}\geq T^{\prime}_{1}\). More detailed discussions are postponed to Step 3 in the proof in App. B.4.
2309.10906
An early giant planet instability recorded in asteroidal meteorites
Giant planet migration appears widespread among planetary systems in our Galaxy. However, the timescales of this process, which reflect the underlying dynamical mechanisms, are not well constrained, even within the solar system. Since planetary migration scatters smaller bodies onto intersecting orbits, it would have resulted in an epoch of enhanced bombardment in the solar system's asteroid belt. To accurately and precisely quantify the timescales of migration, we interrogate thermochronologic data from asteroidal meteorites, which record the thermal imprint of energetic collisions. We present a database of 40K-40Ar system ages from chondrite meteorites and evaluate it with an asteroid-scale thermal code coupled to a Markov chain Monte Carlo inversion. Simulations require bombardment in order to reproduce the observed age distribution and identify a bombardment event beginning 11.3 +9.5/-6.6 million years after the Sun formed (50% credible interval). Our results associate a giant planet instability in our solar system with the dissipation of the gaseous protoplanetary disk.
Graham Harper Edwards, C. Brenhin Keller, Elisabeth R. Newton, Cameron W. Stewart
2023-09-19T20:00:12Z
http://arxiv.org/abs/2309.10906v2
# An early giant planet instability recorded in asteroidal meteorites ###### Abstract Giant planet migration appears widespread among planetary systems in our Galaxy. However, the timescales of this process, which reflect the underlying dynamical mechanisms, are not well constrained, even within the solar system. Since planetary migration scatters smaller bodies onto intersecting orbits, it would have resulted in an epoch of enhanced bombardment in the solar system's asteroid belt. To accurately and precisely quantify the timescales of migration, we interrogate thermochronologic data from asteroidal meteorites, which record the thermal imprint of energetic collisions. We present a database of \({}^{40}\)K-\({}^{40}\)Ar system ages from chondrite meteorites and evaluate it with an asteroid-scale thermal code coupled to a Markov chain Monte Carlo inversion. Simulations require bombardment in order to reproduce the observed age distribution and identify a bombardment event beginning \(\sim\)11 million years after the Sun formed. Our results associate a giant planet instability in our solar system with the dissipation of the gaseous protoplanetary disk. Earth Sciences, Dartmouth College, 19 Fayerweather Hill Road, Hanover, 03755, NH, U.S.A. **Corresponding author's email:** [email protected] ## Introduction Planetary migrations seem to be commonplace in our Galaxy. The proximity of "hot Jupiters" to their host stars results from inward migration from more distant planetary birth radii [1]. Planets in the TRAPPIST-1 system also likely migrated inward from larger radii where they inherited their volatile inventories [2]. Distributions of both exoplanet eccentricity and orbital spacing in multi-planet systems are most readily explained by histories of dynamical instability and orbital reorganization [e.g. 3, 4]. A breadth of evidence indicates that the solar system's giant planets underwent at least one episode of migration. The admixture of material from the inner and outer solar system among main belt asteroids [5] and asteroidal meteorites [6] requires dynamical mixing of protoplanetary reservoirs. The orbital architecture of giant planets [7, 8] and the Kuiper Belt [9] as well as the low masses of Mars and the asteroid belt [10, 11, 12] could not have formed in situ and require a history of dynamical excitation. Consequently, giant planet migration (GPM) established the long-term (\(>\)4 billion-year) physical and chemical structure of the solar system [5, 6, 7, 8, 9, 10, 11, 12] and perhaps promoted terrestrial habitability by supplying volatile-rich material from the outer solar system to the early Earth [13, 14, 15]. As a corollary, we expect migrations to similarly imprint these characteristics in exoplanetary systems [15]. Dynamical mechanisms that drive GPM generally fall into one of two categories (Fig. 1): dynamical instability triggered by interplanetary gravitational interactions [7, 8, 9, 11, 16] or inward migration triggered by tidal interactions with a surrounding gaseous disk, also known as "Type II" migration [17, 18]. Since these two processes respectively require the absence or presence of a gaseous protoplanetary disk, and gaseous disks are transient features [19], constraining the timescale of migration may help resolve its origin (Fig. 1). We examine the solar system as a natural laboratory to test the timescales of GPM and identify the mechanism that best describes the solar system's history and chronology.
A dynamical instability was first proposed as an explanation for the hypothesized Late Heavy Bombardment (LHB, described below) [20] and the orbital architecture of the outer solar system [7, 8, 9]. In these original giant planet instability (hereafter "instability") models, interactions between the giant planets and an outer planetesimal disk caused secular migration of Jupiter and Saturn into a 1:2 mean motion orbital resonance. This excited the planets' orbits and caused a solar-system scale instability [8]: a period of chaotic and intersecting planetary orbits. More recent simulations have revealed a variety of plausible instability stimuli [16, 21]. A gas-driven migration model was originally proposed to explain the low mass of Mars and the admixture of bodies from the inner and outer solar system in the asteroid belt [10]. In this model, the proto-Jovian and proto-Saturnian cores formed embedded in a gaseous protoplanetary disk. As they reached sufficiently large masses, they carved out gaps in the disk and migrated inwards due to associated torques, following the pattern of Type II planet migration [17, 18]. Jupiter began migrating before Saturn, until the latter began a faster migration. When the cores reached a 2:3 mean motion resonance, their migration reversed outward--a so-called "Grand Tack"--until the disk eventually dissipated and froze their orbital positions. In all GPM scenarios, migrating planets scatter smaller bodies onto dynamically excited and intersecting orbits, resulting in a surge of collisions [10, 11, 12, 20]. Conversely, acute collisional episodes require dynamical excitation, and we are not aware of a hypothesis other than GPM that so reliably accomplishes widespread collisions. Thus, bombardment events seem to be a reliable proxy for GPM. We posit that the epoch of enhanced collisions resulting from GPM is a diagnostic event that may be recorded in planetary records [10, 20], and its timescales, if precisely constrained, could be used to identify the causal dynamical processes (Fig. 1). We reconstruct the timescales of reheating during GPM-triggered bombardment by evaluating the thermochronologic records of asteroidal meteorites. Thermochronologic mineral systems record the timescales of thermal processes through the temperature-dependent retention of radiogenic isotopes in mineral lattices. As asteroids cool to sufficiently low temperatures, these minerals retain radiogenic daughter isotopes and record a "cooling age." The energy released by collisional impacts is a well-established source of heat in (proto)planetary systems and is recorded in their thermochronologic records [e.g. 22, 23]. This study integrates early solar system chronologies from a variety of sources, including cosmochronologic data and physics-based simulations, which rely on fundamentally different timescales. Several cosmochronologies (e.g. \({}^{40}\)K-\({}^{40}\)Ar ages) are derived from radioisotopic measurements that record time relative to the present. Following the conventions of isotope geo- and cosmochemistry, we report these ages in _annum_ (a), meaning "years before the present". However, physics-based simulations record time after reference events. To connect cosmochemical records with astrophysical processes, a solar time-zero is canonically anchored to Ca-Al-rich inclusions (CAIs), the solar system's oldest macroscopic solids, which condensed from a solar composition gas [e.g. 24] heated by the infant Sun as it entered its pre-main-sequence phase [25].
We report astronomically referenced time in terms of years after CAIs (y\({}_{\rm ss}\)), such that 0 My\({}_{\rm ss}\) \(\equiv\) 4567.3 Ma [26]. Prior work on the solar system's GPM history has relied on a combination of thermochronologic data and the modern solar system architecture. The LHB was first hypothesized as a period of enhanced lunar bombardment at \(\sim\)4 Ga, based on the absence of \(>\)4 Ga thermochronologic cooling ages in Apollo mission return samples and an apparent paucity of \(>\)4 Ga craters on the lunar surface [23, 27]. Subsequent studies revealed that neither of these lunar records actually required a \(\sim\)4 Ga lunar bombardment [28, 29] and could be explained instead by a monotonic decline in bombardment flux since the epoch of planetary assembly. More recently, dynamical constraints on instability triggers [16], the survival of binary asteroids [30], and meteorite thermochronology [31] have constrained the timescales of GPM to the solar system's first 100 My. A variety of models for "early" (\(<\)100 My\({}_{\rm ss}\)) GPM fall within different timeframes (Fig. 1). The Grand Tack model requires the presence of a gaseous protoplanetary disk [10]. Observations of solar-mass stars limit gas-disks to the first 10 My of stellar lifetimes [19, 32, 33], with a typical lifetime of \(<\)5 My\({}_{\rm ss}\) derived from observations [34], dynamical models [35], and cosmochemical constraints [36]. In contrast, instability models require the dissipation of the gaseous disk during or prior to the GPM event. In the earliest scenario, instability occurs during gas dissipation, due to asymmetric torques on giant planets from the inner edge of the outward-migrating gas disk [21]. After gas dissipation, simulations seeded with plausible planet formation/evolution histories [37] reveal that an early giant planet instability can occur by canonical planet-planetesimal disk interactions or by a self-triggered instability resulting from unstable orbital architectures established while embedded in the gas-disk [16]. The self-triggered instabilities occur rapidly, within 10 My after gas dissipation (\(\sim\)15 My\({}_{\rm ss}\)), whereas self-stable systems interacting with a planetesimal disk typically experienced instabilities \(>\)30 My after gas dissipation. In summary, plausible GPM stimuli occur at \(\lesssim\)5 My\({}_{\rm ss}\) (gas-disk stimulus or gas dissipation), 5-15 My\({}_{\rm ss}\) (unstable orbital architectures), or \(>\)15 My\({}_{\rm ss}\) (planetesimal-disk interactions) (Fig. 1). This study aims to resolve the timescale of GPM and concomitant inner solar system bombardment through its imprint on the thermochronologic record of asteroidal meteorites. We focus on the \({}^{40}\)K-\({}^{40}\)Ar system, which is susceptible to resetting at relatively low temperatures [e.g. 22], and is therefore well-suited to capturing the relatively subtle thermal signals of impact-heating [38, 39]. Moreover, we follow the logic of [29] that protracted, bombardment-warmed planetary histories systematically young \({}^{40}\)K-\({}^{40}\)Ar system ages. Although prior evaluation of meteorite thermochronology has constrained dynamical instability to within the first \(\sim\)100 My\({}_{\rm ss}\) [31], establishing more precise constraints has been challenging because these timescales overlap with those of radiogenic heating and subsequent cooling of the meteorite parent planetesimals.
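For reference, converting between the two time conventions used throughout this text is simple arithmetic; the following is a minimal sketch (the helper names are ours and are not part of ImpactChron.jl), assuming 0 My\({}_{\rm ss}\) corresponds to 4567.3 Ma:

```julia
# Illustrative only (helper names are ours, not part of ImpactChron.jl):
# convert between ages in Ma (millions of years before present) and times in
# My_ss (millions of years after CAIs), assuming 0 My_ss = 4567.3 Ma.
const CAI_AGE_MA = 4567.3

ma_to_myss(age_Ma) = CAI_AGE_MA - age_Ma    # e.g. ma_to_myss(4556.0) ≈ 11.3 My_ss
myss_to_ma(t_Myss) = CAI_AGE_MA - t_Myss    # e.g. myss_to_ma(15.0) ≈ 4552.3 Ma
```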
To deconvolve these endogenic thermal histories of asteroidal meteorite parent bodies from exogenic bombardment reheating histories, we use a Bayesian statistical approach. We compiled a database of cooling ages from the thermochronologic \({}^{40}\)K-\({}^{40}\)Ar system for chondrite meteorites with provenance in the inner solar system [6]: the ordinary chondrites (OC) comprised of the H, L, and LL types; the enstatite chondrites (EC) comprised of the EH and EL types; and the Rumuruti-type chondrites (RC). These groups come from undifferentiated planetesimals with well-characterized thermal histories largely explained by simple conductive cooling histories [40], which are relatively straightforward to model (Extended Data Fig. 2). The various parent bodies corresponding to these meteorites likely experienced similar thermal histories, and our Bayesian statistical approach allows us to rigorously account for uncertainties stemming from modest differences among parent body histories (Methods). While prior work has explored the chronology of highly shocked and impact-melted chondrites [41], we consider all chondritic \({}^{40}\)K-\({}^{40}\)Ar system ages, since shock effects and shock heating are heterogeneously distributed [39, 42] and seemingly unshocked meteorites may still record heating from impacts. This approach allows us to use a database that spans solar system history, with a high density of ages throughout the timescales of potential GPM (Figs. 1, 2). To resolve the respective contributions of endogenic and exogenic heating on the chondritic \({}^{40}\)K-\({}^{40}\)Ar record, we explore the parameter space describing this two-component thermal history with a Markov chain Monte Carlo (MCMC) algorithm (Methods). This model is constrained by two sets of priors: our database of \({}^{40}\)K-\({}^{40}\)Ar system ages (Fig. 2) and published constraints on the thermochronologic model parameters (Table 1). Through its exploration of the parameter space, the algorithm produces Markov chains whose stationary distributions yield Bayesian posterior estimates of each parameter in the simulation. To constrain the timescales of GPM and dynamical excitation, the simulated thermochronologic histories include exponentially decaying bombardment fluxes, which may mimic either collisional pulses resulting from GPM or a more secular decline in collisions as the solar system ages (Extended Data Fig. 2). In the following sections, we report and discuss the results of these simulations and their implications for the GPM history of the solar system. ## Results ### Database of \({}^{40}\)K-\({}^{40}\)Ar system ages Figure 2 shows the distribution of \({}^{40}\)K-\({}^{40}\)Ar system ages in our database (n=203), which includes measurements by the \({}^{40}\)Ar-\({}^{39}\)Ar and K-Ar techniques (Methods). \({}^{40}\)Ar-\({}^{39}\)Ar ages follow a bimodal distribution. The maximum of its probability density is slightly younger than the solar age and declines monotonically to minimal density by \(\sim\)3 Ga, followed by a secondary peak of ages at \(\lesssim\)1 Ga. The distinct bimodality of \({}^{40}\)Ar-\({}^{39}\)Ar ages results from the technique's higher analytical resolution and capability for identifying and excluding disturbed K-bearing domains from age calculations [e.g. 43] (see Methods for a more comprehensive explanation of the two methods).
In contrast, the peak K-Ar age density is \(>\)300 Ma younger than the \({}^{40}\)Ar-\({}^{39}\)Ar peak and has a broad noisy tail that extends to the present-day, which reflects the propensity of K-Ar ages for partial resetting by low-temperature Ar-loss (Methods). Even when \({}^{40}\)Ar-\({}^{39}\)Ar ages are resampled at the lower precision of K-Ar ages (on average \(\sigma=6\%\)), this produces a broader and shallower density profile, but its maximum is still older and more pronounced than the maximum of the K-Ar age distribution (Fig. 2). Since K-Ar ages are systematically younger due to partial resetting, we only include \({}^{40}\)Ar-\({}^{39}\)Ar ages in our analyses and interpretations. The recent peak in \({}^{40}\)Ar-\({}^{39}\)Ar cooling ages at 500 Ma is comprised only of the OC groups, represented predominantly by the L chondrites (Extended Data Fig. 1). This is consistent with the \(\sim\)460 Ma disruption of an L chondrite parent body inferred by prior studies [22, 44]. Since the \({}^{40}\)K-\({}^{40}\)Ar system is resilient to resetting during atmospheric entry [45, 46], we are confident that these \(<\)2 Ga cooling ages reflect collisional heating events rather than modern resetting. Thus, we interpret the \(\sim\)500 Ma secondary peak of \({}^{40}\)Ar-\({}^{39}\)Ar cooling ages as a relatively recent collisional event--or events--that comminuted asteroids and left meteoroid rubble piles on near-resonant orbits, which were later perturbed onto Earth crossing trajectories [47]. The apparently coordinated \(\sim\)500 Ma reheating of OCs implies a relationship in their excavation and delivery to Earth. We suggest that this coordination may reflect either an orbital relationship among discrete H/L/LL parent bodies or a shared provenance of these stones from an L-dominant OC rubble pile parent body. Since these young impact-heating ages occur on timescales substantially post-dating any plausible GPM processes (Fig. 1), we exclude all \(<\)2 Ga \({}^{40}\)Ar-\({}^{39}\)Ar ages from our analyses. Table 2 reports population statistics of the \(>\)2 Ga \({}^{40}\)Ar-\({}^{39}\)Ar ages that we use as priors in our Bayesian inversion calculations. We interpret the ancient peak and decline in \({}^{40}\)Ar-\({}^{39}\)Ar cooling ages to reflect endogenic thermal histories overlain by exogenic reheating. We associate the exogenic reheating with impacts related to secular collisional history and/or early bombardment(s) [31]. An LHB predicts an enhancement in bombardment \(\leq\)4 Ga, which would appear as a distinct peak in cooling ages at its onset, but the predominantly monotonic decline from 4.5-3.0 Ga contradicts such a history. Minor local maxima in the distribution around 4 Ga may reflect a subtle LHB signal (Fig. 2), which we rigorously test (and reject) with the Bayesian inversion we apply herein. ### Thermochronologic evidence for a bombardment history Thermochronologic simulations of an unperturbed, radiogenically heated body produce a monotonic decline of cooling ages following an initial peak (Extended Data Fig. 2). To test whether or not these features of the \(>\)2 Ga \({}^{40}\)Ar-\({}^{39}\)Ar age distribution require reheating by impacts, we run MCMC inversions with and without any bombardment histories. We concurrently use two sets of priors: the distribution of measured \(>\)2 Ga \({}^{40}\)Ar-\({}^{39}\)Ar ages and constraints on the thermochronologic model parameters (Table 1). Simulations that are inconsistent with the prior information (i.e.
too many/few bombardments) yield posteriors in poor agreement (hereafter "tension") with those priors, whereas scenarios that are consistent with prior information converge on posterior distributions that are largely concordant with priors. Figure 3 compares the prior distribution (measured) with the posterior (simulated) distributions of \({}^{40}\)Ar-\({}^{39}\)Ar ages from simulations with 0-3 bombardment episodes. Simulations without impact reheating (Fig. 3a) yield posterior distributions of \({}^{40}\)Ar-\({}^{39}\)Ar ages that fail to reproduce the brief early peak of ages, the shape of the monotonic decline, and the near-nil density of ages between 3.5-2 Ga. The posterior distributions of several model parameters are in stark tension with their priors (Extended Data Fig. 3, Extended Data Table 1). In contrast, each of the simulated asteroid thermal histories that incorporate impact reheating (Fig. 3b-d) yields posterior distributions that are concordant with the prior distributions of \({}^{40}\)Ar-\({}^{39}\)Ar ages and model parameters. These observations are quantitatively supported by the corresponding log-likelihoods (\(\ell\)), such that a no-impact history corresponds to a mean \(\ell=-1016\pm 2\) (1\(\sigma\)), whereas bombardment histories yielded similar \(\ell>-1000\). We conclude that the meteorite record requires one or more \(>\)2 Ga bombardment events to explain the chondritic \({}^{40}\)Ar-\({}^{39}\)Ar age distribution. ### Impact flux parameters and their characteristics We simulate episodes of enhanced collisions and bombardment in the asteroid belt with exponentially decaying fluxes characterized by an onset date (\(t_{o}\)), initial impactor flux (\(F_{o}\)), and \(e\)-folding timescale or mean-life (\(\tau\)) (Extended Data Fig. 2). We simulate impact reheating nonphysically (Methods), so the values of \(F_{o}\) are proportional to an impact flux rate and reflect the degree of thermochronologic resetting but ought not be interpreted as quantitative estimates of asteroid belt impact flux. Our model framework accommodates up to three bombardment events, which we denote with \(\alpha\), \(\beta\), and \(\gamma\). To limit multimodality in the posterior distributions of multi-bombardment simulations, we require the onset dates of these bombardments to occur sequentially (\(t_{o}\alpha<t_{o}\beta<t_{o}\gamma\)). Additionally, in simulations with \(>1\) bombardment we assign the first bombardment (\(\alpha\)) as a primordial bombardment (\(t_{o}=0\) My\({}_{\rm ss}\)) to reduce dimensionality. We envision this primordial flux as the long-term (Gy-scale) background rate of inner solar system collisions [as in 48], distinct from punctuated and transient GPM-triggered bombardment. For this reason, we require that the \(\tau\) of the primordial flux exceeds those of the post-accretion bombardments meant to simulate the dynamical consequences of GPM. In all simulations, we find that \(F_{o}\) and \(\tau\) are closely related. Within any given bombardment event, \(F_{o}\) decreases with increasing \(\tau\) (Fig. 4, Extended Data Figs. 6-8). This inverse scaling of \(F_{o}\) and \(\tau\) indicates that a longer bombardment duration can compensate for a relatively low initial flux while, conversely, rapid recovery can compensate for a high initial flux.
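One way to read this trade-off (a schematic observation on the flux parameterization, not a separate model output) is through the time-integrated flux of a single event, \[\int_{t_{o}}^{\infty}F\,dt=F_{o}\,\tau,\] so posterior draws that pair a larger \(\tau\) with a proportionally smaller \(F_{o}\) imply a similar total number of simulated impacts and hence a similar overall degree of thermochronologic resetting.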
Each bombardment event falls into one of two categories with respect to \(F_{o}\) and \(\tau\): intense and brief bombardments (hereafter intense/brief) are characterized by \(F_{o}>100\) My\({}^{-1}\) and \(\tau\ll 100\) My (e.g. Fig. 4a), whereas mild and protracted bombardments (hereafter mild/protracted) are characterized by \(F_{o}\ll 100\) My\({}^{-1}\) and \(\tau>100\) My (e.g. Fig. 4b). Additionally, \(t_{o}\) scales inversely with \(F_{o}\), such that more intense bombardment events occur earlier in solar system history. Notably, parameters from separate bombardment events never appear correlated, indicating that they are independent with respect to each other. These patterns are consistent among all simulations that include bombardment events (Fig. 4, Extended Data Figs. 6-8). ## Number, timescale, and intensity of bombardment event(s): Evidence for two impact fluxes A single-bombardment scenario converges on posterior impact flux parameter distributions that reflect a mild/protracted bombardment (Extended Data Fig. 6), such as that expected for the background rate of inner solar system collisions. The persistence of the mild/protracted impact flux over the intense/brief flux emphasizes the critical importance of secular collisions to explain the thermochronologic histories of chondrites as well as the inner solar system impact record [48]. However, numerous lines of independent evidence indicate that there was an episode of GPM in solar system history that would have driven extensive small-body migration through the region of the asteroid belt [5, 8, 9, 11, 12, 20, 49] and thereby precipitated a punctuated episode of asteroid collisions. Therefore, a single-bombardment scenario is insufficient to meet the requisite dynamical history of the solar system, and at least one more impact flux is required. In support of this conclusion, the posterior distributions of several model parameters are in greater tension with their priors for single-bombardment simulations than those with 2-3 bombardment histories (Extended Data Figs. 4, 5). Figure 4 reports the posterior distributions of bombardment parameters for simulations characterized by two bombardments: a primordial bombardment anchored to the solar age (\(0\) My\({}_{\rm ss}\)) and a post-accretion bombardment with an onset date (\(t_{o}\)) that varies along with all other parameters in Table 1. The post-accretion bombardment (Fig. 4a) is intense/brief with a median onset date of \(11.3^{+44.4}_{-11.0}\) My\({}_{\rm ss}\) (95% credible interval, hereafter CI) and mean of \(15\pm 14\) My\({}_{\rm ss}\) (\(1\sigma\)). We favor the median as an estimate of central tendency, since the distribution is skewed. In contrast, the primordial flux (Fig. 4b) is mild/protracted (summary statistics reported in Extended Data Table 2). These results are consistent with a solar system history characterized by a slowly decaying background rate of infrequent collisions punctuated by an early (\(<100\) My\({}_{\rm ss}\)) episode of dynamical excitation and intense bombardment [31, 48]. Moreover, this intense and transient collisional episode satisfies theoretical predictions of GPM-associated bombardment in the inner solar system [10, 12, 20]. We also explore the effect of incorporating a third impact flux \(\gamma\) (also post-accretion) and find that three bombardment events reproduce the \({}^{40}\)Ar-\({}^{39}\)Ar age distribution as well as two (Fig. 3).
In this scenario, the \(\alpha\) (primordial) and \(\beta\) (post-accretion) impact fluxes are nearly identical to the respective fluxes of the two-bombardment simulation shown in Fig. 4 (Extended Data Fig. 8, Extended Data Table 2). The posterior of the \(\gamma\) flux is bimodally distributed with respect to \(t_{o}\) and \(\tau\). One mode mimics the \(\beta\) bombardment with a local maximum at \(t_{o}\sim 10\) My\({}_{\rm ss}\) and \(\tau\sim 10\) My. The other mode is extremely late in the simulation time-domain with \(t_{o}>1000\) My\({}_{\rm ss}\)--perhaps capturing one of the minor maxima between 3.5 and 2 Ga--and has an extremely brief lifetime of \(\tau\sim 0.01\) My (i.e. flux becomes negligible within a single 1 My timestep). Thus, the third flux either mimics the \(\beta\) event or is suppressed by an extremely short \(\tau\). Since a third flux neither adds new information nor has any discernible effect on the simulated age distribution (Fig. 4), we conclude that it is unnecessary and do not consider it further. In our multi-bombardment simulations, the \(\alpha\) bombardment \(t_{o}\) is always primordial (\(t_{o}=0\) My\({}_{\rm ss}\)). If we instead simulate two post-accretion impact fluxes (\(\beta\), \(\gamma\)) with no primordial bombardment requirement (Extended Data Fig. 7), we get a similar result to the one-primordial/one-post-accretion two-bombardment scenario in Fig. 4. The posterior distributions of \(\beta\)-flux parameters are indistinguishable between these scenarios, and the distributions of \(F_{o}\) and \(\tau\) for the \(\alpha\)-fluxes are very similar (Fig. 4, Extended Data Fig. 7). Curiously, in scenarios with only post-accretion bombardments, the \(t_{o}\) of the mild/protracted fluxes have median values between 100-140 My\({}_{\rm ss}\), rather than a more "primordial" value near 0 My\({}_{\rm ss}\). In the case of the 2-bombardment distribution, the late onset date partly reflects the model requirement that \(t_{o}\beta<t_{o}\gamma\). Since there is no such restriction in the case of the 1-bombardment distribution, the inversion likely favors the later onset because it produces the long, shallow tail of cooling ages to 3.5 Ga without also placing a larger, long-lasting impact flux overlapping with the timeframe of maximum age density at \(\sim\)4500 Ma (Fig. 2). Nonetheless, whether the \(\alpha\) bombardment is primordial or post-accretion does not discernibly affect the characteristics of intense bombardment or the concordance between the observed and simulated \({}^{40}\)Ar-\({}^{39}\)Ar age distributions (\(\ell=-990\pm 3\) for two post-accretion bombardments). Thus, our assumption of a primordial \(\alpha\) bombardment is valid for the purposes of this study, and we favor this approach since it is more consistent with the mild/protracted impact flux representing a secular process. ## Comparison to non-bombardment parameters For all bombardment scenarios, posterior distributions of environmental, cosmochemical, asteroidal, and material parameters are overall concordant with their prior constraints (Extended Data Fig. 5 and Extended Data Table 2). Our models' ability to satisfy both these independent constraints as well as the chondritic \({}^{40}\)Ar-\({}^{39}\)Ar record supports their overall veracity. We briefly consider three minor discrepancies between the priors and posteriors. 
First, accretion time (\(t_{a}\)) tends toward slightly later values, likely reflecting a bias towards the later accretion timescales for the parent bodies of OCs [50], which comprise \(>\)70 % of the \({}^{40}\)Ar-\({}^{39}\)Ar database. Second, the disk midplane temperature (\(T_{m}\)) tends toward lower values than the observation-constrained prior [51] but remains consistently within chondritic constraints of \(<\)503 K [52]. Since the relatively warm gaseous disk dissipated early on in the asteroids' thermal histories, the tendency toward lower temperature may reflect the unimodal model compensating for this bimodal temperature history, though insulation from surface regolith layers likely reduced the effect of the temperature drop. Third, Ar closure temperature (\(T_{c}\)) tends toward higher values, suggesting that the \({}^{40}\)K-\({}^{40}\)Ar system may have closed at slightly higher temperatures in chondrites than some previously measured Arrhenius relationships imply [43]. ## Discussion Since a two-bombardment history is necessary and sufficient to explain the chondrite thermochronologic record in the context of post-accretion dynamical instability (Results), we select the scenario shown in Figs. 3c and 4 (two bombardments: one primordial, one post-accretion) as our preferred model to interpret the timescales of giant planet migration (GPM). Since the mild/protracted primordial impact flux reflects a secular rate of collisions over solar system history [48], we conclude that it is not related to GPM. In contrast, the intense/brief impact flux is consistent with a violent yet transient bombardment event. Though our parameterization uses arbitrary units, the median \(F_{o}\) of this bombardment heats \(\sim 50\%\) of the simulated asteroid volume at the onset of bombardment. Although the upper constraints on \(F_{o}\) are poorly constrained by our model and may not be representative of the body's deeper interior (see extended discussion in Methods), the large values underscore the severity of this event, especially for shallower portions of early chondritic bodies. Additionally, this bombardment dissipates on a timescale similar to the most intense and short-lived component of impact flux models for the inner solar system [48], which are comparable to the timescales predicted for dynamical cooling after GPM [10, 20]. This concordance motivates our first of two key conclusions: the thermochronologic record of the asteroid belt records a single GPM event. The second key conclusion regards its timing. Figure 4c compares the posterior distribution of the GPM-induced bombardment onset to the timescales of potential stimuli, as described above and in Fig. 1. The timescales of GPM pre-date canonical LHB timescales by hundreds of My (Figs. 1, 4). Since bombardment of the terrestrial planets by asteroids or comets requires scattering within or through the asteroid belt, we expect that at least one of the \(\sim\)6 parent bodies represented in our database would record these collisions. Our findings firmly refute any intense bombardment occurring \(\gg\)100 My\({}_{\rm ss}\), consistent with other recent work [16, 28, 29, 30, 31, 53]. We do not resolve a unique timeframe for \(<\)100 My\({}_{\rm ss}\) GPM, since all 4 timescales fall within the traditional 95% CI. However, only the lower 26% of the distribution overlaps timeframes during which the gaseous disk could have played a causal role in GPM (\(<\)5 My\({}_{\rm ss}\), Fig. 4). 
Observations of protoplanetary disks for solar-type stars show that disk lifetimes are consistently \(<\)10 My, with few disks surviving \(>\)6 My and most dissipated within \(\lesssim\)3 My [19, 32, 33]. Dynamical models corroborate these observations and predict mean disk lifetimes of 3.7 My [34, 35]. This is consistent with meteoritical evidence for gas disk dissipation beyond Jupiter's current orbit (\(\sim 5\) au) by 3.5-5 My\({}_{\rm ss}\) [36]. Thus, we approximate the plausible timescales of gas dissipation as 3-5 My\({}_{\rm ss}\). Since a Grand Tack-style migration must be shortly followed by gas dissipation to halt the outward migration of Jupiter and Saturn, we assign a generous lower-limit for gas-embedded GPM onset at 1 My\({}_{\rm ss}\) (Fig. 4c). This 1-5 My\({}_{\rm ss}\) timeframe for a Grand Tack style migration overlaps with only 20% of the posterior distribution of onset times, and even this is an upper limit due to potential insensitivity of the Bayesian inversion at very early timescales (see extended discussions in Methods). The remaining upper 75% of the distribution overlaps timescales post-dating dissipation of the gaseous disk (Fig. 4c). The 50% CI of the distribution, including the mean (15.0 My\({}_{\rm ss}\)) and median (11.3 My\({}_{\rm ss}\)), overlaps the timescales of instability resulting from self-unstable orbital architectures left behind after dissipation of the gaseous disk. The upper 38% of the posterior overlaps the timescales of GPM caused by interaction with an outer planetesimal disk [16]. Collectively, 82% of the distribution overlaps timescales associated with dynamical instability of giant planets rather than a Type II style migration embedded in a gaseous disk. This motivates our second key conclusion that GPM in our solar system was a result of a dynamical instability, most likely occurring shortly after the dissipation of the gaseous disk. Though the results are non-unique, the most probable mechanistic stimulus of instability was an unstable giant planet orbital configuration, potentially exacerbated by interaction with a massive outer disk of planetesimals [16]. Whether the systems were destabilized by asymmetric torques of a receding disk [21], inherently unstable orbital configurations without the support of a gaseous disk [16], interactions with an outer planetesimal disk [8, 12, 16], or any combination of these processes, gas dissipation was a critical process in triggering or predisposing the system to GPM. Our results provide a precise cosmochronological constraint on the timescales of GPM in our solar system, constrained by a broad subset of the meteorite record (n=97). We expect that our results will be refined as future efforts expand meteorite thermochronometry, provide greater astronomical context for this data (e.g. asteroid sample return missions), and refine thermal models to simulate more physical scenarios. We recognize that the model is a simplification of meteorite thermal histories. Its greatest limitation is its relative insensitivity to impact heating prior to thermochronologic closure (see Methods for comprehensive treatments of all assumptions and limitations). Although over-early (\(<\)1 My\({}_{\rm ss}\)) bombardment scenarios are already incompatible with plausible solar system GPM chronologies (Fig. 4), we expect that simulations incorporating more nuanced impact processes and numerically solved thermal histories may solidify and refine our present findings.
Yet, despite its simplicity, the model effectively replicates the broad \({}^{40}\)Ar-\({}^{39}\)Ar thermochronologic record of chondrites and corroborates extant constraints on chondritic planetesimals (Table 1), both of which support its overall veracity for early solar system chronology. GPMs appear to be a quotidian feature of exoplanetary systems [3, 4]. Our findings further solidify the growing evidence that this process is characteristically constrained to the earliest stages of a planetary system's history. We encourage future dynamical studies to focus on this epoch of planetary evolution, as it likely plays an outsized role in the long-term physical and chemical structure of exoplanetary systems, as it did in ours. Similarly, our results motivate observational emphases on young (\(<\)20 My) exoplanetary systems [e.g. 54, 55] that might host planetary architectures on the verge of or recovering from instability and migration. ## Methods ### Database of \({}^{40}\)K-\({}^{40}\)Ar system ages To evaluate the thermal histories of undifferentiated bodies in the solar system, we compiled a database of \({}^{40}\)K-\({}^{40}\)Ar system cooling ages for chondrites with inner solar system provenance (ordinary, enstatite, and Rumuruti-type). \({}^{40}\)K undergoes branching decay to \({}^{40}\)Ar and \({}^{40}\)Ca with a half-life of \(\sim\)1.25 Gy. The \({}^{40}\)K-\({}^{40}\)Ar decay system is a thermochronometer due to the temperature-dependent diffusivity of gaseous Ar through mineral crystal lattices. The nominal closure temperature of plagioclase (the most K-rich mineral found in chondrites) to Ar diffusion is approximately 500 K for cooling rates spanning 1-1000 K/My [43, 56]. Dates in the \({}^{40}\)K-\({}^{40}\)Ar system have been measured by one of two techniques: the K-Ar and \({}^{40}\)Ar-\({}^{39}\)Ar methods. The K-Ar method entails first degassing Ar and measuring its isotopic composition to obtain the total abundance of \({}^{40}\)Ar, then measuring the K abundance of the degassed sample and correcting for a known or assumed \({}^{40}\)K/K to obtain the absolute abundance of \({}^{40}\)K, and finally calculating a date from the ratio \({}^{40}\)Ar/\({}^{40}\)K. While the earliest \({}^{40}\)K-\({}^{40}\)Ar system ages were measured by this method, K-Ar ages are inaccurate and misleading if the system was previously heated and partially degassed, which results in ages that fall between the primary cooling age and the age of reheating. To circumvent this issue, the \({}^{40}\)Ar-\({}^{39}\)Ar method entails irradiating a sample with fast neutrons to convert \({}^{39}\)K to \({}^{39}\)Ar and then degassing the sample and measuring the isotopes of Ar. Since \({}^{40}\)K/\({}^{39}\)K only varies substantively due to time-dependent \({}^{40}\)K decay, \({}^{39}\)K (and therefore irradiated \({}^{39}\)Ar) is a reliable proxy for \({}^{40}\)K for a given moment in geologic time. A standard of known age is irradiated alongside the unknown sample and its measured \({}^{40}\)Ar/\({}^{39}\)Ar is used to correct for the incomplete conversion of \({}^{39}\)K to \({}^{39}\)Ar. Thus, paired measurements of \({}^{40}\)Ar/\({}^{39}\)Ar for a sample of unknown age and a standard may be used to calculate the unknown sample's \({}^{40}\)Ar/\({}^{40}\)K and age. By sequentially degassing samples at a range of temperatures with the \({}^{40}\)Ar-\({}^{39}\)Ar method, phases that experienced partial loss of Ar at lower temperatures can be identified and excluded from the final age calculation.
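As a concrete illustration of the K-Ar age calculation just described, the following sketch (not code from this study; the function name and constant values are assumptions of this example) uses the commonly quoted SJ77 decay constants discussed below:

```julia
# Illustrative K-Ar age calculation (a sketch, not code from this study).
# The decay constants are the commonly quoted SJ77 values and are assumptions
# of this example.
const λe = 0.581e-10          # yr⁻¹, electron-capture branch (⁴⁰K → ⁴⁰Ar)
const λβ = 4.962e-10          # yr⁻¹, β⁻ branch (⁴⁰K → ⁴⁰Ca)
const λ  = λe + λβ            # total ⁴⁰K decay constant

# age in years from a measured radiogenic ⁴⁰Ar*/⁴⁰K ratio
kar_age(Ar40_over_K40) = log(1 + (λ / λe) * Ar40_over_K40) / λ
```

For instance, a radiogenic \({}^{40}\)Ar\({}^{*}\)/\({}^{40}\)K ratio of about 1.16 returns an age of roughly 4.5 Gy under these constants.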
We compiled a database of both K-Ar and \({}^{40}\)Ar-\({}^{39}\)Ar cooling ages from the published literature, using several prior compilations as a starting point [22, 31, 57]. In most cases, we followed the recommended ages reported by the publishing authors, typically plateau or "reduced plateau" ages in the case of the \({}^{40}\)Ar-\({}^{39}\)Ar method. If there were two clear \({}^{40}\)Ar-\({}^{39}\)Ar plateaus, we incorporated the older, as this study focuses on the early cooling history of asteroids and the younger cooling ages (typically \(<1\) Ga) typically reflect collisions that ejected meteoroids from larger parent bodies toward Earth-crossing orbits [58]. However, the uncertainties of dates were not always reported or calculated from rigorously propagated uncertainties. We do not include ages that are reported as only minimum/maximum ages, as quantified uncertainties are necessary for our Bayesian approach. Where dates were given without uncertainty or calculated from an assumed K abundance, we assumed a cautiously large uncertainty of \(2\sigma=10\%\). In cases where multiple ages were reported and there was not a clearly more contemporary or less disrupted age (n=13), we calculated the mean of the reported ages by Monte Carlo method. #### Using common constants and re-calibrating \({}^{40}\)K-\({}^{40}\)Ar ages For quantitative comparison, all the ages in our database must be calculated relative to a common set of decay constants and \({}^{40}\)K/K. The branching decays of \({}^{40}\)K to \({}^{40}\)Ar by electron capture and \({}^{40}\)Ca by \(\beta^{-}\)-emission are respectively described by the decay constants \(\lambda_{e}\) and \(\lambda_{\beta}\) (the summed decay constant is denoted \(\lambda\)). However, the values of these constants used to calculate \({}^{40}\)K-\({}^{40}\)Ar ages have changed over the history of the system's geochronometric use. The decay constants of [59, hereafter SJ77] have been used almost ubiquitously by the meteoritics community since its publication, despite publication of more recently revised \({}^{40}\)K decay constants [60, 61]. This is in part due to uncertainty regarding the accuracy of \({}^{40}\)K decay constants on early solar system timescales and debate over the appropriate approach to recalibration [61, 62]. However, a recently reported \({}^{40}\)Ar-\({}^{39}\)Ar age calculated with the SJ77 decay constants for a ureilitic clast (MS-MU-011) of the Almahata Sitta meteorite agrees with its corresponding Pb-Pb age nearly within \(1\sigma\)[63]. Since this fast-quenched system likely cooled through effective Pb and Ar closure almost simultaneously [63], the SJ77 decay constant seems more accurate on early solar system timescales than previously argued. Additionally, the ubiquitous use of SJ77 combined with inconsistent reporting of co-irradiated standard information across much of the literature inhibits accurate recalibration. Thus, we conclude that the SJ77 decay constants are the best option available at this time for calculating \({}^{40}\)K-\({}^{40}\)Ar system ages of meteorites, and note that errors stemming from using the different decay constants (\(\lesssim 20\) Ma) are well within the 1-\(\sigma\) uncertainties of most ages in our database. To ensure that all \({}^{40}\)K-\({}^{40}\)Ar ages used in this study are calculated relative to a set of common decay constants and \({}^{40}\)K/K ratio (\(K\)), we recalculate all \({}^{40}\)Ar-\({}^{39}\)Ar and K-Ar ages published prior to 1977 with the values of SJ77. 
Given the two different methods employed and non-standardized reporting of methods, these recalculations required one of three scenarios, each of which we employed using a Monte Carlo method to propagate all reported uncertainties. The first and most straightforward scenario entails recalculating K-Ar ages. To do this, we rearrange the \({}^{40}\)K-\({}^{40}\)Ar age equation to calculate the measured \({}^{40}\)Ar/\({}^{40}\)K ratio from the reported age, correct for the updated \(K\) ratio, and recalculate the age: \[t=\frac{1}{\lambda}\ ln\left[\frac{\lambda}{\lambda_{e}}\ \frac{K^{\prime}}{K}\ \frac{\lambda^{\prime}_{e}}{\lambda^{\prime}}\ (e^{t^{\prime}\lambda^{\prime}}-1)+1\right] \tag{1}\] where the prime symbol indicates the reported age and its corresponding decay and \(K\) constants, which were consistently (within rounding error) the values used by [64]. In the case of the \({}^{40}\)Ar-\({}^{39}\)Ar method, dates are calculated relative to a co-irradiated standard of known age by the equation: \[t=\frac{1}{\lambda}ln\left[1+J\ \frac{{}^{40}Ar}{{}^{39}Ar}\right] \tag{2}\] where \(J\) is calculated from the standard's age, \(t_{s}\), and its measured \({}^{40}\)Ar/\({}^{39}\)Ar ratio: \[J=\frac{e^{\lambda t_{s}}-1}{\left({}^{40}Ar/{}^{39}Ar\right)} \tag{3}\] Thus, an \({}^{40}\)Ar-\({}^{39}\)Ar age may be recalculated simply by correcting the J term: \[t=\frac{1}{\lambda}\ ln[1+k_{J}\ (e^{\lambda^{\prime}t^{\prime}}-1)] \tag{4}\] where \(k_{J}\) is the ratio of the recalculated and original \(J\)-factors (\(J/J^{\prime}\)). So, the second and third scenarios depend on how the standards used to calculate the \(J\) term are calibrated and how methods were reported in each corresponding study. The second scenario entails cases where \(t_{s}\) was calculated with a chronometric system other than the \({}^{40}\)K-\({}^{40}\)Ar system. Since \(t_{s}\) is constrained independent of the \({}^{40}\)K decay constants used, \(k_{J}\) is simply calculated by \[k_{J}=\frac{e^{\lambda t_{s}}-1}{e^{\lambda^{\prime}t_{s}}-1} \tag{5}\] The third scenario entails cases where \(t_{s}\) was calibrated by the K-Ar method, as was typical of \({}^{40}\)Ar-\({}^{39}\)Ar dates measured early in the history of this technique. In these situations, we recalculated \(t_{s}\) from the previously used \(t^{\prime}_{s}\) with Equation 1, and calculated \(k_{J}\) through a slightly modified version of Equation 5 (note the \(t^{\prime}_{s}\)): \[k_{J}=\frac{e^{\lambda t_{s}}-1}{e^{\lambda^{\prime}t^{\prime}_{s}}-1} \tag{6}\] Unfortunately, not all studies report the standard or its corresponding \(t_{s}\). In these ambiguous cases, we calculated a distribution of \(k_{J}\) values by Monte Carlo method using Equation 5, where \(t^{\prime}_{s}\) is uniformly distributed over the interval \([0,3)\) Ga, which generously encompasses the full range of standards commonly used for the \({}^{40}\)Ar-\({}^{39}\)Ar method. This approach results in a range of values for \(k_{J}\) over the interval \(1.04<k_{J}<1.10\), which results in a minor increase in the uncertainty of the recalculated age. For example, for an age of \(4500\ \pm\ 20\) Ma, the uncertainty increases to \(\sim 30\) Ma. Since most \({}^{40}\)Ar-\({}^{39}\)Ar dates measured prior to 1977 have analytical uncertainties that are significantly larger than this (typically \(>1\%\)), the effect was minor.
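A minimal sketch of the \({}^{40}\)Ar-\({}^{39}\)Ar recalibration in Equations 4 and 5 follows (illustrative only; the function names are ours and this is not necessarily how ImpactChron.jl implements it), with ages in years and \(\lambda\), \(\lambda^{\prime}\) the recalculated and originally used total \({}^{40}\)K decay constants:

```julia
# Sketch of the ⁴⁰Ar-³⁹Ar recalibration (Equations 4 and 5); function names are
# ours and this is illustrative only. Ages in years; λ_new and λ_old are the
# total ⁴⁰K decay constants for the recalculated and originally reported ages.
k_J(t_s, λ_new, λ_old) = (exp(λ_new * t_s) - 1) / (exp(λ_old * t_s) - 1)       # Eq. 5

recalibrated_age(t_old, t_s, λ_new, λ_old) =
    log(1 + k_J(t_s, λ_new, λ_old) * (exp(λ_old * t_old) - 1)) / λ_new         # Eq. 4
```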
The complete database and these calculations are available in the ImpactChron.jl package at [https://github.com/grahamedwards/ImpactChron.jl/tree/main/data](https://github.com/grahamedwards/ImpactChron.jl/tree/main/data). The Supplementary Information includes a human-readable table of the \({}^{40}\)Ar-\({}^{39}\)Ar ages used in this study. ## Software & Code We invert \({}^{40}\)K-\({}^{40}\)Ar system ages for asteroid formation and bombardment histories using an asteroid-scale thermochronologic simulation coupled to a Markov chain Monte Carlo (MCMC) inversion, written in the Julia Language [65] and contained in the package ImpactChron.jl ([https://github.com/grahamedwards/ImpactChron.jl](https://github.com/grahamedwards/ImpactChron.jl)). We explain this code in detail in the following sections. We prepared figures and diagrams with Makie.jl [66], Pairplots.jl ([https://github.com/sefffal/PairPlots.jl](https://github.com/sefffal/PairPlots.jl)), and Inkscape vector graphics editor. ### Thermochronologic model We simulate the thermal history of a probabilistic asteroid characterized by the thermal histories of the inner solar system chondrites--ordinary (O), enstatite (E), and Rumuruti-type (R). This assumption is reasonable given the similarities in the inferred parent body histories of OCs and ECs [67, 68, 69, 70] as well as the observation that intergroup variability of material properties is typically less than intragroup variability for these chondrite groups [71, 72, 73, 74, 75, 76]. Although the parent body history of the R chondrites is less well-constrained, they represent only a minor portion (n=2) of the \({}^{40}\)Ar-\({}^{39}\)Ar age database (n=136). We exclude carbonaceous chondrites that have lower temperature aqueous alteration histories and asteroidal meteorites that underwent partial to complete differentiation (achondrites). For computational efficiency, we separately model (1.) radiogenic heating and conductive cooling through thermochronologic closure for a body unperturbed by impacts and (2.) impact reheating and resetting of \({}^{40}\)Ar-\({}^{39}\)Ar ages. To model the unperturbed thermal histories that would result from radiogenic heating and conductive cooling in the absence of any bombardment, we use an analytical solution to the heat equation for a spherical body with an exponentially decaying heat source [77, 78]. The parameterization assumes constant bulk density (\(\rho\)), specific heat capacity \(C_{p}\), thermal conductivity \(K\), ambient temperature \(T_{m}\) (i.e. the solar system midplane temperature at 2.5 au), and body radius \(R\) (Table 1). Heat production is a function of the time of accretion \(t_{a}\) relative to the solar system age \(t_{ss}\), the initial \({}^{26}\)Al/\({}^{27}\)Al of the solar system \({}^{26}\)_Al\({}_{o}\)_, and chondritic Al abundance \([Al]\) (Table 1). We calculate the time-temperature histories of unperturbed cooling in equally spaced concentric shells of the body, and identify the cooling age of each shell as the first timestep with a temperature below the Ar closure temperature (Extended Data Fig. 2a,b). We calculate the volumetric proportion of each shell to produce a distribution of the volumetric abundance of these cooling ages (Extended Data Fig. 2c, black curve). For this model, we assume proportional representation of proto-asteroidal material in our meteorite database, though we recognize that the meteoritic record probably does not represent an unbiased sampling of early asteroidal interiors.
While we cannot comprehensively control for heterogeneous sampling of asteroidal parent bodies and delivery of meteoroids to Earth, we account for heterogeneous representation of parent body material in our database by weighting our results by petrologic type, since petrologic type is predominantly controlled by a meteorite's provenance within its parent body [e.g. 40]. For each shell, we assign a petrologic type based on its peak temperature (\(T_{type}\) in \({}^{\circ}\)C): \(T_{3}\leq 600<T_{4}\leq 700<T_{5}\leq 800<T_{6}\), following the recommendations of [79] for the maximum temperatures of types 3-4 and a minimum type 6 temperature derived from extensive measurements of type 6 OCs by [80]. Since type 7 and impact-melted chondrites reflect exogenous heating events that are not directly related to provenance within a parent body [67], we do not incorporate these classifications into our weighting calculations. We weight the calculated volumetric abundances of each volumetric shell in our simulation so that the relative abundances of each assigned petrologic type (the sum of the volumetric abundances of all shells of that type) equals the relative abundance of that petrologic type in the \({}^{40}\)Ar-\({}^{39}\)Ar age database (Extended Data Fig. 2c). We then superimpose a bombardment reheating history over the primary cooling history by simulating impacts that reheat a fixed fractional volume of each shell within the body. To simplify our model and underlying code, we assume that impacts only deposit energy and neither excavate nor implant material beyond the original body volume, in line with the probabilistic nature of our model asteroid. Although the distribution of impact-heating in asteroidal bodies is heterogeneous and complex [38, 39], both the H and LL chondrites record evidence for impacts while the body was still cooling from radiogenic heating [67, 81], and H chondrites directly record the presence of nearly kilometer-scale melt sheets [82] that would have promoted heat transfer into parent body interiors. Indeed, suprasolidus type 7 and impact-melt chondrites are byproducts of impact heating in early asteroid planetesimals and are observed in every chondrite family evaluated in this study (Table 2). Based on this reasoning, we conclude that collisions in the early solar system were capable of depositing significant amounts of thermal energy into chondritic planetesimal interiors. ImpactChron.jl accommodates three different geometries for the zone reheated by an impact: a cone, a paraboloid, or a hemisphere. Since the shape, depth, and radius of the reheating zone beneath an impact site would vary drastically with impactor size and trajectory, we do not assume a single finite morphology for reheating. Instead, we simulate a more probabilistic reheating scenario that heats a cone extending to the center of the body, allowing for reheating of interior zones that might be exposed by larger collisions. We assign the outermost radius of the cone to cover 1% of the asteroid's circumference to limit geometric error stemming from our simplified assumption of disk-shaped reheating volumes (which do not account for curvature) for each shell. We justify this assumption in a later section. We model the bombardment history of the model asteroid with one or more fluxes of impactors that reheat fractions of the body as described above.
Each bombardment event is modeled by an exponentially decaying flux defined by an initial impact flux \(F_{o}\) (impacts/My), \(e\)-folding timescale \(\tau\) (My), and bombardment onset date \(t_{o}\) (My\({}_{\text{ss}}\)). The bombardment flux for any given date \(t\) in My\({}_{\text{ss}}\) is \[F=\begin{cases}F_{o}\cdot\exp\left(-\frac{t-t_{o}}{\tau}\right),&\text{if }t \geq t_{o}\\ 0,&\text{if }t<t_{o}\end{cases} \tag{7}\] This formulation results in fractional impacts (Extended Data Fig. 2d), which we accept given the approximate nature of our modeled volumes of impact reheating. At each timestep in the model, we sum all fluxes \(F\) to calculate the total number of "impacts" (\(n_{t}=\sum F_{t}\cdot\Delta t\), where the model timestep is \(\Delta t=1\) My) and scale the volumetric proportion of each shell reheated per impact (\(v_{t}^{z}\)) by \(n_{t}\). We assume complete resetting of the \({}^{40}\)K-\({}^{40}\)Ar system within the reheated zone and instantaneous cooling through Ar closure. For each shell in the body at each timestep, we reset equal proportions (scaled by \(n_{t}\cdot v_{t}^{z}\)) of the cooling age(s) recorded within that shell to the timestep age. Thus, for the first timestep of an impact flux (\(t_{1}\)), \(n_{t1}\cdot v_{t1}^{z}\) of the primary cooling age (\(t_{o}\)) is reset to the age of that timestep (\(t_{1}\)). For the second timestep of that impact flux (\(t_{2}\)), \(\frac{1}{2}\cdot n_{t2}\cdot v_{t2}^{z}\) of \(t_{o}\) and \(t_{1}\) are set to an age of \(t_{2}\). And so on. When iterated over each timestep of the simulation, this produces a matrix of fractional volumes for each modeled shell of the asteroid, with rows and columns corresponding to time/age and radial depth in the parent body. By summing the volumetric proportions corresponding to each timestep, we get a distribution of cooling ages for the model asteroid (Extended Data Fig. 2c). ### Bayesian inversion To reconstruct the planetary history that accounts for the observed distribution of \({}^{40}\)K-\({}^{40}\)Ar system ages, we employ an MCMC method that uses this distribution of measured ages as a prior. The thermochronologic model described above returns a distribution of cooling ages that correspond to the chosen suite of parameters describing the environmental, chemical, physical, and material properties of a model asteroid as well as its bombardment history (Table 1). To explore this parameter space and estimate posterior distributions for each parameter, we use a modification of the Metropolis algorithm [83], based on the underlying statistical architecture of [84]. In addition to the prior distribution of measured \({}^{40}\)K-\({}^{40}\)Ar system ages, we incorporate a comprehensive set of "parameter priors," compiled from published data, to constrain all the parameters describing the environmental, cosmochemical, material, and asteroidal properties of the simulation (Table 1). We parameterized each prior as a log-normal distribution based on either the shape of its distribution or the fact that the property must be \(>0\) by definition (e.g. temperatures in Kelvin). For each prior, we compiled a variety of measurements or estimates of the parameter, which included discrete values, normal distributions, and uniform distributions (i.e. data reported as ranges). We then calculated the natural-logarithm-space \(\mu\) and \(\sigma\) of each distribution from \(10^{6}\) random samples of the compiled data. 
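As a concrete illustration of Eq. (7), of the age-resetting bookkeeping just described, and of how a log-normal parameter prior is built from compiled estimates, a minimal Julia sketch follows. Names, placeholder values, and the flux container format are assumptions for this example, not the ImpactChron.jl interface.

```julia
using Distributions, Statistics

# Eq. (7): exponentially decaying impact flux (impacts/My).
impact_flux(t; F0, τ, t0) = t ≥ t0 ? F0 * exp(-(t - t0) / τ) : 0.0

# V[k, z] holds the fraction of shell z carrying the age of timestep k; start
# with each shell's volume in its primary cooling-age bin, then let impacts
# move equal shares of every older bin into the current bin.
function reset_ages!(V, times, fluxes, v_impact; Δt = step(times))
    for (k, t) in pairs(times)
        n_t = sum(f -> impact_flux(t; f...), fluxes) * Δt    # "impacts" this step
        for z in axes(V, 2)
            bins = findall(>(0), view(V, 1:k-1, z))          # populated older age bins
            isempty(bins) && continue
            per_bin = n_t * v_impact[z] / length(bins)       # equal proportions per bin
            for b in bins
                moved = min(per_bin, V[b, z])
                V[b, z] -= moved
                V[k, z] += moved
            end
        end
    end
    return V
end

# A log-normal prior from 10^6 samples of compiled estimates, e.g. for Cp
# (placeholder values, not the actual compilation behind Table 1):
samples = vcat(rand(Normal(850, 60), 500_000), rand(Uniform(750, 950), 500_000))
prior_Cp = LogNormal(mean(log.(samples)), std(log.(samples)))
```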
The two exceptions to this approach are the very precise solar age (assigned to the oldest Ca-Al-rich inclusions), which we treat as a constant in our model, and the initial \({}^{26}\)Al/\({}^{27}\)Al of the protoplanetary disk, which is parameterized as a normal distribution (Table 1). Since the bombardment history of the asteroid belt is poorly constrained, we do not have prior distributions for the initial impactor flux, \(e\)-folding time, or onset of each bombardment episode. We assume uniform distributions that span the entire simulation timescale (\(\mathcal{U}[2000,4567.3]\) Ma) for the bombardment onset and \(e\)-folding time (Table 1). Since the volume of impact reheating is simulated nonphysically, setting a maximum on the number of impacts need only accommodate an upper-limit reheating scenario and prevent numerical instability at extreme values. Since our parameterization of the reheating zone would completely reheat the full volume of the sphere with \(\sim\)4000 impacts, we set an arbitrary upper bound of \(10^{4}\) My\({}^{-1}\) (Table 1). The model accommodates up to three bombardment events (\(\alpha\), \(\beta\), \(\gamma\)), each described by its own \(t_{o}\), \(F_{o}\), and \(\tau\) (Table 1). In simulations with multiple bombardment histories, we assign \(\alpha\) as a primordial flux anchored to the solar age (\(t_{o}=0\) My\({}_{\rm ss}\)), and the MCMC algorithm explores the value(s) of \(t_{o}\) for the other flux(es) (\(\beta\), \(\gamma\)) within the time domain. We impose two rules on these fluxes to ensure reproducible model behavior. (1.) To avoid unnecessary bimodal distributions, we require each flux to occur in chronological order (\(t_{o}\alpha<t_{o}\beta<t_{o}\gamma\)), which prevents the Markov chains from "swapping" parameter spaces. (2.) We also require that all post-accretion fluxes (\(\beta\), \(\gamma\)) have a shorter \(e\)-folding timescale than the primordial flux, framing bombardment events as transient episodes of enhanced asteroidal collisions that recover more quickly than the background rate of intersecting orbits. Any steps that violate either of these rules are rejected. Thus, for a given model result--a distribution of simulated \({}^{40}\)Ar-\({}^{39}\)Ar ages--we calculate the log-likelihood (\(\ell\)) that the collection of measured ages (with corresponding, normally distributed uncertainties) was drawn from the simulated distribution. We add to this \(\ell\) the log-likelihoods that each simulated parameter was drawn from its corresponding prior distribution. For each step of the Markov chain, we randomly perturb one parameter at a time to explore the full parameter space and accept proposals for any given step \(i\) with a probability of \(\min\{\exp(\ell_{i}-\ell_{i-1}),1\}\). After each accepted step, we scale the step size of the most recently perturbed variable by a constant tuned for an acceptance rate of \(\sim\)50%. After an extended burn-in/warm-up period (\(8\cdot 10^{5}\) steps), we record \(10^{6}\) subsequent steps of the Markov chain as the stationary distribution. ## Model assumptions In this section, we consider some of the assumptions made to simplify our model. ### Ar closure temperature Since Ar loss from mineral crystal lattices is a diffusive process, effective Ar closure temperature is dependent on many variables, including mineralogy, grain size, and cooling rate. As a result, empirically derived Ar closure temperatures for chondrites range broadly (300 - 800 K) due to variability of these properties among samples. 
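Because the effective Ar closure temperature is itself one of the free parameters, it is explored by the same Metropolis scheme as every other quantity in Table 1. A minimal sketch of that exploration loop follows; `loglike`, `priors`, and `steps` are assumed helper objects (the model log-likelihood plus prior log-densities, a Dict of prior distributions, and a Dict of proposal step sizes), not the actual ImpactChron.jl interface, and the step-size adaptation shown is a simple stand-in for the constant rescaling described above.

```julia
using Distributions

# One-parameter-at-a-time Metropolis exploration (simplified sketch).
function metropolis(loglike, θ0::Dict{Symbol,Float64}, priors, steps; n = 10^6)
    θ, ℓ = copy(θ0), loglike(θ0)
    ks = collect(keys(θ))
    chain = Vector{Dict{Symbol,Float64}}(undef, n)
    for i in 1:n
        k = rand(ks)                               # perturb one parameter at a time
        θp = copy(θ); θp[k] += steps[k] * randn()
        ℓp = insupport(priors[k], θp[k]) ? loglike(θp) : -Inf
        if log(rand()) < ℓp - ℓ                    # accept with prob. min{exp(Δℓ), 1}
            θ, ℓ = θp, ℓp
            steps[k] *= 1.02                       # nudge step size toward ~50% acceptance
        else
            steps[k] *= 0.99
        end
        chain[i] = copy(θ)
    end
    return chain
end
```

The chronological-ordering and \(e\)-folding rules on multiple fluxes, and the burn-in handling, would sit on top of this basic loop; in particular, proposals for \(T_{c}\) are accepted or rejected in exactly this way, which is how the inversion absorbs the closure-temperature variability discussed here.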
Moreover, the concept of closure temperature itself is a fictitious (albeit useful) simplification of the concurrent processes of radiogenic production and temperature-dependent diffusive loss of atoms within a thermochronometric mineral system [85]. In highly constrained mineral systems, fully modeling the production-diffusion process can simulate accurate thermochronologic histories [67, 68]. However, in the case of bulk chondrite Ar isotope measurements with few to no mineralogical constraints (the typical case for the ages used in this study), such complete diffusion parameterization is infeasible. To circumvent this heterogeneity and complexity, we instead treat chondrite Ar closure temperatures as distributions rather than absolute values and explore this parameter with the MCMC algorithm, allowing the Bayesian posterior to accommodate its inherent variability. We assert that exploring an effective Ar closure temperature as a free parameter in our Bayesian inversion allows us to capture the heterogeneity and complexity of Ar production-diffusion in chondritic mineral systems without the computational overhead and parametric uncertainty of explicitly simulating Ar diffusion. ### Geometry of impact heating We assume a nonphysical geometry--a cone extending to the planetesimal center--to simulate impact reheating. This assumption allows us to consider impact reheating in a more agnostic, probabilistic fashion by resetting all depths and petrologic types in equal proportions. This should in part capture the thermal effects of impact reheating of material from the asteroid interior exposed by large, more catastrophic disruptions. Though we assume that the parent body was not catastrophically disrupted during the timescales of primary cooling (see below), the delivery of type 6 chondrites to Earth requires eventual excavation of deeper material. Nonetheless, the consequences of full-radius reheating are relatively minor for the deepest portions of our model asteroids. Since planetary centers cool slowest, they are still hot (i.e. above the Ar closure temperature) and insensitive to thermochronologic resetting during early reheating events. For instance, median conditions from our preferred model (\(\tau\sim 20\) My) result in negligibly low fluxes (\(<\)1 ‰ of \(F_{o}\)) after 150 My, about half the time it takes for the asteroid center to cool below Ar closure after accretion (\(>\)300 My). Correspondingly, our model ignores reheating events prior to thermochronologic closure (discussed further below). To test whether our selected parameterization is consistent with more physical approaches, we compare the results of reheating a parabolic region of fixed depth corresponding to impactors of 15 and 1 km diameters in a 2-bombardment scenario (1 primordial, as in our preferred case, Fig. 4). We approximate the relative dimensions of simulated reheating (per impactor) from the results of prior impact heating work [38]. The results show that reheating to \(\gg 1\) km depths is necessary: a smaller impactor (1 km diameter) yields posterior parameter distributions that mimic a no-impact scenario (Supplementary Fig. 3, Extended Data Fig. 3). 
For sufficient heating depths, posterior bombardment histories are similar between the partial- and full-radius impact-heating parameterizations: the posterior distributions of bombardment parameters for a \(\geq\)15 km impactor diameter (results for a 20 km impactor, not shown, are nearly identical) are similar to those produced by the full-radius reheating approach (compare Fig. 4 to Supplementary Fig. 5). Thus, as long as sufficiently deep (type 5-6) material is reheated by our simulated impact fluxes, the posteriors are largely unchanged. ### Two-stage thermal model As described above, we use a two-stage thermal model: we first model the unperturbed cooling history of a model asteroid with an analytical solution and then superimpose a secondary impact-reheating history over the primary cooling history. While using this analytical solution vastly reduces computation time, it has a few drawbacks. First, it requires constant material parameters (e.g. density, thermal conductivity), though these are known to vary as a function of temperature [74] and shock histories [76]. Additionally, we assume instantaneous accretion of the planetesimal to a fixed radius. In each case, our use of the MCMC algorithm allows much (if not all) of the error stemming from these assumptions to be captured within the variation of the posterior estimates for each parameter. Our use of the two-stage model requires the assumption that the OC, RC, and EC parent planetesimals were not catastrophically disrupted prior to body-wide cooling below Ar closure temperatures. This assumption that planetesimals did not experience early disruptions is supported by several lines of evidence: thermochronologic evidence for the survival of OCs to \(\geq\)60-90 My\({}_{\rm ss}\) [67, 68] and observational/dynamical evidence for the long-term preservation of large asteroidal bodies [86] (diameters \(>\)120 km, compare to diameters of \(\sim\)300 km from Table 1). Using the median values of the posterior estimates of our preferred model (Fig. 4, Extended Data Fig. 5), the planetesimal center cools through Ar closure after \(\sim\)300 My, but 50% (by volume) of the same body cools through Ar closure within the first 100 My\({}_{\rm ss}\). We conclude that parent bodies of the chondrites examined in this study were likely not catastrophically disrupted within the most important timescales of the model and that any samples from the deep interior that were prematurely quenched by disruption reflect a relatively minor volume of material. Nonetheless, this is a significant assumption in our model that future studies could ameliorate by employing more physical thermal histories. Finally, the two-stage thermal model structure ignores all impact events prior to the primary cooling age of any given shell. The logic behind our assumption is simple: if the shell had not already cooled through closure, the addition of energy by an impact could reheat a not-yet closed thermochronologic system. Instead, this energy might prolong a cooling history, which is accounted for (at least partly) by subsequent impacts in the flux following the primary closure time of the corresponding shell. The drawback of this approach is that there is little statistical sensitivity to simulated impact scenarios that begin early in time with very large fluxes, prior to widespread closure of the parent body. However, this drawback reflects an inherent limitation of thermochronology: persistent high temperatures (either sustained or reheated) erase previous thermal information. 
Thus, similar to nature, our model effectively ignores early, short-lived bombardment events. Since the primary cooling ages of the outermost shells of our model asteroids are typically \(\geq\)5 My\({}_{\rm ss}\), bombardment onsets prior to this timestep are not well-constrained. Thus, we suspect that the exaggerated early tail of bombardment onset dates near 0 My\({}_{\rm ss}\) in Fig. 4 may be an artifact of this insensitivity. ## References * [1] Dawson, R. I. & Johnson, J. A. Origins of hot Jupiters. _Annual Review of Astronomy and Astrophysics_**56,** 175-221. doi:10.1146/annurev-astro-081817-051853 (2018). * [2] Unterborn, C. T., Desch, S. J., Hinkel, N. R. & Lorenzo, A. Inward migration of the TRAPPIST-1 planets as inferred from their water-rich compositions. _Nature Astronomy_**2,** 297-302. doi:10.1038/s41550-018-0411-6 (2018). * [3] Pu, B. & Wu, Y. Spacing of Kepler planets: sculpting by dynamical instability. _Astrophysical Journal_**807,** 44. doi:10.1088/0004-637X/807/1/44 (2015). * [4] Raymond, S. N., Armitage, P. J. & Gorelick, N. Planet-planet scattering in planetesimal disks. II. Predictions for outer extrasolar planetary systems. _The Astrophysical Journal_**711,** 772. doi:10.1088/0004-637X/711/2/772 (2010). * [5] DeMeo, F. E. & Carry, B. Solar System evolution from compositional mapping of the asteroid belt. _Nature_**505,** 629-634. doi:10.1038/nature12908 (2014). * [6] Kruijer, T. S., Kleine, T. & Borg, L. E. The great isotopic dichotomy of the early Solar System. _Nature Astronomy_**4,** 32-40. doi:10.1038/s41550-019-0959-9 (2020). * [7] Fernandez, J. A. & Ip, W.-H. Some dynamical aspects of the accretion of Uranus and Neptune: The exchange of orbital angular momentum with planetesimals. _Icarus_**58,** 109-120. doi:10.1016/0019-1035(84)90101-5 (1984). * [8] Tsiganis, K., Gomes, R., Morbidelli, A. & Levison, H. F. Origin of the orbital architecture of the giant planets of the Solar System. _Nature_**435,** 459-461. doi:10.1038/nature03539 (2005). * [9] Gomes, R. S. The origin of the Kuiper Belt high-inclination population. _Icarus_**161,** 404-418. doi:10.1016/S0019-1035(02)00056-8 (2003). * [10] Walsh, K. J., Morbidelli, A., Raymond, S. N., O'Brien, D. P. & Mandell, A. M. A low mass for Mars from Jupiter's early gas-driven migration. _Nature_**475,** 206-209 (2011). * [11] Clement, M. S., Kaib, N. A., Raymond, S. N. & Walsh, K. J. Mars' growth stunted by an early giant planet instability. _Icarus_**311,** 340-356. doi:10.1016/j.icarus.2018.04.008 (2018). * [12] Clement, M. S., Raymond, S. N. & Kaib, N. A. Excitation and depletion of the asteroid belt in the early instability scenario. _The Astronomical Journal_**157,** 38. doi:10.3847/1538-3881/aaf21e (2019). * [13] Kasting, J. F. & Catling, D. Evolution of a habitable planet. _Annual Review of Astronomy and Astrophysics_**41,** 429-463. doi:10.1146/annurev.astro.41.071601.170049 (2003). * [14] Alexander, C. M. O. _et al._ The provenances of asteroids, and their contributions to the volatile inventories of the terrestrial planets. _Science_**337,** 721-723. doi:10.1126/science.1223474 (2012). * [15] Fritz, J. _et al._ Earth-like habitats in planetary systems. _Planetary and Space Science_**98,** 254-267. doi:10.1016/j.pss.2014.03.003 (2014). * [16] Ribeiro de Sousa, R. _et al._ Dynamical evidence for an early giant planet instability. _Icarus_**339.** doi:10.1016/j.icarus.2019.113605 (2020). * [17] Lin, D. N. C. & Papaloizou, J. On the tidal interaction between protoplanets and the protoplanetary disk. III. 
Orbital migration of protoplanets. _The Astrophysical Journal_**309,** 846. doi:10.1086/164653 (1986). * [18] Kley, W. & Nelson, R. P. Planet-disk interaction and orbital evolution. _Annual Review of Astronomy and Astrophysics_**50,** 211-249. doi:10.1146/annurev-astro-081811-125523 (2012). * [19] Williams, J. P. & Cieza, L. A. Protoplanetary disks and their evolution. _Annual Review of Astronomy and Astrophysics_**49,** 67-117. doi:10.1146/annurev-astro-081710-102548 (2011). * [20] Gomes, R., Levison, H. F., Tsiganis, K. & Morbidelli, A. Origin of the cataclysmic Late Heavy Bombardment period of the terrestrial planets. _Nature_**435,** 466-469. doi:10.1038/nature03676 (2005). * [21] Liu, B., Raymond, S. N. & Jacobson, S. A. Early Solar System instability triggered by dispersal of the gaseous disk. _Nature_**604,** 643-646. doi:10.1038/s41586-022-04535-1 (2022). * [22] Bogard, D. D. Impact ages of meteorites: A synthesis. _Meteoritics_**30,** 244-268. doi:10.1111/j.1945-5100.1995.tb01124.x (1995). * [23] Tera, F., Papanastassiou, D. A. & Wasserburg, G. J. Isotopic evidence for a terminal lunar cataclysm. _Earth and Planetary Science Letters_**22,** 1-21. doi:10.1016/0012-821X(74)90059-4 (1974). * [24] Grossman, L. Condensation in the primitive solar nebula. _Geochimica et Cosmochimica Acta_**36,** 597-619. doi:10.1016/0016-7037(72)90078-6 (1972). * [25] Brennecka, G. A. _et al._ Astronomical context of Solar System formation from molybdenum isotopes in meteorite inclusions. _Science_**370,** 837-840. doi:10.1126/science.aaz8482 (2020). * [26] Connelly, J. N. _et al._ The absolute chronology and thermal processing of solids in the solar protoplanetary disk. _Science_**338,** 651-655. doi:10.1126/science.1226919 (2012). * [27] Wetherill, G. W. Late heavy bombardment of the moon and terrestrial planets. _Proceedings of Lunar and Planetary Science Conference 6th,_ 1539-1561 (1975). * [28] Fassett, C. I. _et al._ Lunar impact basins: Stratigraphy, sequence and ages from superposed impact crater populations measured from Lunar Orbiter Laser Altimeter (LOLA) data. _Journal of Geophysical Research: Planets_**117.** doi:10.1029/2011JE003951 (2012). * [29] Boehnke, P. & Harrison, T. M. Illusory late heavy bombardments. _Proceedings of the National Academy of Sciences of the United States of America_**113,** 10802-10806. doi:10.1073/pnas.1611535113 (2016). * [30] Nesvorny, D., Vokrouhlicky, D., Bottke, W. F. & Levison, H. F. Evidence for very early migration of the Solar System planets from the Patroclus-Menoetius binary Jupiter Trojan. _Nature Astronomy_**2,** 878-882. doi:10.1038/s41550-018-0564-3 (2018). * [31] Mojzsis, S. J., Brasser, R., Kelly, N. M., Abramov, O. & Werner, S. C. Onset of giant planet migration before 4480 million years ago. _The Astrophysical Journal_**881,** 13. doi:10.3847/1538-4357 (2019). * [32] Haisch, K. E., Lada, E. A. & Lada, C. J. Disk frequencies and lifetimes in young clusters. _The Astrophysical Journal_**553,** L153. doi:10.1086/320685 (2001). * [33] Sung, H., Stauffer, J. R. & Bessell, M. S. A Spitzer view of the young open cluster NGC 2264. _The Astronomical Journal_**138,** 1116-1136. doi:10.1088/0004-6256/138/4/1116 (2009). * [34] Li, M. & Xiao, L. Lifetimes and accretion rates of protoplanetary disks. _The Astrophysical Journal_**820,** 36. doi:10.3847/0004-637X/820/1/36 (2016). * [35] Kimura, S. S., Kunitomo, M. & Takahashi, S. Z. From birth to death of protoplanetary disks: Modeling their formation, evolution, and dispersal. 
_Monthly Notices of the Royal Astronomical Society_**461,** 2257-2265. doi:10.1093/mnras/stw1531 (2016). * [36] Borlina, C. S., Weiss, B. P., Bryson, J. F. J. & Armitage, P. J. Lifetime of the outer solar system nebula from carbonaceous chondrites. _Journal of Geophysical Research: Planets_**127.** doi:10.1029/2021JE007139 (2022). * [37] Izidoro, A., Morbidelli, A., Raymond, S. N., Hersant, F. & Pierens, A. Accretion of Uranus and Neptune from inward-migrating planetary embryos blocked by Jupiter and Saturn. _Astronomy & Astrophysics_**582,** A99. doi:10.1051/0004-6361/201425525 (2015). * [38] Davison, T. M., Ciesla, F. J. & Collins, G. S. Post-impact thermal evolution of porous planetesimals. _Geochimica et Cosmochimica Acta_**95,** 252-269. doi:10.1016/j.gca.2012.08.001 (2012). * [39] Bland, P. A. _et al._ Pressure-temperature evolution of primordial solar system solids during impact-induced compaction. _Nature Communications_**5,** 5451. doi:10.1038/ncomms6451 (2014). * [40] Miyamoto, M., Fujii, N. & Takeda, H. Ordinary chondrite parent body: An internal heating model. _Lunar and Planetary Science Conference, 12B,_ 1145-1152 (1981). * [41] Bottke, W. F. _et al._ Dating the Moon-forming impact event with asteroidal meteorites. _Science_**348,** 321-323. doi:10.1126/science.aaa0602 (2015). * [42] Stoffler, D., Keil, K. & Scott, E. R. D. Shock metamorphism of ordinary chondrites. _Geochemical Journal_**55,** 3845-3867. doi:10.1016/0016-7037(91)90078-J (1991). * [43] Turner, G., Enright, M. C. & Cadogan, P. H. The early history of chondrite parent bodies inferred from \({}^{40}\)Ar-\({}^{39}\)Ar ages. _Lunar and Planetary Science Conference Proceedings_**1,** 989-1025 (1978). * [44] Korochantseva, E. V. _et al._ L-chondrite asteroid breakup tied to Ordovician meteorite shower by multiple isochron \({}^{40}\)Ar-\({}^{39}\)Ar dating. _Meteoritics & Planetary Science_**42,** 113-130. doi:10.1111/j.1945-5100.2007.tb00221.x (2007). * [45] Turner, G., Saxton, J. M. & Laurenzi, M. Retention of K-Ar ages by meteorite fusion crust and an attempt to date Antarctic dust. _Meteoritics_**25,** 416 (1990). * [46] McConville, P., Kelley, S. & Turner, G. Laser probe \({}^{40}\)Ar-\({}^{39}\)Ar studies of the Peace River shocked L6 chondrite. _Geochimica et Cosmochimica Acta_**52,** 2487-2499. doi:10.1016/0016-7037(88)90307-9 (1988). * [47] Morbidelli, A. & Gladman, B. Orbital and temporal distributions of meteorites originating in the asteroid belt. _Meteoritics & Planetary Science_**33,** 999-1016. doi:10.1111/j.1945-5100.1998.tb01707.x (1998). * [48] Nesvorny, D., Roig, F. & Bottke, W. F. Modeling the historical flux of planetary impactors. _The Astronomical Journal_**153,** 103. doi:10.3847/1538-3881/153/3/103 (2017). * [49] Kruijer, T. S., Burkhardt, C., Budde, G. & Kleine, T. Age of Jupiter inferred from the distinct genetics and formation times of meteorites. _Proceedings of the National Academy of Sciences,_ 201704461. doi:10.1073/pnas.1704461114 (2017). * [50] Sugiura, N. & Fujiya, W. Correlated accretion ages and \(\epsilon^{54}\)Cr of meteorite parent bodies and the evolution of the solar nebula. _Meteoritics and Planetary Science_**49,** 772-787. doi:10.1111/maps.12292 (2014). * [51] Woolum, D. S. & Cassen, P. Astronomical constraints on nebular temperatures: Implications for planetesimal formation. _Meteoritics & Planetary Science_**34,** 897-907. doi:10.1111/j.1945-5100.1999.tb01408.x (1999). * [52] Schrader, D. L., Fu, R. R., Desch, S. J. & Davidson, J. 
The background temperature of the protoplanetary disk within the first four million years of the Solar System. _Earth and Planetary Science Letters_**504,** 30-37. doi:10.1016/j.epsl.2018.09.030 (2018). * [53] Xie, M. & Xiao, Z. A new chronology from debiased crater densities: Implications for the origin and evolution of lunar impactors. _Earth and Planetary Science Letters_**602,** 117963. doi:10.1016/j.epsl.2022.117963 (2023). * [54] Rizzuto, A. C. _et al._ TESS Hunt for Young and Maturing Exoplanets (THYME). II. A 17 Myr old transiting hot Jupiter in the Sco-Cen Association. _The Astronomical Journal_**160,** 33. doi:10.3847/1538-3881/ab94b7 (2020). * [55] Mann, A. W. _et al._ TESS Hunt for Young and Maturing Exoplanets (THYME) VI: an 11 Myr giant planet transiting a very low-mass star in Lower Centaurus Crux. _The Astronomical Journal_**163,** 156. doi:10.3847/1538-3881/ac511d (2022). * [56] Bogard, D. D. & Garrison, D. H. Ar-Ar and I-Xe ages and thermal histories of three unusual metal-rich meteorites. _Geochimica et Cosmochimica Acta_**73,** 6965-6983. doi:10.1016/j.gca.2009.08.009 (2009). * [57] Swindle, T. D., Kring, D. A. & Weirich, J. R. in _Advances in \({}^{40}\)Ar/\({}^{39}\)Ar dating: From archaeology to planetary sciences_ (eds Jourdan, F., Mark, D. F. & Verati, C.) 333-347 (Geological Society of London, 2014). doi:10.1144/SP378.6. * [58] Bogard, D. D., Garrison, D. H., Norman, M., Scott, E. R. D. & Keil, K. \({}^{39}\)Ar-\({}^{40}\)Ar age and petrology of Chico: Large-scale impact melting on the L chondrite parent body. _Geochimica et Cosmochimica Acta_**59,** 1383-1399. doi:10.1016/0016-7037(95)00051-Z (1995). * [59] Steiger, R. & Jager, E. Subcommission on geochronology: Convention on the use of decay constants in geo- and cosmochronology. _Earth and Planetary Science Letters_**36,** 359-362. doi:10.1016/0012-821X(77)90060-7 (1977). * [60] Renne, P. R., Mundil, R., Balco, G., Min, K. & Ludwig, K. R. Joint determination of \({}^{40}\)K decay constants and \({}^{40}\)Ar*/\({}^{40}\)K for the Fish Canyon sanidine standard, and improved accuracy for \({}^{40}\)Ar/\({}^{39}\)Ar geochronology. _Geochimica et Cosmochimica Acta,_ 19 (2010). * [61] Renne, P. R., Balco, G., Ludwig, K. R., Mundil, R. & Min, K. Response to the comment by W.H. Schwarz et al. on "Joint determination of \({}^{40}\)K decay constants and \({}^{40}\)Ar*/\({}^{40}\)K for the Fish Canyon sanidine standard, and improved accuracy for \({}^{40}\)Ar/\({}^{39}\)Ar geochronology" by P.R. Renne et al. (2010). _Geochimica et Cosmochimica Acta_**75,** 5097-5100. doi:10.1016/j.gca.2011.06.021 (2011). * [62] Schwarz, W. H., Kossert, K., Trieloff, M. & Hopp, J. Comment on the "\({}^{40}\)Ar/\({}^{39}\)Ar age of plagioclase from Acapulco meteorite and the problem of systematic errors in cosmochronology" by Paul R. Renne et al. (2010). _Geochimica et Cosmochimica Acta_**75,** 5094-5096. doi:10.1016/j.gca.2011.06.022 (2011). * [63] Turrin, B. D. _et al._\({}^{40}\)Ar/\({}^{39}\)Ar ages of L4, H5, EL6, and feldspathic ureilitic clasts from the Almahata Sitta polymict ureilite (asteroid 2008 TC3). _Meteoritics & Planetary Science_**58,** 304-327. doi:10.1111/maps.13953 (2023). * [64] Husain, L. \({}^{40}\)Ar-\({}^{39}\)Ar chronology and cosmic ray exposure ages of the Apollo 15 samples. _Journal of Geophysical Research_**79,** 2588-2606. doi:10.1029/JB079i017p02588 (1974). * [65] Bezanson, J., Edelman, A., Karpinski, S. & Shah, V. B. Julia: A fresh approach to numerical computing. 
_SIAM Review_**59,** 65-98. doi:10.1137/141000671 (2017). * [66] Danisch, S. & Krumbiegel, J. Makie.jl: Flexible high-performance data visualization for Julia. _Journal of Open Source Software_**6,** 3349. doi:10.21105/joss.03349 (2021). * [67] Edwards, G. H. & Blackburn, T. Accretion of a large LL parent planetesimal from a recently formed chondrule population. _Science Advances_**6,** eaay8641. doi:10.1126/sciadv.aay8641 (2020). * [68] Blackburn, T., Alexander, C. M., Carlson, R. & Elkins-Tanton, L. T. The accretion and impact history of the ordinary chondrite parent bodies. _Geochimica et Cosmochimica Acta_**200,** 201-217. doi:10.1016/j.gca.2016.11.038 (2017). * [69] Trieloff, M., Hopp, J. & Gail, H.-P. Evolution of the parent body of enstatite (EL) chondrites. _Icarus_**373,** 114762. doi:10.1016/j.icarus.2021.114762 (2022). * [70] Gail, H.-P. & Trieloff, M. Thermal history modelling of the L chondrite parent body. _Astronomy & Astrophysics_**628,** A77. doi:10.1051/0004-6361/201936020 (2019). * [71] Macke, R. _Survey of meteorite physical properties density, porosity and magnetic susceptibility_ PhD (University of Central Florida, 2010). * [72] Flynn, G. J., Consolmagno, G. J., Brown, P. & Macke, R. J. Physical properties of the stone meteorites: Implications for the properties of their parent bodies. _Geochemistry_**78,** 269-298. doi:10.1016/J.CHEMER.2017.04.002 (2018). * [73] Wach, R. A., Adamus, A. & Szurgot, M. Specific heat capacity of Soltmany and NWA 4560 meteorites. _Meteoritics & Planetary Science: Abstracts of the 76th Annual Meteoritical Society Meeting_**48,** Abstract No. 5017 (2013). * [74] Yomogida, K. & Matsui, T. Physical properties of ordinary chondrites. _Journal of Geophysical Research_**88,** 9513-9533 (1983). * [75] Opeil, C. P., Consolmagno, G. J. & Britt, D. T. The thermal conductivity of meteorites: New measurements and analysis. _Icarus_**208,** 449-454. doi:10.1016/j.icarus.2010.01.021 (2010). * [76] Opeil, C. P., Consolmagno, G. J., Safarik, D. J. & Britt, D. T. Stony meteorite thermal properties and their relationship with meteorite chemical and physical states. _Meteoritics and Planetary Science_**47,** 319-329. doi:10.1111/j.1945-5100.2012.01331.x (2012). * [77] Hevey, P. J. & Sanders, I. S. A model for planetesimal meltdown by \({}^{26}\)Al and its implications for meteorite parent bodies. _Meteoritics and Planetary Science_**41,** 95-106. doi:10.1111/j.1945-5100.2006.tb00195.x (2006). * [78] Carslaw, H. S. & Jaeger, J. C. _Conduction of heat in solids_ 2nd ed. (Clarendon Press, Oxford, 1959). * [79] Lodders, K. & Fegley, B. _Planetary Scientist's Companion_ (Oxford University Press (US), Cary, US, 1998). * [80] Slater-Reynolds, V. & McSween, H. Y. Peak metamorphic temperatures in type 6 ordinary chondrites: An evaluation of pyroxene and plagioclase geothermometry. _Meteoritics and Planetary Science_**40,** 745-754. doi:10.1111/j.1945-5100.2005.tb00977.x (2005). * [81] Goudy, S. P., Telus, M. & Chapman, B. Evidence for multiple early impacts on the H chondrite parent body from electron backscatter diffraction analysis. _Meteoritics & Planetary Science_**58,** 501-515. doi:10.1111/maps.13969 (2023). * [82] Rubin, A. E. _et al._ Nature of the H chondrite parent body regolith: Evidence from the Dimmitt breccia. _Journal of Geophysical Research: Solid Earth_**88,** A741-A754. doi:10.1029/JB088iS02p0A741 (1983). * [83] Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. & Teller, E. Equation of state calculations by fast computing machines. 
_The Journal of Chemical Physics_**21,** 1087-1092. doi:10.1063/1.1699114 (1953). * [84] Keller, C. B., Schoene, B. & Samperton, K. M. A stochastic sampling approach to zircon eruption age interpretation. _Geochemical Perspectives Letters_**8,** 31-35. doi:10.7185/geochemlet.1826 (2018). * [85] Dodson, M. H. Closure temperature in cooling geochronological and petrological systems. _Contributions to Mineralogy and Petrology_**40,** 259-274. doi:10.1007/BF00373790 (1973). * [86] Bottke, W. F. _et al._ The fossilized size distribution of the main asteroid belt. _Icarus_**175,** 111-140. doi:10.1016/J.ICARUS.2004.10.026 (2005). * [87] Jacobsen, B. _et al._\({}^{26}\)Al-\({}^{26}\)Mg and \({}^{207}\)Pb-\({}^{206}\)Pb systematics of Allende CAIs: Canonical solar initial \({}^{26}\)Al/\({}^{27}\)Al ratio reinstated. _Earth and Planetary Science Letters_**272,** 353-364. doi:10.1016/j.epsl.2008.05.003 (2008). * [88] Weirich, J. R., Isachsen, C., Swindle, T. D. & Kring, D. A. Ar-Ar impact ages of shocked LL chondrites. _Meteoritics and Planetary Science Supplement_**72,** 5368 (2009). * [89] Bogard, D. D., Dixon, E. T. & Garrison, D. H. Ar-Ar ages and thermal histories of enstatite meteorites. _Meteoritics & Planetary Science_**45,** 723-742. doi:10.1111/j.1945-5100.2010.01060.x (2010). * [90] Trieloff, M. _et al._ Structure and thermal history of the H-chondrite parent asteroid revealed by thermochronometry. _Nature_**422,** 502-506. doi:10.1038/nature01498.1. (2003). * [91] Henke, S., Gail, H.-P., Trieloff, M. & Schwarz, W. Thermal evolution model for the H chondrite asteroid-instantaneous formation versus protracted accretion. _Icarus_**226,** 212-228. doi:10.1016/J.ICARUS.2013.05.034 (2013). ## Acknowledgements We are indebted to Maggie A. Thompson for many insightful conversations and thoughtful feedback on an early draft of this manuscript. We thank Munazza K. Alam for foundational discussions about giant planet migration mechanisms. G.H.E. was supported by NSF Award #2102591. C.W.S. was supported by an Undergraduate Research Assistantship at Dartmouth (Spring 2022). ## Author Contributions G.H.E. ran simulations and wrote the manuscript. G.H.E. and C.B.K. wrote the code and interpreted results. G.H.E., C.B.K., and E.R.N. conceived of the study. G.H.E. and C.W.S. compiled the thermochronologic age database and wrote age recalculation codes. All authors contributed to editing the manuscript. ## Competing interests The authors declare no competing interests. ## Data Availability All \({}^{40}\)K-\({}^{40}\)Ar system ages examined in this study and/or used in statistical codes are tabulated in Supplementary Table 2, and corresponding literature sources are tabulated in Supplementary Table 1. The data used to estimate priors in Table 1 are available in the literature sources referenced therein. Detailed calculations of priors in Table 1 and recalibrated \({}^{40}\)K-\({}^{40}\)Ar system ages are available at [https://github.com/grahamedwards/ImpactChron.jl/tree/main/data](https://github.com/grahamedwards/ImpactChron.jl/tree/main/data) and will be archived with this repository on Zenodo upon publication. Posterior distributions may be reproduced using the ImpactChron.jl software package. ## Code Availability All code used to analyze data and perform the Markov chain Monte Carlo algorithms is available at [https://github.com/grahamedwards/ImpactChron.jl](https://github.com/grahamedwards/ImpactChron.jl) and will be archived on Zenodo upon publication. 
Figure 1: **Timescales of giant planet migration stimuli.** Diagram on the left depicts inward (Type II) migration of a proto-Jupiter and proto-Saturn that carved gaps in the gaseous protoplanetary disk. The solar system’s gaseous disk dissipated \(\leq\)5 My\({}_{\rm ss}\) [36]. Diagram on the right depicts a giant planet instability (giant planets on high-eccentricity orbits) in the presence of an outer planetesimal disk. Such an instability might be caused by dissipation of the gaseous disk, a giant planet orbital configuration that is inherently unstable without the presence of the gaseous disk, or interactions with an outer planetesimal disk. Gas dissipation occurs between 3–5 My\({}_{\rm ss}\) (black box), destabilization of a self-unstable system occurs \(\leq\)10 My after gas dissipation (\(\sim\)5–15 My\({}_{\rm ss}\)), and planetesimal disk-triggered instability occurs within 100 My of gas dissipation [16]. A Late Heavy Bombardment (LHB) scenario occurs \(>\)400 My\({}_{\rm ss}\), beyond the scale of the timeline. Note that each scenario corresponds to a distinct timeframe in solar system history. In all scenarios, dynamical excitation scatters inner solar system bodies. My\({}_{\rm ss}\) reflects million years after solar system formation, assigned to the age of the oldest Ca-Al-rich inclusions (4567.3 Ma). Figure 2: **Distributions of \({}^{40}\)K-\({}^{40}\)Ar system ages, measured by the K-Ar and \({}^{40}\)Ar-\({}^{39}\)Ar methods.** Time is reported as both age (Ma, lower x-axis) and time after solar system formation (My\({}_{\rm ss}\), upper x-axis). Histograms reflect the summed distributions of ages from each dataset. K-Ar age density has a broad, shallow maximum at \(\sim\)4200 Ma that gradually decays to low probability densities with minor local maxima extending to the present day. \({}^{40}\)Ar-\({}^{39}\)Ar age density monotonically decreases from a sharp peak (\(>2\times\) the height of the K-Ar peak) shortly after the age of the solar system (dashed line), with minor local peaks between \(\sim\)4200 and 3600 Ma. By 3500 Ma, the distribution converges on near-nil probability density until an approximately symmetric local maximum within the last 2000 Ma. Since these \(<\)2000 Ma ages (shaded) are not associated with giant planet migration (see text), we exclude them from our analyses. If we recalculate the \({}^{40}\)Ar-\({}^{39}\)Ar age distribution with lower precision (\(\sigma=6\%\)) to mimic the K-Ar system, the early peak is broader and shallower but the maximum remains \(>\)4300 Ma, indicating that K-Ar ages are systematically younger than the more reliable \({}^{40}\)Ar-\({}^{39}\)Ar ages. Thus, we exclude K-Ar ages from our analyses. Figure 3: **Comparison of prior (measured, black) and posterior (modeled, purple) \({}^{40}\)Ar-\({}^{39}\)Ar age distributions for different bombardment histories.** Time is reported as both age (Ma, lower x-axes) and time after solar system formation (My\({}_{\rm ss}\), upper x-axes). The dashed lines demarcate 0 My\({}_{\rm ss}\). The purple heatmaps show the relative density of posterior age distributions (darker = higher density), overlain by the prior distribution of measured ages in black (as in Fig. 2). (**a**) A no-impact scenario yields a relatively poor agreement between measured and simulated thermochronologic ages, corresponding to a mean log-likelihood (\(\ell\)) of \(-1016\pm 2\) (1\(\sigma\)). 
(**b**–**d**) Scenarios with impact fluxes yield posterior distributions that are concordant with the prior, corresponding to \(\ell=-992\pm 2\) for a single impact flux scenario (**b**) and \(\ell=-989\pm 3\) for scenarios with 2 or 3 impact fluxes (one of which is anchored to the solar age; panels **c**, **d**). To mimic the contribution of uncertainties in the prior distribution, each posterior age distribution (n=10\({}^{6}\), e.g. Extended Data Fig. 2c) is recalculated by randomly resampling 100 draws with the mean uncertainty of the measured \({}^{40}\)Ar-\({}^{39}\)Ar dates (\(\sigma\sim 1\%\)) and calculating a histogram (as in Fig. 2) from the resampled ages. **Figure 4: Posterior distributions of simulated bombardment history parameters.** Panels **a** and **b** are corner plot diagrams: diagonals depict one-dimensional histograms of each parameter, and off-diagonals depict the 2-dimensional distributions of each parameter pair, with heatmaps of distribution density within 2\(\sigma\) of the means (darker cells reflect higher density). Black lines trace the median values of each parameter. (**a**) corresponds to a “post-accretion” flux, for which the Bayesian inversion explores its onset time \(t_{o}\) (My\({}_{\rm ss}\) or My after solar system formation/Ca-Al-rich inclusions), initial impact flux \(F_{o}\) (My\({}^{-1}\)), and \(e\)-folding time \(\tau\) (My). \(F_{o}\) tends to decrease as \(\tau\) and \(t_{o}\) increase. (**b**) corresponds to a “primordial” flux where the inversion explores values of \(F_{o}\) and \(\tau\), but \(t_{o}\) (not shown) is anchored to 0 My\({}_{\rm ss}\) (4567.3 Ma). \(F_{o}\) and \(\tau\) exhibit a pronounced inverse relationship. Notably, the median values of \(F_{o}\) and \(\tau\) for the post-accretion flux (**a**) are respectively 1000\(\times\) and \(<\)0.1\(\times\) the primordial flux (**b**). Summary statistics are tabulated in Extended Data Table 2. (**c**) Comparison of the posterior distribution of bombardment onset date (\(t_{o}\) in **a**) with the timescales of potential giant planet migration stimuli, as in Fig. 1. Dark bars demarcate the 95 % and 50 % credible intervals (CI). The lower 26 % of the distribution overlaps the timescales of gas-disk-driven migration, the uppermost 38 % overlaps the timescales of planetesimal disk-triggered instabilities, and 45 % of the distribution interior--including the median (11.3 My\({}_{\rm ss}\), vertical line) and mean (15.0 My\({}_{\rm ss}\))--overlaps the timescales of gas dissipation and self-triggered instabilities. The hypothesized Late Heavy Bombardment (LHB) timescale lies beyond the domain. Distributions reflect 10\({}^{6}\) Markov chain steps. 
\begin{table} \begin{tabular}{l l l l} \multicolumn{1}{c}{**Parameter**} & **Prior** & **Reference** \\ \hline \multicolumn{3}{l}{**Environmental**} \\ \(t_{ss}\) & Solar age (oldest CAIs) & 4567.3 Ma & [26] \\ \(T_{m}\) & Midplane temperature & \(log{\cal N}(5.4,0.5)\)\(\sim\) 210 K & [51] \\ \multicolumn{3}{l}{**Cosmochemical**} \\ \({}^{26}Al_{o}\) & Initial \({}^{26}Al/^{27}Al\) & \(5.23\pm 0.06\times 10^{-5}\) & [87] \\ \([Al]\) & Al abundance & \(log{\cal N}(-4.6,0.1)\)\(\sim\) 1.0 wt \% & [79] \\ \(T_{c}\) & Ar closure temperature & \(log{\cal N}(6.2,0.3)\)\(\sim\) 490 K & [43, 56, 88, 89, 90] \\ \multicolumn{3}{l}{**Asteroid**} \\ \(R\) & Radius & \(log{\cal N}(11.9,0.2)\)\(\sim\) 150 km & [67, 68, 69, 70, 91] \\ \(t_{a}\) & Time of accretion & \(log{\cal N}(0.70,0.08)\)\(\sim\) 2.0 My\({}_{\rm ss}\) & [67, 68, 69, 70, 90, 91] \\ \multicolumn{3}{l}{**Material**} \\ \(\rho\) & Bulk density & \(log{\cal N}(8.12,0.04)\)\(\sim 3400\)\(\frac{kg}{m^{3}}\) & [71, 72] \\ \(C_{p}\) & Specific heat capacity & \(log{\cal N}(6.73,0.08)\)\(\sim 850\)\(\frac{J}{kg\cdot K}\) & [73] \\ \(K\) & Thermal conductivity & \(log{\cal N}(0.3,0.6)\)\(\sim\) 1.4 \(\frac{W}{m\cdot K}\) & [74, 75, 76] \\ \multicolumn{3}{l}{**Bombardment**} \\ \(t_{o}\) & Start time & \({\cal U}[0,t_{max}]\) (My\({}_{\rm ss}\)) \\ \(F_{o}\) & Initial flux & \({\cal U}[0,10^{4}]\) (My\({}^{-1}\)) \\ \(\tau\) & \(e\)-folding time & \({\cal U}[0,t_{max}]\) (My) \\ \hline \end{tabular} \end{table} Table 1: Parameters in the asteroid thermochronologic code and prior distributions used in the Bayesian inversion. Priors are either a constant, a uniform distribution \({\cal U}[a,b]\), a normal distribution \(\mu\pm\sigma\), or a lognormal distribution \(log{\cal N}(\mu,\sigma)\). Log-normal parameters are reported in log-space with linear-space approximations. My\({}_{\rm ss}\) is equivalent to My after the solar age. \begin{table} \begin{tabular}{c|c c c c c|c} & \multicolumn{6}{c|}{**Group**} \\ **Petrologic Type** & **H** & **L** & **LL** & **EH** & **EL** & **Total** \\ \hline **Type 3** & 2 & 4 & 4 & 2 & 1 & 13 \\ **Type 4** & 8 & 5 & 3 & 3 & 0 & 19 \\ **Type 5** & 14 & 1 & 6 & 2 & 1 & 24 \\ **Type 6** & 12 & 4 & 6 & 0 & 10 & 32 \\ **Type 7 / melt** & 3 & 3 & 2 & 5 & 2 & 15 \\ \hline **Total** & 39 & 17 & 21 & 12 & 14 & \\ \end{tabular} \end{table} Table 2: Population statistics of \(>\)2000 Ma \({}^{40}\)Ar-\({}^{39}\)Ar dates (n=97), used as priors in this study. Each cell reports the counts of all samples affiliated with the corresponding chondrite group (columns) and petrologic type (rows). We combine type 7 and impact-melted chondrites given their shared suprasolidus histories. This table excludes (n=2) R chondrites and (n=2) ungrouped E-type impact melts. The summed totals exceed the sample size, since individual meteorites may have multiple group or petrologic type affiliations (e.g. regolith breccias may contain multiple groups and petrologic types). **Extended Data Figure 1: Distributions of \({}^{40}\)Ar-\({}^{39}\)Ar cooling ages compiled for this study (n=136).** Histograms of summed age distributions arranged as a grid of pairings for each group (H, L, LL, EH, EL in columns) and petrologic type (3-7 in rows). Within each panel, bracketed numbers indicate sample size, and empty panels (EH6, EL4) indicate no samples of that classification. 
As in Table 2, some meteorites occur in multiple panels due to nonexclusive classifications, and we combine type 7s and impact-melted ("melt") chondrites given their shared suprasolidus histories. These plots exclude the ages of Rumuruti (R3-6, \(4460\pm 11\) Ma), Acfer 217 (R3-5, \(4300\pm 70\) Ma), and ungrouped E-type impact melts Zaklodzie (\(4503\pm 9\) Ma) and QUE 97348 (\(4444\pm 17\) Ma). While there are few systematic trends across petrologic types, there are some trends within groups. L chondrites, particularly L5 and L6 types, have a large proportion of \(<\)1 Ga ages. EH and EL chondrite \({}^{40}\)Ar-\({}^{39}\)Ar ages largely overlap and are typically \(>\)4 Ga. All ages and affiliations are tabulated in Supplementary Table 2. **Extended Data Figure 2: Workflow of asteroidal-scale thermochronologic model.** My\({}_{\rm ss}\) denotes megayears after solar system formation. (**a**) We use an analytical solution to the heat equation in a radiogenically heated, conductively cooling spherical body [77, 78]. Each curve traces the time-temperature history at a depth in the simulated asteroid after instantaneous accretion (solid grey line). As the temperature of a given depth passes below the effective closure temperature of Ar (dashed black line), we assign that timestep as the \({}^{40}\)Ar-\({}^{39}\)Ar age (panel **b**). Panels **a** and **b** share color scales and a logarithmic timescale. The parameters used in these simulations are the priors' central tendencies in Table 1. (**c**) We calculate a distribution of \({}^{40}\)Ar-\({}^{39}\)Ar cooling ages from the calculated ages and volumetric proportions of each simulated radial shell (black curve, labeled "Unweighted"). We assign a petrologic type to each depth in the body based on the peak temperature of its time-temperature history (**a**) and recalculate a petrologic type-weighted distribution of ages ("Weighted", red curve). This step increases the proportion of early ages from shallower depths. The blue curve ("Impact Reheated") depicts the effect of impact reheating by the primordial impact flux depicted in panel **d**. (**d**) For simulations with bombardment histories, we "reheat" the body at a range of depths with one or more exponentially decaying fluxes of impacts. Panel **d** depicts two such fluxes: a "primordial" flux anchored to the solar age (0 My\({}_{\rm ss}\)) and a "post-accretion" flux beginning 300 My\({}_{\rm ss}\). The primordial flux has a lower initial flux (20 My\({}^{-1}\)) and longer \(e\)-folding timescale (200 My), resulting in a mild/protracted bombardment. The post-accretion flux has a higher initial flux (100 My\({}^{-1}\)) and shorter \(e\)-folding timescale (20 My), resulting in an intense/brief bombardment. **Extended Data Figure 3: Posterior distributions of thermochronologic model parameters for a simulated asteroid with no impact reheating.** Each histogram reflects \(10^{6}\) steps of post-burn-in MCMC simulation. Dashed lines demarcate prior distributions of each corresponding parameter. Posterior distributions of bulk density (**c**) and initial \({}^{26}\)Al/\({}^{27}\)Al (**d**) agree well with their priors, while all other parameters appear inconsistent with their respective priors. Extended Data Table 1 reports summary statistics. **Extended Data Figure 4: Posterior distributions of non-bombardment thermochronologic model parameters for a simulated asteroid with a single bombardment flux.** Each histogram reflects \(10^{6}\) steps of post-burn-in MCMC simulation. 
Dashed lines demarcate prior distributions of each corresponding parameter. Posterior distributions of bulk density (**c**), initial \({}^{26}\)Al/\({}^{27}\)Al (**d**), asteroid radius (**e**), and specific heat capacity (**f**) agree well with their priors, while all other parameters appear inconsistent with their respective priors. Extended Data Table 1 reports summary statistics. Extended Data Fig. 6 depicts bombardment parameter posteriors. **Extended Data Figure 5: Posterior distributions of non-bombardment thermochronologic model parameters for a simulated asteroid with two bombardment fluxes.** These posteriors are from the Markov chain simulation depicted in Figs. 3c and 4. Each histogram reflects \(10^{6}\) steps of post-burn-in MCMC simulation. Dashed lines demarcate prior distributions of each corresponding parameter. All posterior distributions agree well with their priors. All simulations with 2-3 bombardment histories (Figs. 3c-d, Extended Data Figs. 7, 8) have nearly identical prior-posterior relationships for non-bombardment parameters (Supplementary Figs. 1-2). Extended Data Table 2 reports summary statistics. **Extended Data Figure 6: Corner plot diagram for the bombardment parameters of a single bombardment scenario.** Each distribution reflects \(10^{6}\) Markov chain steps and black lines trace the median values (\(M\)) of each parameter. The initial impactor flux (\(F_{o},\ M\sim 1\) My\({}^{-1}\)) decreases as the onset date (\(t_{o},\ M\sim 100\) My\({}_{\rm ss}\)) and \(e\)-folding decay timescale (\(\tau,\ M\sim 400\) My) increase. See Fig. 4 caption for description of corner plot layout. Extended Data Table 1 reports summary statistics. **Extended Data Figure 7: Corner plot diagram for the bombardment parameters of a 2-flux history.** The scenario is similar to that reported in Fig. 4, except that both bombardments (\(\beta\), \(\gamma\)) are "post-accretion" events with onset dates (\(t_{o}\)) that are explored as free parameters by the Markov chain algorithm. Each distribution reflects \(10^{6}\) Markov chain steps and black lines trace the median values (\(M\)) of each parameter. Impactor flux \(\beta\) begins with a median onset date of \(M(t_{o}\beta)\sim 10\) My\({}_{\rm ss}\) and is intense/brief (\(M(F_{o}\beta)>1000\) My\({}^{-1}\), \(M(\tau\beta)\sim 10\) My). Impactor flux \(\gamma\) begins far later (\(M(t_{o}\gamma)\sim 100\) My\({}_{\rm ss}\)), but is mild/protracted (\(M(F_{o}\gamma)\sim 1\) My\({}^{-1}\), \(M(\tau\gamma)\sim 500\) My), similar to the primordial flux in Fig. 4b. There is little apparent correlation between fluxes, but within each flux, \(F_{o}\) scales inversely with longer \(\tau\) and later \(t_{o}\). See Fig. 4 caption for description of corner plot layout. Extended Data Table 2 reports summary statistics. **Extended Data Figure 8: Corner plot diagram for the bombardment parameters of a 3-flux history.** Bombardment \(\alpha\) is a "primordial" flux (onset date, \(t_{o}\), is anchored to the solar age, 0 My\({}_{\rm ss}\)), whereas the Markov chain algorithm explores values of \(t_{o}\) for bombardments \(\beta\) and \(\gamma\). Each distribution reflects \(10^{6}\) Markov chain steps and black lines trace the median values (\(M\)) of each parameter. Bombardment events \(\alpha\) and \(\beta\) show a similar pattern to that observed in 2-bombardment simulations (Fig. 4, Extended Data Fig. 7). The primordial impactor flux (\(\alpha\)) is mild/protracted (\(M(F_{o}\alpha)\sim 1\) My\({}^{-1}\), \(M(\tau\alpha)\sim 400\) My). 
Impactor flux \(\beta\) is intense/brief (\(M(F_{o}\beta)>1000\) My\({}^{-1}\), \(M(\tau\beta)\sim 10\) My), with a median onset date of \(\sim 10\) My\({}_{\rm ss}\). Impactor flux \(\gamma\) exhibits a bimodal distribution in \(t_{o}\) and \(\tau\): an early mode (\(t_{o}\gamma<100\) My\({}_{\rm ss}\)) with \(\tau\sim 10\) My and a later mode (\(t_{o}\gamma>1000\) My\({}_{\rm ss}\)) with \(\tau\sim 0.1\) My. The initial impact flux is unimodally large (\(M(F_{o}\beta)>1000\) My\({}^{-1}\)). There is little apparent correlation between fluxes, but within each flux, \(F_{o}\) scales inversely with longer \(\tau\) and later \(t_{o}\). Extended Data Table 2 reports summary statistics. **Extended Data Table 1: Summary statistics of parameter posterior distributions for no-impact and single bombardment simulations.** Parameter names ("Param.") correspond to Table 1. Priors are either a constant, a uniform distribution \(\mathcal{U}[a,b]\), a normal distribution \(\mathcal{N}(\mu,\sigma)\), or a lognormal distribution \(log\mathcal{N}(\mu,\sigma)\). For log-normally distributed parameters, means and standard deviations (\(\mu\pm\sigma\)) are calculated from and reported as natural logarithms. Median and 95% credible interval (\(M\pm CI\)) are all reported in linear-space.
2302.14812
RaDiO: an efficient spatiotemporal radiation diagnostic for particle-in-cell codes
This work describes a novel radiation algorithm designed to capture the three-dimensional, space-time resolved electromagnetic field structure emitted by large ensembles of charged particles. The algorithm retains the full set of degrees of freedom that characterize electromagnetic waves by employing the Li\'enard-Wiechert fields to retrieve radiation emission. Emitted electric and magnetic fields are deposited in a virtual detector using a temporal interpolation scheme. This feature is essential to accurately predict field amplitudes and preserve the continuous character of radiation emission, even though particle dynamics is known only in a discrete set of temporal steps. Our algorithm retains and accurately captures, by design, full spatial and temporal coherence effects. We demonstrate that our numerical approach recovers well known theoretical radiated spectra in standard scenarios of radiation emission. We show that the algorithm is computationally efficient by computing the full spatiotemporal radiation features of High Harmonic Generation through a plasma mirror in a Particle-In-Cell (PIC) simulation.
M. Pardal, A. Sainte-Marie, A. Reboul-Salze, R. A. Fonseca, J. Viera
2023-02-28T18:06:10Z
http://arxiv.org/abs/2302.14812v1
# RaDiO: an efficient spatiotemporal radiation diagnostic for particle-in-cell codes ###### Abstract This work describes a novel radiation algorithm designed to capture the three-dimensional, space-time resolved electromagnetic field structure emitted by large ensembles of charged particles. The algorithm retains the full set of degrees of freedom that characterize electromagnetic waves by employing the Lienard-Wiechert fields to retrieve radiation emission. Emitted electric and magnetic fields are deposited in a virtual detector using a temporal interpolation scheme. This feature is essential to accurately predict field amplitudes and preserve the continuous character of radiation emission, even though particle dynamics is known only in a discrete set of temporal steps. Our algorithm retains and accurately captures, by design, full spatial and temporal coherence effects. We demonstrate that our numerical approach recovers well known theoretical radiated spectra in standard scenarios of radiation emission. We show that the algorithm is computationally efficient by computing the full spatiotemporal radiation features of High Harmonic Generation through a plasma mirror in a Particle-In-Cell (PIC) simulation. keywords: Radiation, Plasma, Particle-In-Cell, Spatiotemporal, Coherence ## 1 Introduction Radiative processes in plasma are ubiquitous in astrophysics [1] and in laboratory settings. In plasma acceleration experiments, for example, they are important to the development of compact light sources [2], commonly employed in probing ultra-fast processes. Radiation emission mechanisms in plasma result from collective effects associated with the self-consistent dynamics of a large number of charged particles in the presence of strong electric and magnetic fields. _Ab-initio_ numerical models, which can capture the motion of single particles, play an important role in this context, not only to validate theoretical advances, but also to predict radiation emission from experiments and in conditions where analytical models are not available. Among the different numerical techniques, the Particle-in-Cell (PIC) [3] scheme provides a standard model to compute the motion of ensembles of charged particles. In its standard version, the PIC scheme consists of a loop that iteratively computes electric and magnetic fields by solving a discretized version of the full set of Maxwell's equations on a grid, and then determines the next positions of the charged particles according to the relativistic Lorentz force. PIC codes are thus capable, by design, of retaining most classical radiation emission processes. The resolution required to capture radiation in the PIC algorithm poses quite stringent limitations on the shortest wavelengths that can be captured directly in a simulation, given that increasing the grid resolution will lead to a significant increase in the computational load. Consider a relativistic charged particle, with relativistic factor \(\gamma_{p}\), undergoing a periodic motion with period \(T\): the corresponding radiation wavelength scales as \(\lambda_{\rm rad}\propto cT/\gamma_{p}^{2}\). Hence, the spatial resolution required to capture \(\lambda_{\rm rad}\) is \(\gamma_{p}^{2}\) times higher than the resolution needed to describe the particle trajectory. Furthermore, because of the Courant-Friedrichs-Lewy condition, the required temporal resolution is also \(\gamma_{p}^{2}\) times higher than standard. 
This results in an increase of \(\gamma_{p}^{4}\) operations per simulation, pushing the limits of current computational capabilities, thereby motivating the development of advanced algorithms to compute radiation emission in PIC codes. The standard approach to avoid the increased computational load and obtain high-frequency radiation emission from PIC simulations consists in performing additional radiation calculations outside the PIC loop using particle trajectory information obtained with the PIC algorithm. Many simulation codes have been developed over the recent years following this strategy. The code JRAD [4] receives a set of charged particle trajectories in order to compute the radiated spectra from the Fourier transform of the Lienard-Wiechert potentials; PICon-GPU [5; 6; 7] follows a similar strategy, but can compute the emitted spectrum as the simulation progresses; the PIC codes OSIRIS [8] and EPOCH employ Monte-Carlo approaches to compute the spectrum of radiation from QED processes at run time (see, e.g. Ref. [9]). These tools have been successfully used to predict the radiation properties of laboratory plasmas (in plasma based accelerators [10]), Quantum Electrodynamics [11] and astrophysical plasmas. However, the spatiotemporal profile of radiation is also important in fields such as astrophysics, where it can reveal the properties of rotating black holes [1; 12] for example. It can also play an important role in advanced microscopy based on twisted light with helical wavefronts [13]. Furthermore, this approach also provides a natural description of orbital angular momentum of light. To address this, we propose a new algorithm that retrieves the spatiotemporal radiation profile instead. This complementary approach includes built-in spatial and temporal coherence effects that are important to describe unexplored features of radiation emission, such as superradiant emission [14], for example. Our scheme can be used whenever the charged particle motion is well resolved, regardless of whether the spatial or temporal resolution is sufficient to resolve the resulting electromagnetic radiation. The PIC simulation framework provides a direct and natural application to our present work and we focused on the implementation of this algorithm into the OSIRIS code naming our tool RaDiO, which stands for Radiation Diagnostic for OSIRIS. This diagnostic is composed of two distinct but equally useful counterparts: one implemented as a post-processing tool that uses previously generated trajectories to find the radiation that was emitted along them, and the other implemented as a run-time diagnostic for the PIC code OSIRIS, that uses the simulation data at each time step to compute the radiation. This paper is structured as follows. In Section 2, we describe the theoretical framework behind radiation emission processes, which lays the groundwork for the development of the algorithm. Section 3 describes the implementation of the algorithm in detail, exploring key aspects like the temporal interpolation scheme. In Section 4, we benchmark our code against theoretical predictions and the results obtained with other radiation codes. Section 5 contains the study of the radiation emitted during the reflection of laser pulses by a plasma mirror. And, finally, Section 6 presents the conclusions. ## 2 Spatiotemporal electromagnetic field structure The Fourier transformed Lienard-Wiechert fields [15] are commonly employed to predict the radiation spectra from charged particle trajectories. 
Here, instead, we calculate the Lienard-Wiechert fields directly, as these formulas provide the emitted electromagnetic fields at a certain position in space-time. The spatiotemporal electric (\(\mathbf{E}\)) and magnetic (\(\mathbf{B}\)) field structure of the radiation emitted by a charged particle according to the Lienard-Wiechert formulas is given by: \[\begin{split}\mathbf{E}(\mathbf{x},t_{det})&=e\left[\frac{\mathbf{n}-\boldsymbol{\beta}}{\gamma_{p}^{2}(1-\boldsymbol{\beta}\cdot\mathbf{n})^{3}R^{2}}\right]_{\mathrm{ret}}+\frac{e}{c}\left[\frac{\mathbf{n}\times[(\mathbf{n}-\boldsymbol{\beta})\times\dot{\boldsymbol{\beta}}]}{(1-\boldsymbol{\beta}\cdot\mathbf{n})^{3}R}\right]_{\mathrm{ret}},\\ \mathbf{B}(\mathbf{x},t_{det})&=[\mathbf{n}\times\mathbf{E}]_{\mathrm{ret}},\end{split} \tag{1}\] with \(\gamma_{p}=1/\sqrt{1-\beta^{2}}\). In Equation (1), the subscript ret denotes calculations using values at the retarded time, and \(\mathbf{n}\) is the unit vector oriented from the particle position to the region in space where we are interested in capturing the emitted radiation. The virtual region in space-time where radiation is deposited is henceforth denoted as the _detector_ and will be described in more detail in Section 3. In addition, \(\mathbf{\beta}=\mathbf{v}/c\) and \(\dot{\mathbf{\beta}}=\dot{\mathbf{v}}/c\) are, respectively, the particle velocity normalized to the speed of light \(c\) and the corresponding normalized acceleration. Here the dot represents the time derivative. The directions of \(\mathbf{\beta}\) and \(\dot{\mathbf{\beta}}\) with respect to the virtual detector and \(\mathbf{n}\) are schematically represented in Figure 1. Moreover, \(e\) is the electron charge and the quantity \(R\) is the distance from the particle to the detector. For the purpose of determining the radiated fields, the first term in Equation (1) can be dropped if \(R\gamma_{p}^{2}\dot{\beta}/c\gg 1\). This condition is usually satisfied in the far field (\(R\gg c/\dot{\beta}\)) for sufficiently relativistic particles (\(\gamma_{p}\gg 1\)). The second term in Equation (1) thus corresponds to emission of propagating electromagnetic waves, describing the so-called acceleration fields. Equation (1) describes the emitted electric, \(\mathbf{E}\), and magnetic, \(\mathbf{B}\), fields at a given position \(\mathbf{x}\) and time \(t_{\mathrm{det}}\), calculated from quantities obtained at the retarded time \(t_{\mathrm{ret}}\). For a given light ray that reaches the detector at a time \(t_{\mathrm{det}}\), \(t_{\mathrm{ret}}\) is the instant of time when emission has occurred. The time of arrival \(t_{\mathrm{det}}\) is given by: \[t_{det}=t_{ret}+|\mathbf{r}_{part}-R_{cell}\mathbf{n}_{cell}|/c, \tag{2}\] where \(\mathbf{r}_{part}\) is the position of the particle and \(R_{cell}\mathbf{n}_{cell}\) is the position of the detector's cell. In order to enhance computational performance, it is useful and possible to simplify Equation (2) in the far field, which gives [15]: \[t_{det}=t_{ret}+R_{cell}/c-\mathbf{r}_{part}\cdot\mathbf{n}_{cell}/c. \tag{3}\] Equation (1) can thus be used to retrieve the full set of spatiotemporal degrees of freedom of the radiation emitted by accelerated charges. By mapping the emitted radiation at each timestep in the particle trajectory to the corresponding time of arrival at the detector, the actual temporal resolution of the relativistic particle trajectory can be much coarser than the required one to describe the radiated fields.
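To make the quantities entering Equations (1)-(3) concrete, the following is a minimal Python sketch (not part of RaDiO; the function names, array shapes, and normalized units with \(c=1\) are assumptions made for illustration) of the far-field acceleration term of Equation (1) and the far-field arrival time of Equation (3), for a single particle and a single detector cell.

```python
import numpy as np

def acceleration_field(x_part, beta, beta_dot, x_cell, q=-1.0, c=1.0):
    """Second (acceleration) term of Eq. (1) and B = n x E for one particle
    and one detector cell; the near-field term is neglected (far field).
    q is the particle charge in normalized units (assumption)."""
    r = x_cell - x_part                      # vector from particle to cell
    R = np.linalg.norm(r)                    # distance R
    n = r / R                                # unit vector n
    denom = (1.0 - np.dot(beta, n)) ** 3 * R
    E = (q / c) * np.cross(n, np.cross(n - beta, beta_dot)) / denom
    return E, np.cross(n, E)

def arrival_time(t_ret, x_part, R_cell, n_cell, c=1.0):
    """Far-field detector arrival time, Eq. (3)."""
    return t_ret + R_cell / c - np.dot(x_part, n_cell) / c

# illustrative usage with made-up numbers (normalized units, c = 1)
x_part, beta = np.array([0.0, 0.0, 0.0]), np.array([0.99, 0.0, 0.0])
beta_dot = np.array([0.0, 1e-3, 0.0])
x_cell = np.array([1.0e5, 1.0e3, 0.0])
E, B = acceleration_field(x_part, beta, beta_dot, x_cell)
t_det = arrival_time(0.0, x_part, np.linalg.norm(x_cell),
                     x_cell / np.linalg.norm(x_cell))
```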
An estimate of the maximum resolution that can be accurately obtained using Equations (2-3) can be found using the simplified picture shown in Figure 2: the particle located at \(x_{0}\) emits a photon 1 at \(t=t_{0}\). As the photon travels at \(c\), in the next time step it will have travelled a distance \(dt(c-v_{p})\) farther than the particle, which emits a second photon at \(t=t_{1}\). Considering that a particle emits a photon at every time-step, the time interval between the arrival of two consecutive photons at the detector, provided that they are emitted by a relativistic particle, is given by Equation (4): \[dt_{rad}=dt(1-v_{p}/c)\simeq dt/\left(2\gamma_{p}^{2}\right), \tag{4}\] with \(dt\) being the temporal distance between emissions, i.e., the temporal resolution of the simulation providing the particle trajectory. Therefore, we are able to capture radiation with frequencies up to \(2\gamma_{p}^{2}\) times larger than the ones used to sample the particle's motion, as our detector time grid can be as fine as \(dt_{\text{det}}=dt/2\gamma_{p}^{2}\). As a consequence, the simulation time step can be much larger than the typical period of the emitted radiation. It is also important to note that the resolution in the detector should not be increased indefinitely, as resolving time grids finer than \(dt/2\gamma_{p}^{2}\) could generate non-physical information. Figure 2: Illustration of radiation emission. A thorough analysis of these limits can be found in the Supplementary Material. The next section describes our implementation of the radiation algorithm and illustrates the reasons behind the different limits in resolution. ## 3 Algorithm and Implementation The calculations of Equation (1) can be fully integrated either into a pre-existing code that computes the trajectories of charged particles (e.g. the PIC scheme) or be used as a post-processing tool that computes Equation (1) on a set of pre-calculated trajectories. The algorithm consists of two main parts: calculating the radiated fields and depositing them in a discretized grid. In this section we discuss the general steps and approach to incorporate the radiation algorithm considering these two components. ### Radiation calculation algorithm The virtual detector is a key feature of the radiation diagnostic. It is the region of space where radiation is tracked during a given time period. We consider two geometries of the virtual detector, (i) a spherical one [Figure 3 a)], where the grid is defined using spherical coordinates \((\mathbf{e}_{\theta},\mathbf{e}_{\phi},\mathbf{e}_{\mathbf{r}})\) and (ii) a cartesian one [Figure 3 b)], where the grid is defined using cartesian coordinates \((\mathbf{e}_{\mathbf{x}},\mathbf{e}_{\mathbf{y}},\mathbf{e}_{\mathbf{z}})\). RaDiO has the capability to compute the radiation in both types of geometries. Figure 3: Spherical (a) and cartesian (b) detectors. The darker spherical grid has a larger radius than the lighter one. All spherical grids are centered at the origin of the coordinate system. In order to track the emitted radiation at each time step of the trajectory we need to evaluate Equation (1) in every cell of the virtual detector. The radiation emitted at each time step of a given trajectory lies on a spherical shell that expands from the position of the particle at the time of emission, \(t_{\mathrm{ret}}\), at the speed of light. The intersection of the radiation shell with the detector consists of a circumference, whose radius increases with \(t_{\mathrm{det}}\).
Figure 4 illustrates this picture by showing the intersection of the radiation shell with a cartesian detector. The top of Figure 4 shows the detector at three different \(t_{\mathrm{det}}\). The bottom of Figure 4 shows the radiation arriving at each one of the highlighted cells as a function of \(t_{\mathrm{det}}\), which can be calculated using Equation (2) or Equation (3). The illustration of Figure 4 suggests a clear approach to track the radiation reaching the detector from the emission of one particle at a given time step \(t_{\mathrm{ret}}\): loop through each spatial cell of the detector and compute the \(t_{\mathrm{det}}\) at which radiation arrives. All the required quantities to compute Equation (1) are known or can be easily calculated (see additional details below). This approach avoids looping through the temporal cells of the detector. Thus, the radiation computing time becomes independent of the temporal resolution of the detector, and the total computing time is proportional to the number of time steps in the PIC simulation, \(N_{\text{t\_PIC}}\), multiplied by the number of particles, \(N_{\text{part}}\), multiplied by the number of spatial cells in the detector, \(N_{\text{sp\_cell}}\). Figure 4: Visual representation of the arrival of radiation emitted by a single particle in a single time step of the simulation at a cartesian detector. Top panel: expansion of the intersection between the radiation shell and the detector (in orange). Bottom panel: time of detection for three distinct cells of the detector. This approach is summarized in Algorithm 1. It comprises two different loops: one through the particles that emit radiation (denoted as radiative particles) and another through the detector spatial cells. The quantities \(t\), \(R\), \(\mathbf{n}\), \(\beta\), \(\dot{\beta}\) and \(t_{det}\) are required in order to evaluate Equation (1). All of these quantities are either readily available or can be directly calculated from other quantities that are available in the simulation, such as the position of the particle (\(\mathbf{x_{part}}\)), the momentum of the particle (\(\mathbf{p}\)) and the time of emission \(t\), as well as quantities that are part of the radiation module, such as the position of each detector cell \(\mathbf{x_{cell}}\) or the previous velocity of the particle \(\beta_{prev}\). These calculations are also shown in Algorithm 1. Because \(t_{\text{det}}\) can be computed at each time of emission, \(t_{\text{ret}}\), using Equation (2) or Equation (3), it is in principle possible to conceive a temporally gridless detector. This approach could provide a very accurate description of the radiated fields, particularly if complemented by a post-interpolation scheme with the goal of retaining the continuous nature of radiation emission. Such an approach, however, would require storing as many spatial detector arrays as the number of steps in the particle trajectory, for every particle in the simulation (\(N_{\text{t\_PIC}}\times N_{\text{part}}\times N_{\text{sp\_cell}}\)). High memory consumption would thus be the main limitation of such an algorithm. To address this issue, RaDiO deposits radiation in a grid detector with up to 3 dimensions (1 temporal dimension and up to 2 spatial dimensions), with the spatial cells being distributed according to a spherical or cartesian geometry, and uses a temporal interpolation scheme to mimic continuous radiation emission between two consecutive PIC time-steps for every particle in the simulation.
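As a rough illustration of the data structures involved, the following Python sketch shows one possible layout of such a gridded detector (a 2D spherical grid plus a time axis); the class name, attributes, and coordinate conventions are assumptions made for this example and do not reflect the actual RaDiO implementation.

```python
import numpy as np

class SphericalDetector:
    """Toy virtual detector: a (theta, phi) grid at fixed radius R plus a
    time grid in which the radiated fields are accumulated."""
    def __init__(self, R, aperture, n_theta, n_phi, t_min, t_max, n_t):
        self.R = R
        # angular cells around the x axis, within the chosen aperture
        self.theta = np.linspace(0.0, aperture, n_theta)
        self.phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
        # detector time grid, typically much finer than the PIC time step
        self.t_array = np.linspace(t_min, t_max, n_t)
        self.dt_det = self.t_array[1] - self.t_array[0]
        # accumulated electric field: one 3-vector per (theta, phi, time) cell
        self.E = np.zeros((n_theta, n_phi, n_t, 3))

    def cell_position(self, i, j):
        """Cartesian position of angular cell (i, j), detector centred on x."""
        th, ph = self.theta[i], self.phi[j]
        return self.R * np.array([np.cos(th),
                                  np.sin(th) * np.cos(ph),
                                  np.sin(th) * np.sin(ph)])
```

In such a layout the stored arrays scale with the number of spatial cells times the number of detector time cells, rather than with \(N_{\text{t\_PIC}}\times N_{\text{part}}\times N_{\text{sp\_cell}}\), which is the memory saving discussed above.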
The implementation shown in Algorithm 1 can be applied to both post-processing diagnostics, which calculates the radiation given a set of pre-calculated trajectories, and to run-time diagnostics, in which the radiation calculations are performed at run time during the trajectory calculation. In the latter scenario, the calculation and deposition of the emitted radiation can take place in a sub-step of the particle push loop, created specifically for that purpose. This sub-step comes right after pushing the particles, in such a way that the newly calculated positions and momenta can be used, in conjunction with the corresponding stored values from the previous iteration, to compute the required quantities to determine the radiated fields. In the post-processing version, all required quantities can be readily calculated by considering the positions and momenta from consecutive time-steps. ``` 1:procedureRadiationCalculator 2:for all\(particle\)in simulation do 3:\(\beta=\text{velocity}(particle)=\mathbf{p}/\sqrt{|\mathbf{p}|^{2}+1}\) 4:\(\hat{\beta}=\text{acceleration}(particle)=(\beta-\beta_{prev})/dt\) 5:for all\(cell\)in detector do 6:\(R=\text{distance}(particle,cell)=|\mathbf{x_{part}}-\mathbf{x_{cell}}|\) 7:\(\mathbf{n}=\text{direction}(particle,cell)=(\mathbf{x_{part}}-\mathbf{x_{cell}} )/R\) 8:\(t_{det}=R/c+t\) 9:\(t_{det,prev}=R_{prev}/c+t-dt\) 10:if\(t_{\text{det}}min<t_{\text{det}}<t_{\text{det}}max\)then 11:\(\text{RadiationInterpolator}(\mathbf{E}(\mathbf{n},\beta,\hat{\beta}),t_{ \text{det}},t_{\text{det,prev}})\) ``` **Algorithm 1** Radiation calculation and depositing ### Deposition of the radiated fields in a virtual detector According to Eqs. (2) and (3), each PIC simulation timestep corresponds to a given detector time. In general, consecutive time steps in the trajectory will deposit radiation in non-consecutive detector time cells. A simple prescription that only deposits the radiated fields in the temporal cells that are closest to the predictions given by Eqs. (2) and (3) will therefore generate noisy radiation patterns that are non-physical because particles emit radiation continuously. To re-gain the continuous character of radiation emission, and remove the artificial noise induced by the discretization of the trajectories in time, RaDiO interpolates the fields emitted by each particle between every two consecutive PIC time steps. The interpolation scheme in RaDiO assumes that particles radiate constant fields between each consecutive PIC timestep. In order to deposit the fields across different temporal cells, we weigh the contribution of each deposition by the time until the next deposition. In fact, the value of the radiation in a time slot is the integral of the radiation in the interval delimited by two consecutive detector time-steps. Incidentally, real-life applications often employ an _integrator detector_, which takes the information about radiation arriving in-between detector time steps into account. This deposition scheme can be implemented by following Algorithm 2, below. 
``` 1:procedureRadiationInterpolator 2:\(n_{\text{slot}}=\text{slot}(t_{\text{array}},t_{\text{det}})\) 3:\(n_{\text{slot,prev}}=\text{slot}(t_{\text{array}},t_{\text{det,prev}})\) 4:\(n_{\text{itr}}=t_{\text{slot,prev}}\) 5:\(t_{\text{tmp}}=t_{\text{det,prev}}\) 6:while\(n_{\text{itr}}<n_{\text{slot}}\)do 7:\(\text{scale\_factor}=(t_{\text{array}}[n_{\text{itr}}+1]-t_{\text{tmp}})/dt_{ \text{det}}\) 8:\(\textbf{E}(cell,n_{itr})=\textbf{E}(\textbf{n},\beta,\dot{\beta})\cdot\text {scale\_factor}\) 9:\(n_{\text{itr}}=n_{\text{itr}}+1\) 10:\(t_{\text{tmp}}=t_{\text{array}}[n_{\text{itr}}]\) 11:\(\text{scale\_factor}=(t_{\text{det}}-t_{\text{array}}[n_{\text{slot}}])/dt_{ \text{det}}\) 12:\(\textbf{E}(cell,n_{itr})=\textbf{E}(\textbf{n},\beta,\dot{\beta})\cdot\text {scale\_factor}\) ``` **Algorithm 2** Radiation interpolation Each variable in Algorithm 2 is calculated at each PIC time-step and for each particle. Here, _slot(...)_ is a function that returns the index of the slot in the detector's time-array (\(t_{\text{array}}\)) where \(t_{\text{det}}\) falls, \(t_{det}\) is the time of the current deposition and \(n_{\text{slot}}\) is the corresponding time-slot position in the detector array. In addition, \(n_{\text{itr}}\) is an iterator that runs from \(n_{\text{slot,prev}}\), the detector time slot where particle deposited radiation in the previous PIC time-step, until \(n_{\text{slot}}\). The quantity \(t_{\text{tmp}}\) is an auxiliary variable for the calculation of the time difference between depositions. It runs from, \(t_{\text{det,prev}}\), the time of the previous deposition, to \(t[n_{\rm{itr}}]\), the time for the actual deposition. Figure 5 shows an example case that clarifies this deposition scheme. Each of these depositions correspond to radiation emitted at a different PIC time step by a single particle. This interpolation can be performed while the simulation is running, as it only requires information about the radiated field in the previous time step. In fact, for the example present in Figure 5 the deposition algorithm would go as follows: 1) At PIC iteration 4, radiation arrives at the detector at \(t_{\rm{det}}=t_{4}^{\prime}\). 2) \(n_{\rm{itr}}\) is set to 2, the slot of the previous deposition, at \(t_{3}^{\prime}\), \(t_{\rm{tmp}}\) is set to \(t_{3}^{\prime}\), the time of the previous deposition, we enter the loop, the scale factor is calculated: \((t_{3}-t_{\rm{tmp}})/dt_{\rm{det}}\), with \(t_{\rm{array}}[n_{\rm{itr}}+1]=t_{3}\) and \(E(t_{3}^{\prime})(t_{3}-t_{3}^{\prime})/dt_{\rm{det}}\) is deposited in the second time slot, \(t_{2}\). 3) \(n_{\rm{itr}}\) is incremented to 3, \(t_{\rm{tmp}}\) is set to \(t_{3}\), the time of the previous deposition, the scale factor is calculated: \((t_{4}-t_{3})/dt_{\rm{det}}\), with \(t_{\rm{array}}[n_{\rm{itr}}+1]=t_{4}\) and \(E(t_{3}^{\prime})(t_{4}-t_{3})/dt_{\rm{det}}\) is deposited in the third time slot, \(t_{3}\). 4) \(n_{\rm{itr}}\) is incremented to 4, \(t_{\rm{tmp}}\) is set to \(t_{4}\), we exit the loop, the scale factor is calculated: \((t_{\rm{det}}-t_{4})/dt_{\rm{det}}\) and \(E(t_{3}^{\prime})(t_{4}^{\prime}-t_{4})/dt_{\rm{det}}\) is deposited in the time slot \(t_{4}\). Using this approach, radiation can be computed and deposited using only the information from the current and the previous time steps. This algorithm interpolates radiation coming from a single particle, but can be repeated for all particles in the simulation, as stated in Algorithm 1, in order to capture radiation from all particles. 
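A runnable Python transcription of this deposition scheme is sketched below; it assumes a uniform detector time grid \(t_{\rm array}\), treats the emitted field as constant between the previous and current deposition times, and uses illustrative variable names that mirror, but are not identical to, the pseudocode of Algorithm 2.

```python
import numpy as np

def deposit_interpolated(E_emit, t_det, t_det_prev, t_array, dt_det, E_cell):
    """Spread the field E_emit, emitted between t_det_prev and t_det, over the
    detector time slots of one spatial cell, weighting each slot by the
    fraction of the interval it covers (integrator-detector behaviour)."""
    n_slot = int(np.searchsorted(t_array, t_det, side='right')) - 1
    n_slot_prev = int(np.searchsorted(t_array, t_det_prev, side='right')) - 1

    n_itr, t_tmp = n_slot_prev, t_det_prev
    while n_itr < n_slot:
        # fraction of dt_det covered between t_tmp and the end of slot n_itr
        scale = (t_array[n_itr + 1] - t_tmp) / dt_det
        E_cell[n_itr] += E_emit * scale
        n_itr += 1
        t_tmp = t_array[n_itr]
    # remaining fraction inside the final slot
    E_cell[n_slot] += E_emit * (t_det - max(t_tmp, t_array[n_slot])) / dt_det

# toy usage: a scalar field deposited into an 11-slot time history of one cell
t_array = np.linspace(0.0, 1.0, 11)
E_cell = np.zeros(11)
deposit_interpolated(2.0, t_det=0.47, t_det_prev=0.12, t_array=t_array,
                     dt_det=t_array[1] - t_array[0], E_cell=E_cell)
```

Accumulating with `+=` lets successive PIC time steps and different particles deposit into the same cell history, which is how the coherent superposition of all contributions is obtained.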
Figure 5: Integrator detector: radiation is scaled by the time until the next deposition. \(t_{i}\) refers to the detector's time grid and \(t_{i}^{\prime}\) to the different deposition times. ### Practical example: Helical trajectory Here we look at a practical example, in which an electron with \(\gamma_{p}=57.3\) undergoes a helical motion with amplitude \(0.014\,\mathrm{c}/\omega_{p}\) and frequency \(\omega_{0}=\omega_{p}\), corresponding to a \(K\) parameter of \(K=0.8\). Here \(K\) is a trajectory parameter that can be taken as a scaled pitch angle, i.e., the maximum angle of the particle trajectory measured in units of \(1/\gamma_{p}\), and is given by \(K=\gamma_{p}r_{0}\omega_{0}/c\). The helical motion was described by the PIC algorithm with a temporal resolution of \(0.1\ \omega_{p}^{-1}\). Here, \(\omega_{p}\) is an arbitrary normalizing frequency. The radiation generated by a particle undergoing such a trajectory has a distinctive, expanding spiral spatiotemporal signature. This is shown in Figure 6, which represents the radiated electric field along the \(y\) direction deposited onto a spherical 2D detector with an angular aperture of \(0.1\) rad placed in the direction of the longitudinal motion of the particles (\(x\) axis), with radius \(R=10^{6}\ \mathrm{c}/\omega_{p}\). The temporal resolution of the detector was \(1.33\times 10^{-5}\ \omega_{p}^{-1}\) and the spatial resolution was \(58\ \mathrm{\SIUnitSymbolMicro rad}\). Figure 6 shows a snapshot of the detector at four different temporal positions. The starting point of the spiral follows the circular motion of the particle in the \(y-z\) plane. Between each snapshot the radiation spiral makes two turns, thus the temporal distance between each snapshot is approximately equal to two periods of the emitted radiation. Given the trajectory parameters, the radiation period was expected to be about \(\sim 2\times 10^{-3}\ \omega_{p}^{-1}\), about \(10\) times smaller than the smallest period that could be resolved using only the PIC algorithm. Figure 6: Spatiotemporal signature of the radiation emitted by a particle undergoing a helical trajectory. ## 4 Benchmarking In order to benchmark our algorithm, we consider the example of a single relativistic particle emitting synchrotron radiation. Synchrotrons have a magnetic field structure that imposes a sinusoidal trajectory on relativistic electrons that go through the device, thus leading to the emission of high frequency photon beams in the X-UV or X-ray regions of the spectrum. The trajectory of the particle would then be given by: \[y(t) =r_{\beta}\cos\left(\omega_{\beta}t\right) \tag{5}\] \[x(t) =\beta_{x0}\left[t-\frac{r_{\beta}^{2}}{8\gamma_{x0}}\left(t-\frac{\cos(2\omega_{\beta}t)}{2}\right)\right] \tag{6}\] where \(\beta_{x0}\) is the initial velocity of the particle along the longitudinal \(x\) direction, \(\gamma_{x0}=(1-\beta_{x0}^{2})^{-1/2}\) its longitudinal Lorentz factor, \(r_{\beta}\) the amplitude of the sinusoidal trajectory, and \(\omega_{\beta}\) its frequency. As far as we are aware, the only explicit analytical formulas capturing the spatiotemporal radiation profile of synchrotron radiation are found in [16], which gives a semi-analytical model for the emitted field lines. However, direct quantitative comparisons between the visual depiction of field lines and the actual value of the emitted field in a region of space can be difficult (see Supplementary Material for a qualitative comparison).
On the other hand, the spectral properties of radiation are well documented [17; 18], so we Fourier transformed the data in the virtual detector with respect to time and compared these spectra to the theoretical predictions. The corresponding intensity spectrum (\(I\)) with respect to the frequency \(\omega\) and solid angle \(\Omega\) of the emitted radiation, valid for ultra relativistic particles as an asymptotic limit expression (\(\gamma_{p}\gg 1\)) and assuming very large number of periods in the trajectory, [2], is given by: \[\frac{d^{2}I}{d\omega d\Omega}=\frac{e^{2}\omega^{2}\gamma^{2}}{3\pi^{2}c \omega_{\beta}K}\left(\frac{1}{\gamma_{p}^{2}}+\theta^{2}\right)^{2}\left[ \frac{\theta^{2}}{\gamma_{p}^{-2}+\theta^{2}}K_{2/3}^{2}(\Upsilon)+K_{1/3}^{2} (\Upsilon)\right], \tag{7}\] where, \(\theta\) is the observation angle in the direction perpendicular to the trajectories plane. In addition, \(K_{n}\) is the modified Bessel function and \(\Upsilon\) is a numerical parameter given by \(\Upsilon=\frac{\omega\gamma_{p}}{3\omega_{\beta}K}\left(\gamma_{p}^{-2}+\theta^ {2}\right)^{-3/2}\), with \(K\) being the aforementioned \(K\) parameter. Equation (7) can be integrated over all angles, returning the frequency spectrum [2]: \[\frac{dI}{d\omega}=\sqrt{3}\frac{e^{2}\gamma_{p}\omega}{c\omega_{c}}\int_{ \omega/\omega_{c}}^{\infty}K_{5/3}(x)dx,\;\omega_{c}=\frac{3}{2}K\gamma_{p}^{2 }\omega_{\beta} \tag{8}\] We have benchmarked our algorithm against Equations (7) and (8). The benchmarks were performed using the two dimensional sinusoidal trajectory of a relativistic electron (\(\gamma_{p}=50\)) with an amplitude of \(r_{\beta}=2\,c/\omega_{p}\), \(k_{\beta}=0.1\,\omega_{p}/c\) (\(K=10\)) in the transverse \(x-y\) plane and \(dt=0.01c/\omega_{p}\), where \(\omega_{p}\) is a normalizing frequency. We simulated a line of a spherical detector, placed in the \(z-x\) plane, \(10^{5}\,c/\omega_{p}\) away from the axis origin with an angular aperture of \(0.1\,\mathrm{rad}\) around the \(x\) axis. This detector had 512 spatial cells and 131072 temporal cells, resulting in a temporal detector resolution of \(2.98\times 10^{-5}c/\omega_{p}\) The results are shown in Figure 7 which features a plot of the detected electric field in the \(\mathbf{e}_{\phi}\) direction (perpendicular to the motion plane) for each spatiotemporal cell. Figure 7: Spatiotemporal signature of the radiation emitted by a particle undergoing a sinusoidal motion in a transverse detector (a). The lineouts are shown on the bottom plot (b). Peaks located at smaller \(t\) arrive earlier at the detector. The radiation is composed of several periodically spaced peaks, whose shape can be observed in the lineout [Figure 7 b)]. The short burst nature of the radiation (equivalent to a broad band spectrum), consistent with the large value of the K parameter, is clear from Figure 7. Instead of displaying a purely sinusoidal profile with a single wavelength, the electric field consists of sharply peaked bursts containing many different wavelengths. Moreover, it is possible to observe that consecutive peaks have opposite sign. This is a direct result of the sinusoidal nature of the electron trajectory in which the acceleration \(\dot{\mathbf{\beta}}\) switches sign between peaks. Furthermore, it is possible to note that for higher angles the radiation bursts arrive later, creating the parabola-like structures that can be seen in the upper plot. 
This delay becomes more significant as the particle approaches the detector's surface, resulting in a decrease of the curves' aperture. This result can also be understood in terms of the spatiotemporal reasoning regarding the estimation of the typical radiation frequency presented in the previous section. However, instead of depicting the emitted radiation parallel to the motion of the particle, we picture it emitted at an angle \(\theta\). The temporal distance between the emission and arrival of a light ray emitted at a given longitudinal position \(x\) is then given by: \(c\Delta t_{rad}=\sqrt{R^{2}+x^{2}-2xR\cos\theta}\), where all quantities are defined as in Equation (3). This expression shows that the time of arrival increases with \(\theta\) and also that it is scaled by the longitudinal position \(x\). Thus, as the particle approaches the detector and \(x\) grows larger, the parabolic structures left on the detector become tighter. Figure 7 b), which depicts lineouts of \(E_{\phi}\), also shows that the peaks become wider and less intense for larger angles. This is in concordance with the predictions for the spectrum [see Equation (7)], which features a decrease in the number of harmonics for larger angles, resulting in broader and less intense peaks off-axis. In order to further understand the angle-dependent frequency spectra, Figure 8 compares the theoretical result, given by Equation (7), with the simulated result, given by the Fourier transform over time of the field shown in Figure 7 a). The spectrum is symmetric with respect to \(\theta=0\). Thus, the upper half of Figure 8 a) (\(\theta>0\)) shows the simulated results and the bottom half (\(\theta<0\)) the theory. As expected, the theoretical line, being the asymptotic limit of a continuous harmonic distribution with a very large number of oscillations in the trajectory [2], corresponds to the envelope of the numerical result, showing excellent agreement. This is evident from the lineout of the radiated spectra displayed in Figure 8 b). The simulated spectrum integrated over all angles, which yields the frequency distribution of the emitted radiation, can be benchmarked against Equation (8). Figure 8 shows excellent agreement between numerical and theoretical results, as the intensity of most peaks matches the expected result with a small relative error which rises as frequency increases. To further confirm the validity of our numerical approach, we benchmarked the frequency-integrated spectrum \(dI/d\Omega\) against the spectrum provided by the post-processing spectral code JRad [4], which computes the spectrum using the spectral version of the Lienard-Wiechert potentials. The results of this comparison, shown in Figure 8 d), are in excellent agreement. Figure 8: (a) Comparison between the theoretical and simulated spectra. (b) Comparison between a lineout at \(\Delta\theta=0.02\) from both spectra. (c) Angle integrated spectra, both spectra are normalized to 1. The relative error (\(I_{\mathrm{RaDiO}}/I_{\mathrm{theor}}-1\)) is shown on the inset. (d) Frequency integrated spectra, both spectra are normalized to 1. The absolute error (\(I_{\mathrm{RaDiO}}-I_{\mathrm{JRad}}\)) is shown on the inset. ### Coherence tests Because RaDiO captures the emitted fields in space and in time, it can also naturally describe temporal and spatial interference effects. This feature is essential to accurately portray temporal and spatial coherence, present in superradiant emission scenarios for example.
This is an intrinsic feature of our spatiotemporal approach, which allows us to directly obtain the fields radiated by every simulation particle, including interference effects by design. To test our ability to accurately model temporal and spatial coherence, we ran simulations using two particles with opposite charges and sinusoidal trajectories, similar to the one defined in Equations (5) and (6). The two particles, with particle 1 being positively charged and particle 2 being negatively charged, underwent this sinusoidal trajectory in perpendicular planes (particle 1 in plane \(x-y\) and particle 2 in plane \(x-z\)), and the detector was the same as the one used in the previous section. Figure 9 shows the simulated radiated electric field profile as a function of \(\theta\) and \(t_{\mathrm{det}}\) for three different configurations: one with only particle 1 [Figure 9 (a)], one with only particle 2 [Figure 9 (b)] and another with both particles [Figure 9 (c)]. As the two trajectories lie in different planes, the spatiotemporal signatures of the radiation emitted by each particle are noticeably distinct, because the detector plane lies on the plane of the trajectory of particle 2 and is therefore perpendicular to the plane of particle 1. By comparing Figure 9 (a) with Figure 9 (b), we can hence readily identify the radiation coming from each particle in Figure 9 (c). As both particles have opposite charges, the field on axis for a given particle will have the opposite sign to the on-axis field for the other particle. Thus, the radiation emitted by both particles will interfere destructively on-axis. This happens exactly at \(\theta=\pi/2\). Thus, if we look at the time averaged squared field (insets in each panel of Figure 9), we see that although \(\langle E^{2}\rangle_{t}\) is maximum at \(\theta=\pi/2\) for the simulations with only one of the particles (insets of Figure 9 [a] and Figure 9 [b]), the opposite happens when we capture the fields radiated by both particles (inset of Figure 9 [c]). Our algorithm captures coherence effects of the simulation particles by default, but in a PIC code each particle in the simulation represents a cloud of \(N\) real particles, with a size close to the cell size, that follow the same dynamics; this is the so-called macroparticle approximation. In our code, however, we calculate the radiation emitted by the macroparticles in the simulation as if they were point charges with charge equal to the total charge inside the macroparticle (\(Nq\)). This is in fact equivalent to assuming that each of the \(N\) particles inside the macroparticle radiates coherently. The assumption that they all radiate coherently holds either for all wavelengths if \(N=1\), or for wavelengths larger than the cell size if \(N\gg 1\). For wavelengths shorter than the cell size, in general, we cannot say it holds, as such an assumption depends on information about particles that are not being simulated. For example, if the standard macroparticle approximation is still valid at scales smaller than the cell size, the emitted radiation should be incoherent for wavelengths shorter than the cell size and the result should be corrected with a filter function (see Supplementary Material for a deeper analysis). The detailed study of the conditions that allow assuming that each of the \(N\) particles inside the macroparticle radiates coherently is out of the scope of this work. Figure 9: (a) Spatiotemporal profile of the radiation coming from particle 1 (trajectory perpendicular to the detector). (b) Spatiotemporal profile of the radiation coming from particle 2 (trajectory parallel to the detector). (c) Spatiotemporal profile of the radiation coming from both particles. The insets contain the time averaged squared field, \(\langle E^{2}\rangle_{t}\).
It will be up to the user to decide whether it holds or not. If this assumption does not hold, then results given by our code will be correct for wavelengths larger than the cell size, but could be overestimated for wavelengths smaller than the cell size. Nevertheless, our code can, in general, accurately predict the qualitative aspects of the emitted radiation for all wavelengths. ## 5 Example: Radiation from a plasma mirror When an electromagnetic wave collides with a target such as a metallic surface or an overdense plasma, it is unable to propagate and gets reflected. The process of reflection has long been well understood and thoroughly explained at the macroscopic level by Maxwell's laws and classical electrodynamics. In the plasma, the phenomenon is commonly explored using a fluid theory approach. Such a description predicts the damping of the wave near the surface of the reflective material (it becomes an evanescent wave) and the appearance of a reflected wave. At the electron level, however, the phenomenon is not always trivial, in particular at relativistic laser intensities (with peak normalized vector potential \(a_{0}=eA_{0}/(m_{e}c)\gtrsim 1\)), which lead to High Harmonic Generation (HHG) in plasma mirrors [19; 20]. Several theoretical frameworks have been proposed to describe the underlying mechanisms of HHG, each with different regimes of applicability (see e.g. [21; 22]). PIC simulations are commonly employed to deepen the understanding of the physical processes underlying laser reflection and harmonic generation in plasma mirrors. An accurate description of HHG in standard PIC simulations, for instance, is computationally challenging because the spatial and temporal PIC grids need to properly resolve the high harmonics. Thus, to accurately capture high harmonics up to the \(10^{th}\) or \(100^{th}\) order, PIC simulations require a spatiotemporal resolution up to one to two orders of magnitude higher than the one required to resolve the fundamental harmonic. The use of RaDiO may thus be computationally advantageous in HHG simulations, as it allows capturing high frequency harmonics without increasing the PIC resolution. In this section, we present 3D OSIRIS simulations of an HHG scenario where the laser propagates in the longitudinal \(x\) direction and is linearly polarized along the transverse \(z\) direction. The laser uses a \(\sin^{2}\) temporal profile with 12 full periods (\(T_{0}=8\pi\;\omega_{p}^{-1}\), \(\omega_{0}=\omega_{p}/4\)) and a Gaussian transverse profile with spot-size \(W_{0}=2\lambda_{0}\), where \(\lambda_{0}\) is the central laser wavelength. The plasma mirror consists of an overdense plasma slab with plasma frequency \(\omega_{p}\) (and density \(n_{p}\), 16 times larger than the critical density \(n_{c}\) for that laser pulse) with thickness 100 \(c/\omega_{p}\), much larger than the non-relativistic plasma skin-depth (\(l_{s}\sim\ c/\omega_{p}\) in this case). As the laser gets reflected, we capture the reflected fields both in the PIC grid through Maxwell's equations and in a virtual detector through RaDiO. We chose to compute the radiation emitted by all plasma electrons located within the plasma cylinder with a radius of three laser spot sizes around the focus.
The virtual cartesian detector was located at \(x=-160\ c/\omega_{p}\), ranging from \(y=-160\ c/\omega_{p}\) to 160 \(c/\omega_{p}\), with temporal resolution \(dt_{\rm det}=0.0384\ \omega_{p}^{-1}\), about five times smaller than the PIC temporal resolution \(dt_{\rm PIC}=0.1792\ \omega_{p}^{-1}\). The PIC simulation box ranged from \(x=-288\ c/\omega_{p}\) to \(x=108\ c/\omega_{p}\), with a resolution \(dx=0.96\ c/\omega_{p}\) in the longitudinal direction and from \(y,z=-160\ c/\omega_{p}\) to \(y,z=160\ c/\omega_{p}\) with resolution \(dy,dz=0.32\ c/\omega_{p}\) in the transverse direction. This PIC grid is able to resolve 26 points per laser wavelength. Each cell contains 16 simulation particles. A 2-D slice of the setup is shown in Figure 10, the laser propagates from left to right. We start by capturing the radiation in the absence of HHG, by using a non-relativistic laser intensity, with peak normalized vector potential \(a_{0}=0.1\). Figure 11, top, shows the trajectories of a random sample of 512 plasma particles. The zoomed-in region clearly displays the typical _figure-8_-like motion induced in the plasma particles by the laser pulse. This motion originates the radiation, which is captured both in the PIC grid and in the virtual radiation detector. By comparing the radiation in the detector to the reflected pulse in the PIC grid (Figure 11, bottom), we show that the beam reflection is a direct result of the charged particles' trajectories induced by the incident beam. Figure 10: Reflection radiation simulation setup. A tightly focused gaussian laser pulse propagates from left to right towards an overdense plasma target. Next to investigate a scenario with strong HHG, we used a high-intensity laser (\(a_{0}=4.2\)) in a setup similar to the one shown on Figure 10. In this case, we see the clear effect of the increased intensity on the trajectory of the sampled particles (Figure 12, top), with a similar figure-8 motion for the first few laser periods, but with increased amplitude overall and stronger deviation from the standard figure-8 motion. Figure 11: Trajectories of a random sample of 512 plasma particles under the influence of a low intensity laser (\(a_{0}=0.1\)). The zoomed-in region shows a particle performing the figure-8 motion induced by the incident laser (top). Comparison between the reflected laser profile given by the PIC grid (upper half) and by RaDiO (lower half) at \(x=-160\ c/\omega_{p}\). Comparison between incident and reflected beams as captured by the standard PIC algorithm and by RaDiO (bottom). The laser pulses are properly described in both situations with more than 20 points per wavelength. As a result of this more extreme motion, the reflected laser beam is noticeably different from the incident beam. This is made clear in the comparison shown at the bottom of Figure 12. The differences between incoming and reflected laser pulse electric field profile are due to the existence of high laser harmonics present in the reflected beam. The presence of the high harmonics is also clearly visible in the spectrum of Figure 12. The frequency spectrum shows that the reflected laser captured by RaDiO contains at least 13 harmonics, while the PIC algorithm, which only resolves the plasma relevant scales correctly captures the emission of the first 4 odd harmonics. 
The PIC grid is able to resolve the original harmonic with 26 points per wavelength, but as the harmonic order increases, past the 7th order only RaDiO's resolution can capture the signal correctly. In this case the RaDiO frequency spectrum captures frequencies at least 4 times higher than the OSIRIS PIC grid, being able to capture harmonics at least until the 25th order, as expected from the employed laser intensity (\(a_{0}=4.2\)). Figure 12: Trajectories of a random sample of 512 plasma particles (a) after the reflection of a high intensity laser (\(a_{0}=4.2\)). The zoomed-in region shows a particle performing the figure-8 motion induced by the incident laser. Comparison between the reflected laser profile given by the PIC grid (upper half) and by RaDiO (lower half) at \(x=-160\ c/\omega_{p}\). Spatiotemporal (b) and frequency spectrum (c) of the reflected high intensity laser beam. ## 6 Conclusions The radiation diagnostic for OSIRIS (RaDiO) was successfully implemented, benchmarked and tested in several scenarios, including production runs. While not described here, it should also be noted that the algorithm was fully parallelized, allowing for large simulations. RaDiO is a novel radiation diagnostic that captures the spatiotemporal features of high frequency radiation in PIC codes. A key aspect of our algorithm is the development of a temporal interpolation scheme for depositing radiation. This is essential to preserve the continuous character of radiation emission and to obtain correct values for the amplitude of the radiated fields. The algorithm is general and only requires knowledge about the trajectories of an arbitrarily large ensemble of charged particles (\(>10^{6}\)); thus, we can apply it to generally enhance the capabilities of any algorithm that predicts the trajectories of charged particles, apart from PIC codes. We described the implementation of RaDiO into OSIRIS and provided benchmarks with well established theoretical models for synchrotron emission. These comparisons showed excellent agreement, therefore adding a high level of confidence to future runs. We also provided an illustration where we used RaDiO to probe the spatiotemporal features of radiation emitted in the context of laser reflection by a plasma mirror. At lower laser intensities, RaDiO fully recovers the PIC simulation result. This further confirms the validity of RaDiO in a setting where temporal and spatial coherence effects are critical. A simulation at higher laser intensity demonstrated the generation of high harmonics beyond the predictions of the PIC algorithm, showing that RaDiO allows for a complete characterization of the reflected beam along with all the harmonics, without increasing the overall PIC resolution, and demonstrating that RaDiO can be effectively used to predict high frequency radiation from PIC codes [14]. RaDiO is a flexible diagnostic tool that can be further expanded to include additional features such as higher order interpolation schemes, for example using an advanced particle pusher recently developed [23], the option to compute the electromagnetic field potentials in addition to the electromagnetic fields, or the capability to convert radiation to/from relativistic Lorentz boosted frames.
Although this diagnostic does not interact with the particles, it could also be employed together with a QED code in which radiation reaction affects the particles' trajectories, thereby capturing radiation consistent with QED effects as long as the emission itself is purely classical. Because it captures the radiation in space and in time, RaDiO may also be useful in describing the production of spatiotemporally structured beams [24]. ## Acknowledgments We acknowledge the Partnership for Advanced Computing in Europe (PRACE) for access to the Leibniz Research Center on SuperMUC and the Barcelona Supercomputing Center on Marenostrum 4. This work was partially supported by the EU Accelerator Research for Innovation for European Science and Society (EU ARIES) under grant agreement no. 738071 (H2020-INFRAIA-2016-1). JV acknowledges the support of FCT (Portugal) Grant No. IF/01635/2015/CP1322/CT0001 and MP acknowledges the support of FCT (Portugal) Grant No. PD/BD/150411/2019.
2302.14806
Framelet Message Passing
Graph neural networks (GNNs) have achieved champion in wide applications. Neural message passing is a typical key module for feature propagation by aggregating neighboring features. In this work, we propose a new message passing based on multiscale framelet transforms, called Framelet Message Passing. Different from traditional spatial methods, it integrates framelet representation of neighbor nodes from multiple hops away in node message update. We also propose a continuous message passing using neural ODE solvers. It turns both discrete and continuous cases can provably achieve network stability and limit oversmoothing due to the multiscale property of framelets. Numerical experiments on real graph datasets show that the continuous version of the framelet message passing significantly outperforms existing methods when learning heterogeneous graphs and achieves state-of-the-art performance on classic node classification tasks with low computational costs.
Xinliang Liu, Bingxin Zhou, Chutian Zhang, Yu Guang Wang
2023-02-28T17:56:19Z
http://arxiv.org/abs/2302.14806v1
# Framelet Message Passing ###### Abstract Graph neural networks (GNNs) have achieved great success in a wide range of applications. Neural message passing is a typical key module for feature propagation by aggregating neighboring features. In this work, we propose a new message passing scheme based on multiscale framelet transforms, called Framelet Message Passing. Different from traditional spatial methods, it integrates the framelet representation of neighbor nodes from multiple hops away in the node message update. We also propose a continuous message passing using neural ODE solvers. It turns out that both the discrete and continuous cases can provably achieve network stability and limit oversmoothing due to the multiscale property of framelets. Numerical experiments on real graph datasets show that the continuous version of the framelet message passing significantly outperforms existing methods when learning heterogeneous graphs and achieves state-of-the-art performance on classic node classification tasks with low computational costs. Keywords: graph neural networks, neural message passing, framelet transforms, oversmoothing, stability, spectral graph neural network ###### Contents * 1 Introduction * 2 Graph Representation Learning with Message Passing * 3 Depth Limitation of Message Passing by Oversmoothing * 4 Undecimated Framelet System * 5 Graph Framelet Message Passing * 5.1 Graph Framelet Transforms * 5.2 Framelet Message Passing * 5.3 Continuous Framelet Message Passing with Neural ODE * 6 Limited Oversmoothing in Framelet Message Passing * 7 Stability of Framelet Message Passing * 8 Numerical Analysis * 8.1 Experimental Protocol * 8.2 Node Classification * 8.3 Dirichlet Energy * 9 Related Work * 9.1 Message Passing on Graph Neural Networks * 9.2 Spectral Graph Transforms * 9.3 Oversmoothness in Graph Representation * 9.4 Stability of Graph Convolutions ## 1 Introduction Graph neural networks (GNNs) have received growing attention in the past few years (Bronstein et al., 2017; Hamilton, 2020; Wu et al., 2020). The key to successful GNNs is equipping them with effective graph convolutions that distill useful features and structural information of given graph signals. Existing designs on graph convolutions usually summarize a node's local properties from its spatially-connected neighbors. Such a scheme is called message passing (Gilmer et al., 2017), where different methods differentiate each other by their unique design of the aggregator (Kipf and Welling, 2017; Hamilton et al., 2017; Velickovic et al., 2018). Nevertheless, spatial convolutions are usually built upon the first-order approximation of the eigendecomposition of the graph Laplacian, and they have been shown to recklessly remove high-pass information in the graph (Wu et al., 2019; Oono and Suzuki, 2019; Bo et al., 2021). Consequently, many local details are lost during the forward propagation. The information loss becomes increasingly unavoidable as the number of layers or the range of neighborhoods grows. This deficiency limits the expressivity of GNNs and partly gives rise to the oversmoothing issue of a deep GNN. Alternatively, a few existing spectral-involved message passing schemes use eigenvectors to feed the projected node features into the aggregator (Stachenfeld et al., 2020; Balcilar et al., 2021; Beaini et al., 2021).
While the eigenvectors capture the directional flow in the input by Fourier transforms, they overlook the power of multi-scale representation, which is essential to preserve sufficient information in different levels of detail. Consequently, neither the vanilla spectral graph convolution nor eigenvector-based message passing is capable of learning stable and energy-preserving representations. To tackle the issue of separately constructing spectral graph convolutions or spatial message passing rules, this work establishes a spectral message passing scheme with multiscale graph framelet transforms. For a given graph, framelet decomposition generates a set of framelet coefficients in the spectral domain with low-pass and high-pass filters. The coefficients with respect to individual nodes then follow the message passing scheme (Gilmer et al., 2017) to integrate their neighborhood information from the same level. The proposed framelet message passing (FMP) has shown promising theoretical properties. First, **it can limit oversmoothing with a non-decay Dirichlet energy during propagation**, and the Dirichlet energy would not explode when the network goes deep. Unlike the conventional message passing convolutions that have to repeat \(m\) times to cover a relatively large range of \(m\)-hop neighbors for a central node, the framelet message passing reaches out all \(m\)-hop neighbors in a single graph convolutional layer. Instead of cutting down the influence of distant neighboring nodes by the gap to the central node, the framelet way disperses small and large ranges of neighboring communities in different scales and levels, where in low-pass framelet coefficients the local approximated information is retained, and high-pass coefficients mainly hold local detailed information. On each of the scales when conducting message passing, the \(m\)-hop information is adaptively accumulated to the central node, which can be considered as an analog to the graph rewiring, and it is helpful for circumventing the long-standing oversmoothing issue in graph representation learning. Meanwhile, **FMP is stable on perturbed node features**. Transforming the graph signal from the spatial domain to the framelet domain divides uncertainties from the Figure 1: An illustrative workflow of the proposed framelet message passing. An input graph signal is first decomposed into multi-scale coefficients (colored polygons) in the framelet domain. In a convolution layer, each of the framelet coefficients aggregates m-hop neighbors’ coefficients from the same level and scale to update its representation. The multi-scale node representation is then summed up as the propagated new representation of the node. In comparison, multiple spatial-based graph convolution layers are required to accelerate the same range of node information, and they are poor at memorizing long-range information. corrupted input signal. Through controlling the variance within an acceptable range in separate scales, we prove that the processed representation steadily roams within a range, _i.e.,_ the framelet message passing is stable to small input perturbations. In addition, **FMP bypasses unnecessary spectral transforms and improves convolutional efficiency**. On top of approaching long-range neighbors in one layer, FMP also avoids the inverse framelet transforms in the traditional framelet convolution (Zheng et al., 2021) and integrates the low and high pass features by adjusting their feature-wise learnable weights in the aggregation. 
The rest of the paper starts by introducing the message passing framework in Section 2, and discussing its weakness, the oversmoothing issue, in Section 3. Section 4 establishes the graph framelet system and its favorable properties. Based on this, Section 5 gives the two variants of the proposed FMP, whose energy-preserving effect and stability are justified in Sections 6 and 7, respectively. The empirical performances of FMP are reported in Section 8 for node classification tasks on homogeneous and heterogeneous graphs, where FMP achieves state-of-the-art performance. We review the previous literature in the community in Section 9, and then conclude the work in Section 10. ## 2 Graph Representation Learning with Message Passing An undirected attributed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\) consists of a non-empty finite set of \(N=|\mathcal{V}|\) nodes \(\mathcal{V}\) and a set of edges \(\mathcal{E}\) between node pairs. Denote by \(\mathbf{A}\in\mathbb{R}^{N\times N}\) the (weighted) graph adjacency matrix and by \(\mathbf{X}\in\mathbb{R}^{N\times d}\) the node attributes. A graph convolution learns a matrix representation \(\mathbf{H}\) that embeds the structure \(\mathbf{A}\) and feature matrix \(\mathbf{X}=\{\mathbf{X}_{j}\}_{j=1}^{N}\) with \(\mathbf{X}_{j}\) for node \(j\). Message passing (Gilmer et al., 2017) defines a general framework of feature propagation rules on a graph, which updates a central node's smooth representation by aggregated information (_e.g._, node attributes) from connected neighbors. At a specific layer \(t\), the propagation for the \(i\)th node reads \[\mathbf{X}_{i}^{(t)} =\gamma\left(\mathbf{X}_{i}^{(t-1)},\mathbf{Z}^{(t)}\right) \tag{1}\] \[\mathbf{Z}^{(t)} =\square_{j\in\mathcal{N}(i)}\phi(\mathbf{X}_{i}^{(t-1)},\mathbf{X}_{j}^{(t-1)},\mathbf{A}_{ij}),\] where \(\square(\cdot)\) is a differentiable and permutation invariant aggregation function, such as summation, average, or maximization. Next, the aggregated representation of neighbor nodes \(\mathbf{Z}^{(t)}\) is used to update the central node's representation, where two example operations are addition and concatenation. Both \(\gamma(\cdot)\) and \(\phi(\cdot)\) are differentiable functions, such as MLPs. The node set \(\mathcal{N}(i)\) includes \(\mathcal{V}_{i}\) and other nodes that are connected directly with \(\mathcal{V}_{i}\) by an edge, which we call node \(\mathcal{V}_{i}\)'s 1-hop neighbors. The majority of (spatial-based) graph convolutional layers are designed following the message passing scheme when updating the node representation. For instance, GCN (Kipf and Welling, 2017) adds up the degree-normalized node attributes from neighbors (including itself) and defines \[\mathbf{X}_{i}^{(t)}=\sigma\left(\sum_{j\in\mathcal{N}(i)\cup\{i\}}\frac{1}{\sqrt{(1+d_{i})(1+d_{j})}}\mathbf{X}_{j}^{(t-1)}\mathbf{W}\right),\] where \(\mathbf{W}\) is a learnable weight matrix, \(d_{i}=\sum_{j\in\mathcal{N}(i)}\mathbf{A}_{i,j}\) and \(\mathbf{D}=\text{diag}(d_{1},\ldots,d_{N})\) is the degree matrix for \(\mathbf{A}\). Instead of the pre-defined adjacency matrix, GAT (Velickovic et al., 2018) aggregates neighborhood attributes by learnable attention scores and GraphSage (Hamilton et al., 2017) averages the contribution from the sampled neighborhood. Alternatively, GIN (Xu et al., 2019) attaches a custom number of MLP layers for \(\square(\cdot)\) after the vanilla summation.
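As a concrete, purely illustrative instance of the message passing scheme in Equation (1), the following dense NumPy sketch implements the degree-normalized GCN update above; it is not the framelet message passing proposed in this work, and the function name and the choice of \(\tanh\) as \(\sigma\) are assumptions made for this example.

```python
import numpy as np

def gcn_layer(A, X, W, sigma=np.tanh):
    """One GCN-style message passing step:
    X' = sigma( D^{-1/2} (A + I) D^{-1/2} X W ), with D the augmented degrees."""
    N = A.shape[0]
    A_tilde = A + np.eye(N)                    # add self-loops
    d_tilde = A_tilde.sum(axis=1)              # augmented degrees 1 + d_i
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d_tilde))
    P = D_inv_sqrt @ A_tilde @ D_inv_sqrt      # normalized propagation matrix
    return sigma(P @ X @ W)

# toy usage: a 4-node path graph with 3-dimensional node features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X, W = rng.standard_normal((4, 3)), rng.standard_normal((3, 3))
H = gcn_layer(A, X, W)
```

Stacking many such layers repeatedly multiplies the features by the same propagation matrix, which is precisely the mechanism behind the oversmoothing behavior analyzed next.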
While these graph convolutions construct different formulations, they merely make a combination of inner product, transpose, and diagonalization operations on the graph adjacency matrix, which fails to distinguish adjacency matrices that are equivalent under the 1-WL test (Balcilar et al., 2021). In contrast, spectral-based graph convolutions require eigenvalues or eigenvectors to construct the update rule and create expressive node representations towards the theoretical limit of the 3-WL test. On top of the expressivity issue, spectral-based convolutions have also been shown to ease the stability concerns that are widely observed in conventional spatial-based methods. To circumvent the two identified problems, we propose a spectral-based message passing scheme for graph convolution, which is stable and has the ability to alleviate the oversmoothing issue. ## 3 Depth Limitation of Message Passing by Oversmoothing The depth limitation hinders the performance of many deep GNN models. The problem was first identified by Li et al. (2018), who showed that many popular spatial graph convolutions apply Laplacian smoothing to the graph embedding. Shallow GNNs perform global denoising to exclude local perturbations and achieve state-of-the-art performance in many semi-supervised learning tasks. While a small number of graph convolutions has limited expressivity, deeply stacking the layers leads the connected nodes to converge to indistinguishable embeddings. This issue is widely known as _oversmoothing_. One way to understand the oversmoothing issue is through the Dirichlet energy, which measures the average distance between connected nodes in the feature space. The graph Laplacian of \(\mathcal{G}\) is defined by \(\mathcal{L}=\mathbf{D}-\mathbf{A}\). Let \(\tilde{\mathbf{A}}:=\mathbf{A}+\mathbf{I}_{N},\tilde{\mathbf{D}}:=\mathbf{D}+\mathbf{I}_{N}\) be the adjacency and degree matrices of the graph \(\mathcal{G}\) augmented with self-loops, and define the normalized graph Laplacian by \(\widetilde{\mathcal{L}}:=\tilde{\mathbf{D}}^{-1/2}\mathcal{L}\tilde{\mathbf{D}}^{-1/2}\). Formally, the Dirichlet energy of a node feature \(\mathbf{X}\) on \(\mathcal{G}\) with normalized \(\widetilde{\mathcal{L}}\) is defined by \[E(\mathbf{X})=\text{tr}(\mathbf{X}^{\top}\widetilde{\mathcal{L}}\mathbf{X})=\frac{1}{2}\sum_{i,j}\mathbf{A}_{ij}\left\|\frac{\mathbf{X}_{i}}{\sqrt{1+d_{i}}}-\frac{\mathbf{X}_{j}}{\sqrt{1+d_{j}}}\right\|^{2}.\] The energy evolution provides a direct indicator for the degree of feature expressivity in the hidden space. For instance, Cai and Wang (2020) observed that GCN (Kipf and Welling, 2017) has the Dirichlet energy decaying rapidly to zero as the network depth increases, which indicates that local high-frequency signals are ignored during propagation. It is thus desired that the Dirichlet energy of the encoded features neither decays to zero nor explodes for deep GNNs. We start with GCN as an example to illustrate the cause of oversmoothing. Set \(\mathbf{P}:=\mathbf{I}_{N}-\widetilde{\mathcal{L}}\). It is observed in Oono and Suzuki (2019) that a multi-layer GCN can simply be written in the form \(f=f_{L}\circ\cdots\circ f_{1}\), where \(f_{l}:\mathbb{R}^{N\times d}\rightarrow\mathbb{R}^{N\times d}\) is defined by \(f_{l}(\mathbf{X}):=\sigma\left(\mathbf{P}\mathbf{X}\mathbf{W}_{l}\right)\). Although the asymptotic (oversmoothing) behavior of the GCN output \(\mathbf{X}^{(L)}\) as \(L\rightarrow\infty\) is investigated in Oono and Suzuki (2019), to facilitate the analysis of our model we also sketch the proof for completeness here.
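Before turning to the formal statements, the following small NumPy sketch (with a made-up toy graph and no learned weights) computes the Dirichlet energy defined above and illustrates its rapid decay under repeated multiplication with \(\mathbf{P}=\mathbf{I}_{N}-\widetilde{\mathcal{L}}\).

```python
import numpy as np

def dirichlet_energy(A, X):
    """E(X) = tr(X^T L_tilde X), with L_tilde = D_tilde^{-1/2} (D - A) D_tilde^{-1/2}."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    D_tilde_inv_sqrt = np.diag(1.0 / np.sqrt(1.0 + d))
    L_tilde = D_tilde_inv_sqrt @ L @ D_tilde_inv_sqrt
    return np.trace(X.T @ L_tilde @ X)

# toy graph: two triangles joined by a single edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
d = A.sum(axis=1)
D_tilde_inv_sqrt = np.diag(1.0 / np.sqrt(1.0 + d))
P = D_tilde_inv_sqrt @ (A + np.eye(6)) @ D_tilde_inv_sqrt   # P = I - L_tilde

for layer in range(6):
    print(f"layer {layer}: E = {dirichlet_energy(A, X):.6f}")
    X = P @ X   # linear GCN-style propagation (weights and activation omitted)
```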
**Lemma 1** (Oono and Suzuki (2019)): _For any \(\mathbf{X}\in\mathbb{R}^{n\times d}\), we have_ 1. \(E(\mathbf{X}\mathbf{W}_{l})\leq\mu_{\max}^{2}E(\mathbf{X})\)_, where_ \(\mu_{\max}\) _denotes the singular value of_ \(\mathbf{W}_{l}\) _with the largest absolute value._ 2. \(E(\sigma(\mathbf{X}))\leq E(\mathbf{X})\)_._ **Lemma 2** (Oono and Suzuki (2019)): _Without loss of generality, we suppose \(\mathcal{G}\) is connected. Let \(\lambda_{1}\leq\cdots\leq\lambda_{n}\) be the eigenvalues of \(\mathbf{P}\) sorted in ascending order. Then, we have_ 1. \(-1<\lambda_{1}\leq\lambda_{n-1}<1\)_, and_ \(\lambda_{n}=1\)_, hence_ \(\lambda_{\max}:=\max_{i=1}^{n-1}|\lambda_{i}|<1\)_._ 2. \(E(\mathbf{P}\mathbf{X})\leq\lambda_{\max}^{2}E(\mathbf{X})\)_._ **Proof** Note that \(\mathbf{P}\) and \(\widetilde{\mathcal{L}}\) share the same eigenspace and the Dirichlet energy is closely associated with the spectrum of \(\mathbf{P}\) and \(\widetilde{\mathcal{L}}\). Recall that \(\mathbf{P}=\mathbf{I}_{N}-\widetilde{\mathcal{L}}=\widetilde{\mathbf{D}}^{-1/2}\widetilde{\mathbf{A}}\widetilde{\mathbf{D}}^{-1/2}.\) Since \(\mathbf{P}\) is similar to \(\widetilde{\mathbf{D}}^{-1}\widetilde{\mathbf{A}}\), it suffices to investigate the spectrum of \(\widetilde{\mathbf{D}}^{-1}\widetilde{\mathbf{A}}\). Let \(\tilde{\lambda}_{1}\leq\cdots\leq\tilde{\lambda}_{n}\) be its eigenvalues sorted in ascending order. It can be verified that \(\widetilde{\mathbf{D}}^{-1}\widetilde{\mathbf{A}}\) is a stochastic matrix; hence, by the Perron-Frobenius theorem, we have \(-1<\tilde{\lambda}_{1}\leq\tilde{\lambda}_{n-1}<1\) and \(\tilde{\lambda}_{n}=1\), and \(\mathbf{1}\) is the only eigenvector corresponding to \(\tilde{\lambda}_{n}=1\). This proves the first claim. Moreover, \(\widetilde{\mathcal{L}}\) is a symmetric positive semi-definite matrix with eigenvalues \(0=\lambda_{1}^{\mathcal{L}}<\lambda_{2}^{\mathcal{L}}\leq\cdots\leq\lambda_{n}^{\mathcal{L}}<2\), and \(\mathbf{1}\) is the only eigenvector corresponding to \(\lambda_{1}^{\mathcal{L}}=0\). Combining this with the fact that \(E(\mathbf{X})=\operatorname{tr}(\mathbf{X}^{\top}\widetilde{\mathcal{L}}\mathbf{X})\), we conclude \(E(\mathbf{P}\mathbf{X})\leq\lambda_{\max}^{2}E(\mathbf{X})\). It means the graph convolution contracts the energy by a factor of \(\lambda_{\max}^{2}\), and we also obtain as a by-product that the convolution shrinks the feature \(\mathbf{X}\) except for its constant component. \(\blacksquare\) By the above lemmas, we obtain the _oversmoothing_ property of GCN as **Theorem 3** (Oono and Suzuki (2019)): _Let \(\operatorname{GCN}_{L}:=f_{L}\circ f_{L-1}\circ\cdots\circ f_{1}\) be a graph convolutional network with \(L\) layers and input feature \(\mathbf{X}\). Then, the Dirichlet energy of \(\operatorname{GCN}_{L}\) is bounded by_ \[E(\operatorname{GCN}_{L}(\mathbf{X}))<\left(\mu_{\max}\lambda_{\max}\right)^{2L}E(\mathbf{X}). \tag{2}\] _If \(\mu_{\max}\lambda_{\max}<1\), then the energy decays exponentially with the number of layers._ ## 4 Undecimated Framelet System A collection of elements \(\{\mathbf{g}_{\ell}\}_{\ell=1}^{M}\) from \(l_{2}(\mathcal{G})\) is said to be a _frame_ for \(l_{2}(\mathcal{G})\) if there exist constants \(A\) and \(B\), \(0<A\leq B<\infty\), such that \[A\|\mathbf{f}\|^{2}\leq\sum_{\ell=1}^{M}|\left\langle\mathbf{f},\mathbf{g}_{\ell}\right\rangle|^{2}\leq B\|\mathbf{f}\|^{2}\quad\forall\mathbf{f}\in l_{2}(\mathcal{G}). \tag{3}\] Here \(A,B\) are called _frame bounds_.
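As a quick numeric illustration of the frame condition (3) (in the special case \(A=B=1\) discussed next), the following sketch checks that the orthonormal eigenvector basis of the normalized graph Laplacian of a toy graph satisfies (3) with \(A=B=1\); only standard linear algebra is assumed.

```python
import numpy as np

# toy 4-node cycle graph and its normalized Laplacian (as in Section 3)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
L_tilde = np.diag(1 / np.sqrt(1 + d)) @ (np.diag(d) - A) @ np.diag(1 / np.sqrt(1 + d))
lam, U = np.linalg.eigh(L_tilde)        # orthonormal eigenpairs (u_l, lambda_l)

rng = np.random.default_rng(1)
f = rng.normal(size=4)
coeffs = U.T @ f                        # <f, g_l> with g_l = u_l
print(np.sum(coeffs**2), np.sum(f**2))  # the two sums agree: frame bounds A = B = 1
```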
When \(A=B=1\), \(\{\mathbf{g}_{\ell}\}_{\ell=1}^{M}\) is said to form a _tight frame_ for \(l_{2}(\mathcal{G})\). In this case, (3) is alternatively written as \[\mathbf{f}=\sum_{\ell=1}^{M}\left\langle\mathbf{f},\mathbf{g}_{\ell}\right\rangle\mathbf{g}_{ \ell}, \tag{4}\] which follows from the polarization identity. For a tight frame \(\{\mathbf{g}_{\ell}\}_{\ell=1}^{M}\) with \(\|\mathbf{g}_{\ell}\|=1\) for \(\ell=1,\ldots,M\), there must holds \(M=N\), and \(\{\mathbf{g}_{\ell}\}_{\ell=1}^{N}\) forms an orthonormal basis for \(l_{2}(\mathcal{G})\). Tight frames ensures the one-to-one mapping between framelet coefficients \(\langle\mathbf{f},\mathbf{g}_{\ell}\rangle\) and the original vector \(\mathbf{f}\)(Daubechies, 1992). Let \(\Psi=\{\alpha;\beta^{(1)},\ldots,\beta^{(K)}\}\) be a set of functions in \(L_{1}(\mathbb{R})\), where \(L_{1}(\mathbb{R})\) refers to the functions that are absolutely integrable on \(\mathbb{R}\) with respect to the Lebesgue measure. The _Fourier transform_\(\widehat{\gamma}\) of a function \(\gamma\in L_{1}(\mathbb{R})\) is defined by \(\widehat{\gamma}(\xi):=\int_{\mathbb{R}}\gamma(t)e^{-2\pi it\xi}\,\mathrm{d}t\), where \(\xi\in\mathbb{R}\). The Fourier transform can be extended from \(L_{1}(\mathbb{R})\) to \(L_{2}(\mathbb{R})\), which is the space of square-integrable functions on \(\mathbb{R}\). See Stein and Shakarchi (2011) for further information. A _filter bank_ is a set of filters, where a _filter (or mask)_\(h:=\{h_{k}\}_{k\in\mathbb{Z}}\subseteq\mathbb{C}\) is a complex-valued sequence in \(l_{1}(\mathbb{Z}):=\{h=\{h_{k}\}_{k\in\mathbb{Z}}\subseteq\mathbb{C}:\sum_{k \in\mathbb{Z}}|h_{k}|<\infty\}\). The _Fourier series_ of a sequence \(\{h_{k}\}_{k\in\mathbb{Z}}\) is the 1-periodic function \(\widehat{h}(\xi):=\sum_{k\in\mathbb{Z}}h_{k}e^{-2\pi ik\xi}\) with \(\xi\in\mathbb{R}\). Let \(\Psi=\{\alpha;\beta^{(1)},\ldots,\beta^{(K)}\}\) be a set of _framelet generators_ associated with a filter bank \(\mathbf{\eta}:=\{a;b^{(1)},\ldots,b^{(K)}\}\). Then the Fourier transforms of the functions in \(\Psi\) and the filters' corresponding Fourier series in \(\mathbf{\eta}\) satisfy \[\widehat{\alpha}(2\xi)=\widehat{a}(\xi)\widehat{\alpha}(\xi),\quad\widehat{ \beta^{(r)}}(2\xi)=\widehat{b^{(r)}}(\xi)\widehat{\alpha}(\xi),\quad r=1, \ldots,K,\;\xi\in\mathbb{R}. \tag{5}\] We give two typical examples of filters and scaling functions, as follows. **Example.1** The first one is the Haar-type filters with one high pass: for \(\xi\in\mathbb{R}\), \[\widehat{a}(\xi)=\cos(\xi/2),\quad\widehat{b^{(1)}}(\xi)=\sin(\xi/2) \tag{6}\] with scaling functions \[\widehat{\alpha}(\xi)=\frac{\sin(\xi/2)}{\xi/2},\quad\widehat{\beta}(\xi)= \sqrt{1-\left(\frac{\sin(\xi/2)}{\xi/2}\right)^{2}}.\] **Example.2** Another example of filters and scaling functions with two high passes are from (Daubechies, 1992, Chapter 4): \[\widehat{a}(\xi) :=\left\{\begin{array}{ll}1,&|\xi|<\frac{1}{8},\\ \cos\bigl{(}\frac{\pi}{2}\nu(8|\xi|-1)\bigr{)},&\frac{1}{8}\leq|\xi|\leq\frac{ 1}{4},\\ 0,&\frac{1}{4}<|\xi|\leq\frac{1}{2},\end{array}\right. \tag{7a}\] \[\widehat{b^{(1)}}(\xi) :=\left\{\begin{array}{ll}0,&|\xi|<\frac{1}{8},\\ \sin\bigl{(}\frac{\pi}{2}\nu(8|\xi|-1)\bigr{)},&\frac{1}{8}\leq|\xi|\leq\frac{ 1}{4},\\ \cos\bigl{(}\frac{\pi}{2}\nu(4|\xi|-1)\bigr{)},&\frac{1}{4}<|\xi|\leq\frac{ 1}{2}.\end{array}\right.\] (7b) \[\widehat{b^{(2)}}(\xi) :=\left\{\begin{array}{ll}0,&|\xi|<\frac{1}{4},\\ \sin\bigl{(}\frac{\pi}{2}\nu(4|\xi|-1)\bigr{)},&\frac{1}{4}\leq|\xi|\leq\frac{ 1}{2},\end{array}\right. 
\tag{7c}\] where \[\nu(t):=t^{4}(35-84t+70t^{2}-20t^{3}),\quad t\in\mathbb{R}.\] The associated framelet generators \(\Psi=\{\alpha;\beta^{1},\beta^{2}\}\) are defined by \[\widehat{\alpha}(\xi) =\left\{\begin{array}{ll}1,&|\xi|<\frac{1}{4},\\ \cos\bigl{(}\frac{\pi}{2}\nu(4|\xi|-1)\bigr{)},&\frac{1}{4}\leq|\xi|\leq\frac{1 }{2},\\ 0,&\text{else},\end{array}\right. \tag{8a}\] \[\widehat{\beta^{1}}(\xi) =\left\{\begin{array}{ll}\sin\left(\frac{\pi}{2}\nu(4|\xi|-1) \right),&\frac{1}{4}\leq|\xi|<\frac{1}{2},\\ \cos^{2}\left(\frac{\pi}{2}\nu(2|\xi|-1)\right),&\frac{1}{2}\leq|\xi|\leq 1, \\ 0,&\text{else},\end{array}\right.\] (8b) \[\widehat{\beta^{2}}(\xi) =\left\{\begin{array}{ll}0,&|\xi|<\frac{1}{2},\\ \cos\left(\frac{\pi}{2}\nu(2|\xi|-1)\right)\sin\left(\frac{\pi}{2}\nu(2|\xi|-1 )\right),&\frac{1}{2}\leq|\xi|\leq 1,\\ 0,&\text{else}.\end{array}\right. \tag{8c}\] **Definition 4** (Undecimated Framelet System (Dong, 2017; Zheng et al., 2022)): _Given a filter bank \(\boldsymbol{\eta}:=\{a;b^{(1)},\ldots,b^{(K)}\}\) and scaling functions \(\Psi=\{\alpha;\beta^{(1)},\ldots,\beta^{(K)}\}\), an undecimated framelet system \(\operatorname{UFS}^{J}_{J_{1}}(\Psi,\boldsymbol{\eta})\)\((J>J_{1})\) for \(l_{2}(\mathcal{G})\) from \(J_{1}\) to \(J\) is defined by_ \[\operatorname{UFS}^{J}_{J_{1}}(\Psi,\boldsymbol{\eta}) :=\operatorname{UFS}^{J}_{J_{1}}(\Psi,\boldsymbol{\eta};\mathcal{ G})\] \[:=\{\varphi_{J_{1},p}:p\in\mathcal{V}\}\cup\{\psi^{(k)}_{l,p}:p \in\mathcal{V},l=J_{1},\ldots,J\}^{K}_{k=1}.\] In this paper, we focus on undecimated framelets, which maintain a constant number of framelets at each level of the decomposition. Decimated framelet systems, on the other hand, can be created by constructing a coarse-grained chain for the graph, as described in detail in Zheng et al. (2022). To ensure computational efficiency, Haar-type filters are adopted when generating the scaling functions, which defines \(\widehat{a}(x)=\cos(x/2)\) and \(\widehat{b^{(1)}}(x)=\sin(x/2)\) for \(x\in\mathbb{R}\). Alternatively, other types of filters, such as linear or quadratic filters, could also be considered as described in Dong (2017). Figure 2: Filters and Scaling functions with two high passes in (7) and (8). Suppose \(l\in\mathbb{Z}\) and \(p\in\mathcal{V}\), the _undecimated framelets_\(\boldsymbol{\varphi}_{l,p}(v)\) and \(\boldsymbol{\psi}_{l,p}^{r}(v)\), \(v\in\mathcal{V}\) at scale \(l\) are _filtered Bessel kernels_ (or summability kernels), which are constructed following \[\begin{split}\boldsymbol{\varphi}_{l,p}(v)&:=\sum_{ \ell=1}^{N}\widehat{\alpha}\left(\frac{\lambda_{\ell}}{2^{l}}\right) \overline{\boldsymbol{u}_{\ell}(p)}\boldsymbol{u}_{\ell}(v),\\ \boldsymbol{\psi}_{l,p}^{r}(v)&:=\sum_{\ell=1}^{N} \widehat{\beta^{(r)}}\left(\frac{\lambda_{\ell}}{2^{l}}\right)\overline{ \boldsymbol{u}_{\ell}(p)}\boldsymbol{u}_{\ell}(v),\quad r=1,\ldots,K.\end{split} \tag{9}\] We say \(\boldsymbol{\varphi}_{l,p}(v)\) and \(\boldsymbol{\psi}_{l,p}^{r}(v)\) are with respect to the "dilation" at scale \(l\) and the "translation" for the vertex \(p\in\mathcal{V}\). The construction of framelets are analogs of those of wavelets in \(\mathbb{R}^{d}\). The functions \(\alpha,\beta^{(r)}\) of \(\Psi\) are named _framelet generators_ or _scaling functions_ for the undecimated framelet system. 
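The following NumPy sketch builds the framelets in (9) on a toy graph with the Haar-type pair above. As an assumption for this sketch, the high-pass generator is defined directly through the refinement relation (5), i.e. \(\widehat{\beta}(2\xi)=\widehat{b^{(1)}}(\xi)\widehat{\alpha}(\xi)\); with this choice one of the equivalent tightness identities of the next theorem (cf. (14)) can be checked numerically.

```python
import numpy as np

# Haar-type filters with one high pass and the associated generators
a_hat     = lambda xi: np.cos(xi / 2)
b_hat     = lambda xi: np.sin(xi / 2)
alpha_hat = lambda xi: np.sinc(xi / (2 * np.pi))          # sin(xi/2) / (xi/2)
beta_hat  = lambda xi: b_hat(xi / 2) * alpha_hat(xi / 2)  # high pass via relation (5)

# toy graph and orthonormal eigenpairs of its normalized Laplacian
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
L_tilde = np.diag(1 / np.sqrt(1 + d)) @ (np.diag(d) - A) @ np.diag(1 / np.sqrt(1 + d))
lam, U = np.linalg.eigh(L_tilde)

def framelet_coeffs(f, g_hat, level):
    """Coefficients <f, g_{level, p}> for all vertices p, following (9)."""
    return U @ (g_hat(lam / 2**level) * (U.T @ f))

rng = np.random.default_rng(0)
f = rng.normal(size=5)
for l in (1, 2, 3):
    lhs = np.sum(framelet_coeffs(f, alpha_hat, l + 1) ** 2)
    rhs = np.sum(framelet_coeffs(f, alpha_hat, l) ** 2) + np.sum(framelet_coeffs(f, beta_hat, l) ** 2)
    print(l, lhs, rhs)   # the two values coincide at every level
```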
**Theorem 5** (Equivalence Conditions of Framelet Tightness, (Zheng et al., 2022)): _Let \(\mathcal{G}=(\mathcal{V},\mathcal{E},\boldsymbol{W})\) be a graph and \(\{(\boldsymbol{u}_{\ell},\lambda_{\ell})\}_{\ell=1}^{N}\) be a set of orthonormal eigenpairs for \(l_{2}(\mathcal{G})\). Let \(\Psi=\{\alpha;\beta^{(1)},\ldots,\beta^{(K)}\}\) be a set of functions in \(L_{1}(\mathbb{R})\) with respect to a filter bank \(\boldsymbol{\eta}=\{a;b^{(1)},\ldots,b^{(K)}\}\) that satisfies (5). An undecimated framelet system is denoted by \(\mathsf{UFS}_{J_{1}}^{J}(\Psi,\boldsymbol{\eta};\mathcal{G}),J_{1}=1,\ldots,J\) (\(J\geq 1\)) with framelets \(\boldsymbol{\varphi}_{l,p}\) and \(\boldsymbol{\psi}_{l,p}^{r}\) in (9). Then, the following statements are equivalent._ 1. _For each_ \(J_{1}=1,\ldots,J\)_, the undecimated framelet system_ \(\mathsf{UFS}_{J_{1}}^{J}(\Psi,\boldsymbol{\eta};\mathcal{G})\) _is a tight frame for_ \(l_{2}(\mathcal{G})\)_, that is,_ \(\forall f\in l_{2}(\mathcal{G})\)_,_ \[\|f\|^{2}=\sum_{p\in V}\Big{|}\left\langle f,\boldsymbol{\varphi}_{J_{1},p} \right\rangle\Big{|}^{2}+\sum_{l=J_{1}}^{J}\sum_{r=1}^{K}\sum_{p\in V}\Big{|} \left\langle f,\boldsymbol{\psi}_{l,p}^{r}\right\rangle\Big{|}^{2}.\] (10) 2. _For all_ \(f\in l_{2}(\mathcal{G})\) _and for_ \(l=1,\ldots,J-1\)_, the following identities hold:_ \[f=\sum_{p\in V}\left\langle f,\boldsymbol{\varphi}_{J,p}\right\rangle \boldsymbol{\varphi}_{J,p}+\sum_{r=1}^{K}\sum_{p\in V}\left\langle f, \boldsymbol{\psi}_{J,p}^{r}\right\rangle\boldsymbol{\psi}_{J,p}^{r},\] (11) \[\sum_{p\in V}\left\langle f,\boldsymbol{\varphi}_{l+1,p}\right\rangle \boldsymbol{\varphi}_{l+1,p}=\sum_{p\in V}\left\langle f,\boldsymbol{\varphi}_{ l,p}\right\rangle\boldsymbol{\varphi}_{l,p}+\sum_{r=1}^{K}\sum_{p\in V} \left\langle f,\boldsymbol{\psi}_{l,p}^{r}\right\rangle\boldsymbol{\psi}_{l,p} ^{r}.\] (12) 3. _For all_ \(f\in l_{2}(\mathcal{G})\) _and for_ \(l=1,\ldots,J-1\)_, the following identities hold:_ \[\|f\|^{2}=\sum_{p\in V}\bigl{|}\left\langle f,\boldsymbol{\varphi}_{J,p} \right\rangle\bigr{|}^{2}+\sum_{r=1}^{K}\sum_{p\in V}\bigl{|}\left\langle f, \boldsymbol{\psi}_{J,p}^{r}\right\rangle\bigr{|}^{2},\] (13) \[\sum_{p\in V}\bigl{|}\left\langle f,\boldsymbol{\varphi}_{l+1,p} \right\rangle\bigr{|}^{2}=\sum_{p\in V}\bigl{|}\left\langle f,\boldsymbol{\varphi} _{l,p}\right\rangle\bigr{|}^{2}+\sum_{r=1}^{K}\sum_{p\in V}\bigl{|}\left\langle f,\boldsymbol{\psi}_{l,p}^{r}\right\rangle\bigr{|}^{2}.\] (14) _._ 4. _The functions in_ \(\Psi\) _satisfy_ \[1=\left|\widehat{\alpha}\left(\frac{\lambda_{\ell}}{2^{J}}\right) \right|^{2}+\sum_{r=1}^{K}\left|\widehat{\beta^{(r)}}\left(\frac{\lambda_{\ell} }{2^{J}}\right)\right|^{2}\quad\forall\ell=1,\ldots,N,\] (15) \[\left|\widehat{\alpha}\left(\frac{\lambda_{\ell}}{2^{l+1}}\right) \right|^{2}=\left|\widehat{\alpha}\left(\frac{\lambda_{\ell}}{2^{l}}\right) \right|^{2}+\sum_{r=1}^{K}\left|\widehat{\beta^{(r)}}\left(\frac{\lambda_{\ell }}{2^{l}}\right)\right|^{2}\quad\forall\begin{array}{c}\ell=1,\ldots,N,\\ l=1,\ldots,J-1.\end{array}\] (16) 5. 
_The identities in (_15_) hold and the filters in the filter bank_ \(\mathbf{\eta}\) _satisfy_ \[\left|\widehat{a}\left(\frac{\lambda_{\ell}}{2^{l}}\right)\right|^{2}+\sum_{r =1}^{K}\left|\widehat{b^{(r)}}\left(\frac{\lambda_{\ell}}{2^{l}}\right) \right|^{2}=1\quad\forall\ell\in\sigma_{\alpha}^{(l)},\;l=2,\ldots,J,\] (17) _with_ \[\sigma_{\alpha}^{(l)}:=\left\{\ell\in\{1,\ldots,N\}:\widehat{\alpha}\left( \frac{\lambda_{\ell}}{2^{l}}\right)\neq 0\right\}.\] **Remark 6**: _In this paper, we use \(2\)-norm._ ## 5 Graph Framelet Message Passing The aggregation of framelet coefficients for the neighborhood of the node \(i\) takes over up to the \(m\)-th multi-hop \(\mathcal{N}^{m}(i)\). ### Graph Framelet Transforms In a spectral-based graph convolution, graph signals are transformed to a set of coefficients \(\hat{\mathbf{X}}=\mathbf{\mathcal{W}}\mathbf{X}^{\text{in}}\) in frequency channels by the decomposition operator \(\mathbf{\mathcal{W}}\). The learnable filters are then trained for the spectral coefficients to approach node-level representative graph embeddings. This work implements undecimated framelet transforms (Zheng et al., 2021) that generate a set of multi-scale and multi-level _framelet coefficients_ for the input graph signal, where the low-pass coefficients include general global trend of \(\mathbf{X}\), and the high-pass coefficients portray the local properties of the graph attributes at different degrees of detail. Consequently, conducting framelet transforms on a graph avoids trivially smoothing out rare patterns to preserve more energy for the graph representation and alleviate the oversmoothing issue during message aggregation. Recall that the orthonormal bases at different levels \((l)\) and scales \((r)\) formulate the framelet decomposition operators \(\mathbf{\mathcal{W}}_{r,l}\),, which is applied to obtain the framelet coefficients \(\hat{\mathbf{X}}\). In particular, \(\mathbf{\mathcal{W}}_{0,J}\) is composed of the low-pass framelet basis \(\mathbf{\varphi}_{J,p},\;p\in\mathcal{V}\) for the low-pass coefficient matrix, _i.e._, \(\hat{\mathbf{X}}=\mathbf{\mathcal{W}}_{0,J}\mathbf{X}\) that approximates the global graph information. Meanwhile, the high-pass coefficient matrices \(\mathbf{\mathcal{W}}_{r,l}\mathbf{X}\) with \(r=1,\ldots,K,\;l=1,\ldots,J\) that record detailed local graph characteristics in different scale levels are decomposed by the associated high-pass framelet bases \(\mathbf{\psi}_{l,p}^{(r)}\). Generally, framelet coefficients at larger scales contain more localized information with smaller energy. As the framelet bases \(\mathbf{\varphi}_{l,p},\mathbf{\psi}_{l,p}^{(r)}\) are defined by eigenpairs of the normalized graph Laplacian \(\widetilde{\mathcal{L}}\), the associated framelet coefficients can be recursively formulated by filter matrices. 
Specifically, denote by \(\mathbf{U}=[\mathbf{u}_{1},\ldots,\mathbf{u}_{N}]\in\mathbb{R}^{N\times N}\) the eigenvectors and by \(\Lambda=\text{diag}(\lambda_{1},\ldots,\lambda_{N})\) the eigenvalues of the normalized graph Laplacian \(\widetilde{\mathcal{L}}\). For the low pass and the \(r\)th high pass, \[\mathbf{\mathcal{W}}_{0,J}\mathbf{X}=\mathbf{U}\widehat{\alpha}\left(\frac{\Lambda}{2}\right)\mathbf{U}^{\top}\mathbf{X}, \tag{18}\] \[\mathbf{\mathcal{W}}_{r,l}\mathbf{X}=\mathbf{U}\widehat{\beta^{(r)}}\left(\frac{\Lambda}{2^{l+1}}\right)\mathbf{U}^{\top}\mathbf{X}\quad\forall l=1,\ldots,J.\] Alternatively, Chebyshev polynomial approximation is a valid way to achieve efficient and scalable framelet decomposition (Dong, 2017; Zheng et al., 2021) by avoiding the time-consuming eigendecomposition. Let \(m\) be the highest order of the Chebyshev polynomial involved. Denote the \(m\)-order approximations of \(\alpha\) and \(\{\beta^{(r)}\}_{r=1}^{K}\) by \(\mathcal{T}_{0}\) and \(\{\mathcal{T}_{r}\}_{r=1}^{K}\), respectively. The approximated framelet decomposition operator \(\mathbf{\mathcal{W}}_{r,l}^{\natural}\) (including \(\mathbf{\mathcal{W}}_{0,J}^{\natural}\)) is defined as products of Chebyshev polynomials of the graph Laplacian \(\mathcal{L}\), _i.e._, \[\mathbf{\mathcal{W}}_{r,l}^{\natural}=\begin{cases}\mathcal{T}_{0}\left(2^{-R}\mathcal{L}\right),&l=1,\\ \mathcal{T}_{r}\left(2^{R+l-1}\mathcal{L}\right)\mathcal{T}_{0}\left(2^{R+l-2}\mathcal{L}\right)\ldots\mathcal{T}_{0}\left(2^{-R}\mathcal{L}\right),&l=2,\ldots,J.\end{cases}\] Here the dilation scale \(R\) is the smallest integer such that \(\lambda_{\max}=\lambda_{N}\leq 2^{R}\pi\). In this definition, the finest scale \(1/2^{K+J}\) is required to guarantee \(\lambda_{\ell}/2^{K+J-l}\in(0,\pi)\) for \(\ell=1,2,...,N\). The Chebyshev polynomial approximation not only allows efficient calculation of the framelet coefficients for the nodes, but also collects information from the nodes' neighbors in the framelet domain of the same channel. In particular, the aggregation of framelet coefficients for the neighborhood of the node \(i\) takes over up to the \(m\)-hop \(\mathcal{N}^{m}(i)\). Based on this, we propose the general framelet message passing framework. ### Framelet Message Passing We follow the general message passing scheme in Equation (1) and define the vanilla _Framelet message passing_ (FMP) for the graph node feature \(\mathbf{X}^{(t)}\) at layer \(t\) by \[\mathbf{X}_{i}^{(t)} =\mathbf{X}_{i}^{(t-1)}+\mathbf{Z}_{i}^{(t)} \tag{19}\] \[\mathbf{Z}_{i}^{(t)} =\sigma\left(\sum_{j\in\mathcal{N}^{m}(i)}\left(\sum_{r=1}^{K}\sum_{l=1}^{J}(\mathbf{\mathcal{W}}_{r,l}^{\natural})_{i,j}\mathbf{X}_{j}^{(t-1)}\Theta_{r}+(\mathbf{\mathcal{W}}_{0,J}^{\natural})_{i,j}\mathbf{X}_{j}^{(t-1)}\Theta_{0}\right)\right),\] where the propagated feature \(\mathbf{Z}_{i}^{(t)}\) at node \(i\) sums over the low-pass and high-pass coefficients at all scales, \(\sigma\) is the ReLU activation function, and \(\Theta_{r}\in\mathbb{R}^{d\times d}\) are learnable square parameter matrices associated with the high passes and the low pass, with the size \(d\) equal to the number of features of \(\mathbf{X}^{(t-1)}\).
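A minimal NumPy sketch of a single vanilla FMP update follows, using the exact (eigendecomposition-based) operators from (18) with one Haar-type high pass; the toy graph, the number of levels, and the random \(\Theta\) matrices are made-up placeholders and nothing is trained.

```python
import numpy as np

# toy graph and its normalized Laplacian
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
L_tilde = np.diag(1 / np.sqrt(1 + deg)) @ (np.diag(deg) - A) @ np.diag(1 / np.sqrt(1 + deg))
lam, U = np.linalg.eigh(L_tilde)

alpha_hat = lambda xi: np.sinc(xi / (2 * np.pi))                # sin(xi/2)/(xi/2)
beta_hat  = lambda xi: np.sin(xi / 4) * np.sinc(xi / (4 * np.pi))

J = 3
W_low  = U @ np.diag(alpha_hat(lam / 2)) @ U.T                  # W_{0,J}, cf. (18)
W_high = [U @ np.diag(beta_hat(lam / 2**(l + 1))) @ U.T for l in range(1, J + 1)]

def fmp_layer(X, Theta0, Theta1):
    """One vanilla FMP update (19): X <- X + ReLU(sum_l W_{1,l} X Θ1 + W_{0,J} X Θ0)."""
    Z = W_low @ X @ Theta0
    for W in W_high:
        Z = Z + W @ X @ Theta1
    return X + np.maximum(Z, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Theta0, Theta1 = 0.1 * rng.normal(size=(8, 8)), 0.1 * rng.normal(size=(8, 8))
for _ in range(4):                                              # stack a few FMP layers
    X = fmp_layer(X, Theta0, Theta1)
print(X.shape)
```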
The framelet message passing has the matrix form \[\begin{split}\mathbf{X}^{(t)}&=\mathbf{X}^{(t-1)}+\mathbf{Z}^{(t)}\\ \mathbf{Z}^{(t)}&=\sigma\left(\sum_{r=1}^{K}\sum_{l=1}^{J}(\mathbf{\mathcal{W}}_{r,l}^{\natural})\mathbf{X}^{(t-1)}\Theta_{r}+(\mathbf{\mathcal{W}}_{0,J}^{\natural})\mathbf{X}^{(t-1)}\Theta_{0}\right).\end{split} \tag{20}\] As shown in the previous formulation, the aggregation takes over up to the \(m\)-hop neighbours of the target node \(i\). It exploits the framelet coefficients from an \(m\)-order approximated \(\mathbf{\mathcal{W}}_{r,l}^{\natural}\), which involves higher-order graph Laplacians, _i.e._, \(\mathcal{L}^{m}\), to make the spectral aggregation as powerful as conducting an \(m\)-hop neighborhood spatial MPNN (Gilmer et al., 2017). Intuitively, the proposed update rule is thus more efficient than the conventional MPNN schemes in the sense that a single FMP layer is sufficient to reach distant nodes \(m\) hops away. Meanwhile, FMP splits framelet coefficients into different scales and levels instead of implementing a rough global summation, which further preserves the essential local and global patterns and circumvents the oversmoothing issue when stacking multiple convolutional layers. ### Continuous Framelet Message Passing with Neural ODE The vanilla FMP in Equation (19) formulates a discrete version of spectral feature aggregation of node features. In order to gain extra expressivity for the node embedding, we design a continuous update scheme and formulate an enhanced FMP by a neural ODE, such that \[\begin{split}&\partial\mathbf{X}_{i}(t)/\partial t=\mathbf{Z}_{i}(t)\\ &\mathbf{Z}_{i}(t)=\sum_{j\in\mathcal{N}^{m}(i)}\left(\sum_{r=1}^{K}\sum_{l=1}^{J}(\mathbf{\mathcal{W}}_{r,l}^{\natural})_{i,j}\mathbf{X}_{j}(t)\Theta_{r}+(\mathbf{\mathcal{W}}_{0,J}^{\natural})_{i,j}\mathbf{X}_{j}(t)\Theta_{0}\right)\\ &\mathbf{X}_{i}(0)=\mathrm{MLP}(\mathbf{X}_{i}).\end{split} \tag{21}\] Here \(\mathbf{X}_{i}(t)\) denotes the encoded features of node \(i\) at some timestamp \(t\) during a continuous process. The initial state is obtained by a simple MLP layer on the input node feature, _i.e._, \(\mathbf{X}_{i}(0)=\mathrm{MLP}(\mathbf{X}_{i})\). The final embedding \(\mathbf{X}_{i}(T)\) is obtained by numerically solving Equation (21) with an ODE solver, which requires a stable and efficient numerical integrator. Since the proposed method is stable during the evolution within finite time, a wide range of explicit and implicit numerical methods are applicable with a reasonably small step size \(\Delta t\). In particular, we implement Dormand-Prince5 (DOPRI5) (Dormand and Prince, 1980) with adaptive step size for computational efficiency. Note that the practical network depth (_i.e.,_ the number of propagation layers) is equal to the numerical iteration number that is specified in the ODE solver. ## 6 Limited Oversmoothing in Framelet Message Passing Following the discussion of oversmoothing in Section 3, this section demonstrates that FMP provides a remedy for the oversmoothing predicament by decomposing \(\mathbf{X}\) into low-pass and high-pass coefficients with the energy conservation property.
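As a quick numeric illustration of this energy split (made precise in the proposition below), the following NumPy sketch compares \(E(\mathbf{X})\) with the sum of the low-pass and high-pass energies. As an assumption for this sketch, the Haar-type generators from Section 4 are used with a telescoping scale convention, so the two sides agree up to a tiny truncation term coming from \(\widehat{\alpha}(\lambda/2^{J+1})\approx 1\).

```python
import numpy as np

# toy graph and normalized Laplacian
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
L_tilde = np.diag(1 / np.sqrt(1 + deg)) @ (np.diag(deg) - A) @ np.diag(1 / np.sqrt(1 + deg))
lam, U = np.linalg.eigh(L_tilde)

alpha_hat = lambda xi: np.sinc(xi / (2 * np.pi))
beta_hat  = lambda xi: np.sin(xi / 4) * np.sinc(xi / (4 * np.pi))

def energy(Y):
    return np.trace(Y.T @ L_tilde @ Y)

J = 8
W_low  = U @ np.diag(alpha_hat(lam / 2)) @ U.T                   # coarsest low pass
W_high = [U @ np.diag(beta_hat(lam / 2**l)) @ U.T for l in range(1, J + 1)]

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
split = energy(W_low @ X) + sum(energy(W @ X) for W in W_high)
print(energy(X), split)   # nearly identical; the gap vanishes as J grows
```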
**Proposition 7** (Energy conservation): \[E(\mathbf{X})=E(\mathbf{\mathcal{W}}_{0,J}\mathbf{X})+\sum_{r=1}^{K}\sum_{l=1}^{J}E(\mathbf{ \mathcal{W}}_{r,l}\mathbf{X}),\] (22) _where \(E(\mathbf{\mathcal{W}}_{0,J}\mathbf{X})\) and \(E(\mathbf{\mathcal{W}}_{r,l}\mathbf{X})\) break down the total energy \(E(\mathbf{X})\) into multi scales, and into low and high passes._ **Proof** Recall the facts that \(E(\mathbf{X})=\operatorname{tr}(\mathbf{X}^{\top}\widetilde{\mathcal{L}}\mathbf{X})\) and \(\widetilde{\mathcal{L}}=\mathbf{U}\Lambda\mathbf{U}^{\top}\), we have \[E(\mathbf{\mathcal{W}}_{0,J}\mathbf{X})=\operatorname{tr}\left(\mathbf{X}^{\top}\mathbf{ \mathcal{W}}_{0,J}^{\top}\widetilde{\mathcal{L}}\mathbf{\mathcal{W}}_{0,J}\mathbf{X} \right)=\operatorname{tr}\left(\mathbf{X}^{\top}\mathbf{U}\widehat{\alpha}\left(\frac {\Lambda}{2}\right)^{2}\Lambda\mathbf{U}^{\top}\mathbf{X}\right),\] and \(\forall l=1,\ldots,J\), \[E(\mathbf{\mathcal{W}}_{r,l}\mathbf{X})=\operatorname{tr}\left(\mathbf{X}^{\top}\mathbf{ \mathcal{W}}_{0,J}^{\top}\widetilde{\mathcal{L}}\mathbf{\mathcal{W}}_{r,l}\mathbf{X} \right)=\operatorname{tr}\left(\mathbf{X}^{\top}\mathbf{U}\widehat{\beta^{(r)}} \left(\frac{\Lambda}{2^{l+1}}\right)^{2}\Lambda\mathbf{U}^{\top}\mathbf{X}\right). \tag{23}\] By the identity (16), we also have \[\widehat{\alpha}\left(\frac{\Lambda}{2}\right)^{2}+\sum_{r=1}^{K}\sum_{l=1}^{ J}\widehat{\beta^{(r)}}\left(\frac{\Lambda}{2^{l+1}}\right)^{2}=\mathbf{I}. \tag{24}\] Therefore, combining (23) and (24), we obtain that \[E(\mathbf{\mathcal{W}}_{0,J}\mathbf{X})+\sum_{r=1}^{K}\sum_{l=1}^{J}E( \mathbf{\mathcal{W}}_{r,l}\mathbf{X})\] \[= \operatorname{tr}\left(\mathbf{X}^{\top}\mathbf{U}\widehat{\alpha}\left( \frac{\Lambda}{2}\right)^{2}\Lambda\mathbf{U}^{\top}\mathbf{X}\right)+\sum_{r=1}^{K} \sum_{l=1}^{J}\operatorname{tr}\left(\mathbf{X}^{\top}\mathbf{U}\widehat{\beta^{(r)}} \left(\frac{\Lambda}{2^{l+1}}\right)^{2}\Lambda\mathbf{U}^{\top}\mathbf{X}\right)\] \[= \operatorname{tr}\left(\mathbf{X}^{\top}\mathbf{U}\left(\widehat{\alpha} \left(\frac{\Lambda}{2}\right)^{2}+\sum_{r=1}^{K}\sum_{l=1}^{J}\widehat{\beta^ {(r)}}\left(\frac{\Lambda}{2^{l+1}}\right)^{2}\right)\Lambda\mathbf{U}^{\top}\mathbf{ X}\right)\] \[= \operatorname{tr}\left(\mathbf{X}^{\top}\mathbf{U}\Lambda\mathbf{U}^{\top} \mathbf{X}\right)\] \[= E(\mathbf{X}),\] thus completing the proof. For the approximated framelet transforms \(\mathbf{\mathcal{W}}_{r,l}^{\natural}\), there is an energy change on RHS of (22) due to the truncation error. The relatively smoother node features account for a lower level energy \(E(\mathbf{\mathcal{W}}_{0,J}\mathbf{X})\). In this way, we provide more flexibility to maneuver energy evolution as the energy does not decay when we use FMP in (19) or (21). For simplicity, we focus on the exact framelet transforms. Under some mild assumptions on \(\Theta_{r}\), we have the following two estimations on the Dirichlet energy. **Lemma 8**: _(Zhou, 2014) Let \(\mathbf{A}_{i}\in\mathbb{R}^{d\times d}\) be positive semi-definite \((i=1,2,\cdots,n)\). 
Then,_ \[\operatorname{tr}\left(\mathbf{A}_{1}\mathbf{A}_{2}\cdots\mathbf{A}_{n}\right)\leq\sum_{i= 1}^{d}\mu_{i}\left(\mathbf{A}_{1}\mathbf{A}_{2}\cdots\mathbf{A}_{n}\right)\leq\operatorname {tr}\left(\mathbf{A}_{1}\right)\operatorname{tr}\left(\mathbf{A}_{2}\right)\cdots \operatorname{tr}\left(\mathbf{A}_{n}\right),\] _where \(\mu(\mathbf{A})\) denotes the singular value of matrix \(\mathbf{A}\)._ The following theorem shows for the framelet message passing without activation function in the update, the Dirichlet energy of the feature at the \(t\)th layer \(\mathbf{X}^{(t)}\) is not less than that of the \((t-1)\)th layer \(\mathbf{X}^{(t-1)}\). In fact, the \(\mathbf{X}^{(t)}\) and \(\mathbf{X}^{(t-1)}\) are equivalent up to some constant, and the constant with respect to the upper bound depends jointly on the level of the framelet decomposition and the learnable parameters' bound. **Theorem 9**: _Let the \(\operatorname{FMP}\) defined by_ \[\mathbf{X}^{(t)} =\mathbf{X}^{(t-1)}+\mathbf{Z}^{(t)} \tag{25}\] \[\mathbf{Z}^{(t)} =\sum_{r=1}^{K}\sum_{l=1}^{J}(\mathbf{\mathcal{W}}_{r,l})\mathbf{X}^{(t-1 )}\Theta_{r}+(\mathbf{\mathcal{W}}_{0,J})\mathbf{X}^{(t-1)}\Theta_{0},\] _where the parameter matrix \(\Theta_{r}\in\mathbb{R}^{d\times d}(r=0,1,\cdots,K)\). Suppose \(\Theta_{r}\) is positive semi-definite with \(\operatorname{tr}\left(\Theta_{r}\right)\leq M\) for every \(r\), then we can bound the Dirichlet energy of the graph node feature \(\mathbf{X}^{(t)}\) at layer \(t\) by_ \[E\left(\mathbf{X}^{(t-1)}\right)\leq E\left(\mathbf{X}^{(t)}\right)\leq\left(M\sqrt{ KJ+1}+1\right)^{2}E\left(\mathbf{X}^{(t-1)}\right). \tag{26}\] **Proof** By definition, we have \[E\left(\mathbf{X}^{(t)}\right) =\operatorname{tr}\left(\left(\mathbf{X}^{(t)}\right)^{\top}\widetilde {\mathcal{L}}\mathbf{X}^{(t)}\right) \tag{27}\] \[=\operatorname{tr}\left[\left(\mathbf{X}^{(t-1)}+\mathbf{Z}^{(t)}\right) ^{\top}\widetilde{\mathcal{L}}\left(\mathbf{X}^{(t-1)}+\mathbf{Z}^{(t)}\right)\right]\] \[=E\left(\mathbf{X}^{(t-1)}\right)+E\left(\mathbf{Z}^{(t)}\right)+2 \operatorname{tr}\left(\left(\mathbf{X}^{(t-1)}\right)^{\top}\widetilde{\mathcal{ L}}\mathbf{Z}^{(t)}\right).\] By Lemma 8, we have \[E\left(\mathbf{Z}^{(t)}\right) \tag{28}\] \[\leq M^{2}\text{tr}\left[\left(\mathbf{X}^{(t-1)}\right)^{\top}\mathbf{U} \left(\widehat{\alpha}\left(\frac{\Lambda}{2}\right)+\sum_{r=1}^{K}\sum_{l=1}^ {J}\widehat{\beta^{r}}\left(\frac{\Lambda}{2^{l+1}}\right)\right)^{2}\Lambda \mathbf{U}^{\top}\mathbf{X}^{(t-1)}\right],\] \[\operatorname{tr}\left(\left(\mathbf{X}^{(t-1)}\right)^{\top} \widetilde{\mathcal{L}}\mathbf{Z}^{(t)}\right)\] \[\leq M\text{tr}\left[\left(\mathbf{X}^{(t-1)}\right)^{\top}\mathbf{U} \left(\widehat{\alpha}\left(\frac{\Lambda}{2}\right)+\sum_{r=1}^{K}\sum_{l=1} ^{J}\widehat{\beta^{r}}\left(\frac{\Lambda}{2^{l+1}}\right)\right)\Lambda\mathbf{ U}^{\top}\mathbf{X}^{(t-1)}\right].\] From the identity (16), we know that for any eigenvalue \(\lambda\) of the Laplacian \(\widetilde{\mathcal{L}}\), \[\left|\widehat{\alpha}\left(\frac{\lambda}{2}\right)\right|^{2}+\sum_{r=1}^{K} \sum_{l=1}^{J}\left|\widehat{\beta^{(r)}}\left(\frac{\lambda}{2^{l+1}}\right) \right|^{2}=1. 
\tag{29}\] Therefore, we have \[\left(\widehat{\alpha}\left(\frac{\lambda}{2}\right)+\sum_{r=1}^{K}\sum_{l=1}^ {J}\widehat{\beta^{r}}\left(\frac{\lambda}{2^{l+1}}\right)\right)^{2}\leq KJ+1, \tag{30}\] which leads to \[\begin{split} E\left(\mathbf{Z}^{(t)}\right)\leq& M^{2}(KJ+1)E\left(\mathbf{X}^{(t-1)}\right),\\ \operatorname{tr}\left(\left(\mathbf{X}^{(t-1)}\right)^{\top} \widetilde{\mathcal{L}}\mathbf{Z}^{(t)}\right)\leq& M\sqrt{KJ+1}\,E \left(\mathbf{X}^{(t-1)}\right).\end{split} \tag{31}\] Combining (27), (31) and the facts that \(E\left(\mathbf{Z}^{(t)}\right)\geq 0\), \(\operatorname{tr}\left(\left(\mathbf{X}^{(t-1)}\right)^{\top}\widetilde{\mathcal{ L}}\mathbf{Z}^{(t)}\right)\geq 0\) since \(\Theta_{r},\Theta_{0}\) are symmetric positive semi-definite matrices, then we have \[E\left(\mathbf{X}^{(t-1)}\right)\leq E\left(\mathbf{X}^{(t)}\right)\leq\left(M\sqrt{ KJ+1}+1\right)^{2}E\left(\mathbf{X}^{(t-1)}\right),\] thus completing the proof. In Theorem 9, we only prove the case without nonlinear activation function \(\sigma\). For the \(\operatorname{FMP}_{\text{\tiny{\rm ode}}}\) scheme in 21, we only apply a MLP layer on input features at the beginning. In this case, we prove the Dirichlet energy of the framelet message passing of any layer is non-decreasing and is equivalent to the initial time energy. **Theorem 10**: _Suppose the framelet message passing with ODE update scheme \((\operatorname{FMP}_{\text{\rm ode}})\) is defined by_ \[\begin{split}&\partial\mathbf{X}(t)/\partial t=\mathbf{Z}(t),\\ &\mathbf{Z}(t)=\sum_{r=1}^{K}\sum_{l=1}^{J}(\mathbf{\mathcal{W}}_{r,l}) \mathbf{X}(t)\Theta_{r}+(\mathbf{\mathcal{W}}_{0,J})\mathbf{X}(t)\Theta_{0},\\ &\mathbf{X}(0)=\operatorname{MLP}(\mathbf{X}).\end{split} \tag{32}\] _Under the same assumption in Theorem 9, the Dirichlet energy is bounded by_ \[E(\mathbf{X}(0))\leq E(\mathbf{X}(t))\leq e^{2M\sqrt{KJ+1}\,t}E(\mathbf{X}(0)).\] **Proof** By definition, we have \[\begin{split}\mathbf{X}(t)=&\mathbf{X}(t-\Delta t)+\Delta t \mathbf{Z}(t-\Delta t),\\ E\left(\mathbf{X}(t)\right)=& E\left(\mathbf{X}(t-\Delta t )\right)+2\Delta t\operatorname{tr}\left(\mathbf{X}(t-\Delta t)^{\top}\widetilde{ \mathcal{L}}\mathbf{Z}(t-\Delta t)\right)\\ &+\Delta t^{2}E\left(\mathbf{Z}(t-\Delta t)\right),\\ \frac{\mathrm{d}E(\mathbf{X}(t))}{\mathrm{d}t}=& 2 \operatorname{tr}\left(\mathbf{X}(t)^{\top}\widetilde{\mathcal{L}}\mathbf{Z}(t) \right).\end{split}\] Then, by applying (31), \[0\leq\frac{\mathrm{d}E(\mathbf{X}(t))}{\mathrm{d}t}\leq 2M\sqrt{KJ+1}\,E(\mathbf{X}(t)),\] which leads to \[E(\mathbf{X}(0))\leq E(\mathbf{X}(t))\leq e^{2M\sqrt{KJ+1}\,t}E(\mathbf{X}(0)),\] thus completing the proof. ## 7 Stability of Framelet Message Passing The stability of a GNN refers to its ability to maintain its performance when small changes are made to the input graph or to the model parameters. This section investigates how the multiscale property of graph framelets stabilizes the vanilla FMP in terms of fluctuation in the input node features. 
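Before the formal statements, here is a small numeric sketch (toy graph, random small weights, no claims beyond illustration) of the behavior studied in this section: a small perturbation of the input features stays controlled, consistent with a geometric \(C^{t}\)-type bound, as the features are propagated through FMP layers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, J = 6, 4, 3

# random toy graph and its normalized Laplacian
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T
deg = A.sum(axis=1)
L_tilde = np.diag(1 / np.sqrt(1 + deg)) @ (np.diag(deg) - A) @ np.diag(1 / np.sqrt(1 + deg))
lam, U = np.linalg.eigh(L_tilde)

alpha_hat = lambda xi: np.sinc(xi / (2 * np.pi))
beta_hat  = lambda xi: np.sin(xi / 4) * np.sinc(xi / (4 * np.pi))
W_low  = U @ np.diag(alpha_hat(lam / 2)) @ U.T
W_high = [U @ np.diag(beta_hat(lam / 2**(l + 1))) @ U.T for l in range(1, J + 1)]
Theta0 = 0.1 * rng.normal(size=(d, d))
Theta1 = 0.1 * rng.normal(size=(d, d))

def fmp_layer(X):
    Z = W_low @ X @ Theta0 + sum(W @ X @ Theta1 for W in W_high)
    return X + np.maximum(Z, 0.0)

X = rng.normal(size=(n, d))
X_pert = X + 1e-3 * rng.normal(size=(n, d))   # small input perturbation
for t in range(5):
    print(t, np.linalg.norm(X - X_pert))      # the gap grows at most geometrically
    X, X_pert = fmp_layer(X), fmp_layer(X_pert)
```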
**Lemma 11**: _For framelet transforms on graph \(\mathcal{G}\) of level \(l\) and \(r\), and graph signal \(X\) in \(\ell_{2}(G)\),_ \[\left\|\sum_{r=1}^{K}\mathbf{\mathcal{W}}_{r,l}X\right\|\leq C\sqrt{\lambda_{\max }}\,2^{-\frac{l+1}{2}}\left\|X\right\|,\] _where \(C\) is some constant and \(\lambda_{\max}\) is the maximal eigenvalue of the graph Laplacian._ **Proof** By the orthonormality of \(\mathbf{u}_{\ell}\), \[\left\langle X,\mathbf{\varphi}_{l,p}\right\rangle=\sum_{\ell=1}^{N}\overline{ \widehat{\alpha}\left(\frac{\lambda_{\ell}}{2^{l}}\right)}\widehat{X}_{\ell} \,\mathbf{u}_{\ell}(p),\quad\left\langle X,\mathbf{\psi}_{l,p}^{r}\right\rangle=\sum_ {\ell=1}^{N}\overline{\widehat{\beta^{(r)}}\left(\frac{\lambda_{\ell}}{2^{l}} \right)}\widehat{X}_{\ell}\,\mathbf{u}_{\ell}(p).\] By (16), \[\sum_{\ell=1}^{N}\sum_{r=1}^{K}\left|\widehat{\beta^{(r)}}\left( \frac{\lambda_{\ell}}{2^{l}}\right)\right|^{2}\left|\widehat{X}_{\ell}\right| ^{2}\,\left\|\mathbf{u}_{\ell}\right\|^{2} =\sum_{\ell=1}^{N}\left[\widehat{\alpha}\left(\frac{\lambda_{\ell }}{2^{l+1}}\right)^{2}-\widehat{\alpha}\left(\frac{\lambda_{\ell}}{2^{l}} \right)^{2}\right]\left|\widehat{X}_{\ell}\right|^{2}\,\left\|\mathbf{u}_{\ell} \right\|^{2}\] \[=\frac{\lambda_{\ell}}{2^{l+1}}\sum_{\ell=1}^{N}\frac{\widehat{ \alpha}\left(\frac{\lambda_{\ell}}{2^{l+1}}\right)^{2}-\widehat{\alpha}\left( \frac{\lambda_{\ell}}{2^{l}}\right)^{2}}{\frac{\lambda_{\ell}}{2^{l+1}}}| \widehat{X}_{\ell}|^{2}\] \[\leq C^{2}\lambda_{\max}2^{-(l+1)}\|X\|^{2},\] where the inequality of the last line uses that the scaling function \(\widehat{\alpha}\) is continuously differentiable on real axis, and Parseval's identity. Then, by (18), \[\left\|\sum_{r=1}^{K}\mathbf{\mathcal{W}}_{r,l}X\right\|^{2} =\left\|\left\langle X,\mathbf{\psi}_{l,\cdot}^{r}\right\rangle \right\|^{2}\] \[=\sum_{\ell=1}^{N}\sum_{r=1}^{K}\left|\widehat{\beta^{(r)}} \left(\frac{\lambda_{\ell}}{2^{l}}\right)\right|^{2}\left|\widehat{X}_{\ell} \right|^{2}\,\left\|\mathbf{u}_{\ell}\right\|^{2}\] \[\leq C^{2}\lambda_{\max}2^{-(l+1)}\|X\|^{2},\] thus completing the proof. **Theorem 12** (Stability): _The vanilla Framelet Message Passing (FMP) in (19) has bounded parameters: \(\|\Theta_{r}\|\leq C_{r}\) with constants \(C_{r}\) for \(r=0,1,\ldots,K\), then the FMP is stable, i.e., there exists a constant \(C\) such that_ \[\left\|\mathbf{X}^{(t)}-\widetilde{\mathbf{X}}^{(t)}\right\|\leq C^{t}\left\|\mathbf{X}^{(0 )}-\widetilde{\mathbf{X}}^{(0)}\right\|,\] _where \(C:=1+4C_{1}\sqrt{\lambda_{\max}}\max_{r=0,1,\ldots,K}C_{r}\), \(\mathbf{X}^{(0)}\) and \(\widetilde{\mathbf{X}}^{(0)}\) are the initial graph node features with \(\mathbf{X}^{(t+1)}\) and \(\widetilde{\mathbf{X}}^{(t+1)}\) at layer \(t\)._ **Proof** Let \[\mathbf{X}^{(t)}=\mathbf{X}^{(t-1)}+\mathbf{Z}^{(t)},\quad\widetilde{\mathbf{X}}^{(t)}= \widetilde{\mathbf{X}}^{(t-1)}+\widetilde{\mathbf{Z}}^{(t)},\] with initialization \(\mathbf{X}^{(0)},\widetilde{\mathbf{X}}^{(0)}\in\ell_{2}(\mathcal{G})\). 
It thus holds that \[\left\|\mathbf{X}^{(t)}-\widetilde{\mathbf{X}}^{(t)}\right\|\leq\left\|\mathbf{X}^{(t-1)}- \widetilde{\mathbf{X}}^{(t-1)}\right\|+\left\|\mathbf{Z}^{(t)}-\widetilde{\mathbf{Z}}^{( t)}\right\|,\] where \[\mathbf{Z}^{(t)}=\sigma\left(\sum_{r=1}^{K}\sum_{l=1}^{J}\mathbf{\mathcal{W}}_{r,l} \mathbf{X}^{(t-1)}\Theta_{r}+\mathbf{\mathcal{W}}_{0,J}\mathbf{X}^{(t-1)}\Theta_{0}\right) =:\sigma(Y^{(t)}).\] By Lemma 11, \[\left\|\mathbf{Y}^{(t)}-\widetilde{\mathbf{Y}}^{(t)}\right\|\] \[\leq \sum_{l=1}^{J}\left\|\sum_{r=1}^{K}\mathbf{\mathcal{W}}_{r,l}\left( \mathbf{X}^{(t-1)}-\widetilde{\mathbf{X}}^{(t-1)}\right)\Theta_{r}\right\|+\left\|\mathbf{ \mathcal{W}}_{0,J}\left(\mathbf{X}^{(t-1)}-\widetilde{\mathbf{X}}^{(t-1)}\right) \Theta_{0}\right\|\] \[\leq C_{1}\left(\sum_{l=1}^{J}C_{r}\sqrt{\lambda_{\max}}2^{-\frac{l+1} {2}}\left\|\mathbf{X}^{(t-1)}-\widetilde{\mathbf{X}}^{(t-1)}\right\|+C_{0}\sqrt{ \lambda_{\max}}\ \left\|\mathbf{X}^{(t-1)}-\widetilde{\mathbf{X}}^{(t-1)}\right\|\right)\] \[\leq 4C_{1}\sqrt{\lambda_{\max}}\max_{r=0,1,\ldots,K}C_{r}\left\|\mathbf{ X}^{(t-1)}-\widetilde{\mathbf{X}}^{(t-1)}\right\|.\] Therefore, \[\left\|\mathbf{Z}^{(t)}-\widetilde{\mathbf{Z}}^{(t)}\right\| =\left\|\sigma\left(\mathbf{Y}^{(t)}-\widetilde{\mathbf{Y}}^{(t)}\right)\right\|\] \[\leq\left\|\mathbf{Y}^{(t)}-\widetilde{\mathbf{Y}}^{(t)}\right\|\] \[\leq 4C_{1}\sqrt{\lambda_{\max}}\max_{r=0,1,\ldots,K}C_{r}\left\|\mathbf{ X}^{(t-1)}-\widetilde{\mathbf{X}}^{(t-1)}\right\|,\] which leads to \[\left\|\mathbf{X}^{(t)}-\widetilde{\mathbf{X}}^{(t)}\right\|\leq C\left\|\mathbf{X}^{(t-1 )}-\widetilde{\mathbf{X}}^{(t-1)}\right\|\leq C^{t}\left\|\mathbf{X}^{(0)}-\widetilde{ \mathbf{X}}^{(0)}\right\|,\] thus completing the proof. \(\blacksquare\) **Remark 13**: _In the neural ODE case, we can prove_ \[\left\|\mathbf{X}^{(t+1)}-\widetilde{\mathbf{X}}^{(t+1)}\right\|\leq e^{Ct}\left\|\mathbf{X} ^{(0)}-\widetilde{\mathbf{X}}^{(0)}\right\|, \tag{33}\] _where \(C\) is a constant. Given the number of layers, the \(\mathrm{FMP}_{\mathrm{ode}}\) is stable._ ## 8 Numerical Analysis This section validates the performance of FMP with node classification on a diverse of popular benchmark datasets, including 4 homogeneous graphs and 3 heterogeneous graphs. The models are programmed with PyTorch-Geometric (version 2.0.1) and PyTorch (version 1.7.0) and tested on NVIDIA(r) Tesla V100 GPU with 5,120 CUDA cores and 16GB HBM2 mounted on an HPC cluster. ### Experimental Protocol DatasetsWe evaluate the performance of our proposed FMP by the three most widely used citation networks (Yang et al., 2016): **Cora**, **Citeseer**, and **Pubmed**. We also adopt three heterogeneous graphs (**Texas**, **Wisconsin**, and **Cornell**) from **WebKB** dataset (Garcia-Plaza et al., 2016) that record the web pages from computer science departments of different universities and their mutual links. BaselinesWe compare the performance of FMP with various state-of-the-art baselines, which are implemented based on PyTorch-Geometric or their open repositories. On both homogeneous and heterogeneous graphs, a set of powerful classic shallow GNN models are compared against, including MLP, GCN (Kipf and Welling, 2017), GAT (Velickovic et al., 2018), and GraphSage(Hamilton et al., 2017). We also look into JKNet(Xu et al., 2018), SGC (Wu et al., 2019), and APPNP (Gasteiger et al., 2018) which are specially designed for alleviating the over-smoothing issue of graph representation learning. 
Such techniques usually introduce a regularizer or bias term to increase the depth of graph convolutional layers. In order to validate the effective design of the neural ODE scheme, the performance of three other continuous GNN layers, including GRAND (Chamberlain et al., 2021), CGNN (Xhonneux et al., 2020), and GDE (Poli et al., 2019), is investigated on homogeneous graph representation learning tasks. Furthermore, other specially designed convolution mechanisms, _i.e.,_ GCNII (Chen et al., 2020), H\(_2\)GCN (Zhu et al., 2020), and ASGAT (Li et al., 2021), are compared for the learning tasks on the three heterogeneous graphs. Setup. We construct FMP with two convolutional layers for learning node embeddings, the output of which is followed by a softmax activation for the final prediction. The aggregator \(\gamma\) is a 2-layer MLP for \(\mathrm{FMP}_{\mathrm{mlp}}\) in (19), and a linear projection for \(\mathrm{FMP}_{\mathrm{ode}}\) in (21). Grid search is conducted to fine-tune the key hyperparameters within a fairly small search space, see Table 1. All reported average test accuracies and the associated standard deviations are computed over 10 runs. \begin{table} \begin{tabular}{l l l} \hline \hline **Hyperparameters** & **Search Space** & **Distribution** \\ \hline learning rate & \([10^{-3},10^{-2}]\) & log-uniform \\ weight decay & \([10^{-3},10^{-1}]\) & log-uniform \\ dropout rate & \([0.0,0.8]\) & uniform \\ hidden dim & \(\{64,128,256\}\) & categorical \\ layer & \([1,10]\) & uniform \\ optimizer & \{Adam, Adamax\} & categorical \\ \hline \hline \end{tabular} \end{table} Table 1: Hyperparameter searching space for node classification tasks. ### Node Classification The proposed FMP is validated on six graphs to conduct node classification tasks. We call the three datasets that have a relatively high _homophily level_ homogeneous graphs and the other three heterogeneous graphs. To be specific, we follow Pei et al. (2019) and define the homophily level of a graph by the overall degree of consistency among neighboring nodes' labels, _i.e.,_ \[\mathcal{H}=\frac{1}{|V|}\sum_{v\in\mathcal{V}}\frac{\#\ v\text{'s neighbors that have identical label as }v}{\#\ v\text{'s neighbors}}.\] Homogeneous Graphs. Table 2 reports the performance of the three node classification tasks on the three citation networks. The baseline models' prediction scores are retrieved from previous literature. In particular, results on the first seven (classic models and oversmoothing-surpassed models) are provided in Zhu et al. (2021), and the last three (continuous methods) are obtained from Chamberlain et al. (2021). \begin{table} \begin{tabular}{l c c c} \hline \hline & **Cora** & **Citeseer** & **Pubmed** \\ homophily level & 0.83 & 0.71 & 0.79 \\ \hline MLP & 57.8\(\pm\)0.1 & 61.2\(\pm\)0.1 & 73.2\(\pm\)0.1 \\ GCN (Kipf and Welling, 2017) & 82.4\(\pm\)0.3 & 70.7\(\pm\)0.4 & 79.4\(\pm\)0.2 \\ \hline \hline \end{tabular} \end{table} \(\text{FMP}_{\text{\tiny{ode}}}\) achieves top performance over its competitors on all three tasks and \(\text{FMP}_{\text{\tiny{mlp}}}\) obtains a comparable prediction accuracy. Both FMP variants take multiple hops' information into account in a single graph convolution, while the majority of the other methods require propagating multiple times to achieve the same receptive field. However, during that aggregation procedure, information from long-range neighbors is diluted progressively, and the features from the closest neighbors (_e.g.,_ one-hop neighbors) are amplified excessively.
Consequently, \(\text{FMP}_{\text{\tiny{ode}}}\) gains a stronger learning ability on the entire graph compared with the other models. Moreover, \(\text{FMP}_{\text{\tiny{ode}}}\) utilizes a continuous update scheme and reaches deeper network architectures to pursue even more expressive propagation. Heterogeneous Graphs. As \(\text{FMP}_{\text{\tiny{ode}}}\) can access multi-hop neighbors of a central node in one shot, we next explore its capability of distinguishing dissimilar neighboring nodes by prediction tasks on three heterogeneous datasets. The prediction accuracy for the node classification tasks can be found in Table 3. The results of the first six baseline methods (for general graphs) are acquired from Zhu et al. (2020), and the scores of the last three baselines (that are specifically designed for heterogeneous graphs) are provided by the associated original papers. \(\text{FMP}_{\text{\tiny{ode}}}\) outperforms the second best model noticeably, by \(3\%-8\%\). Furthermore, \(\text{FMP}_{\text{\tiny{ode}}}\)'s performance is significantly more stable than that of the other methods, with smaller standard deviations across repeated runs. For clarification, all baselines report, as we do, the average prediction accuracy over 10 repetitions; the two exceptions are GCNII and ASGAT, whose results are from 1 and 3 runs, respectively. \begin{table} \begin{tabular}{l c c c} \hline \hline & **Texas** & **Wisconsin** & **Cornell** \\ homophily level & 0.11 & 0.21 & 0.30 \\ \hline MLP & 81.9\(\pm\)4.8 & **85.3\(\pm\)3.6** & 81.1\(\pm\)6.4 \\ GCN (Kipf and Welling, 2017) & 59.5\(\pm\)5.3 & 59.8\(\pm\)7.0 & 57.0\(\pm\)4.8 \\ GAT (Velickovic et al., 2018) & 59.5\(\pm\)5.3 & 59.8\(\pm\)7.0 & 58.9\(\pm\)3.3 \\ GraphSage (Hamilton et al., 2017) & 82.4\(\pm\)6.1 & 81.2\(\pm\)5.6 & 76.0\(\pm\)5.0 \\ \hline GCN-JKNet (Xu et al., 2018) & 66.5\(\pm\)6.6 & 74.3\(\pm\)6.4 & 64.6\(\pm\)8.7 \\ APPNP (Gasteiger et al., 2018) & 60.3\(\pm\)4.3 & 48.4\(\pm\)6.1 & 58.9\(\pm\)3.2 \\ \hline GCNII (Chen et al., 2020b) & 76.5 & 77.8 & 77.8 \\ H\(_2\)GCN (Zhu et al., 2020) & **84.9\(\pm\)6.8** & **86.7\(\pm\)4.9** & **82.2\(\pm\)4.8** \\ ASGAT (Li et al., 2021b) & **84.6\(\pm\)5.8** & 82.2\(\pm\)3.2 & **86.9\(\pm\)4.2** \\ \hline \(\text{FMP}_{\text{\tiny{ode}}}\) (ours) & **87.6\(\pm\)5.2** & **93.7\(\pm\)1.2** & **93.8\(\pm\)1.5** \\ \hline \hline \end{tabular} \end{table} Table 3: Average accuracy of node classification on heterogeneous graphs over 10 repetitions. **First**, **Second**, **Third**. ### Dirichlet Energy We first illustrate the evolution of the Dirichlet energy of FMP on an undirected synthetic random graph. The synthetic graph has 100 nodes from two classes, with 2D features sampled from normal distributions with the same standard deviation \(\sigma=2\) and the two class means \(\mu_{1}=-0.5\) and \(\mu_{2}=0.5\). Nodes in the same class are connected randomly with probability \(p=0.9\), while nodes in different classes are connected with probability \(p=0.1\). We compare the energy behavior of GNN models with four message passing propagators: GCN (Kipf and Welling, 2017), GAT (Velickovic et al., 2018), GRAND (Chamberlain et al., 2021) and \(\mathrm{FMP}_{\mathrm{ode}}\). We visualize how the node features evolve during 50 layers of message passing, from the input features at layer 0 to the output features at layer 50. For each model, the parameters are properly initialized. The Dirichlet energy of each layer's output is plotted on a logarithmic scale.
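The synthetic setup described above can be reproduced with a few lines of NumPy; the sketch below only traces the Dirichlet energy under a plain GCN-style propagation matrix (the trained GAT/GRAND/FMP propagators of the actual experiment are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
labels = np.repeat([0, 1], n // 2)
means = np.where(labels == 0, -0.5, 0.5)[:, None]            # class means ±0.5
X = rng.normal(loc=means, scale=2.0, size=(n, 2))            # 2-D features, sigma = 2

# intra-class edges with p = 0.9, inter-class edges with p = 0.1
p_edge = np.where(labels[:, None] == labels[None, :], 0.9, 0.1)
A = np.triu((rng.random((n, n)) < p_edge).astype(float), 1)
A = A + A.T                                                  # undirected, no self-loops

d = A.sum(axis=1)
L_tilde = np.diag(1 / np.sqrt(1 + d)) @ (np.diag(d) - A) @ np.diag(1 / np.sqrt(1 + d))
P = np.eye(n) - L_tilde                                      # GCN-style propagation

H = X.copy()
for layer in range(51):
    if layer % 10 == 0:
        energy = np.trace(H.T @ L_tilde @ H)
        print(layer, np.log10(energy + 1e-30))               # log-scale energy curve
    H = P @ H
```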
Traditional GNNs such as GCN and GAT suffer from oversmoothing, as their Dirichlet energy exponentially decays to zero within the first ten layers. GRAND relieves this problem by adding skip connections. \(\mathrm{FMP}_{\mathrm{ode}}\) increases the energy mildly over network propagation. The oversmoothing issue in GNNs is thus circumvented with \(\mathrm{FMP}_{\mathrm{ode}}\). Figure 3: Energy evolution for \(\mathrm{FMP}_{\mathrm{ode}}\). ## 9 Related Work ### Message Passing on Graph Neural Networks The Message Passing Neural Network (MPNN) (Gilmer et al., 2017) establishes a general computational framework of graph feature propagation that covers the majority of update rules for attributed graphs. In each round, every node computes a message and passes the message to its adjacent nodes. Next, each node aggregates the messages it receives and uses the aggregation to update its embedding. Different graph convolutions vary in the choice of aggregation. For instance, GCN (Kipf and Welling, 2017) and GraphSage (Hamilton et al., 2017) apply (selective) summation over neighborhood features, while other works refine the aggregation weights by the attention mechanism (Xie et al., 2020; Brody et al., 2022) or graph rewiring (Ruiz et al., 2020; Bruel-Gabrielsson et al., 2022; Banerjee et al., 2022; Deac et al., 2022). While constructing propagation rules from the adjacency matrix, _i.e._, spatial-based graph convolution, is effective enough to encode relatively simple graph instances, it has been demonstrated that such methods ignore high-frequency local information in the input graph signals (Bo et al., 2021). With an increased number of layers, such convolutions only learn node degree and connected components under the influence of the Laplacian spectrum (Oono and Suzuki, 2019), and the non-linear operation merely slows down the convolution speed (Wu et al., 2019). ### Spectral Graph Transforms Spectral-based graph convolutions have shown promising performance in transferring a trained graph convolution between different graphs, _i.e._, the model is transferable and generalizable (Levie et al., 2019; Gama et al., 2020). The output of spectral-based methods is stable with respect to perturbations of the input graphs (Ruiz et al., 2021; Zhou et al., 2022; Maskey et al., 2022). In the literature, a diverse set of spectral transforms have been applied on graphs, such as Haar (Li et al., 2020; Wang et al., 2020), where Haar convolution and Haar pooling were proposed using the hierarchical Haar bases on a chain of graphs, scattering (Gao et al., 2019; Ioannidis et al., 2020), where the contractive graph scattering wavelets mimic deep neural networks with wavelets as neurons and the decomposition as the propagation of the layers, needlets (Yi et al., 2022), where semidiscrete spherical wavelets are used to define approximately equivariant neural networks for sphere data, and framelets (Dong, 2017; Wang and Zhuang, 2019; Zheng et al., 2022), where the spectral graph convolutions are induced by graph framelets. ### Oversmoothness in Graph Representation As many spatial-based graph convolutions merely perform a low-pass filter that smooths out the local perturbations in the input graph signal, smoothing has been identified as a common phenomenon in MPNNs (Li et al., 2018; Zhao and Akoglu, 2019; Nt and Maehara, 2019).
Previous works usually measure and quantify the level of oversmoothing in a graph representation by the distances between node pairs (Rong et al., 2019; Zhao and Akoglu, 2019; Chen et al., 2020; Hou et al., 2020). When the signal smoothing operation is carried out multiple times, the outputs of nodes from different clusters tend to converge to similar vectors. While oversmoothing deteriorates the performance of GNNs, efforts have been made to preserve the identity of individual messages by modifying the message passing scheme, such as introducing jump connections (Xu et al., 2018; Chen et al., 2020), sampling neighboring nodes and edges (Rong et al., 2019; Feng et al., 2020), adding regularizations (Chen et al., 2020; Zhou et al., 2020; Yang et al., 2021), and increasing the complexity of convolutional layers (Balcilar et al., 2021; Geerts et al., 2021; Bodnar et al., 2021; Wang et al., 2022). Other methods try to trade off graph smoothness with the fitness of the encoded features (Zhu et al., 2021; Zhou et al., 2021, 2022) or postpone the occurrence of oversmoothing by mechanisms such as residual networks (Li et al., 2021; Liu et al., 2021) and the diffusion scheme (Chamberlain et al., 2021; Zhao et al., 2021). ### Stability of Graph Convolutions MPNN-based spatial graph convolutions have attracted increasing attention due to their intuitive architecture and promising performance. However, when generalization from a small graph to a larger one is required, summation-based MPNNs do not show satisfying stability and transferability (Yehudai et al., 2021). In fact, the associated generalization bound from one graph to another is directly proportional to the largest eigenvalue of the graph Laplacian (Verma and Zhang, 2019). Consequently, employing spectral graph filters that are robust to certain structural perturbations becomes a necessary condition for transferability in graph representation learning. Earlier work discussed the linear stability of spectral graph filters in the Cayley smoothness space with respect to changes in the normalized graph Laplacian matrix (Levie et al., 2019; Kenlay et al., 2020; Gama et al., 2020). The stability of graph-based models can also be measured by the statistical properties of the model, where the graph topology and signal are viewed as random variables. It has been shown that the output of spectral filters on stochastic time-evolving graphs behaves, in expectation, the same as that of the deterministic filters (Isufi et al., 2017). Ceci and Barbarossa (2018) approximated the original filtering process with uncertainties in the graph topology. Later on, stochastic analysis was leveraged to learn graph filters with topological stochasticity (Gao et al., 2021). Maskey et al. (2022) proved the stability of spatial-based message passing. Kenlay et al. (2020, 2021a, 2021b) proved stability results for spectral GCNs. ## 10 Conclusion This work proposes an expressive framelet message passing for GNN propagation. The framelet coefficients of neighboring nodes provide a graph rewiring scheme to amalgamate features in the framelet domain. We show that our FMP circumvents the oversmoothing that appears in most spatial GNN methods. The spectral information provides extra expressivity for the graph representation by taking the multiscale framelet representation into account. Moreover, FMP has good stability in learning node feature representations at low computational complexity.
2307.00075
Quantum State Assignment Flows
This paper introduces assignment flows for density matrices as state spaces for representing and analyzing data associated with vertices of an underlying weighted graph. Determining an assignment flow by geometric integration of the defining dynamical system causes an interaction of the non-commuting states across the graph, and the assignment of a pure (rank-one) state to each vertex after convergence. Adopting the Riemannian Bogoliubov-Kubo-Mori metric from information geometry leads to closed-form local expressions which can be computed efficiently and implemented in a fine-grained parallel manner. Restriction to the submanifold of commuting density matrices recovers the assignment flows for categorial probability distributions, which merely assign labels from a finite set to each data point. As shown for these flows in our prior work, the novel class of quantum state assignment flows can also be characterized as Riemannian gradient flows with respect to a non-local non-convex potential, after proper reparametrization and under mild conditions on the underlying weight function. This weight function generates the parameters of the layers of a neural network, corresponding to and generated by each step of the geometric integration scheme. Numerical results indicate and illustrate the potential of the novel approach for data representation and analysis, including the representation of correlations of data across the graph by entanglement and tensorization.
Jonathan Schwarz, Jonas Cassel, Bastian Boll, Martin Gärttner, Peter Albers, Christoph Schnörr
2023-06-30T18:29:14Z
http://arxiv.org/abs/2307.00075v1
# Quantum state assignment flows

###### Abstract.

This paper introduces assignment flows for density matrices as state spaces for representing and analyzing data associated with vertices of an underlying weighted graph. Determining an assignment flow by geometric integration of the defining dynamical system causes an interaction of the non-commuting states across the graph, and the assignment of a pure (rank-one) state to each vertex after convergence. Adopting the Riemannian Bogoliubov-Kubo-Mori metric from information geometry leads to closed-form local expressions which can be computed efficiently and implemented in a fine-grained parallel manner. Restriction to the submanifold of commuting density matrices recovers the assignment flows for categorial probability distributions, which merely assign labels from a finite set to each data point. As shown for these flows in our prior work, the novel class of quantum state assignment flows can also be characterized as Riemannian gradient flows with respect to a non-local non-convex potential, after proper reparametrization and under mild conditions on the underlying weight function. This weight function generates the parameters of the layers of a neural network, corresponding to and generated by each step of the geometric integration scheme. Numerical results indicate and illustrate the potential of the novel approach for data representation and analysis, including the representation of correlations of data across the graph by entanglement and tensorization.

Key words and phrases: Assignment flows, Riemannian gradient flows, density matrix, information geometry

2020 Mathematics Subject Classification: 53B12, 62H35, 68T07

This work is funded by the Deutsche Forschungsgemeinschaft (DFG), grant SCHN 457/17-1, within the priority programme SPP 2298: Theoretical Foundations of Deep Learning. This work is also funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy EXC-2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster).

## 1. Introduction

### Overview and Motivation

A basic task of data analysis is the categorization of observed data. We consider the following scenario: on a given undirected, weighted graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},w)\), data \(D_{i}\in\mathcal{X}\) are observed as points in a metric space \((\mathcal{X},d_{\mathcal{X}})\) at each vertex \(i\in\mathcal{V}\). Categorization means determining an assignment \[D_{i}\ \to\ j\in\{1,\ldots,c\}=:[c] \tag{1.1}\] of a _class label_ \(j\) out of a _finite_ set of labels to each data point \(D_{i}\). Depending on the application, labels carry a specific meaning, e.g. type of tissue in medical image data, object type in computer vision, or land use in remote sensing data. The decision at any vertex typically depends on decisions at other vertices. Thus the overall task of labeling data on a graph constitutes a particular form of _structured prediction_ in the field of machine learning [1]. _Assignment flows_ denote a particular class of approaches for data labeling on graphs [1, 2].
The basic idea is to represent each possible label assignment at vertex \(i\in\mathcal{V}\) by an _assignment vector_ \(S_{i}\in\Delta_{c}\) in the standard probability simplex, whose vertices encode the unique assignment of every label by the corresponding unit vector \(e_{j},\ j\in[c]\). Data labeling is accomplished by computing the flow \(S(t)\) of the dynamical system \[\dot{S}=R_{S}[\Omega S],\qquad S(0)=S_{0}, \tag{1.2}\] with the row-stochastic matrix \(S(t)\) and row vectors \(S_{i}(t)\) as state, which under mild conditions converges to unique label assignment vectors (unit vectors) at every vertex \(i\in\mathcal{V}\) [10]. The vector field on the right-hand side of (1.2) is parametrized by parameters collected in a matrix \(\Omega\). These parameters strongly affect the contextual label assignments. They can be learned from data in order to take into account typical relations of data in the current field of application [13]. For a demonstration of the application of this approach to a challenging medical imaging problem, we refer to [14].

From a geometric viewpoint, the system (1.2) can be characterized as a collection of individual flows \(S_{i}(t)\) at each vertex which are _coupled_ by the parameters \(\Omega\). Each individual flow is determined by a _replicator equation_, which constitutes a basic class of dynamical systems known from evolutionary game theory [13, 15]. By restricting each vector \(S_{i}(t)\) to the relative interior \(\mathring{\Delta}_{c}\) of the probability simplex (i.e. the set of strictly positive discrete probability vectors) and by turning this convex set into a statistical manifold equipped with the Fisher-Rao geometry [1], the assignment flow (1.2) becomes a Riemannian ascent flow on the corresponding product manifold. The underlying information geometry is not only important for making the flow converge to unique label assignments but also for the design of efficient algorithms that actually determine the assignments [10]. For extensions of the basic assignment flow approach to unsupervised scenarios of machine learning and for an in-depth discussion of connections to other closely related work on structured prediction on graphs, we refer to [10, 11] and [14], respectively.

In this paper, we study a novel and substantial generalization of assignment flows from a different point of view: assignment of labels to metric data where the labels are elements of a _continuous_ set. This requires replacing the simplex \(\Delta_{c}\) as state space, since it can only represent assignments of labels from a _finite_ set. The substitutes for assignment vectors \(S_{i},\ i\in\mathcal{V}\) are Hermitian positive definite _density matrices_ \(\rho_{i},\ i\in\mathcal{V}\) with unit trace, \[\mathcal{D}_{c}=\{\rho\in\mathbb{C}^{c\times c}\colon\rho=\rho^{*},\ \rho\succ 0,\ \operatorname{tr}\rho=1\}. \tag{1.3}\] Accordingly, the finite set of unit vectors \(e_{j},\ j\in[c]\) (vertices of \(\Delta_{c}\)) is replaced by _rank-one_ density matrices \(\rho^{\infty}\), a.k.a. _pure states_ in quantum mechanics [1]. The resulting _quantum state assignment flow (QSAF)_, \[\dot{\rho}=\mathfrak{R}_{\rho}\big{[}\Omega[\rho]\big{]},\quad\rho(0)=\rho_{0}, \tag{1.4}\] has a form similar to (1.2) due to adopting the same design strategy: the system (1.4) couples the individual evolutions \(\rho_{i}(t)\) at each vertex \(i\in\mathcal{V}\) through parameters \(\Omega\), and the underlying information geometry causes convergence of each \(\rho_{i}(t)\) towards a pure state.
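To make the two kinds of state spaces concrete, the following minimal sketch (Python with numpy; all variable names are ours and purely illustrative, not part of the paper's code) builds an assignment vector in \(\Delta_{c}\), a density matrix in \(\mathcal{D}_{c}\) as in (1.3), and a rank-one pure state, and illustrates that diagonal density matrices carry exactly a simplex vector on their diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 4

# assignment vector in the probability simplex Delta_c
S_i = rng.random(c)
S_i /= S_i.sum()                           # nonnegative entries, summing to one

# density matrix in D_c, eq. (1.3): Hermitian, positive definite, unit trace
A = rng.normal(size=(c, c)) + 1j * rng.normal(size=(c, c))
rho = A @ A.conj().T + np.eye(c)           # Hermitian positive definite
rho /= np.trace(rho).real                  # normalize to unit trace

# pure state: rank-one projector, the analogue of a unit vector e_j
q = rng.normal(size=c) + 1j * rng.normal(size=c)
q /= np.linalg.norm(q)
rho_pure = np.outer(q, q.conj())

print(np.allclose(rho, rho.conj().T), np.isclose(np.trace(rho).real, 1.0))
print(np.linalg.matrix_rank(rho_pure))     # 1

# a diagonal density matrix carries exactly a simplex vector on its diagonal
rho_diag = np.diag(S_i)
print(np.allclose(np.diag(rho_diag), S_i), np.isclose(np.trace(rho_diag), 1.0))
```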
Using a different state space \(\mathcal{D}_{c}\) (rather than \(\mathring{\Delta}_{c}\) in (1.2)) requires adopting a different Riemannian metric, which results in a corresponding definition of the operator \(\mathfrak{R}_{\rho}\). Our approach is natural in that restricting (1.4) to _diagonal_ density matrices results in (1.2), after identifying each vector \(\operatorname{diag}(\rho_{i})\) of diagonal entries of the density matrix \(\rho_{i}\) with an assignment vector \(S_{i}\in\mathring{\Delta}_{c}\). Conversely, (1.4) considerably generalizes (1.2) and enhances modelling expressivity due to the _noncommutative_ interaction of the state spaces \(\rho_{i},\ i\in\mathcal{V}\) across the underlying graph \(\mathcal{G}\), when the quantum state assignment flow is computed by applying geometric numerical integration to (1.4).

We regard our approach merely as an _approach to data representation and analysis_, rather than a contribution to quantum mechanics. For example, the dynamics (1.4) clearly differs from the Hamiltonian evolution of quantum systems. Yet we adopt the term 'quantum state' since not only density matrices as state spaces, but also the related information geometry, have been largely motivated by quantum mechanics and quantum information theory [1, 2].

### Contribution and Organization

Section 2 summarizes the information geometry of both the statistical manifold of categorial distributions and the manifold of strictly positive definite density matrices. Section 3 summarizes the assignment flow approach (1.2), as a reference for the subsequent generalization to (1.4). This generalization is the main contribution of this paper and is presented in Section 4. Each row of the table below specifies the section where an increasingly general version of the original assignment flow (left column) is generalized to the corresponding quantum state assignment flow (right column, same row).

\begin{tabular}{|c|c|} \hline **Assignment Flow (AF)** & **Quantum State AF (QSAF)** \\ \hline \hline single-vertex AF (Section 3.1) & single-vertex QSAF (Section 4.2) \\ \hline AF approach (Section 3.2) & QSAF approach (Section 4.3) \\ \hline Riemannian gradient AF (Section 3.3) & Riemannian gradient QSAF (Section 4.4) \\ \hline \multicolumn{2}{|c|}{recovery of the AF from the QSAF by restriction (Section 4.5)} \\ \hline \end{tabular}

Alternative metrics on the positive definite matrix manifold, which have been used in the literature, are reviewed in Section 2.3 in order to position our approach also from this point of view. A few academic experiments illustrate properties of the novel approach in Section 5. Working out a particular scenario of data analysis is beyond the scope of this paper. We conclude and indicate directions of further work in Section 6. In order not to compromise the reading flow, proofs are listed in Section 7. This paper considerably elaborates on the short preliminary conference version [10].

### Basic Notation

For the reader's convenience, we specify below the basic notation and notational conventions used in this paper.
\begin{tabular}{l l} \([c]\) & \(\{1,2,\ldots,c\},\quad c\in\mathbb{N}\) \\ \(\mathbb{1}_{c}\) & \((1,1,\ldots,1)^{\top}\in\mathbb{R}^{c}\) \\ \(\mathbb{R}^{c}_{+}\) & \(\{x\in\mathbb{R}^{c}\colon x_{i}\geq 0,\ i\in[c]\}\) \\ \(\mathbb{R}^{c}_{++}\) & \(\{x\in\mathbb{R}^{c}\colon x_{i}>0,\ i\in[c]\}\) \\ \(e_{1},e_{2},\ldots\) & canonical basis vectors of \(\mathbb{R}^{c}\) \\ \(\langle u,v\rangle\) & Euclidean inner vector product \\ \(\|u\|\) & Euclidean norm \(\sqrt{\langle u,u\rangle}\) \\ \(I_{c}\) & unit matrix of \(\mathbb{R}^{c\times c}\) \\ \(p\cdot q\) & componentwise vector multiplication \((p\cdot q)_{i}=p_{i}q_{i},\ i\in[c],\ p,q\in\mathbb{R}^{c}\) \\ \(\frac{q}{p}\) & componentwise division \(\left(\frac{q}{p}\right)_{i}=\frac{q_{i}}{p_{i}},\ i\in[c],\ q\in\mathbb{R}^{ c},\ p\in\mathbb{R}^{c}_{++}\) \\ \(\mathcal{H}_{c}\) & space of Hermitian \(c\times c\) matrices (cf. (2.16b)) \\ \(\operatorname{tr}(A)\) & trace \(\sum_{i}A_{ii}\) of a matrix \(A\) \\ \(\langle A,B\rangle\) & matrix inner product \(\operatorname{tr}(AB)\), \(A,B\in\mathcal{H}_{c}\) \\ \([A,B]\) & commutator \(AB-BA\) \\ Diag\((v)\) & the diagonal matrix with vector \(v\) as entries \\ diag\((V)\) & the vector of the diagonal entries of a square matrix \(V\) \\ exp\({}_{\text{m}}\) & the matrix exponential \\ log\({}_{\text{m}}\) & the matrix logarithm \(\exp_{\text{m}}^{-1}\) \\ \(\Delta_{c}\) & the set of discrete probability vectors of dimension \(c\) (cf. (2.2)) \\ \(\mathcal{S}_{c}\) & the relative interior of \(\Delta_{c}\), i.e. the set of strictly positive probability vectors (cf. (2.3)) \\ \(\mathcal{W}_{c}\) & the product manifold \(\mathcal{S}_{c}\times\cdots\times\mathcal{S}_{c}\) (cf. (3.9)) \\ \(\mathcal{P}_{c}\) & the set of symmetric positive definite \(c\times c\) matrices (cf. (2.12)) \\ \(\mathcal{D}_{c}\) & the subset of matrices in \(\mathcal{P}_{c}\) whose trace is equal to \(1\) (cf. (2.13)) \\ \(\mathcal{Q}_{c}\) & the product manifold \(\mathcal{D}_{c}\times\cdots\times\mathcal{D}_{c}\) (cf. (4.23)) \\ \(\mathbb{1}_{\mathcal{S}_{c}}\) & barycenter \(\frac{1}{c}\mathbb{1}_{c}\) of the manifold \(\mathcal{S}_{c}\) \\ \(\mathbb{1}_{\mathcal{W}_{c}}\) & barycenter \((\mathbb{1}_{\mathcal{S}_{c}},\mathbb{1}_{\mathcal{S}_{c}},\ldots,\mathbb{1} _{\mathcal{S}_{c}})^{\top}\) of the manifold \(\mathcal{W}\) \\ \(\mathbb{1}_{\mathcal{D}_{c}}\) & matrix \(\operatorname{Diag}(\mathbb{1}_{\mathcal{S}_{c}})\in\mathcal{D}_{c}\subset \mathbb{C}^{c\times c}\) \\ \(g_{p},g_{W},g_{\rho}\) & the Riemannian metrics on \(\mathcal{S}_{c},\mathcal{W}_{c},\mathcal{D}_{c}\) (cf. (2.4), (3.10), (2.19)) \\ \(T_{c,0},\mathcal{T}_{c,0},\mathcal{H}_{c,0}\) & the tangent spaces to \(\mathcal{S}_{c},\mathcal{W}_{c},\mathcal{D}_{c}\) (cf. (2.6), (3.10), (2.16a)) \\ \(\pi_{c,0},\Pi_{c,0}\) & orthogonal projections onto \(T_{0},\mathcal{H}_{c,0}\) (cf. (2.7), (2.18)) \\ \(R_{p},R_{W},\mathfrak{R}_{\rho}\) & replicator operators associated with the assignment flows \\ & on \(\mathcal{S}_{c},\mathcal{W}_{c},\mathcal{D}_{c},\mathcal{Q}_{c}\) (cf. (2.8), (3.14), (4.2), (4.32b)) \\ \(\partial\) & Euclidean gradient operator: \(\partial f(p)=\left(\partial_{p_{1}}f(p),\partial_{p_{2}}f(p),\ldots\right)^{\top}\) \\ \(\operatorname{grad}\) & Riemannian gradient operator with respect to the Fisher-Rao metric \\ \(R_{W}[\cdot],\Omega[\cdot]\), etc. & square brackets indicate a linear operator which acts in a non-standard way, \\ & e.g. row-wise to a matrix argument. \\ \end{tabular} ## 2. 
Information Geometry _Information geometry_[1, 2] is concerned with the representation of parametric probability distributions from a geometric viewpoint like, e.g., the exponential familiy of distributions [1]. Specifically, an open convex set \(\mathcal{M}\) of parameters of a probability distribution becomes a Riemannian manifold \((\mathcal{M},g)\) when equipped with a Riemannian metric \(g\). The _Fisher-Rao metric_ is the canonical choice due to its invariance properties with respect to reparametrization [1]. A closely related scenario concerns the representation of the interior of compact convex bodies as Riemannian manifolds \((\mathcal{M},g)\) due to the correspondence between compactly supported Borel probability measures and an affine equivalence class of convex bodies [1]. A key ingredient of information geometry is the so-called \(\alpha\)_-family of affine connections_ introduced by Amari [1], which comprises the so-called \(e\)-connection \(\nabla\) and \(m\)-connection \(\nabla^{*}\) as special cases. These connections are torsion-free and dual to each other in the sense that they jointly satisfy the equation which uniquely characterizes the Levi-Civita connection as metric connection [1, Def. 3.1, Thm. 3.1]. Regarding numerical computations, working with the exponential map induced by the \(e\)-connection is particularly convenient since its domain is the entire tangent space. We refer to [1, 14, 15] for further reading and to [10, 15, Ch. 7] for the specific case of quantum state spaces. In this paper, we are concerned with two classes of convex sets, * the relative interior of probability simplices, each of which represents the categorical (discrete) distributions of the corresponding dimension, and * the set of positive-definite symmetric matrices with trace one. Sections 2.1 and 2.2 introduce the information geometry for the former and the latter class of sets, respectively. ### Categorical Distributions We set \[[c]:=\{1,2,\ldots,c\},\qquad c\in\mathbb{N}. \tag{2.1}\] and denote the probability simplex of distributions on \([c]\) by \[\Delta_{c}:=\Big{\{}p\in\mathbb{R}_{+}^{c}\colon\langle\mathbb{1}_{c},p\rangle =\sum_{i\in[c]}p_{i}=1\Big{\}},\qquad\mathbb{1}_{c}:=(1,1,\ldots,1)^{\top}\in \mathbb{R}^{c}. \tag{2.2}\] Its relative interior equipped with the Fisher-Rao metric becomes the Riemannian manifold \((\mathcal{S}_{c},g)\), \[\mathcal{S}_{c}:=\operatorname{rint}\Delta_{c}=\{p\in\Delta_{c} \colon p_{i}>0,\ i\in[c]\}, \tag{2.3}\] \[g_{p}(u,v):=\sum_{i\in[c]}\frac{u_{i}v_{i}}{p_{i}}=\langle u, \operatorname{Diag}(p)^{-1}v\rangle,\quad\forall u,v\in T_{c,0},\quad p\in \mathcal{S}_{c}, \tag{2.4}\] with trivial tangent bundle given by \[T\mathcal{S}_{c}\cong\mathcal{S}_{c}\times T_{c,0} \tag{2.5}\] and the tangent space \[T_{c,0}:=T_{\mathbb{1}_{\mathcal{S}_{c}}}\mathcal{S}_{c}=\{v\in\mathbb{R}^{c} \colon\langle\mathbb{1}_{c},v\rangle=0\}. \tag{2.6}\] The orthogonal projection onto \(T_{c,0}\) is denoted by \[\pi_{c,0}\colon\mathbb{R}^{c}\to T_{c,0},\qquad\pi_{c,0}v:=v-\frac{1}{c} \langle\mathbb{1}_{c},v\rangle\mathbb{1}_{c}=\Big{(}I_{c}-\mathbb{1}_{c} \mathbb{1}_{\mathcal{S}_{c}}^{\top}\Big{)}v. \tag{2.7}\] The mapping defined next plays a major role in all dynamical systems being under consideration in this paper. 
**Definition 2.1** (replicator operator).: The replicator operator is the linear mapping of the tangent space \[R\colon\mathcal{S}_{c}\times T_{c,0}\to T_{c,0},\qquad R_{p}v:=(\operatorname{Diag}(p)-pp^{\top})v,\qquad p\in\mathcal{S}_{c},\quad v\in T_{c,0} \tag{2.8}\] parametrized by \(p\in\mathcal{S}_{c}\). The name 'replicator' is due to the role of this mapping in evolutionary game theory; see Remark 3.1.

**Proposition 2.2** (properties of \(R_{p}\)).: _The mapping (2.8) satisfies_ \[R_{p}\mathbb{1}_{c}=0, \tag{2.9a}\] \[\pi_{c,0}R_{p}=R_{p}\pi_{c,0}=R_{p},\quad\forall p\in\mathcal{S}_{c}. \tag{2.9b}\] _Furthermore, let \(f\colon\mathcal{S}_{c}\to\mathbb{R}\) be a smooth function and \(\widetilde{f}\colon U\to\mathbb{R}\) a smooth extension of \(f\) to an open neighborhood \(U\) of \(\mathcal{S}_{c}\subset\mathbb{R}^{c}\) with \(\widetilde{f}|_{\mathcal{S}_{c}}=f\). Then the Riemannian gradient of \(f\) with respect to the Fisher-Rao metric (2.4) is given by_ \[\operatorname{grad}f(p)=R_{p}\partial\widetilde{f}(p). \tag{2.10}\]

Proof.: Appendix 7.1.

**Remark 2.3**.: Equations (2.10) and (7.12), respectively, show that the replicator operator \(R_{p}\) is the inverse metric tensor with respect to the Fisher-Rao metric (2.4), expressed in the ambient coordinates.

The exponential map induced by the \(e\)-connection is defined on the entire space \(T_{c,0}\) and reads [1] \[\operatorname{Exp}\colon\mathcal{S}_{c}\times T_{c,0}\to\mathcal{S}_{c},\qquad\operatorname{Exp}_{p}(v):=\frac{p\cdot e^{\frac{v}{p}}}{\langle p,e^{\frac{v}{p}}\rangle},\qquad p\in\mathcal{S}_{c},\quad v\in T_{c,0}. \tag{2.11}\]

### Density Matrices

We denote the open convex cone of positive definite matrices by \[\mathcal{P}_{c}:=\{\rho\in\mathbb{C}^{c\times c}\colon\rho=\rho^{*},\ \rho\succ 0\} \tag{2.12}\] and the manifold of strictly positive definite density matrices by \[\mathcal{D}_{c}:=\{\rho\in\mathcal{P}_{c}\colon\ \operatorname{tr}\rho=1\}. \tag{2.13}\] \(\mathcal{D}_{c}\) is the intersection of \(\mathcal{P}_{c}\) and the hyperplane defined by the trace-one constraint. Its closure \(\overline{\mathcal{D}}_{c}\) is convex and compact. We can identify the space \(\mathcal{D}_{c}\) with the space of invertible density operators, in the sense of quantum mechanics, on the finite-dimensional Hilbert space \(\mathbb{C}^{c}\) without loss of generality. Any matrix ensemble of the form \[\{M_{i}\}_{i\in[n]}\subset\overline{\mathcal{P}}_{c}\colon\quad\sum_{i\in[n]}M_{i}=I_{c} \tag{2.14}\] induces a probability distribution on \([n]\) via the Born rule \[p\in\Delta_{n}\colon\quad p_{i}=\langle M_{i},\rho\rangle=\operatorname{tr}(M_{i}\rho),\quad i\in[n]. \tag{2.15}\] An ensemble (2.14) is called a _positive operator valued measure (POVM)_. We refer to [1] for the physical background and to [1] and references therein for the mathematical background.

The analog of (2.6) is the tangent space which, at any point \(\rho\in\mathcal{D}_{c}\), is equal to the space of trace-less Hermitian matrices \[\mathcal{H}_{c,0}:=\mathcal{H}_{c}\cap\{X\in\mathbb{C}^{c\times c}\colon\ \operatorname{tr}X=0\}, \tag{2.16a}\] where \[\mathcal{H}_{c}:=\{X\in\mathbb{C}^{c\times c}\colon\ X^{*}=X\}. \tag{2.16b}\] The manifold \(\mathcal{D}_{c}\) therefore has a trivial tangent bundle given by \[T\mathcal{D}_{c}=\mathcal{D}_{c}\times\mathcal{H}_{c,0}, \tag{2.17}\] with the tangent space \(\mathcal{H}_{c,0}=T_{\mathbb{1}_{\mathcal{D}_{c}}}\mathcal{D}_{c}\) defined in equation (2.16a).
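Before turning to the corresponding constructions on \(\mathcal{D}_{c}\), the objects of Section 2.1 are easy to check numerically. A minimal sketch (Python with numpy; the function names are ours, not the paper's) implements the projection (2.7), the replicator operator (2.8) and the exponential map (2.11), and verifies the properties stated in Proposition 2.2 together with the gradient formula (2.10).

```python
import numpy as np

def pi_c0(v):
    """Orthogonal projection onto the tangent space T_{c,0}, eq. (2.7)."""
    return v - v.mean()

def replicator(p, v):
    """R_p v = (Diag(p) - p p^T) v, eq. (2.8)."""
    return p * v - p * (p @ v)

def Exp(p, v):
    """Exponential map of the e-connection, eq. (2.11)."""
    w = p * np.exp(v / p)
    return w / w.sum()

rng = np.random.default_rng(1)
c = 5
p = rng.random(c); p /= p.sum()              # point in S_c
u = rng.normal(size=c)                       # arbitrary ambient vector
v = pi_c0(rng.normal(size=c))                # tangent vector

# Proposition 2.2: R_p 1 = 0 and pi_{c,0} R_p = R_p pi_{c,0} = R_p
print(np.allclose(replicator(p, np.ones(c)), 0.0))
Ru = replicator(p, u)
print(np.allclose(Ru, pi_c0(Ru)), np.allclose(Ru, replicator(p, pi_c0(u))))

# Riemannian gradient (2.10): g_p(R_p D, w) = <D, w> for every tangent vector w
D = rng.normal(size=c)
w = pi_c0(rng.normal(size=c))
print(np.isclose(np.sum(replicator(p, D) * w / p), D @ w))

# Exp_p(v) stays on the simplex
q = Exp(p, v)
print(np.all(q > 0), np.isclose(q.sum(), 1.0))
```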
The corresponding orthogonal projection onto the tangent space \(\mathcal{H}_{c,0}\) reads \[\Pi_{c,0}\colon\mathcal{H}_{c}\to\mathcal{H}_{c,0},\qquad\Pi_{c,0}[X]:=X-\frac{\operatorname{tr}X}{c}I_{c}. \tag{2.18}\] Equipping the manifold \(\mathcal{D}_{c}\) as defined in equation (2.13) with the _Bogoliubov-Kubo-Mori (BKM) metric_ [13] results in a Riemannian manifold \((\mathcal{D}_{c},g)\). Using \(T_{\rho}\mathcal{D}_{c}=\mathcal{H}_{c,0}\), this metric can be expressed by \[g_{\rho}(X,Y):=\int_{0}^{\infty}\operatorname{tr}\big{(}X(\rho+\lambda I)^{-1}Y(\rho+\lambda I)^{-1}\big{)}d\lambda,\quad X,Y\in\mathcal{H}_{c,0},\quad\rho\in\mathcal{D}_{c}. \tag{2.19}\] This metric uniquely ensures the existence of a symmetric e-connection \(\nabla\) on \(\mathcal{D}_{c}\) that is mutually dual to its m-connection \(\nabla^{*}\) in the sense of information geometry, leading to the _dually-flat_ structure \((g,\nabla,\nabla^{*})\) [11], [1, Thm. 7.1].

The following map and its inverse, defined in terms of the matrix exponential \(\exp_{\rm m}\) and its inverse \(\log_{\rm m}=\exp_{\rm m}^{-1}\), will be convenient. \[\mathbb{T}\colon\mathcal{D}_{c}\times\mathcal{H}_{c}\to\mathcal{H}_{c}, \tag{2.20a}\] \[\mathbb{T}_{\rho}[X]:=\frac{d}{dt}\log_{\rm m}(\rho+tX)\big{|}_{t=0}=\int_{0}^{\infty}(\rho+\lambda I)^{-1}X(\rho+\lambda I)^{-1}d\lambda, \tag{2.20b}\] \[\mathbb{T}_{\rho}^{-1}[X]=\frac{d}{dt}\exp_{\rm m}(H+tX)\big{|}_{t=0}=\int_{0}^{1}\rho^{1-\lambda}X\rho^{\lambda}d\lambda,\qquad\rho=\exp_{\rm m}(H). \tag{2.20c}\] The inner product (2.19) may now be written in the form \[g_{\rho}(X,Y)=\langle\mathbb{T}_{\rho}[X],Y\rangle, \tag{2.21}\] since the trace is invariant with respect to cyclic permutations of a matrix product as argument. Likewise, \[\langle\rho,X\rangle=\operatorname{tr}(\rho X)=\operatorname{tr}\mathbb{T}_{\rho}^{-1}[X]. \tag{2.22}\]

We also consider two subspaces of the tangent space \(T_{\rho}\mathcal{D}_{c}\), \[T_{\rho}^{u}\mathcal{D}_{c}:=\left\{X\in\mathcal{H}_{c,0}\colon\exists\Omega=-\Omega^{*}\text{ such that }X=[\Omega,\rho]\right\}, \tag{2.23a}\] \[T_{\rho}^{c}\mathcal{D}_{c}:=\left\{X\in\mathcal{H}_{c,0}\colon\ [\rho,X]=0\right\}, \tag{2.23b}\] which yield the decomposition [1] \[T_{\rho}\mathcal{D}_{c}=T_{\rho}^{c}\mathcal{D}_{c}\oplus T_{\rho}^{u}\mathcal{D}_{c}. \tag{2.24}\] In Section 4.5, we will use this decomposition to recover the assignment flow for categorical distributions from the quantum state assignment flow, by restriction to a submanifold of commuting matrices.

### Alternative Metrics and Geometries

The positive definite matrix manifold \(\mathcal{P}_{c}\)\({}^{1}\) has become a tool for data modelling and analysis during the last two decades. Accordingly, a range of Riemannian metrics exist with varying properties. A major subclass is formed by the \(O(n)\)-invariant metrics, including the log-Euclidean, affine-invariant, Bures-Wasserstein and Bogoliubov-Kubo-Mori (BKM) metric. We refer to [14] for a comprehensive recent survey.

Footnote 1: We confine ourselves in this subsection to the case of real density matrices, as our main references for comparison only deal with real matrix manifolds.

This section provides a brief comparison of the _BKM metric_ (2.19), adopted in this paper, with two often employed metrics in the literature, the _affine-invariant metric_ and the _log-Euclidean metric_, which may be regarded as 'antipodal points' in the space of metrics from the geometric and the computational viewpoint, respectively.
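For concreteness, the maps (2.20) and the BKM metric (2.19), (2.21) can be evaluated in closed form in the eigenbasis of \(\rho\), using the standard spectral (Daleckii-Krein) representation of the Fréchet derivatives of \(\log_{\mathrm{m}}\) and \(\exp_{\mathrm{m}}\); the divided differences that appear are precisely the logarithmic-mean kernels discussed in Section 2.3.3 below. A minimal numerical sketch (Python with numpy; the function names are ours):

```python
import numpy as np

def _kernel_log(lam):
    """Divided differences (log a - log b)/(a - b), with 1/a on the diagonal."""
    a, b = np.meshgrid(lam, lam, indexing="ij")
    diff = a - b
    with np.errstate(divide="ignore", invalid="ignore"):
        K = (np.log(a) - np.log(b)) / diff
    return np.where(np.isclose(diff, 0.0), 1.0 / a, K)

def T_rho(rho, X):
    """T_rho[X] = d/dt log_m(rho + tX)|_{t=0}, eq. (2.20b)."""
    lam, Q = np.linalg.eigh(rho)
    Xp = Q.conj().T @ X @ Q
    return Q @ (_kernel_log(lam) * Xp) @ Q.conj().T

def T_rho_inv(rho, X):
    """T_rho^{-1}[X] = int_0^1 rho^{1-s} X rho^s ds, eq. (2.20c)."""
    lam, Q = np.linalg.eigh(rho)
    Xp = Q.conj().T @ X @ Q
    return Q @ (Xp / _kernel_log(lam)) @ Q.conj().T

def bkm_metric(rho, X, Y):
    """g_rho(X, Y) = <T_rho[X], Y>, eqs. (2.19), (2.21)."""
    return np.trace(T_rho(rho, X) @ Y).real

rng = np.random.default_rng(2)
c = 4
A = rng.normal(size=(c, c)) + 1j * rng.normal(size=(c, c))
rho = A @ A.conj().T + np.eye(c)
rho /= np.trace(rho).real                          # point in D_c

B = rng.normal(size=(c, c)) + 1j * rng.normal(size=(c, c))
X = (B + B.conj().T) / 2
X -= np.trace(X).real / c * np.eye(c)              # tangent vector in H_{c,0}

# T_rho and T_rho^{-1} are mutually inverse; the metric is positive on tangents
print(np.allclose(T_rho_inv(rho, T_rho(rho, X)), X))
print(bkm_metric(rho, X, X) > 0)
```

In this spectral form a single eigendecomposition of \(\rho\) suffices per evaluation, which is consistent with the computational advantages of the BKM metric discussed in the comparison below.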
#### 2.3.1. Affine-Invariant Metrics

The affine-invariant metric has been derived in various ways, e.g. based on the canonical matrix inner product on the tangent space [14, Section 6] or as Fisher-Rao metric on the statistical manifold of centered multivariate Gaussian densities [20]. The metric is given by \[g_{\rho}(X,Y)=\operatorname{tr}\bigl{(}\rho^{-\frac{1}{2}}X\rho^{-\frac{1}{2}}\rho^{-\frac{1}{2}}Y\rho^{-\frac{1}{2}}\bigr{)}=\operatorname{tr}\left(\rho^{-1}X\rho^{-1}Y\right),\qquad\rho\in\mathcal{P}_{c},\quad X,Y\in T_{\rho}\mathcal{P}_{c}. \tag{2.25}\] The exponential map with respect to the Levi-Civita connection reads \[\exp_{\rho}^{(\text{aff})}(X)=\rho^{\frac{1}{2}}\exp_{\text{m}}\left(\rho^{-\frac{1}{2}}X\rho^{-\frac{1}{2}}\right)\rho^{\frac{1}{2}},\qquad\rho\in\mathcal{P}_{c},\quad X\in T_{\rho}\mathcal{P}_{c}. \tag{2.26}\] This Riemannian structure turns \(\mathcal{P}_{c}\) into a manifold with negative sectional curvature [1, Ch. II.10], which is convenient from the geometric viewpoint due to uniquely defined Riemannian means and geodesic convexity [21, Section 6.9]. On the other hand, evaluating (2.25) and (2.26) is computationally expensive, in particular when computing the quantum state assignment flow, which essentially involves geometric averaging.

#### 2.3.2. Log-Euclidean Metric

The log-Euclidean metric, introduced by [1], is the pullback of the canonical matrix inner product with respect to the matrix logarithm and given by \[g_{\rho}(X,Y)=\big\langle d\log_{\text{m}}(\rho)[X],\,d\log_{\text{m}}(\rho)[Y]\big\rangle=\langle\mathbb{T}_{\rho}[X],\mathbb{T}_{\rho}[Y]\rangle,\qquad\rho\in\mathcal{P}_{c},\quad X,Y\in T_{\rho}\mathcal{P}_{c}, \tag{2.27}\] where the second equality uses (2.20b). The induced exponential map reads \[\exp_{\rho}^{(\log)}(X)=\exp_{\text{m}}\big{(}\log_{\text{m}}(\rho)+\mathbb{T}_{\rho}[X]\big{)},\qquad\rho\in\mathcal{P}_{c},\quad X\in T_{\rho}\mathcal{P}_{c}. \tag{2.28}\]

#### 2.3.3. Comparison to Bogoliubov-Kubo-Mori Metric

The BKM metric (2.19), (2.22), given by \[g_{\rho}(X,Y)=\langle\mathbb{T}_{\rho}[X],Y\rangle,\qquad\rho\in\mathcal{P}_{c},\quad X,Y\in T_{\rho}\mathcal{P}_{c}, \tag{2.29}\] looks similar to the log-Euclidean metric (2.27). Regarding them both as members of the class of _mean kernel metrics_ [23, Def. 4.1] enables an intuitive comparison. For real-valued matrices, mean kernel metrics have the form \[g_{\rho}(X,X)=g_{D}(X^{\prime},X^{\prime})=\sum_{i,j\in[c]}\frac{(X^{\prime}_{ij})^{2}}{\phi(D_{ii},D_{jj})},\qquad\rho=VDV^{\top},\quad V\in O(n),\quad X=VX^{\prime}V^{\top}, \tag{2.30}\] with a diagonal matrix \(D=\operatorname{Diag}(D_{11},\dots,D_{cc})\) and a bivariate function \(\phi(x,y)=a\,m(x,y)^{\theta},\ a>0\), in terms of a symmetric homogeneous mean \(m\colon\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}_{+}\). Regarding the log-Euclidean metric, one has \(\phi(x,y)=\big{(}\frac{x-y}{\log x-\log y}\big{)}^{2}\), whereas for the BKM metric one has \(\phi(x,y)=\frac{x-y}{\log x-\log y}\). Taking also the restriction to density matrices \(\mathcal{D}_{c}\subset\mathcal{P}_{c}\) into account, one has the relation \[\exp_{\rho}^{(\log)}(Y)=\operatorname{Exp}_{\rho}^{(e)}(X),\qquad\rho\in\mathcal{D}_{c},\quad X\in\mathcal{H}_{c,0}, \tag{2.31a}\] \[Y=X-\log\Big{(}\operatorname{tr}\exp_{\text{m}}\big{(}\log_{\mathrm{m}}(\rho)+\mathbb{T}_{\rho}[X]\big{)}\Big{)}\rho, \tag{2.31b}\] as will be shown below as Remark 4.11. Here, the left-hand side of (2.31a) is the exponential map (2.28) induced by the log-Euclidean metric and \(\operatorname{Exp}_{\rho}^{(e)}\) is the exponential map with respect to the affine e-connection of information geometry, as detailed below by Proposition 4.6. This close relationship of the e-exponential map to the exponential map of the log-Euclidean metric highlights the computational efficiency of using the BKM metric, which we adopt for our approach.
This is also motivated by the lack of an explicit formula for the exponential map with respect to the Levi-Civita connection [10]. To date, the sign of the curvature is not known either. We note that to our best knowledge, the introduction of the affine connections of information geometry, as surrogates of the Riemannian connection for any statistical manifold, predates the introduction of the log-Euclidean metric for the specific space \(\mathcal{P}_{c}\). ## 3. Assignment Flows The assignment flow approach has been informally introduced in Section 1. In this section, we summarize the mathematical ingredients of this approach, as a reference for the subsequent generalization to quantum states (density matrices) in Section 4. Sections 3.1 and 3.2 introduce the assignment flow on a single vertex and on an arbitrary graph, respectively. A reparametrization turns the latter into a Riemannian gradient flow (Section 3.3). Throughout this section, we refer to definitions and notions introduced in Section 2.1. ### Single-Vertex Assignment Flow Let \(D=(D_{1},\dots,D_{c})^{\top}\in\mathbb{R}^{c}\) and consider the task to pick the smallest components of \(D\). Formulating this operation as optimization problem amounts to evaluating the support function (in the sense of convex analysis [20, p. 28]) of the probability simplex \(\Delta_{c}\) at \(-D\), \[\min_{j\in[c]}\{D_{1},\dots,D_{c}\}=\max_{p\in\Delta_{c}}\langle-D,p\rangle. \tag{3.1}\] In practice, the vector \(D\) represents real-valued noisy measurements at some vertex \(i\in\mathcal{V}\) of an underlying graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) and hence will be in 'general position', that is the minimal component will be unique: if \(j^{*}\in[c]\) indexes the minimal component \(D_{j^{*}}\), then the corresponding unit vector \(p^{*}=e_{j^{*}}\) will maximize the right-hand side of (3.1). We call _assignment vectors_ such vectors which assign a label (index) to observed data vectors. If \(D\) varies, the operation (3.1) is non-smooth, however. In view of a desired interaction of label assignments across the graph (cf. Section 3.2), we therefore replace this operation by a _smooth_ dynamical system whose solution converges to the desired assignment vector. To this end, the vector \(D\) is represented on \(\mathcal{S}_{c}\) as _likelihood vector_ \[L_{p}(D):=\exp_{p}(-\pi_{c;0}D)\stackrel{{\eqref{eq:m_p}}}{{=}} \exp_{p}(-D),\qquad p\in\mathcal{S}_{c}, \tag{3.2}\] where \[\exp\colon\mathcal{S}_{c}\times T_{c;0}\to\mathcal{S}_{c},\qquad\exp_{p}(v):= \operatorname{Exp}_{p}\circ R_{p}(v)=\frac{p\cdot e^{v}}{\langle p,e^{v}\rangle}, \qquad p\in\mathcal{S}_{c}. \tag{3.3}\] The _single-vertex assignment flow_ equation reads \[\dot{p}=R_{p}L_{p}(D)=p\cdot\big{(}L_{p}(D)-\langle p,L_{p}(D)\rangle\mathbb{1 }_{c}\big{)},\qquad p(0)=\mathbb{1}_{\mathcal{S}_{c}}. \tag{3.4}\] Its solution \(p(t)\) converges to the vector that solves the label assignment problem (3.1), see Corollary 3.4 below. **Remark 3.1** (**replicator equation**).: Differential equations of the form (3.4), with some \(\mathbb{R}^{c}\)-valued function \(F(p)\) in place of \(L_{p}(D)\), are known as _replicator equation_ in evolutionary game theory [10]. **Lemma 3.2**.: _Let \(p\in\mathcal{S}_{c}\). Then the differentials of the mapping (3.3) with respect to \(p\) and \(v\) are given by_ \[d_{v}\exp_{p}(v)[u] =R_{\exp_{p}(v)}u, \tag{3.5a}\] \[d_{p}\exp_{p}(v)[u] =R_{\exp_{p}(v)}\frac{u}{p},\qquad p\in\mathcal{S}_{c},\quad u,v \in T_{c;0}. \tag{3.5b}\] Proof.: Appendix 7.2. 
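The convergence behaviour of the single-vertex flow (3.4), made precise by Theorem 3.3 and Corollary 3.4 below, is easy to reproduce numerically. A minimal sketch (Python with numpy; plain explicit Euler steps and all names are our own illustrative choices, not the geometric integration schemes used by the authors):

```python
import numpy as np

def exp_p(p, v):
    """exp_p(v) = p * e^v / <p, e^v>, eq. (3.3)."""
    w = p * np.exp(v)
    return w / w.sum()

def single_vertex_af(D, h=0.05, steps=4000):
    """Integrate p' = R_p L_p(D), eq. (3.4), with explicit Euler steps."""
    c = len(D)
    p = np.full(c, 1.0 / c)                      # barycenter 1_{S_c}
    for _ in range(steps):
        L = exp_p(p, -D)                         # likelihood vector, eq. (3.2)
        p = p + h * (p * L - p * (p @ L))        # replicator update R_p[L]
        p = np.clip(p, 1e-300, None); p /= p.sum()  # guard against round-off
    return p

D = np.array([0.7, 0.2, 1.5, 0.9])
p_inf = single_vertex_af(D)
print(np.round(p_inf, 3))                        # concentrates on argmin(D)
print(np.argmax(p_inf) == np.argmin(D))          # True
```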
**Theorem 3.3** (**single vertex assignment flow**).: _The single-vertex assignment flow equation (3.4) is equivalent to the system_ \[\dot{p} =R_{p}q, p(0) =\mathbb{1}_{\mathcal{S}_{c}}, \tag{3.6a}\] \[\dot{q} =R_{q}q, q(0) =L_{\mathbb{1}_{\mathcal{S}_{c}}}(D), \tag{3.6b}\] _with solution given by_ \[p(t) =\exp_{\mathbb{1}_{\mathcal{S}_{c}}}\Big{(}\int_{0}^{t}q(\tau)d \tau\Big{)}. \tag{3.6c}\] Proof.: Appendix 7.2. **Corollary 3.4** (**single vertex label assignment**).: _Let \(\mathcal{J}^{*}:=\arg\min_{j\in[c]}\{D_{j}\colon j\in[c]\}\subseteq[c]\). Then the solution \(p(t)\) to (3.4) satisfies_ \[\lim_{t\to\infty}p(t)=\frac{1}{|\mathcal{J}^{*}|}\sum_{j\in J^{*}}e_{j}\in \arg\max_{p\in\Delta_{c}}\langle-D,p\rangle. \tag{3.7}\] _In particular, if \(D\) has a unique minimal component \(D_{j^{*}}\), then \(p(t)\to e_{j^{*}}\) as \(t\to\infty\)._ Proof.: Appendix 7.2. ### Assignment Flows The assignment flow approach consists of the weighted interaction - as define below - of single-vertex assignment flows, associated with vertices \(i\in\mathcal{V}\) of a weighted graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\omega)\) with nonnegative weight function \[\omega\colon\mathcal{E}\to\mathbb{R}_{+},\qquad ik\mapsto\omega_{ik}. \tag{3.8}\] The assignment vectors are denoted by \(W_{i},\,i\in\mathcal{V}\) and form the row vectors of a row-stochastic matrix \[W\in\mathcal{W}_{c}:=\underbrace{\mathcal{S}_{c}\times\cdots\times\mathcal{S}_ {c}}_{|\mathcal{V}|\text{ factors}}. \tag{3.9}\] The product space \(\mathcal{W}_{c}\) is called _assignment manifold_\((\mathcal{W}_{c},g)\), where the metric \(g\) is defined by applying (2.4) row-wise, \[g_{W}(U,V):=\sum_{i\in\mathcal{V}}g_{W_{i}}(U_{i},V_{i}),\qquad U,V\in\mathcal{ T}_{c;0}:=T_{c;0}\times\cdots\times T_{c;0}. \tag{3.10}\] The _assignment flow equation_ generalizing (3.4) reads \[\dot{W}=R_{W}[S(W)], \tag{3.11}\] where the _similarity vectors_ \[S_{i}(W):=\mathrm{Exp}_{W_{i}}\Big{(}\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\, \mathrm{Exp}_{W_{i}}^{-1}\left(L_{W_{k}}(D_{k})\right)\Big{)},\qquad i\in\mathcal{V} \tag{3.12}\] form the row vectors of the matrix \(S(W)\in\mathcal{W}_{c}\). The neigborhoods \[\mathcal{N}_{i}:=\{i\}\cup\{k\in\mathcal{V}\colon ik\in\mathcal{E}\} \tag{3.13}\] are defined by the adjacency relation of the underlying graph \(\mathcal{G}\), and \(R_{W}[\cdot]\) of (3.11) applies (2.8) row-wise, \[R_{W}[S(W)]_{i}=R_{W_{i}}S_{i}(W),\qquad i\in\mathcal{V}. \tag{3.14}\] Note that the similarity vectors \(S_{i}(W)\) given by (3.12) result from geometric weighted averaging of the velocity vectors \(\mathrm{Exp}_{W_{i}}^{-1}\left(L_{W_{k}}(D_{k})\right)\). The velocities represent given data \(D_{i},\ i\in\mathcal{V}\) via the likelihood vectors \(L_{W_{i}}(D_{i})\) given by (3.2). Each choice of the weights \(\omega_{ik}\) in (3.12) associated with every edge \(ik\in\mathcal{E}\) defines an assignment flow \(W(t)\) solving (3.11). Thus these weight parameters determine how individual label assignments by (3.2) and (3.4) are _regularized_. Well-posedness, stability and quantitative estimates of basins of attraction to integral label assignment vectors have been established in [22]. Reliable and efficient algorithms for computing numerically the assignment flow have been devised by [20]. ### Reparametrized Assignment Flows In [23, Prop. 3.6], the following parametrization of the general assignment flow equation (3.11) was introduced, which generalizes the parametrization (3.6) of the single-vertex assignment flow (3.4). 
\[\dot{W} =R_{W}[\overline{S}],\qquad W(0)=\mathbb{1}_{\mathcal{W}_{c}}, \tag{3.15a}\] \[\dot{\overline{S}} =R_{\overline{S}}[\Omega\overline{S}],\qquad\overline{S}(0)=S( \mathbb{1}_{\mathcal{W}_{c}}), \tag{3.15b}\] with the nonnegative weight matrix corresponding to the weight function (3.8), \[\Omega=(\Omega_{1},\dots,\Omega_{|\mathcal{V}|})^{\top}\in\mathbb{R}^{| \mathcal{V}|\times|\mathcal{V}|},\qquad\qquad\Omega_{ik}:=\begin{cases} \omega_{ik},&\text{if }k\in\mathcal{N}_{i},\\ 0,&\text{otherwise}.\end{cases} \tag{3.16}\] This formulation reveals in terms of (3.15b) the 'essential' part of the assignment flow equation, since (3.15a) depends on (3.15b), but not vice versa. Furthermore, the data and weights show up only in the initial point and in the vector field on the right-hand side of (3.15b), respectively. Henceforth, we solely focus on (3.15b) rewritten for convenience as \[\dot{S}=R_{S}[\Omega S],\qquad S(0)=S_{0}, \tag{3.17}\] where \(S_{0}\) comprises the similarity vectors (3.12) evaluated at the barycenter \(W=\mathbb{1}_{\mathcal{W}_{c}}\). ## 4. Quantum State Assignment Flows In this section, we generalize the assignment flow equations (3.11) and (3.17) to the product manifold \(\mathcal{Q}_{c}\) of density matrices as state space. The resulting equations have a similar mathematical form. Their derivation requires * to determine the form of the Riemannian gradient of functions \(f\colon\mathcal{D}_{c}\to\mathbb{R}\) with respect to the BKM-metric (2.19), the corresponding replicator operator and exponential mappings \(\mathrm{Exp}\) and \(\exp\) together with their differentials (Section 4.1), * to define the single-vertex quantum state assignment flow (Section 4.2), * to devise the general quantum state assignment flow equation for an arbitrary graph (Section 4.3) * and its alternative parametrization (Section 4.4) which generalizes formulation (3.17) of the assignment flow accordingly. A natural question is: What does 'label' mean for a generalized assignment flow evolving on the product manifold \(\mathcal{Q}_{c}\) of density matrices? For the single vertex quantum state assignment flow, i.e. without interaction of these flows on a graph, it turns out that the pure state corresponding to the minimal eigenvalue of the initial density matrix is assigned to the given data point (Proposition 4.13). Coupling non-commuting density matrices over the graph through the novel quantum state assignment flow, therefore, generates an interesting complex dynamics as we illustrate in Section 5. It is shown in Section 4.5 that the restriction of the novel quantum state assignment flow to commuting density matrices recovers the original assignment flow for discrete labels. Throughout this section, we refer to definitions and notions introduced in Section 2.2. ### Riemannian Gradient, Replicator Operator and Further Mappings **Proposition 4.1** (**Riemannian gradient)**.: _Let \(f\colon\mathcal{D}_{c}\to\mathbb{R}\) be a smooth function defined on the manifold (2.13), and \(\widetilde{f}\colon U\to\mathbb{R}\) a smooth extension of \(f\) to an open neighborhood \(U\) of \(\mathcal{D}_{c}\subset\mathbb{C}^{c\times c}\) with \(\widetilde{f}|_{\mathcal{D}_{c}}=f\). 
Then its Riemannian gradient with respect to the BKM-metric (2.19) is given by_ \[\operatorname{grad}_{\rho}f=\mathbb{T}_{\rho}^{-1}[\partial\widetilde{f}]- \langle\rho,\partial\widetilde{f}\rangle\rho, \tag{4.1}\] _where \(\mathbb{T}_{\rho}^{-1}\) is given by (2.20c) and \(\partial\widetilde{f}\) is the ordinary gradient with respect to the Euclidean structure of the ambient space \(\mathbb{C}^{c\times c}\)._ Proof.: Appendix 7.3. Comparing the result (4.1) with (2.10) motivates the following \[\mathfrak{R}_{\rho}\colon\mathcal{H}_{c}\to\mathcal{H}_{c,0},\qquad\mathfrak{ R}_{\rho}[X]:=\mathbb{T}_{\rho}^{-1}[X]-\langle\rho,X\rangle\rho,\qquad\rho\in \mathcal{D}_{c}\qquad\qquad\text{\bf(replicator map)} \tag{4.2}\] The following lemma shows that the properties (2.9) extend to (4.2). **Lemma 4.2** (**properties of \(\mathfrak{R}_{\rho}\)**)**.: _Let \(\Pi_{c,0}\) denote the orthogonal projection (2.18). Then the replicator map (4.2) satisfies_ \[\Pi_{c,0}\circ\mathfrak{R}_{\rho}=\mathfrak{R}_{\rho}\circ\Pi_{c,0}= \mathfrak{R}_{\rho},\quad\forall\rho\in\mathcal{D}_{c}. \tag{4.3}\] Proof.: Appendix 7.3. Next, using the tangent space \(\mathcal{H}_{c,0}\), we define a parametrization of the manifold \(\mathcal{D}_{c}\) in terms of the mapping \[\Gamma\colon\mathcal{H}_{c,0}\to\mathcal{D}_{c},\qquad\Gamma(X):=\frac{\exp_{ \mathrm{m}}(X)}{\operatorname{tr}\exp_{\mathrm{m}}(X)}=\exp_{\mathrm{m}}\big{(} X-\psi(X)I\big{)},\qquad\qquad\text{\bf($\Gamma$-map)}\] (4.4a) where \[\psi(X):=\log\big{(}\operatorname{tr}\exp_{\mathrm{m}}(X)\big{)}. \tag{4.4b}\] The following lemma and proposition show that the domain of \(\Gamma\) extends to \(\mathbb{R}^{c\times c}\). **Lemma 4.3** (**extension of \(\Gamma\)**)**.: _The extension to \(\mathbb{C}^{c\times c}\) of the mapping \(\Gamma\) defined by (4.4) is well-defined and given by_ \[\Gamma\colon\mathbb{C}^{c\times c}\to\mathcal{D}_{c},\qquad\Gamma(Z)=\Gamma( \Pi_{c,0}[Z]). \tag{4.5}\] Proof.: Appendix 7.3. **Proposition 4.4** (**inverse of \(\Gamma\)**)**.: _The map \(\Gamma\) defined by (4.4) is bijective with inverse_ \[\Gamma^{-1}\colon\mathcal{D}_{c}\to\mathcal{H}_{c,0},\qquad\Gamma^{-1}(\rho)= \Pi_{c,0}[\log_{\mathrm{m}}\rho]. \tag{4.6}\] Proof.: Appendix 7.3. The following lemma provides the diffentials of the mappings \(\Gamma\) and \(\Gamma^{-1}\). **Lemma 4.5** (**differentials \(d\Gamma\), \(d\Gamma^{-1}\))**.: _Let \(H,X\in\mathcal{H}_{c,0}\) with \(\Gamma(H)=\rho\) and \(Y\in T\mathcal{H}_{c,0}\cong\mathcal{H}_{c,0}\). Then_ \[d\Gamma(H)[Y] =\mathbb{T}_{\rho}^{-1}\big{[}Y-\langle\rho,Y\rangle I\big{]}, \qquad\rho=\Gamma(H), \tag{4.7a}\] \[d\Gamma^{-1}(\rho)[X] =\Pi_{c,0}\circ\mathbb{T}_{\rho}[X]. \tag{4.7b}\] Proof.: Appendix 7.3. We finally compute a closed-form expression of the e-geodesic, i.e. the geodesic resp. exponential map induced by the e-connection on the manifold \((\mathcal{D}_{c},g)\). **Proposition 4.6** (**e-geodesics**).: _The e-geodesic emanating at \(\rho\in\mathcal{D}_{c}\) in the direction \(X\in\mathcal{H}_{c,0}\) and the corresponding exponential map are given by_ \[\gamma_{\rho,X}^{(e)}(t) :=\mathrm{Exp}_{\rho}^{(e)}(tX),\quad t\geq 0 (\textbf{e-geodesic}) \tag{4.8a}\] \[\mathrm{Exp}_{\rho}^{(e)}(X) :=\Gamma\big{(}\Gamma^{-1}(\rho)+d\Gamma^{-1}(\rho)[X]\big{)} (\textbf{exponential map})\] (4.8b) \[=\Gamma\big{(}\Gamma^{-1}(\rho)+\Pi_{c,0}\circ\mathbb{T}_{\rho}[ X]\big{)}. \tag{4.8c}\] Proof.: Appendix 7.3. 
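The parametrization map \(\Gamma\) of (4.4), its inverse (4.6) and the extension property (4.5) translate directly into code. A minimal sketch (Python with numpy and scipy; the function names are ours):

```python
import numpy as np
from scipy.linalg import expm, logm

def Pi_c0(X):
    """Orthogonal projection onto H_{c,0}, eq. (2.18)."""
    c = X.shape[0]
    return X - (np.trace(X) / c) * np.eye(c)

def Gamma(X):
    """Gamma(X) = exp_m(X) / tr exp_m(X), eq. (4.4a)."""
    E = expm(X)
    return E / np.trace(E).real

def Gamma_inv(rho):
    """Gamma^{-1}(rho) = Pi_{c,0}[log_m rho], eq. (4.6)."""
    return Pi_c0(logm(rho))

rng = np.random.default_rng(3)
c = 4
B = rng.normal(size=(c, c)) + 1j * rng.normal(size=(c, c))
H = (B + B.conj().T) / 2                     # Hermitian matrix
X = Pi_c0(H)                                 # traceless Hermitian, element of H_{c,0}

rho = Gamma(X)
print(np.isclose(np.trace(rho).real, 1.0),
      np.all(np.linalg.eigvalsh(rho) > 0))   # rho lies in D_c

# bijectivity (Proposition 4.4) and the extension property (4.5)
print(np.allclose(Gamma_inv(rho), X, atol=1e-8))
print(np.allclose(Gamma(H), Gamma(Pi_c0(H)), atol=1e-8))
```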
**Corollary 4.7** (**inverse exponential map**).: _The inverse of the exponential mapping (4.8) is given by_ \[\big{(}\,\mathrm{Exp}_{\rho}^{(e)}\,\big{)}^{-1}\colon\mathcal{D}_{c}\to\mathcal{H}_{c,0},\qquad\big{(}\,\mathrm{Exp}_{\rho}^{(e)}\,\big{)}^{-1}(\mu)=d\Gamma\big{(}\Gamma^{-1}(\rho)\big{)}\big{[}\Gamma^{-1}(\mu)-\Gamma^{-1}(\rho)\big{]}. \tag{4.9}\]

Proof.: Appendix 7.3.

Analogous to (3.3), we define the mapping \(\exp_{\rho}\), where both the subscript and the argument disambiguate the meaning of '\(\exp\)'.

**Lemma 4.8** (**exp-map**).: _The mapping defined using (4.8b) and (4.2) by_ \[\exp_{\rho}\colon\mathcal{H}_{c,0}\to\mathcal{D}_{c},\qquad\exp_{\rho}(X):=\mathrm{Exp}_{\rho}^{(e)}\circ\mathfrak{R}_{\rho}[X],\qquad\rho\in\mathcal{D}_{c}\qquad\qquad\textbf{(exp-map)} \tag{4.10a}\] _has the explicit form_ \[\exp_{\rho}(X)=\Gamma\big{(}\Gamma^{-1}(\rho)+X\big{)}. \tag{4.10b}\]

Proof.: Appendix 7.3.

The following lemma provides the explicit form of the differential of the mapping (4.10b), which resembles the corresponding formula (3.5a) of the assignment flow.

**Lemma 4.9** (**differential \(d\,\exp_{\rho}\)**).: _The differential of the mapping (4.10) reads, with \(\rho\in\mathcal{D}_{c}\), \(X\in\mathcal{H}_{c,0}\) and \(Y\in T\mathcal{H}_{c,0}\cong\mathcal{H}_{c,0}\),_ \[d\,\exp_{\rho}(X)[Y]=\mathfrak{R}_{\exp_{\rho}(X)}[Y]. \tag{4.11}\]

Proof.: Appendix 7.3.

**Remark 4.10** (**comparing \(\exp\)-maps - I**).: Since (4.11) resembles (3.5a), one may wonder about the connection of (4.10b) and (3.3). In view of (4.4a), we define \[\gamma\colon T_{c,0}\to\mathcal{S}_{c},\qquad\gamma(v):=\frac{e^{v}}{\langle\mathbb{1},e^{v}\rangle}=\exp_{\mathbb{1}_{\mathcal{S}_{c}}}(v) \tag{4.12}\] and compute with the expression for its inverse (cf. [13]) \[\gamma^{-1}(p)=\pi_{c,0}\log\frac{p}{\mathbb{1}_{\mathcal{S}_{c}}}=\pi_{c,0}(\log p-\log\mathbb{1}_{\mathcal{S}_{c}})=\pi_{c,0}\log p \tag{4.13a}\] \[\overset{(2.7)}{=}\log p-\langle\mathbb{1}_{\mathcal{S}_{c}},\log p\rangle\mathbb{1}_{c}, \tag{4.13b}\] which resembles (4.6). Moreover, in view of (4.10b), the analogous expression using \(\gamma\) instead of \(\Gamma\) reads \[\gamma\big{(}\gamma^{-1}(p)+v\big{)}=\frac{e^{\pi_{c,0}\log p+v}}{\langle\mathbb{1},e^{\pi_{c,0}\log p+v}\rangle}=\frac{e^{-\langle\mathbb{1}_{\mathcal{S}_{c}},\log p\rangle}\,p\cdot e^{v}}{e^{-\langle\mathbb{1}_{\mathcal{S}_{c}},\log p\rangle}\langle p,e^{v}\rangle}=\frac{p\cdot e^{v}}{\langle p,e^{v}\rangle} \tag{4.14a}\] \[=\exp_{p}(v). \tag{4.14b}\]

**Remark 4.11** (**comparing \(\exp\)-maps - II**).: Using the above definitions and relations, we check equation (2.31a), \(\exp_{\rho}^{(\log)}(Y)=\exp_{\rho}^{(e)}(X)\), where the relation (2.31b) between \(Y\) and \(X\) can now be written in the form \[Y\overset{(2.31\text{b})}{=}X-\psi\big{(}\log_{\mathrm{m}}(\rho)+\mathbb{T}_{\rho}[X]\big{)}\rho.
\tag{4.15}\] Direct computation yields \[\exp_{\rho}^{(\log)}(Y) \stackrel{{\eqref{eq:2.32}}}{{=}}\exp_{\mathrm{m}} (\log_{\mathrm{m}}(\rho)+\mathbb{T}_{\rho}[Y]) \tag{4.16a}\] \[\stackrel{{\eqref{eq:2.31b}}}{{=}}\exp_{\mathrm{m}} \Big{(}\log_{\mathrm{m}}(\rho)+\mathbb{T}_{\rho}[X]-\psi\big{(}\log_{\mathrm{ m}}(\rho)+\mathbb{T}_{\rho}[X]\big{)}\overbrace{\mathbb{T}_{\rho}\circ \underbrace{\mathbb{T}_{\rho}^{-1}[I_{c}]}_{=\rho}}^{=I_{c}}\Big{)}\] (4.16b) \[\stackrel{{\eqref{eq:2.31b}}}{{=}}\Gamma\big{(} \mathbb{I}_{c,0}[\log_{\mathrm{m}}(\rho)]+\Pi_{c,0}\circ\mathbb{T}_{\rho}[X] \big{)}=\Gamma\big{(}\Gamma^{-1}(\rho)+\Pi_{c,0}\circ\mathbb{T}_{\rho}[X] \big{)}\] (4.16c) \[=\exp_{\rho}^{(e)}(X). \tag{4.16d}\] ### Single-Vertex Density Matrix Assignment Flow We generalize the single vertex assignment flow equation (3.4) to the manifold \((\mathcal{D}_{c},g_{\rho})\) given by (2.13) with the BKM metric (2.19). Defining in view of (3.2) the _likelihood matrix_ \[L_{\rho}\colon\mathcal{H}_{c}\to\mathcal{D}_{c},\qquad L_{\rho}(D):=\exp_{ \rho}(-\Pi_{c,0}[D]),\qquad\rho\in\mathcal{D}_{c}, \tag{4.17}\] the corresponding _single vertex quantum state assignment flow (SQSAF)_ equation reads \[\dot{\rho} =\mathfrak{R}_{\rho}[L_{\rho}(D)]\] ( **SQSAF** ) ( 4.18a) \[\stackrel{{\eqref{eq:2.31b}}}{{=}}\mathbb{T}_{\rho}^{ -1}[L_{\rho}(D)]-\langle\rho,L_{\rho}(D)\rangle\rho,\qquad\rho(0)=\mathbb{1}_{ \mathcal{D}_{c}}=\mathrm{Diag}(\mathbb{1}_{\mathcal{S}_{c}}). \tag{4.18b}\] Proposition 4.13 below specifies its properties after a preparatory Lemma. **Lemma 4.12**.: _Assume_ \[D=Q\Lambda_{D}Q^{\top}\in\mathcal{H}_{c}\qquad\text{and}\qquad\rho=Q\Lambda_{ \rho}Q^{\top}\in\mathcal{D}_{c} \tag{4.19}\] _can be simultaneously diagonalized with \(Q\in\mathrm{O}(c)\), \(\Lambda_{D}=\mathrm{Diag}(\lambda_{D})\), \(\Lambda_{\rho}=\mathrm{Diag}(\lambda_{\rho})\) and \(\lambda_{\rho}\in\mathcal{S}_{c}\) since \(\mathrm{tr}\,\rho=1\). Then_ \[L_{\rho}(D)=Q\,\mathrm{Diag}\,\big{(}\exp_{\lambda_{\rho}}(-\lambda_{D})\big{)} Q^{\top}. \tag{4.20}\] Proof.: Appendix 7.3. **Proposition 4.13** (**SQSAF limit)**.: _Let \(D=Q\Lambda_{D}Q^{\top}\) be the spectral decomposition of \(D\) with eigenvalues \(\lambda_{1}\geq\cdots\geq\lambda_{c}\) and orthonormal eigenvectors \(Q=(q_{1},\ldots,q_{c})\). Assume the minimal eigenvalue \(\lambda_{c}\) is unique. Then the solution \(\rho(t)\) to (4.18) satisfies_ \[\lim_{t\to\infty}\rho(t)=\Pi_{q_{c}}:=q_{c}q_{c}^{\top}. \tag{4.21}\] Proof.: Appendix 7.3. ### Quantum State Assignment Flow This section describes our main result, the definition of a novel flow of coupled density matrices in terms of a parametrized interaction of single vertex flows of the form (4.18) on a given graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\omega)\). We assume the weight function \(\omega\colon\mathcal{E}\to\mathbb{R}_{+}\) to be nonnegative with \(\omega_{ij}=0\) if \(ij\not\in\mathcal{E}\) and \[\sum_{k\in\mathcal{N}_{i}}\omega_{ik}=1, \tag{4.22}\] where we adopt the notation (3.13) for neighborhoods \(\mathcal{N}_{i},\ i\in\mathcal{V}\). Analogous to (3.9), we define the product manifold \[\rho\in\mathcal{Q}_{c}:=\underbrace{\mathcal{D}_{c}\times\cdots\times\mathcal{ D}_{c}}_{[\mathcal{V}]\text{ factors}} \tag{4.23}\] with \(\mathcal{D}_{c}\) given by (2.13). The corresponding factors of \(\rho\) are denoted by \[\rho=(\rho_{i})_{i\in[c]},\quad\rho_{i}\in\mathcal{D}_{c},\quad i\in\mathcal{V}. 
\tag{4.24}\] \(\mathcal{Q}_{c}\) becomes a Riemannian manifold when equipped with the metric \[g_{\rho}(X,Y):=\sum_{i\in\mathcal{V}}g_{\rho_{i}}(X_{i},Y_{i}),\qquad X,Y\in T \mathcal{Q}_{c}:=\mathcal{H}_{c,0}\times\cdots\times\mathcal{H}_{c,0}, \tag{4.25}\] with \(g_{\rho_{i}}\) given by (2.19) for each \(i\in\mathcal{V}\). We set \[\mathbb{1}_{\mathcal{Q}_{c}}:=(\mathbb{1}_{\mathcal{D}_{c}})_{i\in\mathcal{V} }\in\mathcal{Q}_{c}, \tag{4.26}\] with \(\mathbb{1}_{\mathcal{D}_{c}}\) given by (4.18b). Our next step is to define a _similarity mapping_ analogous to (3.12), \[S\colon\mathcal{V}\times\mathcal{Q}_{c},\qquad S_{i}(\rho):=\operatorname{Exp }_{\rho_{i}}^{(e)}\Big{(}\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\big{(} \operatorname{Exp}_{\rho_{i}}^{(e)}\big{)}^{-1}\big{(}L_{\rho_{k}}(D_{k}) \big{)}\Big{)}, \tag{4.27}\] based on the mappings (4.8b) and (4.17). Thanks to using the exponential map of the e-connection, the matrix \(S_{i}(\rho)\) can be rewritten and computed in a simpler, more explicit form. **Lemma 4.14** (**similarity map**).: _Equation (4.27) is equivalent to_ \[S_{i}(\rho)=\Gamma\Big{(}\sum_{k\in\mathcal{N}_{i}}\omega_{ik}(\log_{\rm m} \rho_{k}-D_{k})\Big{)}. \tag{4.28}\] Proof.: Appendix 7.3. Expression (4.27), which defines the similarity map, looks like a single iterative step for computing the Riemannian center of mass of the likelihood matrices \(\{L_{\rho_{k}}(D_{k})\colon k\in\mathcal{N}_{i}\}\) if(!) the exponential map of the Riemannian (Levi Cvita) connection were used. Instead, when using the exponential map \(\operatorname{Exp}^{(e)}\), \(S_{i}(\rho)\) may be interpreted as carrying out a single iterative step for the corresponding _geometric mean_ on the manifold \(\mathcal{D}_{c}\). A natural idea therefore is to define the similarity map to be this geometric mean, rather than just by a single iterative step. Surprisingly, analogous to the similarity map (3.12) for categorial distributions (cf. [10]), both definitions are _identical_, as shown next. **Proposition 4.15** (**geometric mean property**).: _Assume that \(\overline{\rho}\in\mathcal{D}_{c}\) solves the equation_ \[0=\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\big{(}\operatorname{Exp}_{\overline{ \rho}}^{(e)}\big{)}^{-1}\big{(}L_{\rho_{k}}(D_{k})\big{)} \tag{4.29}\] _which corresponds to the optimality condition for Riemannian centers of mass [16, Lemma 6.9.4], except for using a different exponential map. Then_ \[\overline{\rho}=S_{i}(\rho) \tag{4.30}\] _with the right-hand side given by (4.27)._ Proof.: Appendix 7.3. We are now in the position to define the _quantum state assignment flow_ along the lines of the original assignment flow (3.11), \[\dot{\rho}=\mathfrak{R}_{\rho}[S(\rho)],\qquad\rho(0)=\mathbb{1}_{\mathcal{Q}_ {c}},\qquad\qquad\text{\bf(QSAF)} \tag{4.31}\] where both the replicator map \(\mathfrak{R}_{\rho}\) and the similarity map \(S(\cdot)\) apply factorwise, \[S(\rho)_{i} =S_{i}(\rho), \tag{4.32a}\] \[\mathfrak{R}_{\rho}[S(\rho)]_{i} =\mathfrak{R}_{\rho_{i}}[S_{i}(\rho)],\quad i\in\mathcal{V} \tag{4.32b}\] with the mappings \(S_{i}\) given by (4.28) and \(\mathfrak{R}_{\rho_{i}}\) by (4.2). ### Reparametrization, Riemannian Gradient Flow The reparametrization of the assignment flow (3.15) for categorial distributions described in Section 3.3 has proven to be useful for characterizing and analyzing assignment flows. Under suitable conditions on the parameter matrix \(\Omega\), the flow performs a Riemannian descent flow with respect to a non-convex potential [11, Prop. 
3.9] and has convenient stability and convergence properties [12]. In this section, we derive a similar reparametrization of the quantum state assignment flow (4.31). **Proposition 4.16** (**reparametrization**).: _Define the linear mapping_ \[\Omega\colon\mathcal{Q}_{c}\to\mathcal{Q}_{c},\qquad\Omega[\rho]_{i}:=\sum_{k \in\mathcal{N}_{i}}\omega_{ik}\rho_{k}. \tag{4.33}\] _Then the density matrix assignment flow equation (4.31) is equivalent to the system_ \[\dot{\rho} =\mathfrak{R}_{\rho}[\mu], \rho(0) =\mathbb{1}_{\mathcal{Q}_{c}}, \tag{4.34a}\] \[\dot{\mu} =\mathfrak{R}_{\mu}\big{[}\Omega[\mu]\big{]}, \mu(0) =S(\mathbb{1}_{\mathcal{Q}_{c}}). \tag{4.34b}\] Proof.: Appendix 7.3. For the following, we adopt the _symmetry assumption_ \[\omega_{ij} =\omega_{ji},\qquad\forall i,j\in\mathcal{V} \tag{4.35a}\] \[j\in\mathcal{N}_{i}\quad\Leftrightarrow\quad i\in\mathcal{N}_{ j},\qquad i,j\in\mathcal{V}. \tag{4.35b}\] As a consequence, the mapping (4.33) is self-adjoint, \[\langle\mu,\Omega[\rho]\rangle =\sum_{i\in\mathcal{V}}\langle\mu_{i},\Omega[\rho]_{i}\rangle= \sum_{i\in\mathcal{V}}\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\langle\mu_{i},\rho _{k}\rangle=\sum_{i\in\mathcal{V}}\sum_{k\in\mathcal{N}_{i}}\omega_{ki} \langle\mu_{i},\rho_{k}\rangle \tag{4.36a}\] \[=\sum_{k\in\mathcal{V}}\sum_{i\in\mathcal{N}_{k}}\omega_{ki} \langle\mu_{i},\rho_{k}\rangle=\sum_{k\in\mathcal{N}_{i}}\langle\Omega[\mu]_{ k},\rho_{k}\rangle=\langle\Omega[\mu],\rho\rangle. \tag{4.36b}\] **Proposition 4.17** (**Riemannian gradient QSAF flow**).: _Suppose the mapping \(\Omega[\cdot]\) given by (4.33) is self-adjoint with respect to the canonical matrix inner product. Then the solution \(\mu(t)\) to (4.34b) also solves_ \[\dot{\mu}=-\operatorname{grad}_{\mu}J(\mu)\qquad\text{with}\qquad\big{(} \operatorname{grad}_{\mu}J(\mu)\big{)}_{i}=\operatorname{grad}_{\mu_{i}}J(\mu)\] (4.37a) _with respect to the potential_ \[J(\mu):=-\frac{1}{2}\langle\mu,\Omega[\mu]\rangle. \tag{4.37b}\] Proof.: Appendix 7.3. We conclude this section by rewriting the potential in a more explicit, informative form. **Proposition 4.18** (**nonconvex potential**).: _Define_ \[L_{\mathcal{G}}\colon\mathcal{Q}_{c}\to\mathcal{Q}_{c},\qquad L_{\mathcal{G}}: =\operatorname{id}-\Omega \tag{4.38}\] _with \(\Omega\) given by (4.33). Then the potential (4.37b) can be rewritten as_ \[J(\mu) =\frac{1}{2}\big{(}\langle\mu,L_{\mathcal{G}}[\mu]\rangle-\|\mu\|^ {2}\big{)} \tag{4.39a}\] \[=\frac{1}{4}\sum_{i\in\mathcal{V}}\sum_{j\in\mathcal{N}_{i}} \omega_{ij}\|\mu_{i}-\mu_{j}\|^{2}-\frac{1}{2}\|\mu\|^{2}. \tag{4.39b}\] Proof.: Appendix 7.3. ### Recovering the Assignment Flow for Categorial Distributions In the following we show how the assignment flow (3.17) for categorial distributions arises as special case of the quantum state assignment flow, under suitable conditions as detailed below. **Definition 4.19** (commutative submanifold).: Let \[\Pi=\{\pi_{i}\colon i\in[l]\},\qquad l\leq c \tag{4.40}\] denote a set of operators which orthogonally project onto disjoint subspaces of \(\mathbb{C}^{c}\), \[\pi_{i}^{2} =\pi_{i},\quad\forall i\in[l], \tag{4.41a}\] \[\pi_{i}\pi_{j} =0,\quad\forall i,j\in[l],\;i\neq j, \tag{4.41b}\] and which are complete in the sense that \[\sum_{i\in[l]}\pi_{i}=I_{c}. 
\tag{4.42}\] Given a family \(\Pi\) of operators, we define by \[\mathcal{D}_{\Pi}:=\bigg{\{}\sum_{i\in[l]}\frac{p_{i}}{\operatorname{tr}\pi_ {i}}\pi_{i}\colon p\in\mathcal{S}_{l}\bigg{\}}\subset\mathcal{D}_{c} \tag{4.43}\] the _submanifold of commuting Hermitian matrices_ which can be diagonalized simultaneously. A typical example for a family (4.40) is \[\Pi_{\mathcal{U}}=\{\pi_{i}=u_{i}u_{i}^{*}\colon i\in[c]\}, \tag{4.44}\] where \(\mathcal{U}=\{u_{1},\ldots,u_{c}\}\) is an orthonormal basis of \(\mathbb{C}^{c}\). The following lemma elaborates the bijection \(D_{\Pi}\leftrightarrow\mathcal{S}_{l}\). **Lemma 4.20** (properties of \(\mathcal{D}_{\Pi}\)).: _Let \(\mathcal{D}_{\Pi}\subset\mathcal{D}_{c}\) be given by (4.43) and denote the corresponding inclusion map by \(\iota\colon\mathcal{D}_{\Pi}\hookrightarrow\mathcal{D}_{c}\). Then_ 1. _the submanifold_ \((\mathcal{D}_{\Pi},\iota^{*}g_{\textsc{bkm}})\) _with the induced BKM metric is isometric to_ \((\mathcal{S}_{l},g_{\textsc{fr}})\)_;_ 2. _if_ \(\mu\in\mathcal{D}_{\Pi}\)_, then the tangent subspace_ \(T_{\mu}\mathcal{D}_{\Pi}\) _is contained in the subspace_ \(T_{\mu}^{c}\mathcal{D}_{c}\subseteq T_{\mu}\mathcal{D}_{c}\) _defined by (_2.23b_)._ 3. _Let_ \(\mathcal{U}=\{u_{1},\ldots,u_{c}\}\) _denote an orthonormal basis of_ \(\mathbb{C}^{c}\) _such that for every_ \(\pi_{i}\in\Pi,\;i\in[l]\)_, there are_ \(u_{i_{1}},\ldots,u_{i_{k}}\in\mathcal{U}\) _that form a basis of_ \(\;\operatorname{range}(\pi_{i})\)_. Then there is an inclusion of commutative subsets_ \(\mathcal{D}_{\Pi}\hookrightarrow\mathcal{D}_{\Pi_{\mathcal{U}}}\) _that corresponds to an inclusion_ \(\mathcal{S}_{l}\hookrightarrow\mathcal{S}_{c}\)_._ Proof.: Appendix 7.3. Now we establish that a restriction of the QSAF equation (4.34b) to the commutative product submanifold can be expressed in terms of the AF equation (3.17). Analogous to the definition (4.23) of the product manifold \(\mathcal{Q}_{c}\), we set \[\mathcal{D}_{\Pi,c}=\underbrace{\mathcal{D}_{\Pi}\times\cdots\times\mathcal{ D}_{\Pi}}_{|\mathcal{V}|\text{ factors}}. \tag{4.45}\] If \(\Pi\) is given by an orthonormal basis as in (4.44), we define the unitary matrices \[U =(u_{1},\ldots,u_{c})\in\operatorname{Un}(c), \tag{4.46a}\] \[U_{c} =\operatorname*{\underline{Diag}}(U,\ldots,U) \tag{4.46b}\] **Proposition 4.21** (invariance of \(\mathcal{D}_{\Pi,c}\)).: _Let \(\Pi\) and \(D_{\Pi}\) be given according to Definition 4.19. Then the following holds._ 1. _If_ \(\mu\in\mathcal{D}_{\Pi,c}\subset\mathcal{Q}_{c}\)_, then_ \(\mathfrak{R}\big{[}\Omega[\mu]\big{]}\in T_{\mu}\mathcal{D}_{\Pi,c}\subseteq T _{\mu}\mathcal{Q}_{c}\)_._ 2. _If_ \(\Pi_{\mathcal{U}}\) _has the form (_4.44_), then_ \[\mathfrak{R}\big{[}\Omega[\mu]\big{]}=U_{c}\operatorname{Diag}\big{[}R_{S}[ \Omega S]\big{]}U_{c}^{*},\] (4.47) _where_ \(S\in\mathcal{W}_{c}\) _is determined by_ \(\mu_{i}=U\operatorname{Diag}(S_{i})U^{*},\;i\in\mathcal{V}\)_._ _In particular, the submanifold \(\mathcal{D}_{\Pi,c}\) is preserved by the quantum state assignment flow._ Proof.: Appendix 7.3. It remains to check that under suitable conditions on the data matrices \(D_{i},\ i\in\mathcal{V}\) which define the initial point of (4.34b) by the similarity mapping (Lemma 4.14), the quantum state assignment flow becomes the ordinary assignment flow. 
**Corollary 4.22** (**recovery of the AF by restriction**).: _In the situation of Proposition 4.21, assume that all data matrices \(D_{i},\ i\in\mathcal{V}\) become diagonal in the same basis \(\mathcal{U}\), i.e._ \[D_{i}=U\operatorname{Diag}(\lambda_{i})U^{*},\quad\lambda_{i}\in\mathbb{R}^{c},\quad i\in\mathcal{V}. \tag{4.48}\] _Then the solution of the QSAF_ \[\dot{\mu}=\mathfrak{R}_{\mu}\big{[}\Omega[\mu]\big{]},\quad\mu(0)=S(\mathbb{1}_{\mathcal{Q}_{c}}) \tag{4.49}\] _is given by_ \[\mu_{i}(t)=U\operatorname{Diag}\big{(}S_{i}(t)\big{)}U^{*},\quad i\in\mathcal{V}, \tag{4.50}\] _where \(S(t)\) satisfies the ordinary AF equation_ \[\dot{S}=R_{S}[\Omega S],\quad S(0)=S(\mathbb{1}_{\mathcal{W}_{c}}), \tag{4.51}\] _and the initial point is determined by the similarity map (3.12) evaluated at the barycenter \(W=\mathbb{1}_{\mathcal{W}_{c}}\) with the vectors \(\lambda_{i},\,i\in\mathcal{V}\) as data points._ Proof.: Appendix 7.3.

## 5. Experiments and Discussion

In this section, we report a few academic experiments in order to illustrate the novel approach. In comparison to the original formulation, it enables a continuous assignment without the need to specify prototypical labels explicitly beforehand. The experiments highlight the following properties of the novel approach, which extend the expressivity of the original assignment flow approach:
* geometric _adaptive_ feature vector averaging even when _uniform_ weights are used (Section 5.2);
* structure-preserving feature _patch_ smoothing _without_ accessing data at individual _pixels_ (Section 5.3);
* seamless incorporation of feature _encoding_ using finite _frames_ (Section 5.3).

In Section 6, we indicate the potential for representing spatial feature _context_ via entanglement. Working out more thoroughly the potential for various applications is beyond the scope of this paper, however.

### Geometric Integration

In this section, we focus on the geometric integration of the reparametrized flow described by Equation (4.34b). For a reasonable choice of a single stepsize parameter, the scheme is accurate, stable and amenable to highly parallel implementations. We utilize that the e-geodesic from Proposition 4.6 constitutes a retraction [1, Def. 4.1.1 and Prop. 5.4.1] onto the state manifold \(\mathcal{Q}_{c}\). Consequently, the iterative step for updating \(\mu_{t}\in\mathcal{Q}_{c},\ t\in\mathbb{N}_{0}\) with stepsize \(\epsilon>0\) is given by \[(\mu_{t+1})_{i}=\Big{(}\operatorname{Exp}_{\mu_{t}}^{(e)}\big{(}\epsilon\,\mathfrak{R}_{\mu_{t}}\big{[}\Omega[\mu_{t}]\big{]}\big{)}\Big{)}_{i}=\operatorname{Exp}_{(\mu_{t})_{i}}^{(e)}\circ\mathfrak{R}_{(\mu_{t})_{i}}\big{[}\epsilon(\Omega[\mu_{t}])_{i}\big{]}. \tag{5.1a}\] Parametrizing the states as \((\mu_{t})_{i}=\Gamma\big{(}(A_{t})_{i}\big{)}\) with \(A_{t}\in\mathcal{T}_{c,0}\), we conclude in view of (4.5) that \[A_{t+1}=A_{t}+\epsilon\Pi_{c,0}\Omega[\Gamma(A_{t})]. \tag{5.4}\]

**Remark 5.1**.: We note that the numerical evaluation of the replicator operator (4.2) is not required. This makes the geometric integration scheme, summarized by Algorithm 1, quite efficient.

```
Initialization: determine an initial \(A_{0}\in\mathcal{T}_{c,0}\) and compute \(\mu_{0}\) by \((\mu_{0})_{i}=\Gamma((A_{0})_{i})\in\mathcal{Q}_{c},\ \forall i\in\mathcal{V}\)
while not converged do
    \((A_{t+1})_{i}=(A_{t})_{i}+\epsilon\Pi_{c,0}(\Omega[\mu_{t}])_{i}\quad\forall i\in\mathcal{V}\)
    \((\mu_{t+1})_{i}=\Gamma\big{(}(A_{t+1})_{i}\big{)},\quad\forall i\in\mathcal{V}\).
```
**Algorithm 1** Geometric Integration Scheme

We list a few further implementation details.
* A reasonable convergence criterion, which measures how close the states are to a rank-one matrix, is \(|\operatorname{tr}(\mu_{t})_{i}-\operatorname{tr}(\mu_{t}^{2})_{i}|\leq\varepsilon,\ \forall i\in\mathcal{V}\).
* A reasonable range for the stepsize parameter is \(\epsilon\leq 0.1\).
* In order to remove spurious non-Hermitian numerical rounding errors, we replace each matrix \((\Omega[\mu_{t}])_{i}\) by \(\frac{1}{2}\big{(}(\Omega[\mu_{t}])_{i}+(\Omega[\mu_{t}])_{i}^{*}\big{)}\).
* The constraint \(\mathrm{tr}\,\rho=1\) of (2.13) can be replaced by \(\mathrm{tr}\,\rho=\tau\) with any constant \(\tau>1\). This ensures, for larger matrix dimensions \(c\), that the entries of \(\rho\) vary in a reasonable numerical range, which stabilizes the iterative updates.

Up to moderate matrix dimensions, say \(c\leq 100\), the matrix exponential in (4.4a) can be computed using any of the basic established algorithms [10, Ch. 10] or available solvers. In addition, depending on the size of the neighborhoods \(\mathcal{N}_{i}\) induced by the weighted adjacency relation of the underlying graph in (4.22), Algorithm 1 can be implemented in a fine-grained parallel fashion.
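The following minimal sketch illustrates how Algorithm 1 can be realized numerically. It is an assumption-laden toy implementation (numpy/scipy, a three-vertex path graph, uniform weights, random Hermitian data matrices, and the initialization suggested by the similarity map), not the implementation used for the experiments reported below.

```python
import numpy as np
from scipy.linalg import expm

def Gamma(A):
    """Gamma(A) = exp_m(A) / tr exp_m(A), cf. (4.4)."""
    E = expm(A)
    return E / np.trace(E).real

def Pi0(X):
    """Orthogonal projection onto trace-free Hermitian matrices."""
    c = X.shape[0]
    return X - (np.trace(X) / c) * np.eye(c)

c, eps = 3, 0.1
rng = np.random.default_rng(1)

def rand_herm(c):
    M = rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c))
    return 0.5 * (M + M.conj().T)

# Toy data: one Hermitian matrix D_i per vertex of a 3-vertex path graph.
D = [rand_herm(c) for _ in range(3)]
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
omega = {i: 1.0 / len(Ni) for i, Ni in neighbors.items()}   # uniform weights

def Omega(mu):
    return [omega[i] * sum(mu[k] for k in neighbors[i]) for i in range(3)]

# Initialization in the spirit of the similarity map (4.28): A_0 = -Pi0[Omega[D]].
A = [Pi0(-omega[i] * sum(D[k] for k in neighbors[i])) for i in range(3)]
mu = [Gamma(a) for a in A]

for t in range(200):
    OM = [0.5 * (X + X.conj().T) for X in Omega(mu)]        # remove rounding errors
    A = [A[i] + eps * Pi0(OM[i]) for i in range(3)]          # update (5.4)
    mu = [Gamma(a) for a in A]
    # stop when all states are (numerically) rank-one, i.e. pure
    if all(abs(np.trace(m).real - np.trace(m @ m).real) < 1e-6 for m in mu):
        break
```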
Figure 5.1. **(a)** A range of RGB unit color vectors in the positive orthant. **(b)** An image with data according to (a). **(c)** A noisy version of (b) constituting the initial points \(\rho_{i}(0),\ i\in\mathcal{V}\) of the QSAF. **(d)** The labels (pure states) generated by integrating the quantum state assignment flow using uniform weights. **(e)** The vectors depicted by (a) are replaced by the unit vectors corresponding to the vertices of the icosahedron, centered at \(0\). **(f)-(h)** Analogous to (b)-(d), based on (e) instead of (a) and using the same noise level in (g). The colors in (f)-(h) merely visualize the Bloch vectors by RGB vectors that result from translating the sphere of (e) to the center \(\frac{1}{2}(1,1,1)^{\top}\) of the RGB cube and scaling it by \(\frac{1}{2}\). We refer to the text for a discussion.

### Labeling 3D Data on Bloch Spheres

For the purpose of visual illustration, we consider the smoothing of 3D color vectors \(d=(d_{1},d_{2},d_{3})^{\top}\), interpreted as Bloch vectors which parametrize density matrices [1, Section 5.2] \[\rho=\rho(d)=\frac{1}{2}\bigg{(}I+d_{1}\begin{pmatrix}0&1\\ 1&0\end{pmatrix}+d_{2}\begin{pmatrix}0&-i\\ i&0\end{pmatrix}+d_{3}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\bigg{)}\in\mathbb{C}^{2\times 2},\qquad\|d\|\leq 1. \tag{5.5}\] Pure states \(\rho\) correspond to unit vectors \(d,\ \|d\|=1\), whereas vectors \(d,\ \|d\|<1\) parametrize mixed states \(\rho\). Given data \(d_{i}=(d_{i,1},d_{i,2},d_{i,3})^{\top},\ i\in\mathcal{V}\) with \(\|d_{i}\|\leq 1\), we initialized the QSAF at \(\rho_{i}=\rho(d_{i}),\ i\in\mathcal{V}\), and integrated the flow. Each integration step involves geometric state averaging across the graph, causing mixed states \(\rho_{i}(t)=\rho(d_{i}(t)),\ i\in\mathcal{V}\), which eventually converge towards pure states. Integration was stopped at time \(t=T\), when \(\min\{\|d_{i}(T)\|\colon i\in\mathcal{V}\}\geq 0.999\). The resulting vectors \(d_{i}(T)\) are visualized as explained in the caption of Figure 5.1.
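A short sketch of the Bloch-vector encoding (5.5) is given below; it assumes numpy only, and the specific test vectors are illustrative.

```python
import numpy as np

# Pauli matrices used in the Bloch-vector parametrization (5.5).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def rho_of(d):
    """Density matrix rho(d) = (I + d1*s1 + d2*s2 + d3*s3)/2, for ||d|| <= 1."""
    return 0.5 * (np.eye(2, dtype=complex) + sum(di * si for di, si in zip(d, sigma)))

def bloch_of(rho):
    """Recover the Bloch vector via d_i = tr(rho * sigma_i)."""
    return np.array([np.trace(rho @ s).real for s in sigma])

d = np.array([0.6, 0.0, 0.8])                       # unit vector -> pure state
rho = rho_of(d)
assert np.isclose(np.trace(rho).real, 1.0)
assert np.isclose(np.trace(rho @ rho).real, 1.0)    # purity 1 for ||d|| = 1
assert np.allclose(bloch_of(rho), d)

d_mixed = 0.5 * d                                   # ||d|| < 1 -> mixed state
assert np.trace(rho_of(d_mixed) @ rho_of(d_mixed)).real < 1.0
```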
Figure 5.2. **Left pair:** A random collection of patches with oriented image structure. The colored image displays for each patch its orientation using the color code depicted by the rightmost panel. Each patch is represented by a rank-one matrix \(D\) in (4.17), obtained by vectorizing the patch and taking the tensor product. **Center pair:** The final state of the QSAF obtained by geometric integration with uniform weighting \(\omega_{ik}=\frac{1}{|\mathcal{N}_{i}|},\ \forall k\in\mathcal{N}_{i},\ \forall i\in\mathcal{V}\), of the nearest-neighbor states. It represents an image partition but preserves image structure, due to geometric smoothing of patches encoded by non-commutative state spaces.

Figure 5.3. **(a)** A random collection of patches with oriented image structure. **(b)** A collection of patches with the same oriented image structure. **(c)** Pixelwise mean of the patches (a) and (b) at each location. **(d)** The QSAF recovers a close approximation of (b) (color code: see Fig. 5.2) by iteratively smoothing the states \(\rho_{k},\ k\in\mathcal{N}_{i}\) corresponding to (c) through geometric integration.

We point out that the two experiments discussed next are supposed to illustrate the behaviour of the QSAF and the impact of the underlying geometry, rather than a contribution to the literature on the processing of color images. Figure 5.1(c) shows a noisy version of the image (b) used to initialize the quantum state assignment flow (QSAF). Panel (d) shows the labeled image, i.e. the assignment of a pure state (depicted as Bloch vector) to each pixel of the input data (c). Although uniform weights were used and any prior information was absent, the result (d) demonstrates that the QSAF removes the noise and preserves the signal transitions fairly well, both for large-scale local image structure (away from the image center) and for small-scale local image structure (close to the image center). This behaviour is quite unusual in comparison to traditional image denoising methods, which inevitably require _adaptation_ of regularization to the scale of local image structure. In addition, we note that noise removal is 'perfect' for the three extreme points red, green and blue of panel (a), but suboptimal only for the remaining non-extreme points. Panels (f)-(h) show the same results when the data are encoded in a better way, as depicted by (e), using unit vectors not only in the positive orthant but on the whole unit sphere. These data are illustrated by RGB vectors that result from translating the unit sphere (e) to the center \(\frac{1}{2}(1,1,1)^{\top}\) of the RGB color cube \([0,1]^{3}\) and scaling it by \(\frac{1}{2}\). This improved data encoding is clearly visible in panel (g), which displays the _same_ noise level as shown in panel (c). Accordingly, noise removal while preserving signal structure at _all_ local scales is more effectively achieved by the QSAF in (h), in comparison to (d).

### Basic Image Patch Smoothing

Figure 5.2 shows an application of the QSAF to a _random_ spatial arrangement (grid graph) of normalized patches, where each vertex represents a patch, not a pixel. Applying vectorization and taking the tensor product with itself, each patch is represented as a pure state in terms of a rank-one matrix \(D_{i}\) at the corresponding vertex \(i\in\mathcal{V}\); these matrices constitute the input data in the similarity mapping (4.27). Integrating the flow causes the non-commutative interaction of the associated state spaces \(\rho_{i},\ i\in\mathcal{V}\) through geometric averaging, here with uniform weights (4.22), until convergence towards pure states. The resulting patches are then simply given by the corresponding eigenvector, possibly after reversing the arbitrary sign of each eigenvector component, depending on the distance to the input patch.
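The patch encoding and decoding just described can be sketched as follows; this is an illustrative snippet assuming numpy, with arbitrary patch values and sizes.

```python
import numpy as np

def encode_patch(P):
    """Map a normalized patch P to a pure state D = p p^T with p = vec(P)/||P||_F."""
    p = P.reshape(-1)
    p = p / np.linalg.norm(p)
    return np.outer(p, p)

def decode_state(rho, ref_patch):
    """Recover a patch from the leading eigenvector of rho; the arbitrary sign
    is resolved by comparing against a reference patch."""
    w, V = np.linalg.eigh(rho)
    patch = V[:, -1].reshape(ref_patch.shape)        # eigenvector of largest eigenvalue
    if np.linalg.norm(patch + ref_patch) < np.linalg.norm(patch - ref_patch):
        patch = -patch
    return patch

P = np.arange(16, dtype=float).reshape(4, 4)
P = P / np.linalg.norm(P)                            # Frobenius-normalized patch
D = encode_patch(P)
assert np.isclose(np.trace(D), 1.0) and np.linalg.matrix_rank(D) == 1
assert np.allclose(decode_state(D, P), P)
```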
Figure 5.4. **(a)** A real image, partitioned into patches of size \(8\times 8\) and \(4\times 4\) pixels, respectively. Each patch is represented as a pure state with respect to a Fourier frame (see text). Instead of the nearest-neighbor adjacency on a regular grid, each patch is adjacent to its 8 closest patches in the entire collection. Integrating the QSAF and decoding the resulting states (see text) yields the results (b) (\(8\times 8\) patches) and (c) (\(4\times 4\) patches), respectively. Result (b) illustrates the effect of smoothing at the patch level, in the Fourier domain, whereas the smaller spatial scale used to compute (c) represents the input data fairly accurately, after significant data reduction.

The result shown in Figure 5.2 reveals an interesting behaviour: structure-preserving patch smoothing without explicitly accessing individual pixels. In particular, the flow induces a _partition_ of the patches without any prior assumption on the data. Figure 5.3 shows a variant of the scenario of Figure 5.2 in order to demonstrate in another way the ability to separate local image structure by geometric smoothing at the patch level. Figure 5.4 generalizes the set-up in two ways. Firstly, patches were encoded using the harmonic frame given by the two-dimensional discrete Fourier matrix. Secondly, non-uniform weights \(\omega_{ik}=e^{-\tau\|P_{i}-P_{k}\|_{F}^{2}},\ \tau>0\), were used, depending on the distance of adjacent patches \(P_{i},P_{k}\). Specifically, let \(P_{i}\) denote the patch at vertex \(i\in\mathcal{V}\) after removing the global mean and normalizing with the Frobenius norm. Then, applying the FFT to each patch and vectorizing, formally with the discrete two-dimensional Fourier matrix \(F_{2}=F\otimes F\) (Kronecker product) followed by stacking the rows, \(\widehat{p}_{i}=F_{2}\operatorname{vec}(P_{i})\), the input data were defined as \(D_{i}=F_{2}\operatorname{Diag}(-|\widehat{p}_{i}|^{2})F_{2}^{*}\), where the squared magnitude \(|\cdot|^{2}\) was computed componentwise. Integrating the flow yields again pure states, which were interpreted and decoded accordingly: the eigenvector was used as a multiplicative filter of the magnitude of the Fourier-transformed patch (keeping its phase), followed by rescaling the norm and adding back the mean, thereby approximating the original patch in terms of these two parameters. The results shown as panels (b) and (c) of Figure 5.4 illustrate the effect of 'geometric diffusion' at the patch level through integrating the flow, and how the input data are approximated depending on the chosen spatial scale (patch size), subject to significant data reduction.

## 6. Conclusion

We generalized the assignment flow approach for categorial distributions [1] to density matrices on weighted graphs. While the former flows assign to each data point a label selected from a _finite_ set, the latter assign to each data point a generalized 'label' from the _uncountable_ submanifold of pure states. Various further directions of research are indicated by the numerical experiments.
This includes the unusual behavior of feature vector smoothing which parametrize complex-valued non-commutative state spaces (Figure 5.1), the structure-preserving interaction of spatially indexed feature patches without accessing individual pixels (Figures 5.2 and 5.3), the use of frames for signal representation and as observables whose expected values are governed by a quantum state assignment flow (Figure 5.4), and the representation of spatial correlations by entanglement and tensorization (Figure 5.5). Extending to the novel quantum assignment flow approach the representation of the original assignment flow in the broader framework of geometric mechanics, as developed recently by [1], defines another promising research project spurred by established concepts of mathematics and physics. From these viewpoints, this paper adds a novel concrete approach based on information theory to the emerging literature on network design based on concepts from quantum mechanics; cf., e.g. [1] and references therein. Our main motivation is the definition of a novel class of 'neural ODEs' [1] in terms of the dynamical systems which generate a quantum state assignment flow. The layered architecture of a corresponding 'neural network' is implicitly given by geometric integration. The inherent smoothness of the parametrization enables to learn the weight parameters from data. This will be explored in our future work along the various lines of research indicated above. ## 7. Proofs ### Proofs of Section 2 Proof of Proposition 2.2.: We verify (2.9) by direct computation. For any \(p\in\mathcal{S}_{c}\), \[R_{p}\mathbb{1}_{c} =\big{(}\operatorname{Diag}(p)-pp^{\top}\big{)}\mathbb{1}_{c}=p- \langle p,\mathbb{1}_{c}\rangle p=0, \tag{7.1a}\] \[R_{p}\pi_{c,0} =R_{p}(I-\mathbb{1}_{c}\mathbb{1}_{\mathcal{S}_{c}}^{\top})=R_{p},\] (7.1b) \[\pi_{c,0}R_{p} =(I-\mathbb{1}_{c}\mathbb{1}_{\mathcal{S}_{c}}^{\top})R_{p}=R_{p }-\frac{1}{c}\mathbb{1}_{c}(R_{p}\mathbb{1}_{c})^{\top}=R_{p}. \tag{7.1c}\] Next we characterize the geometric role of \(R_{p}\) and show (2.10). Let \(p\in\mathcal{S}_{c}\) be parametrized by the local coordinates \[\overline{p} =\varphi(p):=(p_{1},p_{2},\ldots,p_{c-1})^{\top}\in\mathbb{R}_{++ }^{c-1} \tag{7.2a}\] \[p =\varphi^{-1}(\overline{p})=(\overline{p}_{1},\ldots,\overline{p }_{c-1},1-\langle\mathbb{1}_{c-1},\overline{p}\rangle)^{\top}\in\mathcal{S}_{c}. \tag{7.2b}\] Choosing the canonical basis \(e_{1},\ldots,e_{c}\) on \(\mathcal{S}_{c}\subset\mathbb{R}^{c}\), we obtain a basis of the tangent space \(T_{c,0}\) \[e_{j}-e_{c}=d\varphi^{-1}(e_{j}),\qquad j\in[c-1]. \tag{7.3}\] Using these vectors a columns of the matrix \[B:=(e_{1}-e_{c},\ldots,e_{c-1}-e_{c})=\begin{pmatrix}I_{c-1}\\ -\mathbb{1}_{c-1}^{\top}\end{pmatrix}\in\mathbb{R}^{c\times(c-1)}, \tag{7.4}\] one has for any \(v\in T_{c,0}\) \[v =B\overline{v}=\begin{pmatrix}\overline{v}\\ v_{c}\end{pmatrix}=\begin{pmatrix}\overline{v}\\ -\langle\mathbb{1}_{c-1},\overline{v}\rangle\end{pmatrix}, \overline{v} =(v_{1},\ldots,v_{c-1})^{\top} \tag{7.5a}\] \[\overline{v} =B^{\dagger}v, B^{\dagger} =\begin{pmatrix}I_{c-1}&0\end{pmatrix}\pi_{c,0}, \tag{7.5b}\] where \(B^{\dagger}:=(B^{\top}B)^{-1}B^{\top}\) denotes the Moore-Penrose generalized inverse of \(B\). 
Substituting this parametrization and evaluating the metric (2.4) gives \[g_{p}(u,v) =\langle\overline{u},B^{\top}\operatorname{Diag}(p)^{-1}B \overline{v}\rangle=\Big{\langle}\overline{u},\begin{pmatrix}I_{c-1}&- \mathbb{1}_{c-1}\end{pmatrix}\operatorname{Diag}(p)^{-1}\begin{pmatrix}I_{c-1} \\ -\mathbb{1}_{c-1}^{\top}\end{pmatrix}\overline{v}\Big{\rangle} \tag{7.6a}\] \[=\Big{\langle}\overline{u},\Big{(}\operatorname{Diag}(\overline{ p})^{-1}+\frac{1}{1-\langle\mathbb{1}_{c-1},\overline{p}\rangle}\mathbb{1}_{c-1} \mathbb{1}_{c-1}^{\top}\Big{)}\overline{v}\Big{\rangle}\] (7.6b) \[=:\langle\overline{u},G(\overline{p})\overline{v}\rangle. \tag{7.6c}\] Applying the Sherman-Morrison-Woodbury matrix inversion formula [13, p. 9] \[(A+xy^{\top})^{-1}=A^{-1}-\frac{A^{-1}xy^{\top}A^{-1}}{1+\langle y,A^{-1}x\rangle} \tag{7.7}\] yields \[G(\overline{p})^{-1} =\operatorname{Diag}(\overline{p})-\frac{1}{1-\langle\mathbb{1} _{c-1},\overline{p}\rangle}\frac{\operatorname{Diag}(\overline{p})\mathbb{1}_ {c-1}\mathbb{1}_{c-1}^{\top}\operatorname{Diag}(\overline{p})}{1+\frac{1}{1- \langle\mathbb{1}_{c-1},\overline{p}\rangle}\langle\mathbb{1}_{c-1}, \overline{p}\rangle} \tag{7.8a}\] \[=\operatorname{Diag}(\overline{p})-\operatorname{Diag}(\overline{ p})\mathbb{1}_{c-1}\mathbb{1}_{c-1}^{\top}\operatorname{Diag}(\overline{p})= \operatorname{Diag}(\overline{p})-\overline{p}\,\overline{p}^{\top}\] (7.8b) \[=R_{\overline{p}}. \tag{7.8c}\] Let \(v\in T_{c,0}\). Then, using the equations \[p_{c} \stackrel{{\eqref{eq:2.9}}}{{=}}1-\langle\mathbb{1}_{c-1}, \overline{p}\rangle, \tag{7.9a}\] \[R_{\overline{p}}\mathbb{1}_{c-1} =\overline{p}-\langle\mathbb{1}_{c-1},\overline{p}\rangle \overline{p}=p_{c}\overline{p}, \tag{7.9b}\] we have \[R_{p}v =\begin{pmatrix}R_{\overline{p}}&-p_{c}\overline{p}\\ -p_{c}\overline{p}^{\top}&p_{c}-p_{c}^{2}\end{pmatrix}\begin{pmatrix}\overline{v} \\ v_{c}\end{pmatrix}=\begin{pmatrix}R_{\overline{p}}\overline{v}-v_{c}R_{\overline{p} }\mathbb{1}_{c-1}\\ -\langle R_{\overline{p}}\mathbb{1}_{c-1},\overline{v}\rangle+v_{c}p_{c}( \mathbb{1}_{c-1},\overline{p})\end{pmatrix} \tag{7.10a}\] \[=\begin{pmatrix}R_{\overline{p}}\overline{v}\\ -\langle\mathbb{1}_{c-1},R_{\overline{p}}\overline{v}\rangle\end{pmatrix}-v_{ c}\begin{pmatrix}R_{\overline{p}}\mathbb{1}_{c-1}\\ -\langle\mathbb{1}_{c-1},R_{\overline{p}}\mathbb{1}_{c-1}\rangle\end{pmatrix}\] (7.10b) \[\stackrel{{\eqref{eq:Rp}}}{{=}}BR_{\overline{p}}( \overline{v}-v_{c}\mathbb{1}_{c-1}). \tag{7.10c}\] Now consider any smooth function \(f\colon\mathcal{S}_{c}\to\mathbb{R}\). Then \[\partial_{\overline{p}_{i}}\big{(}f\circ\varphi^{-1}(\overline{p})\big{)}= \sum_{j\in[c]}\partial_{j}f(p)\partial_{\overline{p}_{i}}\varphi^{-1}( \overline{p})\stackrel{{\eqref{eq:Rp}}}{{=}}\partial_{i}f(p)- \partial_{c}f(p), \tag{7.11a}\] \[\partial_{\overline{p}}\big{(}f\circ\varphi^{-1}(\overline{p})\big{)}= \overline{\partial f(p)}-\partial_{c}f(p)\mathbb{1}_{c-1}. \tag{7.11b}\] Comparing the last equation and (7.10) shows that \[R_{p}\partial f(p)=BR_{\overline{p}}\partial_{\overline{p}}\big{(}f\circ \varphi^{-1}(\overline{p})\big{)}\stackrel{{\eqref{eq:Rp}}}{{=}} BG(p)^{-1}\partial_{\overline{p}}\big{(}f\circ\varphi^{-1}(\overline{p})\big{)}, \tag{7.12}\] which proves (2.10). ### Proofs of Section 3 Proof of Lemma 3.2.: Let \(v(t)\in T_{c,0}\) be a smooth curve with \(\dot{v}(t)=u\). 
Then \[\frac{d}{dt}\exp_{p}\big{(}v(t)\big{)} =\frac{d}{dt}\frac{p\cdot e^{v(t)}}{\langle p,e^{v(t)}\rangle}= \frac{p\cdot u\cdot e^{v(t)}}{\langle p,e^{v(t)}\rangle}-\langle p,u\cdot e^{v (t)}\rangle\frac{p\cdot e^{v(t)}}{\langle p,e^{v(t)}\rangle^{2}} \tag{7.13a}\] \[=\exp_{p}\big{(}v(t)\big{)}\cdot u-\big{\langle}u,\exp_{p}\big{(} v(t)\big{)}\big{\rangle}\exp_{p}\big{(}v(t)\big{)}=R_{\exp_{p}(v(t))}u. \tag{7.13b}\] Similarly, for a smooth curve \(p(t)\in\mathcal{S}_{c}\) with \(\dot{p}(t)=u\), one has \[\frac{d}{dt}\exp_{p(t)}(v) =\frac{d}{dt}\frac{p(t)\cdot e^{v}}{\langle p(t),e^{v}\rangle}= \frac{\dot{p}(t)\cdot e^{v}}{\langle p(t),e^{v}\rangle}-\langle\dot{p}(t),e^{ v}\rangle\frac{p(t)\cdot e^{v}}{\langle p(t),e^{v}\rangle^{2}} \tag{7.14a}\] \[=\exp_{p(t)}(v)\cdot\frac{u}{p(t)}-\Big{\langle}\frac{u}{p(t)}, \exp_{p(t)}(v)\Big{\rangle}\exp_{p(t)}(v)=R_{\exp_{p(t)}(v)}\frac{u}{p(t)}.\qed\] Proof of Theorem 3.3.: Put \[q(t)=L_{p(t)}(D) \tag{7.15}\] where \(p(t)\) solves (3.4). Using (3.2), (3.5b) and (7.15), we obtain \[\dot{q}=d_{p}L_{p(t)}(D)[\dot{p}(t)]=R_{q(t)}\Big{(}\frac{\dot{p}(t)}{p(t)} \Big{)}\stackrel{{\eqref{eq:q(t)}}}{{=}}R_{q(t)}\big{(}q(t)- \langle p(t),q(t)\rangle\mathbb{1}_{c}\big{)}\stackrel{{\eqref{eq: q(t)}}}{{=}}R_{q(t)}q(t), \tag{7.16}\] which shows (3.6b). Write \(p(t)=\exp_{\mathbb{1}_{S_{c}}}(r(t))\). Then differentiating (3.6c) yields with \(r(t)=\int_{0}^{t}q(\tau)d\tau\) \[\dot{p}(t)\stackrel{{\eqref{eq:q(t)}}}{{=}}R_{\exp_{\mathbb{1}_{ S_{c}}}(r(t))}\dot{r}(t)\stackrel{{\eqref{eq:q(t)}}}{{=}}R_{p(t)}q(t) \stackrel{{\eqref{eq:q(t)}}}{{=}}R_{p(t)}L_{p(t)}(D), \tag{7.17}\] which proves the equivalence of (3.4) and (3.6). Proof of Corollary 3.4.: The solution \(p(t)\) to (3.4) is given by (3.6). Proposition (2.2) and Eq. (2.10) show that (3.6b) is the Riemannian ascent flow of the function \(\mathcal{S}_{c}\ni q\mapsto\frac{1}{2}\|q\|^{2}\). The stationary points satisfy \[R_{q}q=(q-\|q\|^{2})\cdot q=0 \tag{7.18}\] and form the set \[Q^{*}:=\Big{\{}q^{*}=\frac{1}{|\mathcal{J}^{*}|}\sum_{j\in\mathcal{J}^{*}}e_{j }\colon\mathcal{J}^{*}\subseteq[c]\Big{\}}. \tag{7.19}\] The case \(\mathcal{J}^{*}=[c]\), i.e. \(q^{*}=\mathbb{1}_{\mathcal{S}_{c}}\), can be ruled out if \(\frac{D}{\langle\mathbb{1}_{c},D\rangle}\neq\mathcal{S}_{c}\), which will always be the case in practice where \(D\) corresponds to real data (measurement, observation). The global maxima correspond to the vertices of \(\Delta_{c}=\overline{\mathcal{S}}_{c}\), i.e. \(|\mathcal{J}^{*}|=1\). The remaining stationary points are local maxima and degenerate, since vectors \(D\) with non-unique minimal component form a negligible null set. In any case, \(\lim_{t\to\infty}p(t)\stackrel{{\eqref{eq:q(t)}}}{{=}}\lim_{t \to\infty}q(t)=q^{*}\), depending on the index set \(\mathcal{J}^{*}\) determined by \(D\). ### Proofs of Section 4 Proof of Proposition 4.1.: The Riemannian gradient is defined by [13, pp. 337] \[0 =df[X]-g_{\rho}(\operatorname{grad}_{\rho}f,X)\overset{\eqref{eq: gradient}}{=}\langle\partial f,X\rangle-\langle\mathbb{T}_{\rho}[\operatorname{grad}_{ \rho}f],X\rangle \tag{7.20a}\] \[=\langle\partial f-\mathbb{T}_{\rho}[\operatorname{grad}_{\rho}f ],X\rangle,\qquad\forall X\in\mathcal{H}_{c,0}. 
\tag{7.20b}\] Choosing the parametrization \(X=Y-\operatorname{tr}(Y)I\in\mathcal{H}_{c,0}\) with \(Y\in\mathcal{H}_{c}\), we further obtain \[0 =\langle\partial f-\mathbb{T}_{\rho}[\operatorname{grad}_{\rho} f],Y\rangle-\operatorname{tr}(Y)\operatorname{tr}(\partial f-\mathbb{T}_{\rho}[ \operatorname{grad}_{\rho}f]) \tag{7.21a}\] \[=\big{\langle}\partial f-\mathbb{T}_{\rho}[\operatorname{grad}_ {\rho}f]-\operatorname{tr}(\partial f-\mathbb{T}_{\rho}[\operatorname{grad}_ {\rho}f])I,Y\big{\rangle},\quad\forall Y\in\mathcal{H}_{c}. \tag{7.21b}\] The left factor must vanish. Applying the linear mapping \(\mathbb{T}_{\rho}^{-1}\) and solving for \(\operatorname{grad}_{\rho}f\) and gives \[\operatorname{grad}_{\rho}f=\mathbb{T}_{\rho}^{-1}[\partial f]-\operatorname{ tr}(\partial f-\mathbb{T}_{\rho}[\operatorname{grad}_{\rho}f])\mathbb{T}_{\rho}^{ -1}[I]. \tag{7.22}\] Since \(\operatorname{grad}_{\rho}f\in\mathcal{H}_{c,0}\), taking the trace on both sides and using \(\operatorname{tr}\mathbb{T}_{\rho}^{-1}[I]=\operatorname{tr}\rho=1\) yields \[0=\operatorname{tr}\mathbb{T}_{\rho}^{-1}[\partial f]-\operatorname{tr} \partial f+\operatorname{tr}\mathbb{T}_{\rho}[\operatorname{grad}_{\rho}f]. \tag{7.23}\] Substituting the last two summands in the equation before gives \[\operatorname{grad}_{\rho}f =\mathbb{T}_{\rho}^{-1}[\partial f]-(\operatorname{tr}\mathbb{T }_{\rho}^{-1}[\partial f])\rho \tag{7.24a}\] \[=\mathbb{T}_{\rho}^{-1}[\partial f]-\langle\rho,\partial f \rangle\rho, \tag{7.24b}\] where the last equation follows from (2.22). Proof of Lemma 4.2.: The equation \(\Pi_{c,0}\circ\mathfrak{R}_{\rho}=\mathfrak{R}_{\rho}\) follows from \(\mathfrak{R}_{\rho}[X]\in\mathcal{H}_{c,0}\) and hence \[\operatorname{tr}\mathfrak{R}_{\rho}[X]\overset{\eqref{eq: gradient}}{=}\operatorname{tr}\mathbb{T}_{\rho}^{-1}[X]-\langle\rho,X\rangle \operatorname{tr}\rho\overset{\eqref{eq: gradient}}{=}\langle\rho,X\rangle-\langle\rho,X\rangle=0. \tag{7.25}\] Thus \[\Pi_{c,0}\circ\mathfrak{R}_{\rho}[X] =\mathfrak{R}_{\rho}[X]=\mathfrak{R}_{\rho}[X]-\frac{ \operatorname{tr}X}{c}\big{(}\rho-\underbrace{\langle\rho,I\rangle}_{=1} \rho\big{)} \tag{7.26a}\] \[=\mathfrak{R}_{\rho}[X]-\frac{\operatorname{tr}X}{c}\mathfrak{R} _{\rho}[I]=\mathfrak{R}_{\rho}\Big{[}X-\frac{\operatorname{tr}X}{c}I\Big{]} \overset{\eqref{eq: gradient}}{=}\mathfrak{R}_{\rho}\circ\Pi_{c,0}[X].\qed\] Proof of Lemma 4.3.: Using (2.18) we compute \[\exp_{\operatorname{m}}(\Pi_{c,0}[Z])=\exp_{\operatorname{m}}\Big{(}Z-\frac{ \operatorname{tr}Z}{c}I\Big{)}=e^{\frac{\operatorname{tr}Z}{c}}\exp_{ \operatorname{m}}(Z), \tag{7.27}\] where the last equation holds since \(Z\) and \(I\) commute. Substitution into (4.4a) cancels the scalar factor \(e^{\frac{\operatorname{tr}Z}{c}}\) and shows (4.5). Proof of Proposition 4.4.: We show \(\Gamma\circ\Gamma^{-1}=\operatorname{id}_{\mathcal{D}_{c}}\) and \(\Gamma^{-1}\circ\Gamma=\operatorname{id}_{\mathcal{H}_{c,0}}\). 
As for the first relation, we compute \[\Gamma\circ\Gamma^{-1}(\rho) =\exp_{m}\Big{(}\Gamma^{-1}(\rho)-\psi\big{(}\Gamma^{-1}(\rho) \big{)}I\Big{)} \tag{7.28a}\] \[=\exp_{\operatorname{m}}\Big{(}\log_{\operatorname{m}}\rho- \frac{\operatorname{tr}(\log_{\operatorname{m}}\rho)}{c}I-\log\Big{(} \operatorname{tr}\exp_{\operatorname{m}}\big{(}\log_{\operatorname{m}}\rho- \frac{\operatorname{tr}(\log_{\operatorname{m}}\rho)}{c}I\big{)}\Big{)}I\Big{)} \tag{7.28b}\] and since \(\log_{\operatorname{m}}\rho\) and \(I\) commute \[=\exp_{\operatorname{m}}\Big{(}\log_{\operatorname{m}}\rho- \frac{\operatorname{tr}(\log_{\operatorname{m}}\rho)}{c}I-\log\operatorname{ tr}\big{(}e^{-\frac{1}{c}\operatorname{tr}(\log_{\operatorname{m}}\rho)}\rho \big{)}I\Big{)} \tag{7.28c}\] \[\overset{\operatorname{tr}\rho=1}{=}\exp_{\operatorname{m}}(\log _{\operatorname{m}}\rho)\] (7.28d) \[=\rho. \tag{7.28e}\] As for the second relation, we compute \[\Gamma^{-1}\circ\Gamma(X) =\Pi_{c,0}[\log_{\mathrm{m}}\circ\Gamma(X)]=\Pi_{c,0}\big{[}\log_{ \mathrm{m}}\circ\exp_{\mathrm{m}}\big{(}X-\psi(X)I\big{)}\big{]} \tag{7.29a}\] \[=\Pi_{c,0}[X]-\psi(X)\Pi_{c,0}[I]=\Pi_{c,0}[X]\] (7.29b) \[=X, \tag{7.29c}\] since \(X\in\mathcal{H}_{c,0}\) by assumption. Proof of Lemma 4.5.: In view of the definition (4.4) of \(\Gamma\), we compute using the chain rule \[d\Gamma(H)[Y] =\frac{d}{dt}\exp_{m}\big{(}H+tY-\psi(H+tY)I\big{)}\big{|}_{t=0} \tag{7.30a}\] \[=d\exp_{\mathrm{m}}\big{(}H-\psi(H)I\big{)}\big{[}Y-d\psi(H)[Y]I \big{]}\] (7.30b) \[\stackrel{{\eqref{eq:d-1-1}}}{{=}}\mathbb{T}_{\rho}^ {-1}\big{[}Y-d\psi(H)[Y]I\big{]}. \tag{7.30c}\] Furthermore, \[d\psi(H)[Y] \stackrel{{\eqref{eq:d-2-1}}}{{=}}\frac{1}{ \operatorname{tr}\exp_{m}(H)}\operatorname{tr}(d\exp_{m}(H)[Y]) \tag{7.31a}\] \[\stackrel{{\eqref{eq:d-1-1}}}{{=}}\frac{1}{ \operatorname{tr}\exp_{m}(H)}\operatorname{tr}\big{(}\mathbb{T}_{\exp_{m}(H)} ^{-1}[Y]\big{)},\qquad\exp_{m}(H)\stackrel{{\eqref{eq:d-1-1}}}{{= }}\big{(}\operatorname{tr}\exp_{m}(H)\big{)}\Gamma(H)\] (7.31b) \[\stackrel{{\eqref{eq:d-2-1}}}{{=}}\frac{1}{ \operatorname{tr}\exp_{m}(H)}\langle\exp_{m}(H),Y\rangle\] (7.31c) \[\stackrel{{\eqref{eq:d-1-1}}}{{=}}\langle\Gamma(H), Y\rangle=\langle\rho,Y\rangle, \tag{7.31d}\] where the last equation follows from the assumption \(\rho=\Gamma(H)\). Substitution into (7.30) gives (4.7a). Regarding (4.7b), using the expression (4.6) for \(\Gamma^{-1}\), we compute \[d\Gamma^{-1}(\rho)[X] =\Pi_{c,0}\circ d\log_{\mathrm{m}}(\rho)[X] \tag{7.32a}\] \[\stackrel{{\eqref{eq:d-1-1}}}{{=}}\Pi_{c,0}\circ \mathbb{T}_{\rho}[X], \tag{7.32b}\] which verifies (4.7b). Proof of Proposition 4.6.: The e-geodesic connecting the two points \(Q,R\in\mathcal{D}_{c}\) is given by [11, Section V] \[\Gamma(K+tA),\quad t\in[0,1],\qquad K=\log_{\mathrm{m}}Q,\quad A=\log_{ \mathrm{m}}R-\log_{\mathrm{m}}Q. \tag{7.33}\] Setting \(\Gamma^{-1}(\rho)=\Pi_{c,0}[K]\) and \(\mathbb{T}_{\rho}[X]=A\) yields (4.8c), since the orthogonal projections \(\Pi_{c,0}\) onto \(\mathcal{H}_{c,0}\) are implicitly carried out in (7.33) as well, due to Lemma 4.3. The expression (4.8b) is equal to (4.8c) due to (4.7b). It remains to check that the geodesic emanates at \(\rho\) in the direction \(X\). We compute \[\gamma_{\rho,X}^{(e)}(0) =\Gamma(\Gamma^{-1}(\rho))=\rho \tag{7.34a}\] \[\frac{d}{dt}\gamma_{\rho,X}^{(e)}(0) =\frac{d}{dt}\Gamma\big{(}\Gamma^{-1}(\rho)+td\Gamma^{-1}(\rho)[ X]\big{)}\big{|}_{t=0}\] (7.34b) \[=d\Gamma\big{(}\Gamma^{-1}(\rho)\big{)}\big{[}d\Gamma^{-1}(\rho) [X]\big{]}=\mathrm{id}[X]=X. 
\tag{7.34c}\]

Proof of Corollary 4.7.: Setting \[\mu=\mathrm{Exp}_{\rho}^{(e)}(X)\stackrel{{\eqref{eq:d-1-1}}}{{=}}\Gamma\big{(}\Gamma^{-1}(\rho)+d\Gamma^{-1}(\rho)[X]\big{)} \tag{7.35}\] we solve for \(X\), \[\Gamma^{-1}(\mu)=\Gamma^{-1}(\rho)+d\Gamma^{-1}(\rho)[X] \tag{7.36a}\] \[d\Gamma^{-1}(\rho)[X]=\Gamma^{-1}(\mu)-\Gamma^{-1}(\rho) \tag{7.36b}\] \[X=d\Gamma\big{(}\Gamma^{-1}(\rho)\big{)}\big{[}\Gamma^{-1}(\mu)-\Gamma^{-1}(\rho)\big{]}, \tag{7.36c}\] which shows (4.9) and where \(d\Gamma(\Gamma^{-1}(\rho))^{-1}=d\Gamma^{-1}(\rho)\) was used to obtain the last equation.

Proof of Lemma 4.14.: Put \[H_{i}=\Gamma^{-1}(\rho_{i})\stackrel{{\eqref{eq:H_i}}}{{=}}\Pi_{c,0}\log_{\mathrm{m}}\rho_{i},\quad i\in\mathcal{V}. \tag{7.44}\] Then \[\big{(}\operatorname{Exp}_{\rho_{i}}^{(e)}\big{)}^{-1}\big{(}L_{\rho_{k}}(D_{k})\big{)}\stackrel{{\eqref{eq:H_i}}}{{=}}\big{(}\operatorname{Exp}_{\rho_{i}}^{(e)}\big{)}^{-1}\circ\Gamma\big{(}\Gamma^{-1}(\rho_{k})-D_{k}\big{)} \tag{7.45a}\] \[\stackrel{{\eqref{eq:H_i}}}{{=}}d\Gamma\big{(}\Gamma^{-1}(\rho_{i})\big{)}[\Gamma^{-1}(\rho_{k})-D_{k}-\Gamma^{-1}(\rho_{i})] \tag{7.45b}\] \[=d\Gamma(H_{i})[H_{k}-D_{k}-H_{i}].
\tag{7.45c}\] Substituting this expression into (4.27) yields \[S_{i}(\rho) \stackrel{{\eqref{eq:H_i}}}{{=}}\Gamma\Big{(}H_{i }+\underbrace{d\Gamma^{-1}(\rho_{i})\circ d\Gamma(H_{i})}_{=I}\Big{[}\sum_{k \in\mathcal{N}_{i}}\omega_{ik}(H_{k}-D_{k})-H_{i}\Big{]}\Big{)} \tag{7.46a}\] \[=\Gamma\Big{(}\sum_{k\in\mathcal{N}_{i}}\omega_{ik}(H_{k}-D_{k}) \Big{)}. \tag{7.46b}\] Substituting (7.44) and omitting the projection map \(\Pi_{c,0}\) due to Lemma 4.3 yields (4.28). Proof of Proposition 4.15.: Substituting as in the proof of Lemma 4.14, we get \[0=d\Gamma\big{(}\Gamma^{-1}(\overline{\rho})\big{)}\Big{[}\sum_{k\in\mathcal{ N}_{i}}\omega_{ik}(\Pi_{c,0}\log_{\mathrm{m}}\rho_{k}-D_{k})-\Gamma^{-1}( \overline{\rho})\Big{]}. \tag{7.47a}\] Since \(d\Gamma\) is one-to-one, the expression inside the brackets must vanish. Solving for \(\overline{\rho}\) and omitting the projection map \(\Pi_{c,0}\), due to Lemma 4.3, gives (4.28). Proof of Proposition 4.16.: Let \(\rho(t)\) solve (4.31) and denote the argument of the replicator operator \(\mathfrak{R}_{\rho}\) on the right-hand side by \[\mu(t):=S\big{(}\rho(t)\big{)}, \tag{7.48}\] which yields (4.34a) and (4.31), respectively. It remains to show (4.34b). Differentiation yields \[\dot{\mu}_{i}=dS_{i}(\rho)[\dot{\rho}] \tag{7.49a}\] \[\stackrel{{\eqref{eq:H_i}}}{{=}}d\Gamma\Big{(}\sum_ {k\in\mathcal{N}_{i}}\omega_{ik}(\log_{\mathrm{m}}\rho_{k}-D_{k})\Big{)}\Big{[} \sum_{k\in\mathcal{N}_{i}}\omega_{ik}\mathbb{T}_{\rho_{k}}[\dot{\rho}_{k}]\Big{]}\] (7.49b) \[\stackrel{{\eqref{eq:H_i}}}{{=}}d\Gamma\Big{(}\Gamma ^{-1}(S_{i}(\rho)\Big{)}\Big{]}\Big{[}\sum_{k\in\mathcal{N}_{i}}\omega_{ik} \mathbb{T}_{\rho_{k}}[\dot{\rho}_{k}]\Big{]}\] (7.49c) \[\stackrel{{\eqref{eq:H_i}}}{{=}}\overline{\Gamma}_{S_ {i}(\rho)}^{-1}\Big{[}\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\mathbb{T}_{\rho_{k }}[\dot{\rho}_{k}]-\Big{<}S_{i}(\rho),\sum_{k\in\mathcal{N}_{i}}\omega_{ik} \mathbb{T}_{\rho_{k}}[\dot{\rho}_{k}]\Big{>}I\Big{]}\] (7.49d) \[\stackrel{{\mathrm{T}_{\rho}^{-1}[I]=\rho}}{{=}} \overline{\Gamma}_{S_{i}(\rho)}^{-1}\Big{[}\sum_{k\in\mathcal{N}_{i}}\omega_{ ik}\mathbb{T}_{\rho_{k}}[\dot{\rho}_{k}]\Big{]}-\Big{<}S_{i}(\rho),\sum_{k\in \mathcal{N}_{i}}\omega_{ik}\mathbb{T}_{\rho_{k}}[\dot{\rho}_{k}]\Big{>}S_{i}(\rho)\] (7.49e) \[\stackrel{{\eqref{eq:H_i}}}{{=}}\mathfrak{R}_{S_{i}( \rho)}\Big{[}\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\mathbb{T}_{\rho_{k}}[\dot{ \rho}_{k}]\Big{]}\stackrel{{\eqref{eq:H_i}}}{{=}}\sum_{k\in \mathcal{N}_{i}}\omega_{ik}\mathfrak{R}_{\mu_{i}}\big{[}\mathbb{T}_{\rho_{k}}[ \dot{\rho}_{k}]\big{]} \tag{7.49f}\] 
Using (4.34a), that is \(\dot{\rho}_{k}=\mathfrak{R}_{\rho_{k}}[\mu_{k}]\), we obtain \(\mathbb{T}_{\rho_{k}}[\dot{\rho}_{k}]=\mu_{k}-\langle\rho_{k},\mu_{k}\rangle I\), and since \(\mathfrak{R}_{\mu_{i}}[I]=0\), the last expression equals, by linearity of \(\mathfrak{R}_{\mu_{i}}\), \[\mathfrak{R}_{\mu_{i}}\Big{[}\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\mu_{k}\Big{]}=\mathfrak{R}_{\mu_{i}}\big{[}\Omega[\mu]_{i}\big{]},\] which is (4.34b).

Proof of Lemma 4.20.:

2. Let \(\mu\in\mathcal{D}_{\Pi}\) and \(X\in T_{\mu}\mathcal{D}_{\Pi}\). Suppose the vector \(X\) is represented by a curve \(\eta:(-\varepsilon,\varepsilon)\to\mathcal{D}_{\Pi}\), such that \(\eta(0)=\mu\) and \(\eta^{\prime}(0)=X\). In view of the definition (4.43) of \(\mathcal{D}_{\Pi}\), we thus have \[\eta(t)=\sum_{i\in[l]}\frac{p_{i}(t)}{\mathrm{tr}\pi_{i}}\pi_{i}\quad\implies\quad X=\sum_{i\in[l]}\frac{p_{i}^{\prime}(0)}{\mathrm{tr}\pi_{i}}\pi_{i}. \tag{7.55}\] Consequently, if \(\mathcal{U}=\{u_{1},...,u_{c}\}\) is a basis of \(\mathbb{C}^{c}\) that diagonalizes \(\mu\), then the tangent vector \(X\) is also diagonal in this basis \(\mathcal{U}\) and \(X\) commutes with \(\mu\), i.e. \([\mu,X]=0\) and \(X\in T_{\mu}^{c}\mathcal{D}_{c}\). This proves (b).

3. The bijection \(\mathcal{D}_{\Pi}\to\mathcal{S}_{l}\) is explicitly given by \[\Phi_{\Pi}\colon\mathcal{D}_{\Pi}\to\mathcal{S}_{l},\qquad\sum_{i\in[l]}\frac{p_{i}}{\mathrm{tr}\pi_{i}}\pi_{i}\mapsto(p_{1},...,p_{l}). \tag{7.56}\] This is bijective by the definition of \(\mathcal{D}_{\Pi}\). It remains to be shown that it is an isometry. Consider another tangent vector \(Y\in T_{\mu}\mathcal{D}_{\Pi}\). We know that \(\mu,X,Y\) can all be diagonalized in a common eigenbasis. Denote this basis again by \(\mathcal{U}\).
Then we can write \[\mu=\sum_{i\in[c]}\tilde{p}_{i}u_{i}u_{i}^{*},\qquad X=\sum_{i\in[c]}\tilde{x} _{i}u_{i}u_{i}^{*},\qquad Y=\sum_{i\in[c]}\tilde{y}_{i}u_{i}u_{i}^{*}\] (7.57) and compute \[\iota^{*}g_{\textsc{hkm},\mu}(X,Y) =\int_{0}^{\infty}\mathrm{tr}\big{(}X(\mu+\lambda I)^{-1}Y(\mu+ \lambda I)^{-1}\big{)}d\lambda\] (7.58a) \[=\sum_{i\in[c]}\int_{0}^{\infty}\mathrm{tr}\bigg{(}\frac{\tilde{ x}_{i}\tilde{y}_{i}}{(\tilde{p}_{i}+\lambda)^{2}}u_{i}u_{i}^{*}\bigg{)}d\lambda\] (7.58b) \[=\sum_{i\in[c]}\frac{\tilde{x}_{i}\tilde{y}_{i}}{\tilde{p}_{i}}.\] (7.58c) Note that the vector \(\tilde{p}=(\tilde{p}_{1},...,\tilde{p}_{c})\) comes from \(\mu\in\mathcal{D}_{\Pi}\). Therefore, the value \(p_{j}/\mathrm{tr}\pi_{j}\) must occur \(\mathrm{tr}\pi_{j}\) times in \(\tilde{p}\), for every \(j\in[l]\). This observation holds for the vectors \(\tilde{x}=(\tilde{x}_{1},...,\tilde{x}_{c})\) and \(\tilde{y}=(\tilde{y}_{1},...,\tilde{y}_{c})\) as well. Thus, the sum above can be reduced to \[\sum_{i\in[c]}\frac{\tilde{x}_{i}\tilde{y}_{i}}{\tilde{p}_{i}}=\sum_{j\in[l]} \frac{x_{j}y_{j}}{p_{j}},\] (7.59) where \((p_{1},...,p_{j})=\Phi(\mu)\), \((x_{1},...,x_{l})=d\Phi[X]\) and \((y_{1},...,y_{l})=d\Phi[Y]\). Taking into account that \((x_{1},...,x_{l})\) and \((y_{1},...,y_{l})\) are the images of \(X,Y\) under the differential \(d\Phi\), we conclude \[\iota^{*}g_{\textsc{hkm},\mu}(X,Y)=\sum_{i\in[l]}\frac{x_{i}y_{i}}{p_{i}}\overset {\eqref{eq:T_1}}{=}g_{\textsc{fr},\Phi(\mu)}(d\Phi(X),d\Phi(Y)).\] (7.60) This proves part (a). 4. Part (c) is about the commutativity of the diagram (7.61) The horizontal arrows can be described as follows. Recall that \(\Pi=\{\pi_{1},...,\pi_{l}\}\). Denote by \(k_{i}=\mathrm{tr}\pi_{i}\) the dimension of the images of the projectors \(\pi_{i}\). For a fixed \(p=(p_{1},...,p_{l})\in\mathcal{S}_{l}\), set \[P=(P_{1},...,P_{c}):=(\underbrace{p_{1}/k_{1},...,p_{1}/k_{1}}_{k_{1}\text{ times}},...,\underbrace{p_{l}/k_{l},...,p_{l}/k_{l}}_{k_{l}\text{ times}})\in\mathcal{S}_{c}.\] (7.62) Then \(\alpha_{\Pi}\) is given by \[\alpha_{\Pi}\bigg{(}\sum_{i\in[l]}\frac{p_{i}}{k_{i}}\pi_{i}\bigg{)}=\sum_{j\in[c] }P_{j}u_{j}u_{j}^{*}\in\mathcal{D}_{\Pi_{\mathcal{U}}}\quad\text{and}\quad \beta_{\Pi}(p_{1},...,p_{l})=(P_{1},...,P_{c}) \tag{7.63}\] The diagram (7.61) commutes by definition of the \(\Phi\) maps. Proof of Lemma 4.21.: 1. Due to the commutativity of the components \(\mu_{i}\) of \(\mu\in\mathcal{Q}\), we can simplify the expression for the vector field of the QSAF as follows. 
\[\mathfrak{R}_{\mu}[\Omega[\mu]]_{i} \overset{\eqref{eq:QSAF}}{=}\mathfrak{R}_{\mu_{i}}\big{[} \Omega[\mu]_{i}\big{]}\] (7.64a) \[\overset{\eqref{eq:QSAF}}{=}\sum_{k\in\mathcal{N}_{i}}\omega_{ ik}\Big{(}\int_{0}^{1}\mu_{i}^{1-\lambda}\mu_{k}\mu_{i}^{\lambda}d\lambda-\mathrm{tr} (\mu_{i}\mu_{k})\mu_{i}\Big{)}\] (7.64b) \[=\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\big{(}\mu_{i}\mu_{k}- \mathrm{tr}(\mu_{i}\mu_{k})\mu_{i}\big{)}.\] (7.64c) Invoke that \(\mu\in\mathcal{D}_{\Pi,c}\), such that all the components \(\mu_{i}\) can be written as \[\mu_{i}=\sum_{r\in[l]}\frac{p_{r}^{i}}{\mathrm{tr}\pi_{r}}\pi_{r},\qquad p^{i }=(p_{1}^{i},...,p_{l}^{i})\in\mathcal{S}_{l},\quad i\in\mathcal{V}.\] (7.65) Then we can further simplify \[\mu_{i}\mu_{k}=\sum_{r\in[l]}\frac{p_{r}^{i}p_{r}^{k}}{(\mathrm{tr}\pi_{r})^{2 }}\pi_{r}\quad\text{and}\quad\mathrm{tr}(\mu_{i}\mu_{k})=\sum_{r\in[l]}\frac{p _{r}^{i}p_{r}^{k}}{\mathrm{tr}\pi_{r}}\] (7.66) and consequently \[\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\Big{(}\mu_{i}\mu_{k}- \mathrm{tr}(\mu_{i}\mu_{k})\mu_{i}\Big{)} =\sum_{r\in[l]}\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\bigg{(}\frac {p_{r}^{k}}{\mathrm{tr}\pi_{r}}-\bigg{(}\sum_{s\in[l]}\frac{p_{s}^{i}p_{s}^{k} }{\mathrm{tr}\pi_{s}}\bigg{)}\bigg{)}\frac{p_{r}^{i}}{\mathrm{tr}\pi_{r}}\pi_{r}\] (7.67a) \[=\sum_{r\in[l]}\frac{x_{r}}{\mathrm{tr}\pi_{r}}\pi_{r},\] (7.67b) where \[x_{r}:=\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\bigg{(}\frac{p_{r}^{k}}{\mathrm{ tr}\pi_{r}}-\bigg{(}\sum_{s\in[l]}\frac{p_{s}^{i}p_{s}^{k}}{\mathrm{tr}\pi_{s}} \bigg{)}\bigg{)}p_{r}^{i}.\] (7.68) Thus, \[\mathfrak{R}_{\mu}[\Omega[\mu]]_{i}=\sum_{r\in[l]}x_{r}/(\mathrm{tr}\pi_{r}) \pi_{r}.\] This has to be compared with the general form of a tangent vector \(X\in T_{\mu_{i}}\mathcal{D}_{\Pi}\), given by (7.55). The only condition the vector \(p^{\prime}(0)\) in (7.55) has to satisfy, is that its components sum to \(0\). This holds for \(x=(x_{1},...x_{l})\) as well. We conclude that \(\mathfrak{R}_{\mu}[\Omega[\mu]]_{i}\) lies in \(T_{\mu_{i}}\mathcal{D}_{\Pi}\) for all \(i\in\mathcal{V}\), or equivalently, \(\mathfrak{R}_{\mu}[\Omega[\mu]]\in T_{\mu}\mathcal{D}_{\Pi,c}\). 2. Write \(\mu_{i}=U\operatorname{Diag}(S_{i})U^{*}\) for all \(i\in\mathcal{V}\) with \(S_{i}\in\mathcal{S}_{c}\), and express \(\mathfrak{R}_{\mu}[\Omega[\mu]]\) in terms of \(S\in\mathcal{W}\) as \[\mathfrak{R}_{\mu}[\Omega[\mu]]_{i} =\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\big{(}\mu_{i}\mu_{k}-\mathrm{ tr}(\mu_{i}\mu_{k})\mu_{i}\big{)}\] (7.69a) \[=U\operatorname{Diag}\bigg{(}\sum_{k\in\mathcal{N}_{i}}\omega_{ik} \big{(}S_{i}\cdot S_{k}-\langle S_{i},S_{k}\rangle S_{i}\big{)}\bigg{)}U^{*}\] (7.69b) \[=U\operatorname{Diag}\left(R_{S}[\Omega S]\right)_{i}U^{*}.\] (7.69c) Proof of Corollary 4.22.: Write \(D_{i}=U\operatorname{Diag}{(\lambda_{i})}U^{*}\) for \(\lambda_{i}\in\mathbb{R}^{n}\), diagonalized in the \(U\)-basis. Then the initial condition for the QSAF S-flow (4.34b) is given by \[\mu(0)_{i}=S(\mathbb{1}_{\mathcal{Q}})_{i}\stackrel{{\eqref{eq: QSAF}}}{{=}}\Gamma\bigg{(}\sum_{k\in\mathcal{N}_{i}}\omega_{ik}(-D_{k})\bigg{)}. \tag{7.70}\] Then set \(\tilde{D}_{i}:=\sum_{k\in\mathcal{N}_{i}}\omega_{ik}D_{k}=U\operatorname{ Diag}(\tilde{\lambda}_{i})U^{*}\), where \[\tilde{\lambda}_{i}=\sum_{k\in\mathcal{N}_{i}}\omega_{ik}\lambda_{k}\in \mathbb{R}^{c}. \tag{7.71}\] Recall further that \(\Gamma\) is computed in terms of the matrix exponential as specified by (4.4). 
Thus, \[\mu(0)_{i}=\Gamma(-\tilde{D}_{i})=\frac{\exp_{\mathrm{m}}(-\tilde{D}_{i})}{ \operatorname{tr}\exp_{\mathrm{m}}(-\tilde{D}_{i})}=\frac{U\exp_{\mathrm{m}}( -\operatorname{Diag}(\tilde{\lambda}_{i}))U^{*}}{\operatorname{tr}(U\exp_{ \mathrm{m}}(-\operatorname{Diag}(\tilde{\lambda}_{i}))U^{*})}=U\frac{ \operatorname{Diag}(\exp(-\tilde{\lambda}_{i}))}{\operatorname{tr}\exp_{ \mathrm{m}}(-\operatorname{Diag}(\tilde{\lambda}_{i}))}U^{*}. \tag{7.72}\] This shows that all the \(\mu(0)^{\prime}_{i}s\) are diagonalized by the same basis \(\mathcal{U}\) and \(\mu(0)\in\mathcal{D}_{\Pi_{\mathcal{U}},c}\) and we can apply Proposition 4.21 (ii). Therefore, the vector field of the quantum state assignment S-flow is also diagonalized in the basis \(\mathcal{U}\) and we solve simply for the diagonal components. The quantum S-flow equation can be written as \[\dot{\mu}_{i}=U\operatorname{Diag}(R_{S_{i}}[\Omega S])U^{*},\qquad\mu(0)_{i} =U\operatorname{Diag}(S(\mathbb{1}_{\mathcal{W}}))U^{*} \tag{7.73}\] with the classical similarity map \(S\) defined in terms of the data vectors \(\lambda_{i}\) and \(\mu_{i}\) related to \(S_{i}\in\mathcal{S}_{c}\) by \(\mu_{i}=U\mathrm{diag}(S_{i})U^{*}\). The solution to this system is \[\mu_{i}(t)=U\operatorname{Diag}(S_{i}(t))U^{*}, \tag{7.74}\] where \(S\in\mathcal{W}\) solves the classical S-flow equation \(\dot{S}=R_{S}[\Omega S]\) and \(S(0)=S(\mathbb{1}_{\mathcal{W}})\)
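A simple numerical sanity check of Corollary 4.22 can be set up as follows. This sketch assumes numpy, an explicit Euler discretization of both the quantum and the classical S-flow, and illustrative graph and data choices; it is not part of the original proof.

```python
import numpy as np

rng = np.random.default_rng(2)
c, n, eps = 3, 4, 0.05
U, _ = np.linalg.qr(rng.standard_normal((c, c)) + 1j * rng.standard_normal((c, c)))

lam = rng.standard_normal((n, c))                       # data vectors lambda_i
neighbors = {i: [j for j in range(n) if abs(i - j) <= 1] for i in range(n)}
w = {i: 1.0 / len(N) for i, N in neighbors.items()}     # uniform weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Common initialization, analogous to the similarity map at the barycenter (4.28).
S = np.array([softmax(-w[i] * sum(lam[k] for k in neighbors[i])) for i in range(n)])
mu = np.array([U @ np.diag(S[i]).astype(complex) @ U.conj().T for i in range(n)])

for _ in range(100):
    # classical S-flow step: S_i <- S_i + eps * R_{S_i}[(Omega S)_i]
    OS = np.array([w[i] * sum(S[k] for k in neighbors[i]) for i in range(n)])
    S = S + eps * np.array([S[i] * OS[i] - (S[i] @ OS[i]) * S[i] for i in range(n)])
    # quantum S-flow step on commuting states, cf. (7.64c)
    OM = [w[i] * sum(mu[k] for k in neighbors[i]) for i in range(n)]
    mu = np.array([mu[i] + eps * (mu[i] @ OM[i] - np.trace(mu[i] @ OM[i]) * mu[i])
                   for i in range(n)])

# The quantum states stay diagonal in U, with the classical assignments on the diagonal.
assert all(np.allclose(mu[i], U @ np.diag(S[i]).astype(complex) @ U.conj().T, atol=1e-6)
           for i in range(n))
```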
2309.08185
Multilingual Sentence-Level Semantic Search using Meta-Distillation Learning
Multilingual semantic search is the task of retrieving relevant contents to a query expressed in different language combinations. This requires a better semantic understanding of the user's intent and its contextual meaning. Multilingual semantic search is less explored and more challenging than its monolingual or bilingual counterparts, due to the lack of multilingual parallel resources for this task and the need to circumvent "language bias". In this work, we propose an alignment approach: MAML-Align, specifically for low-resource scenarios. Our approach leverages meta-distillation learning based on MAML, an optimization-based Model-Agnostic Meta-Learner. MAML-Align distills knowledge from a Teacher meta-transfer model T-MAML, specialized in transferring from monolingual to bilingual semantic search, to a Student model S-MAML, which meta-transfers from bilingual to multilingual semantic search. To the best of our knowledge, we are the first to extend meta-distillation to a multilingual search application. Our empirical results show that on top of a strong baseline based on sentence transformers, our meta-distillation approach boosts the gains provided by MAML and significantly outperforms naive fine-tuning methods. Furthermore, multilingual meta-distillation learning improves generalization even to unseen languages.
Meryem M'hamdi, Jonathan May, Franck Dernoncourt, Trung Bui, Seunghyun Yoon
2023-09-15T06:22:37Z
http://arxiv.org/abs/2309.08185v1
# Multilingual Sentence-Level Semantic Search ###### Abstract Multilingual semantic search is the task of retrieving relevant contents to a query expressed in different language combinations. This requires a better semantic understanding of the user's intent and its contextual meaning. Multilingual semantic search is less explored and more challenging than its monolingual or bilingual counterparts, due to the lack of multilingual parallel resources for this task and the need to circumvent "language bias". In this work, we propose an alignment approach: MAML-Align,1 specifically for low-resource scenarios. Our approach leverages meta-distillation learning based on MAML, an optimization-based Model-Agnostic Meta-Learner. MAML-Align distills knowledge from a Teacher meta-transfer model T-MAML, specialized in transferring from monolingual to bilingual semantic search, to a Student model S-MAML, which meta-transfers from bilingual to multilingual semantic search. To the best of our knowledge, we are the first to extend meta-distillation to a multilingual search application. Our empirical results show that on top of a strong baseline based on sentence transformers, our meta-distillation approach boosts the gains provided by MAML and significantly outperforms naive fine-tuning methods. Furthermore, multilingual meta-distillation learning improves generalization even to unseen languages. Footnote 1: We will release our code repository in the camera-ready version. ## 1 Introduction Nowadays, the web offers a wealth of information from multiple sources and in different languages. This makes it increasingly challenging to retrieve reliable information efficiently and accurately. Users across the globe may express the need to retrieve relevant content in languages different from the language of the query or in multiple languages simultaneously. All this burgeons the great demand for multilingual semantic search. Compared to bilingual semantic search, often portrayed as cross-lingual information retrieval Savoy and Braschler (2019); Grefenstette (1998), multilingual or mixed-language semantic search is under-explored and more challenging. It requires not only more semantic understanding but also a stronger alignment between the languages of the query and the contents to be retrieved Roy et al. (2020). The new wave of multilingual semantic search focuses on reducing the need to machine translation through transfer learning. Pre-trained multilingual Transformer-based models such as M-BERT Devlin et al. (2019) and XLM-R Conneau et al. (2020) have been used as off-the-shelf encoders in multilingual semantic search. However, their performance, especially for ad-hoc semantic search, is still lacking Litschko et al. (2022). Knowledge distillation and contrastive-distillation learning approaches are considered as Figure 1: A high-level diagram of our meta-distillation **MAML-Align** framework for multilingual semantic search and some of its application scenarios. We use LARQA Roy et al. (2020) retrieval-based question answering as our benchmark, where the task is to rank and retrieve the most relevant content. We gradually transfer from most to least resourced variants of semantic search. We leverage knowledge distillation to align between the teacher **T-MAML** Finn et al. (2017), specialized in transferring from monolingual to bilingual, and the student **S-MAML** specialized in transferring from bilingual to multilingual semantic search. 
The applications can either be few-shot or zero-shot depending on the language arrangements used in the evaluation and whether they are used at any stage in MAML-Align. the de-facto approaches to produce better-aligned multilingual sentence representations with reduced need to parallel corpora (Reimers and Gurevych, 2020; Tan et al., 2023). However, they still rely on medium-scaled data including monolingual corpora and back-translation and yield mixed results. Meta-transfer learning, another technique for low-resource learning, has been leveraged for retrieval tasks; however, its application has been restricted to the monolingual case (Lin and Chen, 2020; Laadan et al., 2019; Carvalho et al., 2008). Hybrid approaches of meta-learning and knowledge distillation either involve using meta-learning to improve the student-teacher feedback loop (Zhou et al., 2022; Liu et al., 2022), or leverage knowledge distillation to enhance the portability of MAML networks (Zhang et al., 2020). To the best of our knowledge, we are the first to adapt multilingual meta-transfer learning and to extend an approach based on meta-distillation learning to multilingual semantic search and to a multilingual application in general. In this paper, inspired by M'hamdi et al. (2021), which propose the X-METRA-ADA algorithm to adapt meta-learning to cross-lingual transfer learning for cross-lingual natural language understanding, we propose an adaptation of meta-transfer learning to multilingual semantic search. Given the lack of resourcefulness of semantic search especially in the multilingual case, this encourages us to pursue a meta-learning direction based on MAML. We also explore the combination of meta-learning and knowledge distillation and adapt it to the task of multilingual semantic search (Figure 1). We do that in two stages 1) from monolingual to bilingual and 2) from bilingual to multilingual to create a more gradual feedback loop, which makes it easier to generalize to the multilingual case. We conduct experiments on different semantic search benchmarks on top of a strong baseline based on sentence transformers (Reimers and Gurevych, 2019). Our findings confirm the benefits of the meta-distillation approach compared to naive fine-tuning and MAML. Our **main contributions** are: (1) We are the first to propose a meta-learning approach for multilingual semantic search (SS4.4) and to curate meta-tasks for that effect (SS5.2). (2) We are the first to propose a meta-distillation approach to distill the transfer from monolingual to bilingual to the transfer from bilingual to multilingual semantic search (SS4.4). (3) We systematically compare between several few-shot transfer learning methods and show the gains of our multilingual meta-distillation approach (SS6.1). (4) We also conduct ablation studies involving different language arrangements and different sampling approaches in the meta-task construction (SS6.2). ## 2 Related Work Transfer Learning for Multilingual Semantic SearchMost approaches to multilingual semantic search or cross-lingual information retrieval rely on machine translation to reduce the problem to monolingual search (Lu et al., 2008; Nguyen et al., 2008; Jones et al., 2008). However, such systems are inefficient for multilingual semantic search due to error propagation and overheads from API calls. In addition to that, the number of language combinations in the query and content to be retrieved can get prohibitively large (Savoy and Braschler, 2019). 
More prominent approaches leverage transfer learning with models like M-BERT and XLM used for question-answer retrieval (Yang et al., 2020), bitext mining (Ziemski et al., 2016; Zweigenbaum et al., 2018), and semantic textual similarity (Hoogeveen et al., 2015; Lei et al., 2016) and show that semantic specialization and pre-fine-tuning on other auxiliary tasks helps. Multilingual Meta-LearningMeta-transfer learning, or "learning to learn" has found favor in cross-lingual transfer learning for numerous downstream applications (Gu et al., 2018; Hsu et al., 2020; Winata et al., 2020; Chen et al., 2020; Xiao et al., 2021). Most recent meta-learning work involving transferring between different languages focuses on cross-lingual meta-learning (Nooralahzadeh et al., 2020; M'hamdi et al., 2021). Meta-transfer learning has been extended multilingually by exploring joint multi-task and multi-lingual transfer (Tarunesh et al., 2021; van der Heijden et al., 2021). Meta-Distillation LearningMeta-learning has also been leveraged to improve the performance of knowledge distillation to help the teacher transfer better to the student (Zhou et al., 2022; Liu et al., 2022). Inversely, knowledge distillation has been leveraged to improve meta-learning, especially MAML, by making it more portable (Zhang et al., 2020). Xu et al. (2021) follow a gradual multi-stage process which is different in scope and approach from our work in that it uses fine-tuning for domain adaptation to interpolate between in domain and out-domain data. In contrast, we apply our approach to a multilingual semantic search in an end-to-end meta-learning framework which gradually meta-transfers between semantic search language variants. Moreover, we show that our approach outperforms naive joint fine-tuning, advocating for a meta-learning approach in the few-shot learning scenario.2 Footnote 2: More detailed related work can be found in Appendix A. ## 3 Meta-Learning Background Given a training dataset \(\mathcal{D}\) made of instances: \(\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\), the goal of a conventional machine learning model is to find the most optimal parameters \(\theta^{*}\) that minimize the loss \(\mathcal{L}\): \[\theta^{*}=\operatorname*{arg\,min}_{\theta}\mathcal{L}(\theta;\omega; \mathcal{D}), \tag{1}\] where \(\omega\) is some already acquired prior knowledge or assumption on how to learn (Hospedales et al., 2020). There are two main distinctions between this conventional machine-learning process and meta-learning. First, machine learning focuses on one task at a time whereas meta-learning optimizes over a distribution of many sub-tasks, referred to as'meta-tasks', sampled to simulate a low-resource scenario. Second, meta-learning effectively learns the prior knowledge jointly with the task by adding an extra layer of abstraction to the process. Each meta-task is defined as a tuple \(T=(S,Q)\), where \(S\) and \(Q\) denote support and query sets, respectively. \(S\) and \(Q\) are sampled to simulate the train and test labeled subsets of instances. Following a bi-level optimization abstraction (as in MAML), the meta-learning process is a sequence of inner loops each followed by an outer loop. The inner loop is specialized in learning task-specific optimizations over the support sets in a batch of meta-tasks; the outer loop, on the other hand, learns the generalization over the query sets in the same batch in a leader-follower manner. The goal is to learn a proper initialization point to generalize to the domain of \(Q\). 
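For concreteness, the bi-level optimization described above can be sketched as a first-order variant of MAML; the encoder, the loss function, and the learning rates below are placeholders rather than the configuration used in our experiments, and the full second-order MAML update differs only in backpropagating through the inner-loop steps.

```python
import copy
import torch

def inner_loop(model, support_batch, loss_fn, alpha=1e-3, n_steps=5):
    """Adapt a copy of the meta-parameters on the support set S of one meta-task."""
    learner = copy.deepcopy(model)
    opt = torch.optim.SGD(learner.parameters(), lr=alpha)
    for _ in range(n_steps):
        opt.zero_grad()
        loss_fn(learner, support_batch).backward()
        opt.step()
    return learner

def outer_step(model, meta_batch, loss_fn, beta=1e-4):
    """One meta-update over a batch of (support, query) meta-tasks (first-order)."""
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support, query in meta_batch:
        learner = inner_loop(model, support, loss_fn)
        query_loss = loss_fn(learner, query)          # generalization loss on Q after adaptation
        grads = torch.autograd.grad(query_loss, list(learner.parameters()))
        for acc, g in zip(meta_grads, grads):
            acc += g
    with torch.no_grad():                             # apply summed query gradients to the shared initialization
        for p, g in zip(model.parameters(), meta_grads):
            p -= beta * g
```

The inner loop adapts a copy of the shared initialization on each support set, and the outer loop applies the accumulated query-set gradients to that initialization, which is the first-order approximation of the outer update in Algorithm 1.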
Meta-learning works with meta-training \(\mathcal{D}_{\text{meta-train}}=\{\mathcal{D}_{\text{support}}^{\text{train}},\mathcal{D}_{\text{query}}^{\text{train}}\}\), meta-testing \(\mathcal{D}_{\text{meta-test}}=\{\mathcal{D}_{\text{support}}^{\text{test}},\mathcal{D}_{\text{query}}^{\text{test}}\}\), and optionally meta-validation \(\mathcal{D}_{\text{meta-valid}}\) datasets. During **meta-training**, we start by learning the optimal prior knowledge \(\omega^{*}\): \[\omega^{*}=\operatorname*{arg\,min}_{\omega}\mathcal{L}(\omega|\mathcal{D}_{ \text{meta-train}}). \tag{2}\] This learned prior knowledge is leveraged along with the support set in the meta-testing dataset \(\mathcal{D}_{\text{support}}^{\text{test}}\) during **meta-testing** to fastly adapt to \(\mathcal{D}_{\text{query}}^{\text{test}}\) (without optimizing on it like in meta-training), as follows: \[\theta^{*}=\operatorname*{arg\,min}_{\theta}\mathcal{L}(\theta|\omega^{*}, \mathcal{D}_{\text{support}}^{\text{test}}). \tag{3}\] ## 4 Methodology In this section, we start by defining the task of sentence-level semantic search and its different categories (SS4.1), its language variants (SS4.2), and supervision degrees (SS4.3). Then, we present our optimization-based meta-distillation learning algorithm MAML-Align and show how it extends from the original MAML algorithm (SS4.4). ### Task Formulation Our base task is sentence-level semantic search. Given a sentence query \(q\) from a pool of queries \(\mathcal{Q}\), the goal is to find relevant content \(r\) from a pool of candidate contents \(\mathcal{R}\). The queries are of sentence length and retrieved contents are either sentences or small passages of few sentences. In terms of the format of the queries and contents, there are two main categories of semantic search: (1) **Symmetric Semantic Search.** Query \(q\) and relevant content \(r\) have approximately the same length and format. (2) **Asymmetric Semantic Search.**\(q\) and \(r\) are not of the same length or format. For example, \(q\) can be a question and \(r\) a passage answering that. ### Task Language Variants In the context of languages, we distinguish between three variants of semantic search at evaluation time (also illustrated in Figure 1): (1) **Mono-lingual Semantic Search (mono).** The pools of queries and candidate contents \(\mathcal{Q}\) and \(\mathcal{R}\) are from the same known and fixed language \(\boldsymbol{\ell}_{\mathcal{Q}}=\boldsymbol{\ell}_{\mathcal{R}}\in\mathcal{L}\). (2) **Bilingual Semantic Search (bi).** The pools of queries and candidate contents are sampled from two different languages \(\{\boldsymbol{\ell}_{\mathcal{Q}},\boldsymbol{\ell}_{\mathcal{R}}\}\in \mathcal{L}^{2}\), such that \(\boldsymbol{\ell}_{\mathcal{Q}}\neq\boldsymbol{\ell}_{\mathcal{R}}\). (3) **Multilingual Semantic Search (multi).** This is the problem of retrieving relevant contents from a pool of candidates from a subset of multiple languages \(\mathcal{L}_{\mathcal{R}}\subseteq\mathcal{L}\) to a query expressed in a subset of multiple languages \(\mathcal{L}_{\mathcal{Q}}\subseteq\mathcal{L}\). Unlike monolingual and bilingual semantic search, multilingual semantic search doesn't restrict or condition on which languages can be used in the queries or the candidate contents. Therefore, it is more challenging and requires stronger multilingual alignment (Roy et al., 2020). 
### Supervision Degrees In the absence of enough training data for the task, we distinguish between three degrees of supervision of semantic search: * **Zero-Shot Learning.** This resembles ad-hoc semantic search in that it doesn't involve any fine-tuning specific to the task of semantic search. Rather, off-the-shelf pre-trained language models are used directly to find relevant content to a specific query. This still uses some supervision in the form of parallel sentences used to pre-train those off-the-shelf models. In the context of multilingual semantic search, we include in the zero-shot learning case any evaluation on languages not seen during fine-tuning. * **Few-Shot Learning.** Few-shot learning is used in the form of a small fine-tuning dataset. In the context of multilingual semantic search, we talk about a few-shot evaluation for any language seen either in the arrangement of the query or the contents to be retrieved during fine-tuning. ### Meta-Learning Models Original MAML Algorithm.Our first variant is a direct adaptation of MAML to multilingual semantic search. We use the procedure outlined in Algorithm 1. We start by sampling a batch of meta-tasks from a meta-dataset distribution \(\mathcal{D}_{\mathbb{X}\sim\mathbb{X}}\), which simulates the transfer from \(X\) to \(X^{\prime}\). \(X\) and \(X^{\prime}\) denote different task language variants of semantic search (monolingual, bilingual, multilingual, or any combination of that). We start by initializing our meta-learner parameters \(\theta\) with the pre-trained learner parameters \(\theta_{B}\). For each meta-batch, we perform an inner loop (Algorithm 2) over each meta-task \(T_{j}=(S_{j},Q_{j})\), separately, where we update \(\theta_{j}\) using \(S_{j}^{X}\) for \(n\) steps. At the end of the inner loop, we compute the gradients with respect to the loss of \(\theta_{j}\) on \(Q_{j}^{X^{\prime}}\). After finishing a pass over all meta-tasks of the batch, we perform one outer loop by summing over all pre-computed gradients and updating \(\theta\). ``` 1:Task set distribution \(\mathcal{D}_{\mathbb{X}\sim\mathbb{X}}\) simulating transfer from X to \(X^{\prime}\) task language variants, pre-trained learner \(B\) with parameters \(\theta_{B}\), and meta-learner \(M\) with parameters (\(\theta\), \(\alpha\), \(\beta\), \(n\)). 
2:Initialize \(\theta\leftarrow\theta_{B}\) 3:while not done do 4: Sample a batch of tasks \(\mathcal{T}=\{T_{1},\ldots T_{b}\}\sim\mathcal{D}_{\mathbb{X}\sim\mathbb{X}}\) 5: Sample batch of tasks \(\mathcal{T}_{\mathbb{Y}\sim\mathbb{Z}}=\{T_{1},\ldots T_{b}\}\sim\mathcal{D}_{ \mathbb{Y}\sim\mathbb{Z}}\) 6:\(\mathcal{L}_{T_{j}^{X}}^{\mathbb{X}},\mathcal{L}_{T_{j}^{Y}}^{\mathbb{Y}}=\)INNER_LOOP(\(\mathcal{T}_{\mathbb{X}\sim\mathbb{Y}}\), \(\theta,\alpha,n\)) 7:\(\mathcal{L}_{T_{j}^{Y}}^{\mathbb{Y}},\mathcal{L}_{T_{j}^{Y}}^{\mathbb{Z}}=\)INNER_LOOP(\(\mathcal{T}_{\mathbb{Y}\sim\mathbb{Z}}\), \(\theta\prime,\alpha,n\prime\)) 8:\(\mathcal{L}_{task}=\sum_{j=1}^{b}\frac{c_{T_{j}^{Y}}^{Q_{T_{j}^{Y}}}(B_{\theta _{j}})+c_{T_{j}^{Z}}^{Q_{T_{j}^{Z}}}(B_{\theta_{j}})}{2}\) 9:\(\mathcal{L}_{kd}=KL(\sum_{j=1}^{b}\mathcal{L}_{T_{j}^{Y}}^{Q_{T_{j}^{Y}}}(B_{ \theta_{j}}),\sum_{j=1}^{b}\mathcal{L}_{T_{j}^{Y}}^{\mathbb{S}^{Y}}(B_{\theta _{j}}))\) 10: Update \(\theta\leftarrow\theta-\beta\nabla_{\theta}(\mathcal{L}_{task}+\lambda \mathcal{L}_{kd})\) 11:endwhile ``` **Algorithm 1** MAML: Transfer Learning from X to \(X^{\prime}\) (X\(\rightarrow\) \(X\)) ``` 1:Task set distribution \(\mathcal{D}_{\mathbb{X}\sim\mathbb{X}}\) simulating transfer from X to \(X^{\prime}\) task language variants, pre-trained learner \(B\) with parameters \(\theta_{B}\), and meta-learner \(M\) with parameters (\(\theta\), \(\alpha\), \(\beta\), \(n\)). 2:Initialize \(\theta\leftarrow\theta_{B}\) 3:while not done 4: Sample a batch of tasks \(\mathcal{T}=\{T_{1},\ldots T_{b}\}\sim\mathcal{D}_{\mathbb{X}\sim\mathbb{X}}\) 5:\(\mathcal{L}_{T_{j}^{X}}^{\mathbb{S}^{X}},\mathcal{L}_{T_{j}^{Y}}^{Q_{T_{j}^{Y}}}(B_ {\theta_{j}})\)= INNER_LOOP(\(\mathcal{T}\), \(\theta\), \(\alpha\), \(n\)) 6: Outer Loop: Update \(\theta\leftarrow\theta-\beta\nabla_{\theta}\sum_{j=1}^{b}\mathcal{L}_{T_{j}^{Y}}^{Q _{X^{\prime}}}(B_{\theta_{j}})\) 7:endwhile ``` **Algorithm 2** INNER_LOOP Figure 2 shows a conceptual comparison between MAML-Align and MAML. ## 5 Experimental Setup In this section, we describe the downstream datasets and models (SS5.1), their formulation as meta-tasks (SS5.2), and the different baselines and model variants used in the evaluation (SS5.3). ### Downstream Benchmarks We evaluate our proposed approaches over the following combination of multilingual and bilingual sentence-level semantic search datasets for which we describe the downstream models used:3 Footnote 3: More details on the base model architectures can be found in Appendix B More experimental details on the datasets are and hyperparameters used in Appendix C. paraphrase-multilingual-mpnet-base-v2, according to our preliminary evaluation of different Sentence Transformers models.6 Footnote 6: [https://huggingface.co/sentence-transformers](https://huggingface.co/sentence-transformers) in Table 5 in Appendix C. Footnote 7: We use the translate.pseudo-test provided for XQuAD dataset by XTREME benchmark [https://console.cloud.google.com/storage/browser/xtreme_translations](https://console.cloud.google.com/storage/browser/xtreme_translations). * _S-BERT+Fine-tune_: On top of S-BERT, we fine-tune jointly and directly on the support and query sets of each meta-task in \(\mathcal{D}_{\text{meta-train}}\) and \(\mathcal{D}_{\text{meta-valid}}\). This few-shot baseline makes for a fair comparison with the meta-learning approaches. Internal Variants.We design the following meta-learning variants: * _S-BERT+MAML_: On top of S-BERT, we apply MAML (following Algorithm 1). 
At each episode, we conduct a meta-training followed by a meta-validation phase. * _S-BERT+MAML-Align_: On top of S-BERT, we apply MAML-Align (following Algorithm 3). Similarly, at each episode, we conduct a meta-training followed by a meta-validation phase. External Evaluation.To assess the impact of using machine translation models with or without meta-learning and the impact of machine translation from higher-resourced data, we explore Translate-Train (T-Train), where we translate English data in SQUADen7 and STSBex8 to the evaluation languages. We then either use translated data in all languages or in each language separately as a data augmentation technique. Footnote 7: We use the translated dataset from the original English STSB [https://github.com/PhilipMay/stsb-multi-mt/](https://github.com/PhilipMay/stsb-multi-mt/). ## 6 Results & Analysis In this section, we present the results obtained using different meta-learning model variants compared to the baselines in multilingual, bilingual, and monolingual task language variants. All experiments are evaluated using 5-fold cross-validation and then the mean and standard deviation are reported. Following XTREME-R Ruder et al. (2021) and SemEval-2017 Cer et al. (2017), scores are reported using mean average precision at 20 (**mAP@20**) and Pearson correlation coefficient percentage (**Pearson's r \(\times\) 100**) for LAReQA and STSBbMulti, respectively. ### Multilingual, Bilingual, and Monolingual Performance Evaluation Table 1 summarizes multilingual, bilingual, and monolingual performances across different baselines and model variants for both semantic search benchmarks. On average, we notice that MAML-Align achieves better results than MAML or S-BERT zero-shot base model and significantly better than Fine-tune. It is worth noting that we report the results for MAML using trans mode, which is trained over a combination of mono\(\rightarrow\)bi and bi\(\rightarrow\)multi in the meta-training and meta-validation stages, respectively. This suggests that MAML-Align helps more in bridging the gap between those transfer modes. We observe that fine-tuning baselines are consistently weak compared to different meta-learning model variants, especially for LAReQA. We conjecture that fine-tuning is overfitting to the small amounts of training data, unlike meta-learning approaches which are more robust against that. However, for STSBbMulti, the gap between fine-tuning and meta-learning while still existing and to the favor of meta-learning is a bit reduced. We hypothesize that even meta-learning models are suffering from meta-overfitting to some degree in this case for STSBbMulti. We notice that MAML on top of machine-translated data boosts the performance on LAReQA in all evaluation task language evaluation variants and reaches the best compromise in terms of multilingual, bilingual, and monolingual performances. At the same time, not all languages used in the machine-translated data provide an equal boost to the performance, as shown by the average performance, due to noisy translations for certain languages. Although there is usually a correlation between different models in terms of their monolingual, bilingual, and multilingual performances, there is a slight drop in the monolingual and bilingual performances for MAML-Align compared to the zero-shot baseline. This means that there is still a compromise and gaps between multilingual, monolingual, and bilingual performances. 
This suggests that we should advocate for a balanced evaluation over different modes to get better insights into which models are more robust and consistent. Figure 3 highlights a more fine-grained comparison between different model categories on two languages and language pairs for each benchmark.9 We notice that the gain in favor of meta-learning approaches is consistent across different languages and language pairs and also applies to languages used for zero-shot learning. Footnote 9: More fine-grained results for all languages and for both benchmarks can be found in Tables 7 and 8 in Appendix D. ### Ablation Studies Since the lack of parallelism in \(\text{STSB}_{\text{Multi}}\) makes a multilingual evaluation impossible, we focus hereafter on LAReQA in the remaining analysis and ablation studies. Figure 4 shows the results across different modes of transfer for Fine-tune and MAML. Among all transfer modes, trans, mono\(\rightarrow\)bi, and mono\(\rightarrow\)mono have the best gains, whereas bi\(\rightarrow\)multi and mixt are the weakest forms of transfer. \begin{table} \end{table} Table 1: This is a comparison of different few-shot learning, zero-shot baselines, and machine translation models under a variety of language configuration scenarios. For LAReQA and \(\text{STSB}_{\text{Multi}}\), we report mAP@20 and Pearson’s r \(\times\) 100, respectively. All results are evaluated over 5-fold cross-validation and averaged over multiple language choices. The same model checkpoint is used for all three task language evaluation variants for each row and dataset (except when the average is reported). mono, bi, and multi stand for monolingual, bilingual, and multilingual semantic search. trans denotes the meta-transfer mode that uses mono\(\rightarrow\)bi and bi\(\rightarrow\)multi in meta-training and meta-validation, respectively. Models in (*) are our main contribution. (**) means that we use machine-translated data to do that experiment as \(\text{STSB}_{\text{Multi}}\) is not a parallel corpus. Best and second-best results for each benchmark and evaluation mode are highlighted in **bold** and _italicized_ respectively, whereas the best results across each model category are underlined. Ranks from best to worst are given in each model and evaluation mode.9 Figure 3: mAP@20 and Pearson’s r \(\times\) 100 5-fold cross-validated multilingual performance evaluation on LAReQA and \(\text{STSB}_{\text{Multi}}\) on the first and last two subplots, respectively. The first two subplots show the performance evaluation on Arabic and Russian used in few-shot and zero-shot evaluations, respectively, whereas the two subplots in the second row showcase monolingual and bilingual performances on Arabic-Arabic and Turkish-English, where Arabic, Turkish, and English are all covered in few-shot learning. There are consistent gains in favor of meta-learning and meta-distillation learning compared to their fine-tuning counterparts on top of the off-the-shelf model (S-BERT only) for all types of evaluations. trans is the best meta-transfer mode, especially for MAML, and this suggests that curating different transfer modes for different meta-learning processes is beneficial and leads to better generalization than fine-tuning on them jointly. mixt is weaker than trans, which implies that jointly optimizing different forms of transfer across meta-tasks makes it harder for MAML to converge or generalize. MAML-Align is shown to be better for combining different optimization objectives. 
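For concreteness, the combined objective optimized in the outer loop of MAML-Align (Algorithm 3) can be sketched as follows; how retrieval scores are pooled into distributions and the value of the weight lambda are simplifying assumptions rather than the exact formulation in our implementation.

```python
import torch
import torch.nn.functional as F

def outer_objective(teacher_query_loss: torch.Tensor,
                    student_query_loss: torch.Tensor,
                    teacher_scores: torch.Tensor,   # similarity scores of the mono->bi (teacher) learner
                    student_scores: torch.Tensor,   # scores of the bi->multi (student) learner, same shape
                    lam: float = 0.1) -> torch.Tensor:
    # task term: average of the query-set losses of the two tracks
    l_task = 0.5 * (teacher_query_loss + student_query_loss)
    # distillation term: KL between softmax-normalized retrieval scores (student || teacher)
    l_kd = F.kl_div(F.log_softmax(student_scores, dim=-1),
                    F.softmax(teacher_scores, dim=-1),
                    reduction="batchmean")
    return l_task + lam * l_kd    # theta <- theta - beta * grad of this combined loss
```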
Figure 5 shows a multilingual performance comparison between different sampling modes in meta-tasks constructions. In each meta-task, we either sample the query set that is the most similar to its corresponding support set (_Similar_) or randomly (_Random_). We hypothesize that the sampling approach plays a role in stabilizing the convergence and generalization of meta-learning. While we were expecting that sampling for each support set a query set that is the most similar to it would help meta-learning converge faster and thus generalize better, it generalized worse on the multilingual performance in this case. On the other hand, random sampling generalizes better to out-of-sample test distributions leading to lower biases between languages in the multilingual evaluation mode. Figure 6 shows the results for different sampling modes of negative examples in the triplet loss. For each support and query set in each meta-task, we either sample random, hard, or semi-hard triplets to test the added value of triplet sampling in few-shot learning. While we expect training with more hard triplets to help converge the triplet loss in MAML, the multilingual performance using this type of sampling falls short of random sampling. This is due to the fact that more sophisticated ways of triplet loss sampling usually require a more careful hyperparameter tuning to pick the right amount of triplets. For few-shot learning applications, this usually results in a significant reduction in the number of training examples, which could further hurt the generalization performance. In future work, we plan to investigate hybrid sampling approaches to monitor at which point in meta-learning the training should focus more on hard or easy triplets. This could be done by proposing a regime for making the sampling of meta-tasks dynamic and flexible to also combat meta-over-fitting. ## 7 Conclusion In this work, we adapt multilingual meta-transfer learning combining MAML and knowledge distillation to multilingual semantic search. Our experiments show that our multilingual meta-knowledge distillation approach outperforms both vanilla MAML and fine-tuning approaches on top of a strong sentence transformers model. We evaluate comprehensively on two types of multilingual semantic search and show improvement over sentence transformers even for languages not covered during meta-learning. Figure 4: mAP@20 multilingual 5-fold cross-validated performance on LARQA between different meta-transfer modes for Fine-tune and MAML models. The gap is large between Fine-tune and MAML across all meta-transfer modes and is even larger to the favor of MAML when trans mode (the composed mode that mixes between mono\(\rightarrow\)bi and bi\(\rightarrow\)multi in the meta-training and meta-validation, respectively) is used. Figure 5: mAP@20 multilingual 5-fold cross-validated performance on LARQA between different query set sampling modes in meta-tasks for MAML and MAML. We notice that random query sampling has better generalization for both models. Figure 6: mAP@20 5-fold cross-validated multilingual performance over different triplet negative sampling modes on LARQA tested on different languages using MAML-Align. We provide both average numbers and standard deviation intervals. Random sampling seems best on average for few-shot learning, whereas hard sampling is more stable across cross-validation splits. 
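As a rough illustration of the negative-sampling modes compared in Figure 6, the selection of one negative per query from a pool of candidate embeddings could look as follows; the margin value and the fallback rule for queries without semi-hard candidates are assumptions.

```python
import torch

def mine_negatives(q_emb, pos_emb, cand_emb, mode="random", margin=0.2):
    """q_emb, pos_emb: (B, d); cand_emb: (N, d). Returns one negative index per query."""
    d_pos = (q_emb - pos_emb).pow(2).sum(-1)                  # squared anchor-positive distances, (B,)
    d_neg = torch.cdist(q_emb, cand_emb).pow(2)               # squared anchor-candidate distances, (B, N)
    if mode == "random":
        # may occasionally pick the positive itself; ignored in this sketch
        return torch.randint(cand_emb.size(0), (q_emb.size(0),))
    if mode == "hard":                                        # closest candidate to the anchor
        return d_neg.argmin(dim=-1)
    # semi-hard: farther than the positive but still within the margin
    semi = (d_neg > d_pos.unsqueeze(1)) & (d_neg < d_pos.unsqueeze(1) + margin)
    masked = d_neg.masked_fill(~semi, float("inf"))
    idx = masked.argmin(dim=-1)
    no_semi = ~semi.any(dim=-1)                               # fall back to hardest negative if none qualify
    idx[no_semi] = d_neg[no_semi].argmin(dim=-1)
    return idx
```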
### Limitations Due to the lack of time and resources, exploring different combinations of languages in the construction of the query and the content to be retrieved is not feasible. For the same reason, performing an extensive hyperparameter search over the different model variants, modes of transfer, language combinations, etc. is not feasible either. We follow a consistent configuration of the hyperparameters for each of the two downstream tasks, which we deem a fair comparison across all setups and model variants. The insights from this study are tied to the experimental setup that we describe extensively in the main paper and appendix. We also face memory constraints when training meta-learning algorithms to rank and retrieve sentences from multiple languages at the same time for a single query. These memory constraints make it challenging to explore more sophisticated state-of-the-art Sentence Transformers such as sentence-T5 or GPT Sentence Embeddings (SGPT) [22, 23]; applying MAML as an upstream model on top of a T5-based downstream model is even more computationally infeasible. Our main goal is to show the advantage of meta-learning, and since our upstream approach is model-agnostic, it can be continuously adapted to novel embedding approaches as they evolve. There is also a shortage of large-scale multilingual semantic search datasets, especially for the symmetric case and especially at the phrase level. This restricts our evaluation of symmetric semantic search to the bilingual and monolingual variants. In future work, we plan to construct and annotate a semantic search benchmark for ambiguous short queries aligned across multiple languages.
2301.13387
Deep Learning for Reference-Free Geolocation for Poplar Trees
A core task in precision agriculture is the identification of climatic and ecological conditions that are advantageous for a given crop. The most succinct approach is geolocation, which is concerned with locating the native region of a given sample based on its genetic makeup. Here, we investigate genomic geolocation of Populus trichocarpa, or poplar, which has been identified by the US Department of Energy as a fast-rotation biofuel crop to be harvested nationwide. In particular, we approach geolocation from a reference-free perspective, circumventing the need for compute-intensive processes such as variant calling and alignment. Our model, MashNet, predicts latitude and longitude for poplar trees from randomly-sampled, unaligned sequence fragments. We show that our model performs comparably to Locator, a state-of-the-art method based on aligned whole-genome sequence data. MashNet achieves an error of 34.0 km^2 compared to Locator's 22.1 km^2. MashNet allows growers to quickly and efficiently identify natural varieties that will be most productive in their growth environment based on genotype. This paper explores geolocation for precision agriculture while providing a framework and data source for further development by the machine learning community.
Cai W. John, Owen Queen, Wellington Muchero, Scott J. Emrich
2023-01-31T03:37:47Z
http://arxiv.org/abs/2301.13387v1
# Deep Learning for Reference-Free Geolocation of ###### Abstract A core task in precision agriculture is the identification of climatic and ecological conditions that are advantageous for a given crop. The most succinct approach is geolocation, which is concerned with locating the native region of a given sample based on its genetic makeup. Here, we investigate genomic geolocation of _Populus trichocarpa_, or poplar, which has been identified by the US Department of Energy as a fast-rotation biofuel crop to be harvested nationwide. In particular, we approach geolocation from a reference-free perspective, circumventing the need for compute-intensive processes such as variant calling and alignment. Our model, MashNet, predicts latitude and longitude for poplar trees from randomly-sampled, unaligned sequence fragments. We show that our model performs comparably to Locator, a state-of-the-art method based on aligned whole-genome sequence data. MashNet achieves an error of 34.0 km\({}^{2}\) compared to Locator's 22.1 km\({}^{2}\). MashNet allows growers to quickly and efficiently identify natural varieties that will be most productive in their growth environment based on genotype. This paper explores geolocation for precision agriculture while providing a framework and data source for further development by the machine learning community. ## 1 Introduction Pollen dispersal in natural populations of _Populus trichocarpa_, as well as other species, results in correlations between geography and genetic variation. These correlations can be leveraged to predict geographic origin of a sample from genetic data as demonstrated in previous studies [1][2]. To date, all studies have achieved this prediction task using aligned, whole-genome sequence data. Here, we demonstrate our novel tool MashNet that predicts geographic origin from unaligned sequence fragments. We compare it to the current state of the art implementation, Locator [1], which uses a deep learning architecture on aligned sequences. Our method performs similarly despite using more noisy sequence read-only information. Sequence alignment is a necessary procedure to transform short read fragments into genome-scale information. Modern technology is only capable of sequencing small sections of DNA, so large-scale genotyping of individuals using sequencing data requires _post hoc_ alignment and variant-calling algorithms, usually relative to a well-established reference genome sequence [3]. These algorithms are computationally intensive procedures that create major bottlenecks between sample collection and downstream analysis of variant data. Further, although reference genomes are increasingly common due to advances in both technology and assembly algorithms[4], they still require large amounts of sequence data and resource intensive _de novo_ assembly. These demands prevent many non-model organisms from being sequenced. Our approach is alignment-free and therefore can be applied to the many non-model organisms currently without a reference genome. It also circumvents the need for variant-calling algorithms allowing researchers to more rapidly analyze samples. For example, one can envision sampling natural genetic diversity in a species, and then using computational methods to suggest the ancestral origin(s) of unknown samples. This process is called geolocation. A simple spatial-climate map, such as the Koppen-Geiger climate system [5], could then map origin locations to desired growing environments. 
Being able to pinpoint these environments is key to precision agriculture. In this study, we focus on _Populus trichocarpa_ (poplar) because of interest from the Department of Energy (DOE) in developing it as a fast-rotation biofuel crop to be viable nationwide [6]. Poplar's species range extends from southern California all the way to British Columbia encompassing a latitudinal range of 38.88 to 54.25 degrees [7]. This range includes a diversity of macro and micro-environments that have likely shaped subpopulations of this species. Our goal is to predict the latitudinal and longitudinal coordinates of these genotypes from their sequence data, a task known as genomic geolocation. Geolocation has applications in precision agriculture. When considering a new site for a tree nursery it is desirable to clone samples well-suited to that environment. Given that these trees have often been previously cloned, and relocated to common gardens and greenhouses for commercial use and agricultural research, it can be difficult to obtain meta-data locating them to their origin environment. MashNet resolves this issue allowing growers to rapidly identify the origin location of their trees, and identify which will be most productive in the new climate. In this work, we present MashNet, a deep learning-based model that can perform accurate geolocation of poplar trees. The model uses a multi-task neural architecture to jointly predict latitude and longitude coordinates for each sample. Importantly, this method uses Mash sketches [8], an alignment-free feature extraction method that randomly samples k-mers from sequencing read data. We demonstrate that MashNet can use alignment-free Mash sketches to compete with WGS-based methods. We open source our methods and data while highlighting the importance of this task to precision agriculture. ## 2 Methods ### Data We consider 1,252 poplar genotypes from a representative sampling of the latitudinal distribution of its species range (see Figure 1 panel A). Genome re-sequencing, alignment and variant-calling of this population was previously described by Zhang et al. [3]. We use these aligned and variant called sequences in Locator as a performance benchmark for our alignment-free method. MashNet is trained on unaligned reads. Out of the total 1,252 samples, 1,024 have reads that are publicly available for download from the NCBI's Sequence Read Archive (SRA). A map of sample ID's to SRA key is included with the meta data in our Github 3. During training, meta data labels with ground-truth latitude and longitude coordinates for all 1,252 samples are used. These are also included on our GitHub repository. Unfortunately, we are unable to publicly host the aligned WGS used to train and test Locator, as well as the remaining 228 sample reads. This is due to current access restrictions. Footnote 3: All codes and data found at [https://github.com/owencqueen/MashPredict](https://github.com/owencqueen/MashPredict) Associated with each sample are several meta-data variables. The first is river system, which corresponds to the nearby river from which each sample was originally collected. This variable in particular shows strong signal, as is evidenced by Figure 1C, which projection of each sample colored by its associated river system. This projection illustrates the correlation between origin location and genotype that we will leverage to geolocate these samples. ### MinHashing Unaligned Reads A major innovation of this work is achieving prediction from unaligned reads. 
We accomplished this using the Mash software [8]. This process uses read fragments to create a reduced representation of the genome, _i.e._, a "sketch" of the genome, which has been shown to accurately reflect genome-wide structure [8]. It does this by randomly sub-sampling k-mer's from the read fragments using a minHash-based approach. When using Mash the user must define the k-mer length to use, as well as the number of hash functions to store which determines the sketch size (\(s\)). For our study, we chose a k-mer length of 21. This is the default in Mash and their studies demonstrate this k-mer length robustly maps to Average Nucleotide Identity (an alignment-based measure of mutation distance) across different sketch sizes. Mash states that "Increasing sketch size improves the accuracy of Mash estimates, especially for divergent genomes" [8]. To test this, we ran MASH at four different sketch sizes: \(s\)=500, 2000, 4000, and 50,000. We trained and tested our prediction algorithms across all four sketch sizes to compare performance (see Table 1 and Figure 2). Once sketched, we devised a novel application of the Mash output. The input to Mash is a dataset of \(n\) samples of reads \(R_{i}\) that correspond to sequencing reads for a given poplar tree, \(\mathcal{D}=\{R_{1},...,R_{n}\}\). Assuming no hash collisions, each hash function \(H_{i}\) is a unique identifier for a 21 length k-mer. Figure 1: A) Map of the origin location of all 1,252 poplar samples. B) Reduced set of 919 samples used for PCA-UMAP clustering by river system. Downsampling from 1,252 samples is achieved by retaining only river systems with \(\geq\) 35 members. C) PCA-UMAP embedding of 919 clustering samples colored by river system. Mash samples \(s\) random k-mers per \(R_{i}\), thereby resulting in a set of \(s\) hash functions, known as a sketch: \(M_{i}=\{H_{1}^{i},...,H_{s}^{i}\}\). \(s\) is a user-defined parameter called sketch size that is discussed in subsequent sections. This procedure is repeated for every sample in \(\mathcal{D}\) to build a set of sketches \(\{M_{1},...,M_{n}\}\). Now, a union is taken over all hash functions in each sketch in order to construct a set of hash functions \(\mathcal{H}=\bigcup_{i=1}^{n}M_{i}\). Note that \(|\mathcal{H}|\) is guaranteed to be upper-bounded by \(s\times n\), but often \(|\mathcal{H}|\ll s\times n\) because there are common k-mers shared across samples \(R_{i}\). Typically, these sketches are used for a simple pairwise comparison of genomes to estimate genetic distance. For a pair of genotypes, this is done by set comparison of the hash functions in each genome sketch, such as a Jaccard index. Here, instead of only looking at pairwise comparisons, we look at set membership across the entire population. This is achieved by building a presence-absence matrix for the hash functions in each sketch. Taking the set of all hash functions \(\mathcal{H}\), we construct a vector by placing a 1 if the hash is found in sketch \(M_{i}\) and a 0 if it is not found in sketch \(M_{i}\). Formally, each vector representation \(V_{i}\) corresponding to a sketch \(M_{i}\) is defined by \(V_{i}=\{\mathbbm{1}_{[H_{j}\in M_{i}]}|H_{j}\in\mathcal{H}\}\) where the indicator function \(\mathbbm{1}\) sets the value to 1 if \(H_{j}\in M_{i}\) and 0 otherwise. This converts each set \(M_{i}\) to a constant-size binary vector \(V_{i}\). 
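A minimal sketch of this vectorization step is given below; it assumes the per-sample hash values have already been extracted from the Mash sketch files, which is an adaptation on our side since Mash itself normally reports only pairwise distances.

```python
import numpy as np

def sketches_to_matrix(sketches):
    """sketches: dict mapping sample id -> set of k-mer hash values (one Mash sketch per sample)."""
    all_hashes = sorted(set().union(*sketches.values()))      # the union H over all sketches
    col = {h: j for j, h in enumerate(all_hashes)}
    X = np.zeros((len(sketches), len(all_hashes)), dtype=np.uint8)
    sample_ids = sorted(sketches)
    for i, sid in enumerate(sample_ids):
        for h in sketches[sid]:
            X[i, col[h]] = 1                                  # 1 iff hash H_j is present in sketch M_i
    return X, sample_ids, all_hashes
```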
Assuming no hash collisions, this means our matrix represents a random sampling of k-mers, with a 1 indicating that k-mer as present in a genotype and 0 indicating its absence. This provides a binary input matrix for our deep learning architecture MashNet. ### MashNet Model MashNet is a neural network for prediction and representation of Mash sketches. This network takes the binary Mash matrix as input and performs predictions for latitude and longitude. The model architecture consists of a combination of linear and LayerNorm [10] layers followed by ELU [11] activation functions. We also chose to use a Batch Normalization [12] layer to process the input, following Locator's [1] similar decision. We empirically found that this architecture improved performance on the sparse Mash sketch input (see Figure 2). MashNet can be used for prediction of any phenotype, but we chose to focus it on geolocation, _i.e._, predicting latitude and longitude coordinates for each sample. As the output of the network, we have a multi-task learning setup, where we jointly predict both latitude and longitude in the same forward pass. The MashNet model \(F\) takes a vectorized Mash sketch \(V_{i}\) as input and outputs a coordinate \(\mathbb{R}^{2}\). Our loss function is a simple Absolute Error (AE) with equal weight for both latitude and longitude, _i.e._, \(L=L_{\text{lat}}+L_{\text{long}}\), where \(L_{\text{lat}}\) is the AE for latitude and \(L_{\text{long}}\) is the AE for longitude. ### Experiments and Comparison Models For geolocation, we compare MashNet to several other non-neural models. First, we use \(k\)-nearest neighbors (kNN) on the Mash distances. Mash computes pairwise distances with a set-based distance function that approximates the Jaccard index between each sample, as discussed in [8]. We compute this pairwise distance matrix and use this as a distance metric in the kNN prediction. Additionally, XGBoost and ElasticNet algorithms are employed on the binarized Mash sketches. For each model, we perform a search over a hyperparameter space to optimize model performance: for kNN, we search over k values, for XGBoost and ElasticNet, we search over parameters controlling regularization strength and learning rate. We also compare several WGS methods against models trained on sketch-based inputs. First, we use a state-of-the-art method Locator [1], which was designed for direct geolocation prediction from WGS data. Finally, we use XGBoost [13] and ElasticNet [14] algorithms on a principal component analysis (PCA)-reduced representation. PCA is used to reduce the WGS representations because of the large size and high level of sparsity. PCA is a widely established technique in bioinformatics, and it has previously shown to be effective in compressing WGS samples [9]. Each experiment is performed with 30 separate 5-fold cross validations, each with individual random seeds. Performance metrics are averaged across all folds for one cross validation, and we report the mean and standard error across all 30 cross validations for each separate experiment. Each error in Table 1 is reported as mean absolute error (MAE) in kilometers, which is calculated from latitude and longitude coordinates via geodesic distance provided by the geopy package [15]. We only use 5 trials of cross validation on Locator because of prohibitively long runtimes. For MashNet, we standard scale the latitude and longitude before training and inverse scale the outputs to compute errors. 
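A sketch of such a multi-task regressor and its loss, consistent with the description above, is shown below; the hidden width, depth, and optimizer settings are placeholders rather than the exact MashNet hyperparameters.

```python
import torch
import torch.nn as nn

class MashNetSketch(nn.Module):
    def __init__(self, n_hashes, hidden=256, n_blocks=3):
        super().__init__()
        # BatchNorm on the (binary, cast-to-float) input, then Linear + LayerNorm + ELU blocks
        layers = [nn.BatchNorm1d(n_hashes), nn.Linear(n_hashes, hidden), nn.ELU()]
        for _ in range(n_blocks - 1):
            layers += [nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.ELU()]
        layers += [nn.Linear(hidden, 2)]          # joint (latitude, longitude) output, standard-scaled
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def geo_loss(pred, target):
    # L = L_lat + L_long with equal weights (absolute error on standard-scaled coordinates)
    return (pred - target).abs().sum(dim=-1).mean()
```

The two-dimensional head predicts the standard-scaled coordinates, so predictions are inverse transformed before the kilometer error is computed.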
This standard scaling approach involves transforming the data to a normal distribution with mean\(=0\) and standard deviation\(=1\). It seemed to have no detectable effect on performance for alternative models. ## 3 Results Locator is the best-performing model, pinpointing the location to within 22.1km\({}^{2}\) of error. ElasticNet and XGBoost, which are both trained on PCA-reduced versions of the WGS SNPs, perform worse than Locator on the geolocation task. Within the Mash-based predictors, MashNet outperforms all methods, regardless of the sketch size. kNN performs better than both ElasticNet and XGBoost; this is likely because distance is defined based on the set-based metric used in the original Mash publication [8]. ElasticNet consistently outperforms XGBoost, with XGBoost being the least predictive model for Mash-based input data. Comparing across WGS and Mash-based predictors, WGS predictors perform better overall. This result is expected given the longer-range structure that is elucidated during the alignment procedure. \begin{table} \begin{tabular}{c|c|c|c} & Locator & ElasticNet & XGBoost \\ \hline WGS & 22.10\(\pm\)1.37 & 236.54\(\pm\)0.02 & 37.77\(\pm\)0.09 \\ \end{tabular} \end{table} Table 1: Mean absolute error in kilometers\({}^{2}\) for various models trained on whole-genome sequence inputs (1a) and Mash sketch-encoded vectors (1b). _Table 1a_ ElasticNet and XGBoost are trained on PCA-reduced versions of SNP data obtained after sequence alignment. _Table 1b_ sketch size is shown in units of 1000 sketches. kNN is trained on Jaccard distance between each sample while all other methods are trained on vectorized Mash sketches. Figure 2: Inspecting errors across varying sketch sizes for all algorithms applied to unaligned read fragments. However, several key patterns emerge. First, MashNet still outperforms both WGS-based ElasticNet and XGBoost when using a sketch size of 50,000. This highlights the utility and capacity of MashNet and neural networks for geolocation, even from noisy data such as Mash sketches. Second, on the WGS data XGBoost outperforms ElasticNet, but on the Mash-based input ElasticNet performs better. This is most likely due to the differences in data geometry. The Mash-based input data are sparse, binary vectors while PCA-reduced WGS inputs are dense with fewer dimensions. The geolocation task is highly nonlinear, so in the dense WGS setting, we expect a tree-based model (XGBoost) to perform better than a linear model (ElasticNet). We also perform benchmarking across different numbers of Mash sketches. Sketch size is an important tuning factor when using MashNet. As seen in Table 1, performance increases with increasing sketch size. In Mash, compute time to build a sketch is largely invariant to sketch size, however overall computational costs will increase due to higher dimensional input being passed to downstream prediction models. This is a trade-off that must be managed. In general, traditional, non-deep learning-based methods (ElasticNet and XGBoost) perform poorly on Mash sketches, highlighting the need for an alternative such as our model MashNet. However, the set-based distance metric leveraged by the original Mash publication has been further validated here, showing a clear ability to recover significant predictive signal using kNN, which even outperforms more sophisticated methods such as ElasticNet and XGBoost. ## 4 Discussion The genome sciences contain many applications for reference-free prediction using computational techniques. 
To the best knowledge of the authors, this study is one of the first attempts at trait prediction from unaligned read fragments. Innovations in this space have the potential for large impact on topics ranging from precision agriculture to medical diagnostic tools. In this study, we present a solution to the challenging task of geolocation of poplar trees from unaligned read fragments. We approach this problem by leveraging a commonly-used bioinformatics tool, Mash, and create a framework that can circumvent the computationally expensive procedures of genome assembly and short read alignment. Our solution, MashNet, uses a neural network to predict latitude and longitude coordinates for each sample, achieving within 12.1 km\({}^{2}\) prediction accuracy to the state-of-the-art whole-genome sequence-based method, Locator [1]. Future studies will attempt to improve our predictive capacity using unaligned reads. The initial studies undertaken in this paper outline two paths to improvement. The first is to try to pre-identify important k-mers on which screening should be focused. For example, in currently unpublished work we have identified regulatory hotspots through genome-wide association (GWAS) mapping of climatic variation. We hypothesize that if we could sample k-mer's directly from these hotspots--and not randomly as we do currently-- we could focus on the higher variance regions and therefore significantly boost prediction performance. However, this approach would require _a priori_ knowledge of the genomic location of these hotspots and therefore pre-existing aligned WGS data. Thus, while such a hybrid approach would likely improve predictive performance, it would also nullify the generalizability of our MashNet approach to non-model organisms. A second approach would be to increase the sketch size of the minHashing procedure. In Figure 2, we observe that there seems to be a performance plateau associated with increasing sketch size. We hypothesize this occurs once sufficient sampling coverage of the genome has been achieved. This suggests that while increasing sketch size would lead to performance gains, these gains are likely to be marginal. This presents an open question: MashNet can predict locations within 34km\({}^{2}\), but could a more advanced technique predict these locations with less error? Given the importance of the geolocation task for precision agriculture, we present this as an **open problem for the machine learning community**. Our tool, MashNet, demonstrates how deep learning can achieve impressive results on reference-free geolocation tasks, even when compared to state-of-the-art models based on WGS representations. We believe that more advanced tools can be developed for this area and used to improve prediction accuracy of the ideal ecosystem in which a crop should be grown. We open-source the codebase and datasets used for this study with the hope that future development will focus on new techniques for representing unaligned, fragmented reads for machine learning, as well as more sophisticated prediction architectures.
2308.00196
Testing theories of the glass transition with the same liquid, but many kinetic rules
We study the glass transition by exploring a broad class of kinetic rules that can significantly modify the normal dynamics of super-cooled liquids, while maintaining thermal equilibrium. Beyond the usual dynamics of liquids, this class includes dynamics in which a fraction $(1-f_R)$ of the particles can perform pairwise exchange or 'swap moves', while a fraction $f_P$ of the particles can only move along restricted directions. We find that (i) the location of the glass transition varies greatly but smoothly as $f_P$ and $f_R$ change and (ii) it is governed by a linear combination of $f_P$ and $f_R$. (iii) Dynamical heterogeneities are not governed by the static structure of the material. Instead, they are similar at the glass transition across the ($f_R$, $f_P$) diagram. These observations are negative items for some existing theories of the glass transition, particularly those reliant on growing thermodynamic order or locally favored structure, and open new avenues to test other approaches.
Cristina Gavazzoni, Carolina Brito, Matthieu Wyart
2023-07-31T23:25:23Z
http://arxiv.org/abs/2308.00196v4
# Testing theories of the glass transition with the same liquid, but many kinetic rules ###### Abstract We study the glass transition by exploring a broad class of kinetic rules that can significantly modify the normal dynamics of super-cooled liquids, while maintaining thermal equilibrium. Beyond the usual dynamics of liquids, this class includes dynamics in which a fraction \((1-f_{R})\) of the particles can perform pairwise exchange or'swap moves', while a fraction \(f_{P}\) of the particles can only move along restricted directions. We find that (i) the location of the glass transition varies greatly but smoothly as \(f_{P}\) and \(f_{R}\) change and (ii) it is governed by a linear combination of \(f_{P}\) and \(f_{R}\). (iii) Dynamical heterogeneities are not governed by the static structure of the material. Instead, they are similar at the glass transition across the \((f_{R},f_{P})\) diagram. These observations are negative items for some existing theories of the glass transition, particularly those reliant on growing thermodynamic order or locally favored structure, and open new avenues to test other approaches. Understanding why liquids glass formers cease to flow near their glass transition \(T_{g}\) remains a challenge. At that point, the relaxation time \(\tau_{\alpha}\) beyond which stress relaxes is of order of minutes, which is fifteen decades larger than at high temperatures. From \(\tau_{\alpha}\), the activation energy \(E_{a}\) can be defined as \(\tau_{\alpha}=t_{0}\exp(E_{a}/T)\), where \(t_{0}\) is a microscopic time scale and \(T\) is the temperature (in the units of the Boltzmann constant). In liquids called fragile, \(E_{a}\) can increase five-fold or more under cooling [1; 2; 3; 4]. As the dynamics slows down, it also becomes more and more heterogeneous, corresponding to a growing length scale \(\xi\)[5; 6; 7; 8]. Contrasting theories seek to explain these two facts. In the first class of views, including Adam-Gibbs [9] and Random First Order Theory (RFOT) [10; 11; 12], the increase of activation energy stems from the emergence of some order on a growing length \(\xi\), that must be destroyed by cooperative motion on that scale to relax the material. \(E_{a}\) can then be expressed in terms of purely thermodynamic quantities, independently of the kinetic rules governing the dynamics. Some real space approaches associate such a growing order to locally favored structures [13; 14]. A second viewpoint seeks to capture the mechanism of dynamical facilitation, whereby the relaxation of a given region speeds up the relaxation of regions nearby. Kinetically constrained models [15; 16], such as the East model, capture this effect and suggest a scenario [17; 18; 19] in which thermodynamics plays almost no role, but dynamics is heterogeneous and the growth of activation energy stems from non-local rearrangements taking place over \(\xi\). At odd with these two views, a third approach that includes free volume [20] or elastic [21; 22; 23; 24] models, assumes that the activation energy is not controlled by a growing length scale. Instead, it is governed by the energy cost of elementary rearrangements of a few particles jumping over a barrier. The elastic coupling between rearrangements [25; 26; 27; 28; 29; 30; 31; 32; 33] leads to a correlated dynamics [34] that can be described in terms of avalanches of activated events [35]. 
Molecular dynamics simulations of models of supercooled liquids have been extremely informative to characterize the glass transition [36], yet distinct views on this phenomenon have been hard to definitely contrast [12]. Our present goal is to show that for some popular models of liquids, a very broad class of kinetic rules can be considered, which can continuously (and very significantly) speed up or slow-down the normal dynamics, while preserving thermal equilibrium. Although these rules would be hard to implement in actual experiments, they are equivalent to dynamics with purely local rules, and as such theories of the glass transition should apply to them. This approach thus opens an avenue to test more stringently theories of glassy dynamics. Specifically, our work builds on'swap' Monte Carlo algorithms. In these algorithms, pairs of particles can exchange positions, in addition to their usual translation moves [37; 38; 39]. For continuously polydisperse systems, these algorithms can speed up the dynamics by 15 orders of magnitude or more [39], and can change the glass transition temperature \(T_{g}\) by up to a factor of two. It allows one to explore glasses with a stability similar to that reached in experiments. Our central result is to introduce a family of kinetic rules, where a fraction \(f_{R}\) of the particles cannot swap, and a fraction \(f_{P}\) of the particles can only move along randomly-chosen hyperplanes. We provide systematic measurements of the dynamics in the \((f_{R},f_{P})\) diagram in two and three dimensions, that includes the normal dynamics \((1,0)\) as well as swap \((0,0)\). As an example we focus on hard particles, where the dynamics is controlled by the packing fraction \(\phi\). Central observations are that: (i) the relaxation time \(\tau_{\alpha}\) can be immensely increased or decreased with respect to the normal dynamics. (ii) It appears to be controlled by a linear combination of the parameters \((f_{R},f_{P})\). In particular, no observations single out the normal dynamics in this diagram. \(\tau_{\alpha}\) does not display a plateau in its vicinity, where kinetic constraints would be irrelevant. (iii) Dynamical heterogeneities are similar at the glass transition throughout this phase diagram, despite the fact that the glass transition packing fraction \(\phi_{G}\) can vary very significantly. We argue that the above points in particular are negative items for theories based on a growing thermodynamic order. We discuss how devoted studies could be used in these models to test alternative views of the glass transition. **Changing continuously the kinetic rules of liquids:** Swap moves lead to a considerable speed up of the dynamics [37; 38; 39]. Importantly, despite its apparent non-local character, swap dynamics can be conceived as a purely local dynamics. Following [37; 40; 41], swap is equivalent to considering identical particles endowed with an additional 'breathing' degree of freedom, allowing them to change their size according to some chemical potential \(\mu(R)\). Indeed, letting pairs of particles exchange is equivalent, in the thermodynamic limit, to letting individual particles exchange with a bath of particles of all possible sizes \(R\). \(\mu(R)\) is then chosen to obtain the desired polydispersity, which is continuous for continuously poly-disperse particles [37; 40]. Adding such a degree of freedom per particle dramatically softens the energy landscape [40; 42; 43; 44] while preserving thermodynamic and structural properties. 
It also affects the dynamics: following the centers of the particles, and considering that a swap move corresponds to a change of the size of two particles but not of their position, leads to the following observation. Dynamical correlations grow under cooling as for the normal dynamics, but the correlation length starts growing at a much smaller temperature [39]. To study such effects and their consequences more systematically, we introduce the parameter \(f_{R}\in[0,1]\), characterizing the fraction of particles that cannot swap (see also [45; 46]). Following this logic, we propose to add even more kinetic constraints by restricting the motion of a fraction \(f_{P}\in[0,1]\) of the particles, as illustrated in Fig.1. Each such particle is forbidden to move along one random direction associated with it, and for an infinite system each would be confined to a random hyperplane. Overall, the number of degrees of freedom for a system of \(N\) particles is then \(N(d-f_{P}+1-f_{R})\). Note that for the periodic boundary conditions we consider below, this dynamics is ergodic, as such hyperplanes visit the neighborhood of any point with probability one if their orientation is randomly chosen. For dynamics that satisfy detailed balance like ours, this ensures that thermal equilibrium will eventually be reached. Thus structural and thermodynamic properties are only governed by \(\phi\), independently of the choice of kinetic rules embodied in \((f_{R},f_{P})\). Note that other procedures were proposed to reduce the number of degrees of freedom, such as pinning particles starting from an equilibrated system as proposed e.g. in [47]. Yet in that case ergodicity is obviously broken once pinned particles are chosen, and the dynamical properties of the system are not translation-invariant anymore. In the present work, we specifically perform Monte Carlo simulations of systems with \(N\) continuously poly-disperse hard-sphere particles of packing fraction \(\phi\) in a regular box of linear size \(L\), with periodic boundary conditions. For hard particles, the actual value of the temperature is irrelevant (it simply rescales time), and \(\phi\) is the relevant control parameter. The choice of polydispersity is shown in the Appendix. We use system sizes \(N_{d=2}=484\) and \(N_{d=3}=512\), which allow us to perform extensive simulations as \(\phi\), \(f_{P}\) and \(f_{R}\) are varied, while keeping finite-size effects small for the dynamical range considered [48]. Our Monte Carlo algorithm follows previous choices [39]: it involves displacements and swap moves, where the latter are attempted with a probability of 20%. The magnitude \(\delta l\) of translation moves is chosen such that the acceptance ratio is 75%. Figure 1-(a,c) shows a schematic diagram of the different dynamics we explore in \(d=2\) and \(d=3\) respectively, and indicates in color the associated number of degrees of freedom.

Figure 1: Diagram indicating the number of degrees of freedom for spatial dimension \(d=2\) (a) and \(d=3\) (c) as a function of the fraction \(f_{R}\) of particles that cannot swap and the fraction \(f_{P}\) of particles whose translation motion is restricted. Panel (b) shows the particle positions at various time points within some time interval for \(d=2\). Particles that can move freely in all directions are shown in red, and particles that are restricted to linear motion appear in blue. For \(d=3\), this constraint is more gentle, as particles can still move on planes as sketched in (c). 
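To make the move set concrete, the following minimal sketch (illustrative code written for this text, not the production algorithm of the study; all parameter values are placeholders) implements the two kinds of Monte Carlo moves for hard disks in \(d=2\): translation moves that are projected onto a fixed random axis for the fraction \(f_{P}\) of restricted particles, and swap moves that are only attempted between particles allowed to swap, proposed with probability 20% as in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch of the (f_R, f_P) kinetic rules for hard disks (d = 2).
N, L, f_R, f_P = 64, 10.0, 0.5, 0.25
grid = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2) + 0.5
pos = grid * (L / 8)                              # overlap-free initial configuration
radii = rng.uniform(0.3, 0.5, size=N)             # continuous polydispersity (toy values)
can_swap = rng.random(N) >= f_R                   # a fraction f_R of particles never swap
restricted = rng.random(N) < f_P                  # a fraction f_P move along a fixed axis
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)
axis = np.stack([np.cos(theta), np.sin(theta)], axis=1)

def overlaps(i, trial_pos, trial_rad):
    """Hard-core check of particle i against all others, minimum-image convention."""
    d = pos - trial_pos
    d -= L * np.round(d / L)
    dist2 = np.sum(d * d, axis=1)
    dist2[i] = np.inf
    return np.any(dist2 < (radii + trial_rad) ** 2)

def mc_step(delta_l=0.1, p_swap=0.2):
    i = rng.integers(N)
    if rng.random() < p_swap:
        # Swap move: exchange the radii of two swappable particles (simplified check).
        j = rng.integers(N)
        if i == j or not (can_swap[i] and can_swap[j]):
            return
        if not overlaps(i, pos[i], radii[j]) and not overlaps(j, pos[j], radii[i]):
            radii[i], radii[j] = radii[j], radii[i]
    else:
        # Translation move; restricted particles only move along their fixed axis.
        step = rng.uniform(-delta_l, delta_l, size=2)
        if restricted[i]:
            step = np.dot(step, axis[i]) * axis[i]
        trial = (pos[i] + step) % L
        if not overlaps(i, trial, radii[i]):
            pos[i] = trial

for _ in range(10_000):
    mc_step()
```

Because both move types are symmetric proposals combined with a hard-core accept/reject rule, detailed balance is preserved, which is why thermal equilibrium and all static properties are unaffected by the choice of \((f_{R},f_{P})\).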
Figure 1-(b) illustrates an example of the particle trajectories over a short time interval for \(d=2\), revealing which particles are restricted to linear motion, and which ones are not. To achieve equilibration rapidly for any choice of \((\phi,f_{R},f_{P})\), we first use our fastest Monte Carlo dynamics, corresponding to \((\phi,f_{R}=0,f_{P}=0)\). Then our algorithm is run with the desired values of \((f_{R},f_{P})\) for \(10^{9}\) steps. To check that equilibration was reached, we compare relevant observables (such as correlation functions, see Appendix for details) in the first and second half of the run, and test for consistency.

**The normal dynamics continuously speeds up as more particles swap:** In order to characterize the dynamics of the system for \(d=3\), we consider the usual self-intermediate scattering function \(F_{s}(k,t)=\left\langle\frac{1}{N}\sum_{j}e^{i{\bf k}\cdot[{\bf r}_{j}(t)-{\bf r}_{j}(0)]}\right\rangle\), where \({\bf r}_{j}(t)\) is the position of the particle \(j\) at time \(t\) and the wave vector \({\bf k}\) satisfies \(|{\bf k}|=2\pi/\langle\sigma\rangle\). For \(d=2\), this definition is not well-suited (long-wavelength vibrational modes bring \(F_{s}(k,t)\) to zero for large \(t\) and \(N\), even in a crystal). This problem is fixed, as is usually done, by considering the relative motion of particles with respect to their neighbors. It can be achieved by introducing the correlation \(C(t)=\frac{1}{N}\sum_{i}C^{i}(t)\), with \(C^{i}(t)=\frac{1}{n_{i}(t)}\sum_{\langle ij\rangle}W\!\left(1-\frac{\left|[{\bf r}_{j}(t)-{\bf r}_{j}(t_{0})]-[{\bf r}_{i}(t)-{\bf r}_{i}(t_{0})]\right|}{\langle R\rangle}\right)\), where \(\langle R\rangle\) is the average radius of the particles, \(n_{i}(t)\) is the number of neighbors of the particle \(i\) at time \(t\), defined as all particles \(j\) for which \(|{\bf r}_{j}(t)-{\bf r}_{i}(t)|<3\langle R\rangle\), and \(W(x)\) is the Heaviside function. To extract a relaxation time \(\tau_{\alpha}\), \(F_{s}(k,t)\) and \(C(t)\) are fitted with a stretched exponential function \(f(t)=\exp(-(t/\tau_{\alpha})^{\beta})\), which defines the relaxation time \(\tau_{\alpha}\). The glass packing fraction \(\phi_{G}(f_{R},f_{P})\) is then defined such that \(\tau_{\alpha}=\tau_{\alpha}^{*}\equiv 10^{7}\) Monte Carlo steps per particle. The associated stretch exponents \(\beta_{G}\) are reported in the Appendix as a function of \((f_{R},f_{P})\). For \(d=3\), our choice of polydispersity and Monte Carlo algorithm follows closely previous works [39; 49], where it was shown that swap can speed up the dynamics by 15 decades or more. Here we observe a giant speed-up in our \(d=2\) system as well, which continuously builds up as \(f_{R}\) decreases toward the swap case \(f_{R}=0\) starting from the normal dynamics \(f_{R}=1\). Fig.2 shows the relaxation times \(\tau_{\alpha}\) as a function of the packing fraction \(\phi\) for different values of \(f_{R}\). It is notable that \(\tau_{\alpha}\) depends very significantly on \(f_{R}\), but that this dependence is continuous. Finally, the speed-up of swap can be approximately extrapolated to the case where the normal dynamics would reach experimental time scales (i.e. would increase by 15 decades). 
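As an illustration of the observables just defined, the sketch below (a toy example on a synthetic random-walk trajectory, not data from the study) computes \(F_{s}(k,t)\) and extracts \(\tau_{\alpha}\) from a stretched-exponential fit; in the actual analysis, \(\phi_{G}(f_{R},f_{P})\) is the packing fraction at which this \(\tau_{\alpha}\) reaches \(\tau_{\alpha}^{*}=10^{7}\) Monte Carlo steps per particle.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch (synthetic trajectory, illustrative only): estimate the
# self-intermediate scattering function F_s(k,t) and extract tau_alpha from a
# stretched-exponential fit, as described in the text.

rng = np.random.default_rng(1)
N, n_frames, sigma_mean = 200, 64, 1.0
k = 2 * np.pi / sigma_mean
# Fake 3D displacements standing in for stored Monte Carlo configurations.
steps = rng.normal(scale=0.05, size=(n_frames, N, 3))
traj = np.cumsum(steps, axis=0)

def F_s(t_index):
    # Average over particles and over the three axis-aligned wave vectors of modulus k
    # (real part of <exp(i k . dr)>).
    dr = traj[t_index] - traj[0]
    return np.mean(np.cos(k * dr))

times = np.arange(1, n_frames)
fs = np.array([F_s(t) for t in times])

def stretched_exp(t, tau_alpha, beta):
    return np.exp(-((t / tau_alpha) ** beta))

(tau_alpha, beta), _ = curve_fit(stretched_exp, times, fs, p0=(10.0, 1.0),
                                 bounds=([1e-3, 0.1], [1e6, 2.0]))
print(f"tau_alpha = {tau_alpha:.1f}, beta = {beta:.2f}")
# phi_G(f_R, f_P) would then be the packing fraction at which tau_alpha, measured
# this way, reaches the threshold tau_alpha* = 1e7 MC steps per particle.
```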
The range of the corresponding \(\phi_{G}^{exp}\) can be estimated by fitting the curve \(\tau_{\alpha}(\phi)\) of the normal dynamics \(f_{R}=1,f_{P}=0\) with plausible functional forms for \(\tau_{\alpha}(\phi)\), including the Vogel-Fulcher-Tammann form \(\tau_{\alpha}\sim\exp\left(\frac{A}{\phi_{VFT}-\phi}\right)\) or a form with non-singular activation energy, \(\tau_{\alpha}=\tau_{\infty}^{\prime}\exp\left(A^{\prime}(\phi_{0}-\phi)^{2}\right)\) (with \(\phi_{0}\) a fitted reference packing fraction). In agreement with previous such inferences [39; 49], we obtain that the swap dynamics \((f_{R}=0,f_{P}=0)\) has slowed down by only 3 to 6 decades at \(\phi_{G}^{exp}\): the speed-up conferred by swap is very large.

**Dependence of the glass transition packing fraction \(\phi_{G}\) on kinetic rules:** We now extend our study of the dynamics to the full phase diagram \((f_{R},f_{P})\). Fig.3-(a,e) show \(\tau_{\alpha}\) _vs._ \(\phi\) for such dynamics as the parameters \((f_{R},f_{P})\) are varied, both for \(d=2\) and \(d=3\). Fig.3-(b,f) represent the corresponding value of \(\phi_{G}\) in color, as extrapolated from the different measurements made at the points indicated by circles. The most remarkable results are that (i) \(\phi_{G}\) varies very significantly and continuously throughout the phase diagram. In particular, there is no evidence for a region surrounding the normal dynamics where kinetic constraints would not matter and \(\phi_{G}\) would plateau. The normal dynamics can be made continuously faster or slower. (ii) Observing these two diagrams, it is apparent that most of the variation of \(\phi_{G}\) is captured by a linear combination of \(f_{R}\) and \(f_{P}\). This hypothesis is tested in Fig.3-(d,h), where it is shown that \(\phi_{G}(f_{R},f_{P})=\phi_{G}(x)\), where \(x\) can be thought of as an effective number of constraints \(x=f_{P}+C(d)f_{R}\), with a coefficient \(C(d)\leqslant 1\) that characterizes the relative effect of breathing degrees of freedom _vs._ translational ones. (iii) These qualitative observations are independent of the spatial dimension \(d\).

**Dynamical heterogeneities are not governed by structural properties:** Various theoretical approaches propose that dynamical heterogeneities are controlled by equilibrated structural properties, such as a point-to-set length [11; 12] or the extension of locally favored structures [13; 14]. In our set-up, these properties depend only on \(\phi\), and not on the values of the parameters \(f_{R},f_{P}\). Here instead we find that, for the relaxation times we can probe and the system we consider, dynamical heterogeneities are not governed by \(\phi\) only, but instead depend strongly on \(f_{R},f_{P}\). In fact, these heterogeneities are qualitatively similar at very different values of \(\phi\), as long as the same slowing down has occurred. This result is illustrated in Fig.4 by considering all systems at their respective glass transition \(\phi_{G}\). Both the patterns of relaxation and their length scale are similar at \(\phi_{G}\), despite the large variation of the latter.

**Discussion:** Swap Monte Carlo algorithms can be restricted to local moves with no significant effects on the dynamics [39]. More fundamentally, they are equivalent to a purely local kinetic rule where particles can adapt their radii [37; 40]. For theories based on a growing order on some length \(\xi_{coop}\) such as RFOT, or approaches based on locally favored structures, barriers are cooperative and can be expressed in terms of thermodynamic quantities alone. 
They should be present for any local kinetic rules [44], including those studied here. Thus in these approaches, the core mechanism slowing down the dynamics near the glass transition should not depend on the choice of \((f_{R},f_{P})\). Authors in [50] acknowledged that swap and normal dynamics should asymptotically relax at the same pace according to RFOT, but countered that pre-asymptotic corrections (not currently described within this theory) may cause the observed difference. The main difficulty with this view is that the dynamics is very different as the parameters \((f_{R},f_{P})\) change: as shown in Fig.2, these dynamics do not become equivalent even after a slowing down of 15 decades accessible experimentally. To have predictive power, RFOT or theories based on thermodynamic quantities should specify a value for the parameters \((f_{R},f_{P})\) for which they apply. However currently, they don't. Our observation that \(\phi_{G}\) continuously depends on \((f_{R},f_{P})\), and does not plateau to some constant value around the normal dynamics \((1,0)\), shows that the normal dynamics is just one among many other dynamics. This point underlines the lack of predictive power of RFOT ot related theories - at least for the systems of continuously polydisperse particles studied here. By contrast, for theories based on kinetic constraints or on local barriers (known to depend on kinetic rules [40]), the fact that \(\phi_{G}\) should continuously vary with the amount of constraints is evident. The normal dynamics is slower simply because it is a kinetically constrained version of swap dynamics. Similar conclusions stem from our observation that dynamical heterogeneities are not controlled by \(\phi\) only, but depend systematically on \((f_{R},f_{P})\), and thus not controlled by a length scale that would directly appear in the structure. Indeed, dynamical correlations exist at large length scale already at small packing fraction for very constrained dynamics as shown above. Likewise, they are small for the swap dynamics at significant packing fraction for which the normal dynamics is already very correlated [39]. Thus, although locally favored structure may affect dynamical heterogeneities in specific systems (such as two-dimensional systems of discs that can display hexatic order [14]), it does not appear to be the case for continuously polydisperse particles, at least in the range of time scale that can be probed numerically. **Conclusion:** RFOT is a mean-field theory of the glass transition, which has shown undeniable successes. It is exact in infinite dimension [51; 52], correctly captures aspects of the thermodynamics of super-cooled liquids [53], and presents a dynamical transition [54; 11; 55] akin to mode coupling theory, that describes some aspects of liq uid dynamics at intermediate temperatures [56]. Yet our results support that its description of activation near the glass transition does not apply for the continuously poly-disperse systems studied here. Although our conclusions are restricted to these specific systems where swap is so performant, these models are known to capture the key facts associated with the glass transition [57]. The model we introduced, with its very large variation of dynamics with different kinetic rules, offers new opportunities to test different theories of the glass transition, extending previous observations that were only considered for a single kinetic rule. 
For example, a debated issue in the field is what observable predicts best the regions which will flow first, the so-called dynamical propensity. Candidates include short-time vibrations [58], the local yield stress [23] or special structures picked up by machine learning methods [59; 60]. All these quantities will strongly vary with kinetic rules- which one predicts best propensity throughout the plane \((f_{R},f_{P})\) will shed light on the relationship between structure, elasticity and dynamics. Most importantly, numerical tests put forward to test theories of the slowing down of the dynamics (i.e. how the activation energy \(E_{a}\) depends on temperature or packing fraction) can now be made much more stringent by varying kinetic rules. The list includes kinetically constrained models, that can be tested by analyzing irreversible events [61]. Likewise, the notion that glassy dynamics correspond to local rearrangements was supported by the measurement of the density of state of local barriers [24]; and these barriers were argued to be governed alternatively by global [21] or local [22] elasticity, or by the amplitude of vibrational motion [62]. Varying \((f_{R},f_{P})\) in these measurements will indicate which viewpoint is most likely correct. Overall, we have added a new axis to the glass transition problem by varying continuously kinetic rules, affecting strongly observations and giving a new handle to decide in the future which theory of the glass transition actually applies. ###### Acknowledgements. We thank the Simons collaboration for discussions, in particular L. Berthier, G. Biroli, M. Ozawa and C. Scalliet. We thank J.P. Bouchaud, T. deGeus, W. Ji, M. Muller, M. Pica Ciamarra, M. Popovic, A. Tahei and A. Rosso for discussions, and J. Kurchan for exchanges at the beginning of this project. MW acknowledges support from the Simons Foundation Grant (No. 454953 Matthieu Wyart) and from the SNSF under Grant No. 200021-165509. C.B. and C.G. thank the Brazilian agency CAPES and CNPq for the financial support..
2303.17798
Controlling structures, deformations and homotopy theory for averaging algebras
An averaging operator on an associative algebra $A$ is an algebraic abstraction of the time average operator on the space of real-valued functions defined in time-space. In this paper, we consider relative averaging operators on a bimodule $M$ over an associative algebra $A$. A relative averaging operator induces a diassociative algebra structure on the space $M$. The full data consisting of an associative algebra, a bimodule and a relative averaging operator is called a relative averaging algebra. We define bimodules over a relative averaging algebra that fits with the representations of diassociative algebras. We construct a graded Lie algebra and a $L_\infty$-algebra that are respectively controlling algebraic structures for a given relative averaging operator and relative averaging algebra. We also define cohomologies of relative averaging operators and relative averaging algebras and find a long exact sequence connecting various cohomology groups. As applications, we study deformations and abelian extensions of relative averaging algebras. Finally, we define homotopy relative averaging algebras and show that they induce homotopy diassociative algebras.
Apurba Das
2023-03-31T05:11:54Z
http://arxiv.org/abs/2303.17798v1
# Controlling structures, deformations and homotopy theory for averaging algebras ###### Abstract. An averaging operator on an associative algebra \(A\) is an algebraic abstraction of the time average operator on the space of real-valued functions defined in time-space. In this paper, we consider relative averaging operators on a bimodule \(M\) over an associative algebra \(A\). A relative averaging operator induces a diassociative algebra structure on the space \(M\). The full data consisting of an associative algebra, a bimodule and a relative averaging operator is called a relative averaging algebra. We define bimodules over a relative averaging algebra that fits with the representations of diassociative algebras. We construct a graded Lie algebra and a \(L_{\infty}\)-algebra that are respectively controlling algebraic structures for a given relative averaging operator and relative averaging algebra. We also define cohomologies of relative averaging operators and relative averaging algebras and find a long exact sequence connecting various cohomology groups. As applications, we study deformations and abelian extensions of relative averaging algebras. Finally, we define homotopy relative averaging algebras and show that they induce homotopy diassociative algebras. 2020 MSC classifications: 16D20, 16W99, 16E40, 16S80. Keywords: Averaging algebras, Diassociative algebras, \(L_{\infty}\)-algebras, Deformations, Homotopy structures. ###### Contents * 1 Introduction * 2 Background on diassociative algebras * 3 Relative averaging operators and relative averaging algebras * 4 The controlling algebra and cohomology for relative averaging operators * 5 The controlling algebra and cohomology for relative averaging algebras * 6 Deformations of relative averaging algebras * 7 Abelian extensions of relative averaging algebras * 8 Homotopy relative averaging algebras and homotopy diassociative algebras ## 1. Introduction The notion of averaging operator was first implicitly studied by O. Reynolds in 1895 [34] in the turbulence theory of fluid dynamics. In the mathematical study of turbulence theory, such an operator appears as the time average operator of real-valued functions defined in time-space \[f(x,t)\mapsto\overline{f}(x,t)=\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}f(x,t+ \tau)d\tau.\] The explicit description of an averaging operator was first defined by Kampe de Feriet [16]. Let \(A\) be an associative algebra. A linear map \(P:A\to A\) is said to be an averaging operator on \(A\) if \[P(a)\cdot P(b)=P(P(a)\cdot b)=P(a\cdot P(b)),\text{ for }a,b\in A. \tag{1}\] A pair \((A,P)\) consisting of an associative algebra \(A\) and an averaging operator \(P:A\to A\) is called an averaging algebra. In the last century, most studies on averaging operators had been done for various areas of functional analysis and applied mathematics. For the convenience of the reader, here we mention a few. In [4] G. Birkhoff showed that a positive bounded projection operator on the Banach algebra \(C(X)\) of real-valued continuous functions on a compact Hausdorff space \(X\), is an idempotent averaging operator. Later, J. L. Kelly [18] characterizes idempotent averaging operators on the algebra \(C_{\infty}(X)\) of real-valued continuous functions on a locally compact Hausdorff space \(X\) that vanish at the infinity. In [13, 29], J. L. B. Gamlen and J. B. Miller discussed spectrum and resolvent sets of averaging operators on Banach algebras. 
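For readers who want to see the defining identity (1) in action, here is a minimal numerical sketch (an illustration added for this text, not taken from the paper): on the commutative algebra of real-valued functions on finitely many points with pointwise multiplication, the operator that replaces a function by its mean, a discrete stand-in for the time average above, satisfies \(P(a)\cdot P(b)=P(P(a)\cdot b)=P(a\cdot P(b))\).

```python
import numpy as np

# Minimal numerical sketch (illustrative only): check the averaging identity
# P(a)P(b) = P(P(a)b) = P(aP(b)) for a discrete analogue of the time average.
# The algebra A is real-valued functions on n points with pointwise product,
# and P replaces a function by the constant function equal to its mean.

rng = np.random.default_rng(0)
n = 7

def P(f):
    return np.full_like(f, f.mean())

a, b = rng.normal(size=n), rng.normal(size=n)

lhs = P(a) * P(b)      # P(a) . P(b)
mid = P(P(a) * b)      # P(P(a) . b)
rhs = P(a * P(b))      # P(a . P(b))
assert np.allclose(lhs, mid) and np.allclose(mid, rhs)
print("averaging identity (1) holds; common constant value:", lhs[0])
```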
Besides all these, averaging operators are also studied in connection with probability theory. In [30] S.-T. C. Moy finds relations between averaging operators, conditional expectations and general integration theory. In the algebraic study of averaging operators, B. Brainerd [6] considered the conditions for which an averaging operator can be realized as a generalization of the integral operator on the ring of real-valued measurable functions. In 2000, W. Cao [7] studied averaging operators from algebraic and combinatorial points of view. In particular, he studied free commutative averaging algebras and described the induced Lie and Leibniz algebras. During the same period, J.-L. Loday [25] introduced a notion of diassociative algebra to study the universal enveloping algebra of a Leibniz algebra. A diassociative algebra is a vector space equipped with two associative multiplications satisfying three other associative-like compatibilities. M. Aguiar [1] showed that an averaging operator on an associative algebra induces a diassociative algebra structure. The general algebraic study of averaging operators on any binary operad and their relations with bisuccessors, duplicators and Rota-Baxter operators on operad was systematically developed in [2, 31, 32]. More recently, J. Pei and L. Guo [33] constructed free associative averaging algebras using a class of bracketed words, called averaging words, and discovered their relations with Schroder numbers. Averaging operators also appeared in the context of Lie algebras. They are often called embedding tensors and find important connections with Leibniz algebras, tensor hierarchies and higher gauge theories [5, 23, 36]. In [20] Kotov and Strobl construct a \(L_{\infty}\)-algebra from an embedding tensor that explains the tensor hierarchy of the bosonic sector of gauged supergravity theories. In [35] Y. Sheng, R. Tang and C. Zhu studied the cohomology and deformations of embedding tensors by considering the controlling algebras. See also [8, 37] for cohomological study of Rota-Baxter operators. In this paper, we consider the notion of a relative averaging operator as a generalization of an averaging operator. Let \(A\) be an associative algebra and \(M\) be an \(A\)-bimodule. A linear map \(P:M\to A\) is called a relative averaging operator (on \(M\) over the algebra \(A\)) if it satisfies the identity (8), which generalizes (1). A triple \((A,M,P)\) consisting of an associative algebra \(A\), an \(A\)-bimodule \(M\) and a relative averaging operator \(P\) is called a relative averaging algebra. For our convenience, we denote a relative averaging algebra \((A,M,P)\) by the notation \(M\xrightarrow{P}A\). We give some characterizations of a relative averaging operator. We also construct the free relative averaging algebra over any 2-term chain complex \(V\xrightarrow{f}W\). Next, we show that a relative averaging algebra naturally induces a diassociative algebra structure. Conversely, any diassociative algebra is induced from a suitable relative averaging algebra constructed from the given diassociative algebra. We also define a notion of bimodule over a relative averaging algebra and construct the corresponding semidirect product. We show that a bimodule over a relative averaging algebra gives rise to some representations of the induced diassociative algebra. Then we first focus on the cohomology of relative averaging operators. 
Given an associative algebra \(A\) and an \(A\)-bimodule \(M\), we construct a graded Lie algebra that characterizes relative averaging operators as its Maurer-Cartan elements. This graded Lie algebra is obtained by applying the derived bracket construction to the graded Lie algebra constructed by Majumdar and Mukherjee [28]. Using this characterization, we can define the cohomology of a relative averaging operator. We further show that this cohomology can be seen as the cohomology of the induced diassociative algebra with coefficients in a suitable representation. We also remark that this cohomology can be used to study formal deformations of the relative averaging operator by keeping the underlying algebra and the bimodule intact. Next, we focus on the cohomology of relative averaging algebras. To do this, we first construct a \(L_{\infty}\)-algebra that characterizes relative averaging algebras as its Maurer-Cartan elements. This helps us to define the cohomology of a relative averaging algebra (with coefficients in the adjoint bimodule). Subsequently, we consider the cohomology of a relative averaging algebra with coefficients in an arbitrary bimodule. In the next, we give two applications of our cohomology theory of relative averaging algebras. At first, we consider deformations of a relative averaging algebra \(M\xrightarrow{P}A\), where we simultaneously deform the associative multiplication on \(A\), the \(A\)-bimodule structure on \(M\) and the relative averaging operator \(P\). In particular, we consider formal and infinitesimal deformations of a relative averaging algebra. Our main result in deformation theory classifies the equivalence classes of infinitesimal deformations of a relative averaging algebra \(M\xrightarrow{P}A\) by the second cohomology group \(H^{2}_{\mathrm{rAvg}}(M\xrightarrow{P}A)\). Another application of the cohomology theory is to classify abelian extensions. More precisely, we consider abelian extensions of a relative averaging algebra \(M\xrightarrow{P}A\) by a bimodule and show that isomorphism classes of such abelian extensions are classified by the second cohomology group of the relative averaging algebra \(M\xrightarrow{P}A\) with coefficients in the bimodule. In the final part of the paper, we define homotopy relative averaging operators. Like relative averaging operators are defined on bimodules over an associative algebra, a homotopy relative averaging operator is defined on a representation space over an \(A_{\infty}\)-algebra. Given an \(A_{\infty}\)-algebra and a representation of it, we construct a suitable \(L_{\infty}\)-algebra \((\mathfrak{a},\{l_{k}\}_{k=1}^{\infty})\). This \(L_{\infty}\)-algebra is a generalization of the graded Lie algebra that characterizes relative averaging operators as Maurer-Cartan elements. Motivated by this, we define a homotopy relative averaging operator as a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((\mathfrak{a},\{l_{k}\}_{k=1}^{\infty})\). A triple consisting of an \(A_{\infty}\)-algebra, a representation and a homotopy relative averaging operator is called a homotopy relative averaging algebra. We show that a homotopy relative averaging algebra induces a \(Diass_{\infty}\)-algebra (strongly homotopy diassociative algebra) structure. This generalizes our previous result that a relative averaging algebra induces a diassociative algebra. Finally, we show that a \(Diass_{\infty}\)-algebra gives rise to a homotopy relative averaging algebra. 
**Note.** It is important to mention that Wang and Zhou [41] recently considered the cohomology and deformation theory of an averaging algebra. In their approach, they only considered the fact that an averaging algebra induces two new associative algebra structures. However, they have not used the fact that the induced two associative structures form a diassociative algebra. In the present paper, we first point out the intimate relationships between averaging algebras and diassociative algebras (see Proposition 3.18). In our knowledge, diassociative algebra is the key object for the study of (relative) averaging algebras. In Proposition 4.4, we show that the cohomology of a relative averaging operator can be seen as the cohomology of the induced diassociative algebra with coefficients in a suitable representation. In Theorem 4.7, we find a cohomological relation between relative averaging operators and induced diassociative algebras. Since Wang and Zhou didn't consider the full diassociative algebra (as the induced structure), the above important results no longer exist in their approach. In our paper, we showed many other results which indicate that diassociative algebras are required to study relative averaging operators (cf. Proposition 3.12, Theorem 4.1, Theorem 5.7, Theorem 5.8). Thus, we believe that our approach and the constructions (including all the graded Lie algebras and \(L_{\infty}\)-algebras) are appropriate to deal with relative averaging algebras. **Organization of our paper.** In Section 2, we recall some basic preliminaries on the cohomology of diassociative algebras. In Section 3, we consider relative averaging operators and relative averaging algebras which are the main objects of our study. We also define and study bimodules over a relative averaging algebra. The Maurer-Cartan characterizations and cohomology of relative averaging operators and relative averaging algebras are respectively considered in Sections 4 and 5. Applications of cohomology include deformations and abelian extensions of relative averaging algebras which are respectively studied in Sections 6 and 7. Finally, in Section 8, we define homotopy relative averaging algebras and find relations with \(Diass_{\infty}\)-algebras. All vector spaces, linear maps and tensor products are over a field \(\mathbf{k}\) of characteristic \(0\). ## 2. Background on diassociative algebras In this section, we recall some necessary background on diassociative algebras. In particular, we describe the cohomology theory of diassociative algebras. Our main references are [11, 25, 28]. The notion of diassociative algebras was introduced by Loday in the study of Leibniz algebras [25]. The cohomology theory of diassociative algebras was developed by Frabetti [11] and further studied by Majumdar and Mukherjee [28]. ### Definition A **diassociative algebra** is a vector space \(D\) equipped with two bilinear operations \(\dashv,\vdash:D\times D\to D\) that satisfy the following five identities \[\left\{\begin{array}{c}(a\dashv b)\dashv c=a\dashv(b\dashv c)=a\dashv(b \vdash c),\\ (a\vdash b)\dashv c=a\vdash(b\dashv c),\\ (a\dashv b)\vdash c=(a\vdash b)\vdash c=a\vdash(b\vdash c),\end{array}\right. \tag{2}\] for \(a,b,c\in D\). A diassociative algebra as above may be denoted by the triple \((D,\dashv,\vdash)\) or simply by \(D\). It follows from the above definition that both the bilinear operations \(\dashv\) and \(\vdash\) in a diassociative algebra are associative products. 
Moreover, these two associative products additionally satisfy three associative-like identities. Thus, any associative algebra \(A\) can be regarded as a diassociative algebra in which both the operations \(\dashv\) and \(\vdash\) are the given associative multiplication on \(A\). Let \((D,\dashv,\vdash)\) be a diassociative algebra. A **representation** of the diassociative algebra is a vector space \(M\) equipped with four bilinear operations (called action maps) \[\dashv:D\times M\to M,\ \ \vdash:D\times M\to M,\ \ \dashv:M\times D \to M\ \ \text{and}\ \ \vdash:M\times D\to M\] that satisfy fifteen identities, where each set of five identities corresponds to the identities in (2) with exactly one of \(x,y,z\) replaced by an element of \(M\). It follows that any diassociative algebra \((D,\dashv,\vdash)\) is a representation of itself, called the adjoint representation. Before we recall the cohomology of diassociative algebras, we need some notations about planar binary trees. A planar binary tree with \(n\)-vertices (often called an \(n\)-tree) is a planar tree with \((n+1)\) leaves, one root and each vertex trivalent. Let \(Y_{n}\) be the set of all \(n\)-trees (for \(n\geq 1\)) and \(Y_{0}\) be the set consisting of a root only, i.e. \[Y_{0} =\] \[Y_{3} =\] Note that the cardinality of \(Y_{n}\) is given by the \(n\)-th Catalan number \(\frac{(2n)!}{(n+1)!\ n!}\). The grafting of a \(m\)-tree \(y_{1}\in Y_{m}\) and a \(n\)-tree \(y_{2}\in Y_{n}\) is a \((m+n+1)\)-tree \(y_{1}\lor y_{2}\in Y_{m+n+1}\) obtained by joining the roots of \(y_{1}\) and \(y_{2}\) to a new vertex and creating a new root from that vertex. Given an \(n\)-tree \(y\in Y_{n}\), we label the \(n+1\) leaves of \(y\) by \(\{0,1,2,\ldots,n\}\) from left to right. For each \(n\geq 1\) and \(0\leq i\leq n\), there is a map (called the \(i\)-th face map) \(d_{i}:Y_{n}\to Y_{n-1}\), \(y\mapsto d_{i}y\) which is obtained by removing the \(i\)-th leaf of \(y\). Also there are maps \(\star_{i}:Y_{n}\to\{\dashv,\vdash\}\), \(y\mapsto\star_{i}^{y}\) (for \(0\leq i\leq n\)) which are defined as follows: Let \((D,\dashv,\vdash)\) be a diassociative algebra and \(M\) be a representation of it. For each \(n\geq 0\), we define the space \(CY^{n}(D,M)\) of all \(n\)-cochains by \(CY^{n}(D,M):=\mathrm{Hom}(\mathbf{k}[Y_{n}]\otimes D^{\otimes n},M)\). Then there is a map \(\delta_{\mathrm{Dias}}:CY^{n}(D,M)\to CY^{n+1}(D,M)\) given by \[\delta_{\mathrm{Dias}}(f) (y;a_{1},\dots,a_{n+1})=a_{1}\star_{0}^{y}f(d_{0}y;a_{2},\dots,a_ {n+1})\] \[+\sum_{i=1}^{n}(-1)^{i}\ f(d_{i}y;a_{1},\dots,a_{i}\star_{i}^{y}a_ {i+1},\dots,a_{n+1})\ +\ (-1)^{n+1}\ f(d_{n+1}y;a_{1},\dots,a_{n})\star_{n+1}^{y}a_{n+1},\] for \(f\in CY^{n}(D,M)\), \(y\in Y_{n+1}\) and \(a_{1},\dots,a_{n+1}\in D.\) Then it has been shown by Frabetti [11] that \((\delta_{\mathrm{Dias}})^{2}=0\). In other words, \(\{CY^{\bullet}(D,M),\delta_{\mathrm{Dias}}\}\) is a cochain complex. The corresponding cohomology is called the **cohomology** of the diassociative algebra \((D,\dashv,\vdash)\) with coefficients in the representation \(M\). We denote the \(n\)-th cohomology group by \(H^{n}_{\mathrm{Dias}}(D,M)\). In [28] Majumdar and Mukherjee constructed a graded Lie algebra whose Maurer-Cartan elements correspond to diassociative algebra structures on a given vector space (see also [42]). 
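Before turning to the Majumdar-Mukherjee bracket, the tree count quoted above is easy to verify computationally. The following sketch (illustrative only, with an ad hoc nested-pair encoding of trees) builds \(Y_{n}\) recursively by grafting a left \(m\)-tree and a right \((n-1-m)\)-tree onto a new root vertex and checks that \(|Y_{n}|\) equals the \(n\)-th Catalan number \(\frac{(2n)!}{(n+1)!\,n!}\).

```python
from functools import lru_cache
from math import comb

# Minimal sketch: enumerate planar binary trees Y_n, represented as nested pairs
# with None for a leaf, via grafting, and check |Y_n| against the Catalan number.

@lru_cache(maxsize=None)
def trees(n):
    """All planar binary trees with n internal (trivalent) vertices."""
    if n == 0:
        return (None,)                       # Y_0: the root only
    out = []
    for m in range(n):                       # left subtree has m vertices,
        for left in trees(m):                # right subtree has n - 1 - m vertices
            for right in trees(n - 1 - m):
                out.append((left, right))    # grafting onto a new root vertex
    return tuple(out)

for n in range(7):
    catalan = comb(2 * n, n) // (n + 1)      # (2n)! / ((n+1)! n!)
    assert len(trees(n)) == catalan
    print(f"|Y_{n}| = {len(trees(n))}")
```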
To describe their graded Lie bracket in a more simple form, we define maps \(R_{0}^{m;i,n}:Y_{m+n-1}\to Y_{m}\) and \(R_{i}^{m;i,n}:Y_{m+n-1}\to Y_{n}\) (for \(m,n\geq 1\) and \(1\leq i\leq m\)) by \[R_{0}^{m;i,n}(y) =\widehat{d_{0}}\circ\widehat{d_{1}}\circ\dots\circ\widehat{d_{i- 1}}\circ d_{i}\circ\dots\circ d_{i+n-2}\circ\widehat{d_{i+n-1}}\circ\dots \circ\widehat{d_{m+n-1}}(y), \tag{4}\] \[R_{i}^{m;i,n}(y) =d_{0}\circ d_{1}\circ\dots\circ d_{i-2}\circ\widehat{d_{i-1}} \circ\dots\circ\widehat{d_{i+n-1}}\circ d_{i+n}\circ\dots\circ d_{m+n-1}(y), \tag{3}\] where \(\ \widehat{\ }\) means that the term is missing from the expression. Let \(D\) be a vector space (not necessarily a diassociative algebra). They showed that the graded vector space \(CY^{\bullet}(D,D)=\oplus_{n=1}^{\infty}CY^{n}(D,D)=\oplus_{n=1}^{\infty} \mathrm{Hom}(\mathbf{k}[Y_{n}]\otimes D^{\otimes n},D)\) inherits a degree \(-1\) graded Lie bracket (which we call the Majumdar-Mukherjee bracket) given by \[[f,g]_{\mathsf{MM}}:=\big{(}\sum_{i=1}^{m}(-1)^{(i-1)(n-1)}f\circ_{i}g\big{)} -(-1)^{(m-1)(n-1)}\big{(}\sum_{i=1}^{n}(-1)^{(i-1)(m-1)}g\circ_{i}f\big{)}, \tag{5}\] where \[(f\circ_{i}g)(y;a_{1},\dots,a_{m+n-1})=f\big{(}R_{0}^{m;i,n}(y);a_{1},\dots,a _{i-1},g(R_{i}^{m;i,n}(y);a_{i},\dots,a_{i+n-1}),a_{i+n},\dots,a_{m+n-1}\big{)}, \tag{6}\] for \(f\in CY^{m}(D,D)\), \(g\in CY^{n}(D,D)\), \(y\in Y_{m+n-1}\) and \(a_{1},\dots,a_{m+n-1}\in D\) (see also [10,42] for more details). In other words, \(\big{(}CY^{\bullet+1}(D,D)=\oplus_{n=0}^{\infty}CY^{n+1}(D,D),[\,\ ]_{\mathsf{MM}}\big{)}\) is a graded Lie algebra. An element \(\pi\in CY^{2}(D,D)\) determines (and determined by) two bilinear maps \(\dashv,\vdash:D\times D\to D\) given by \[a\dashv b=\pi\big{(}\bigvee;a,b\big{)}\quad\text{ and }\quad a\vdash b=\pi \big{(}\bigvee;a,b\big{)},\text{ for }a,b\in D. \tag{7}\] Then it has been shown in [28] that \(\pi\) defines a Maurer-Cartan element in the above-graded Lie algebra if and only if \((\dashv,\vdash)\) defines a diassociative algebra structure on \(D\). ### Remark Let \((D,\dashv,\vdash)\) be a diassociative algebra. Consider the corresponding Maurer-Cartan element \(\pi\) in the graded Lie algebra \(\big{(}CY^{\bullet+1}(D,D)=\oplus_{n=0}^{\infty}CY^{n+1}(D,D),[\,\ ]_{\mathsf{MM}}\big{)}\). Then the coboundary map \(\delta_{\mathrm{Dias}}:CY^{n}(D,D)\to CY^{n+1}(D,D)\) of the diassociative algebra \(D\) with coefficients in the adjoint representation is simply given by \[\delta_{\mathrm{Dias}}(f)=(-1)^{n-1}[\pi,f]_{\mathsf{MM}},\text{ for }f\in CY^{n}(D,D).\] ## 3. Relative averaging operators and relative averaging algebras In this section, we first introduce relative averaging operators, relative averaging algebras and provide various examples. Next, we consider the close relationship between relative averaging algebras and diassociative algebras. Finally, we define and study bimodules over relative averaging algebras. **3.1 Definition**.: (i) Let \(A\) be an associative algebra and \(M\) be an \(A\)-bimodule. A **relative averaging operator** on \(M\) over the algebra \(A\) is a linear map \(P:M\to A\) that satisfies \[P(u)\cdot P(v)=P\big{(}P(u)\cdot_{M}v\big{)}=P(u\cdot_{M}P(v)),\text{ for }u,v \in M. \tag{8}\] Here \(\cdot\) denotes the associative multiplication on \(A\) and \(\cdot_{M}\) denotes both the left and right \(A\)-actions on \(M\). 
(ii) A **relative averaging algebra** is a triple \((A,M,P)\) consisting of an associative algebra \(A\), an \(A\)-bimodule \(M\) and a relative averaging operator \(P:M\to A\). For our convenience, we denote a relative averaging algebra \((A,M,P)\) by the notation \(M\xrightarrow{P}A\). Hence \((A,M,P)\) and \(M\xrightarrow{P}A\) represent the same mathematical structure. **3.2 Definition**.: Let \(M\xrightarrow{P}A\) and \(M^{\prime}\xrightarrow{P^{\prime}}A^{\prime}\) be two relative averaging algebras. A **morphism** of relative averaging algebras from \(M\xrightarrow{P}A\) to \(M^{\prime}\xrightarrow{P^{\prime}}A^{\prime}\) consists of a pair \((\varphi,\psi)\) of an algebra morphism \(\varphi:A\to A^{\prime}\) and a linear map \(\psi:M\to M^{\prime}\) satisfying \[\psi(a\cdot_{M}u)=\varphi(a)\cdot_{M^{\prime}}^{A^{\prime}}\psi(u),\quad\psi(u \cdot_{M}a)=\psi(u)\cdot_{M^{\prime}}^{A^{\prime}}\varphi(a)\text{ \ and \ }\varphi\circ P=P^{\prime}\circ\psi,\text{\ \ for all }a\in A,u\in M.\] Here \(\cdot_{M^{\prime}}^{A^{\prime}}\) denotes both the left and right \(A^{\prime}\)-actions on \(M^{\prime}\). It is said to be an **isomorphism** if both \(\varphi\) and \(\psi\) are linear isomorphisms. **3.3 Example**.: Any averaging algebra \((A,P)\) can be regarded as a relative averaging algebra \(A\xrightarrow{P}A\), where \(A\) is equipped with the adjoint \(A\)-bimodule structure. Thus, a relative averaging algebra is a generalization of an averaging algebra. **3.4 Example**.: Let \(A\) be an associative algebra. Then the tensor product \(A\otimes A\) can be equipped with an \(A\)-bimodule structure with the left and right \(A\)-actions respectively given by \[c\cdot_{A\otimes A}(a\otimes b)=c\cdot a\otimes b\text{\ \ and\ \ }(a\otimes b) \cdot_{A\otimes A}c=a\otimes b\cdot c,\text{\ for }a\otimes b\in A\otimes A,c\in A.\] Consider the map \(P:A\otimes A\to A\) given by \(P(a\otimes b)=a\cdot b\), for \(a\otimes b\in A\otimes A\). For any \(a\otimes b\), \(a^{\prime}\otimes b^{\prime}\in A\otimes A\), we have \[P(a\otimes b)\cdot P(a^{\prime}\otimes b^{\prime})=a\cdot b\cdot a^{\prime} \cdot b^{\prime}=\begin{cases}=P\big{(}a\cdot b\cdot a^{\prime}\otimes b^{ \prime}\big{)}=P\big{(}P(a\otimes b)\cdot_{A\otimes A}(a^{\prime}\otimes b^{ \prime})\big{)},\\ =P\big{(}a\otimes b\cdot a^{\prime}\cdot b^{\prime}\big{)}=P\big{(}(a\otimes b )\cdot_{A\otimes A}P(a^{\prime}\otimes b^{\prime})\big{)}.\end{cases}\] This shows that \(P:A\otimes A\to A\) is a relative averaging operator. Thus, \(A\otimes A\xrightarrow{P}A\) is a relative averaging algebra. **3.5 Example**.: Let \(A\) be an associative algebra. Then the space \(\underline{A\oplus\cdots\oplus A}\) is an \(A\)-bimodule where the left (resp. right) \(A\)-action is given by componentwise left (resp. right) multiplication map. Then it is easy to see that the map \[P:A\oplus\cdots\oplus A\to A,\text{ }P\big{(}(a_{1},\ldots,a_{n})\big{)}=a_{1}+ \cdots+a_{n},\text{ for }(a_{1},\ldots,a_{n})\in A\oplus\cdots\oplus A\] is a relative averaging operator. In other words, \(A\oplus\cdots\oplus A\xrightarrow{P}A\) is a relative averaging algebra. **3.6 Example**.: Let \(A\) be an associative algebra. Then for any \(1\leq i\leq n\), the \(i\)-th projection map \(P_{i}:A\oplus\cdots\oplus A\to A\), \((a_{1},\ldots,a_{n})\mapsto a_{i}\) is a relative averaging operator. That is, \(A\oplus\cdots\oplus A\xrightarrow{P_{i}}A\) is a relative averaging algebra. **3.7 Example**.: Let \(A\) be an associative algebra and \(M\) be an \(A\)-bimodule. 
Suppose \(G\) is a finite group and there are maps \(G\times A\to A\), \((g,a)\mapsto{}^{g}a\) and \(G\times M\to A\), \((g,u)\mapsto{}^{g}u\) that satisfy \[{}^{g}(a\cdot_{M}u)={}^{g}a\cdot{}^{g}u,\ \ \ {}^{g}(u\cdot_{M}a)={}^{g}u\cdot{}^{g}a \ \ \text{and}\ \ {}^{g}(h{}^{u})={}^{gh}u,\] for \(a\in A\), \(u\in M\) and \(g,h\in G\). We define a map \(P:M\to A\) by \(P(u)=\sum_{g\in G}{}^{g}u\). For any \(u,v\in M\), we observe that \[P\big{(}P(u)\cdot_{M}v\big{)} =\sum_{h\in G}{}^{h}\big{(}(\sum_{g\in G}{}^{g}u)\cdot_{M}v\big{)} =\sum_{h\in G}\big{(}\sum_{g\in G}{}^{hg}u\big{)}\cdot{}^{h}v=\big{(}\sum_{g \in G}{}^{g}u\big{)}\cdot\big{(}\sum_{h\in G}{}^{h}v\big{)}=P(u)\cdot P(v),\] \[P\big{(}u\cdot_{M}P(v)\big{)} =\sum_{h\in G}{}^{h}\big{(}u\cdot_{M}(\sum_{g\in G}{}^{g}v)\big{)} =\sum_{h\in G}{}^{h}u\cdot\big{(}\sum_{h\in G}{}^{hg}v\big{)}=\big{(}\sum_{h \in G}{}^{h}u\big{)}\cdot\big{(}\sum_{g\in G}{}^{g}v\big{)}=P(u)\cdot P(v).\] This shows that \(P:M\to A\) is a relative averaging operator, equivalently, \(M\xrightarrow{P}A\) is a relative averaging algebra. **3.8 Example**.: Let \(A\) be an associative algebra and \(M\) be an \(A\)-bimodule. Let \(P:M\to A\) be an \(A\)-bimodule map, i.e. \[P(a\cdot_{M}u)=a\cdot P(u)\ \ \text{and}\ \ P(u\cdot_{M}a)=P(u)\cdot a,\ \ \text{for}\ a\in A,\ u\in M.\] Then it is easy to see that \(M\xrightarrow{P}A\) is a relative averaging algebra. **3.9 Example**.: In [26] Loday and Pirashvili introduced the category \(\mathcal{LM}\) whose objects are linear maps between vector spaces. In other words, an object in \(\mathcal{LM}\) is of the form \(V\xrightarrow{f}W\), where \(V,W\) are vector spaces and \(f\) is a linear map. They equip \(\mathcal{LM}\) with a tensor product which makes it a tensor category. It has been observed that an associative object in \(\mathcal{LM}\) is given by a datum \(M\xrightarrow{f}A\), where \(A\) is an associative algebra, \(M\) is an \(A\)-bimodule and \(f\) is an \(A\)-bimodule map. Thus, it turns out that an associative object in \(\mathcal{LM}\) is a relative averaging algebra. **3.10 Example**.: (Crossed modules of associative algebras [40]) A crossed module of associative algebras is a quadruple \((A,M,\cdot_{M},d)\) in which \(A,M\) are both associative algebras and \(M\) is also equipped with an \(A\)-bimodule structure (with both the left and right \(A\)-actions on \(M\) being denoted by \(\cdot_{M}\)) and \(d:M\to A\) is an algebra morphism that satisfy \[d(a\cdot_{M}u)=a\cdot du,\ \ \ d(u\cdot_{M}a)=du\cdot a,\ \ \ (du)\cdot_{M}v=u\cdot_{M}(dv)=u\diamond v,\ \text{for}\ a\in A,u,v\in M.\] Here \(\diamond\) denotes the associative multiplication on \(M\). Thus, it follows from Example 3.8 that \(M\xrightarrow{d}A\) is a relative averaging algebra. It has been observed in [3, 40] that crossed modules of associative algebras are equivalent to'strict' associative 2-algebras. Hence by following the previous example, one can construct relative averaging algebras from strict associative 2-algebras. In the following, we give some characterizations of relative averaging operators. We start with the following useful result. **3.11 Proposition**.: _Let \(A\) be an associative algebra and \(M\) be an \(A\)-bimodule. 
Then the direct sum \(A\oplus M\) inherits a diassociative algebra structure with the operations_ \[(a,u)\dashv(b,v)=(a\cdot b,u\cdot_{M}b)\ \ \text{and}\ \ (a,u)\vdash(b,v)=(a\cdot b,a\cdot_{M}v),\ \text{for}\ (a,u),(b,v)\in A\oplus M.\] _We denote this diassociative algebra simply by \(A\oplus_{\operatorname{\mathrm{Diass}}}M.\)_ Proof.: For any \((a,u),(b,v),(c,w)\in A\oplus M\), we have \[\big{(}(a,u)\dashv(b,v)\dashv(c,w) =\big{(}a\cdot b,u\cdot_{M}b\big{)}\dashv(c,w)=\big{(}(a\cdot b) \cdot c,(u\cdot_{M}b)\cdot_{M}c\big{)},\] \[(a,u)\dashv\big{(}(b,v)\dashv(c,w)\big{)} =(a,u)\dashv\big{(}b\cdot c,v\cdot_{M}c\big{)}=\big{(}a\cdot(b \cdot c),u\cdot_{M}(b\cdot c)\big{)},\] \[(a,u)\dashv\big{(}(b,v)\vdash(c,w)\big{)} =(a,u)\dashv\big{(}b\cdot c,b\cdot_{M}w\big{)}=\big{(}a\cdot(b \cdot c),u\cdot_{M}(b\cdot c)\big{)}.\] Thus, it follows that \[\big{(}(a,u)\dashv(b,v)\big{)}\dashv(c,w)=(a,u)\dashv\big{(}(b,v)\dashv(c,w) \big{)}=(a,u)\dashv\big{(}(b,v)\vdash(c,w)\big{)}.\] Similarly, one can show that \[\big{(}(a,u)\vdash(b,v)\big{)}\dashv(c,w)=(a,u)\vdash\big{(}(b,v)\dashv(c,w) \big{)},\] \[\big{(}(a,u)\dashv(b,v)\big{)}\vdash(c,w)=\big{(}(a,u)\vdash(b,v)\big{)}\vdash( c,w)=(a,u)\vdash\big{(}(b,v)\vdash(c,w)\big{)}.\] This completes the proof. **3.12 Proposition**.: _Let \(A\) be an associative algebra and \(M\) be an \(A\)-bimodule. A linear map \(P:M\to A\) is a relative averaging operator (on \(M\) over the algebra \(A\)) if and only if the graph \(\operatorname{Gr}(P)=\{(P(u),u)|u\in M\}\) is a subalgebra of the diassociative algebra \(A\oplus_{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \cdot for \(w_{1}^{\prime}\cdots w_{p}^{\prime}\in T(W)\) and \(w_{-m}\cdots w_{-1}\otimes v_{0}\otimes w_{1}\cdots w_{n}\in T(W)\otimes V\otimes T (W).\) Then it is easy to see that \(T(W)\otimes V\otimes T(W)\xrightarrow{\mathcal{P}(f)}T(W)\) is a relative averaging algebra, where \[\mathcal{P}(f)(w_{-m}\cdots w_{-1}\otimes v_{0}\otimes w_{1}\cdots w_{n})=w_{- m}\cdots w_{-1}f(v_{0})w_{1}\cdots w_{n}.\] **3.14 Remark**.: Let \(V\) be any vector space. Consider the \(2\)-term chain complex \(V\xrightarrow{\mathrm{id}_{V}}V\). Then it follows that \(T(V)\otimes V\otimes T(V)\xrightarrow{\mathcal{P}(\mathrm{id}_{V})}T(V)\) is a relative averaging algebra. **3.15 Definition**.: Let \(V\xrightarrow{f}W\) be a \(2\)-term chain complex. The **free relative averaging algebra** over \(V\xrightarrow{f}W\) is a relative averaging algebra \(\mathcal{M}(V)\xrightarrow{\mathcal{P}(f)}\mathcal{A}(W)\) equipped with a morphism \((i,j)\) of complexes from \(V\xrightarrow{f}W\) to \(\mathcal{M}(V)\xrightarrow{\mathcal{P}(f)}\mathcal{A}(W)\) that satisfy the following universal condition: for any relative averaging algebra \(M\xrightarrow{P}A\) and a morphism \((\varphi,\psi)\) of complexes from \(V\xrightarrow{f}W\) to \(M\xrightarrow{P}A\), there exists a morphism \((\widetilde{\varphi},\widetilde{\psi})\) of relative averaging algebras from \(\mathcal{M}(V)\xrightarrow{\mathcal{P}(f)}\mathcal{A}(W)\) to \(M\xrightarrow{P}A\) that makes the following diagram commutative **3.16 Proposition**.: _Let \(V\xrightarrow{f}W\) be a \(2\)-term chain complex. 
Then the relative averaging algebra_ \[T(W)\otimes V\otimes T(W)\xrightarrow{\mathcal{P}(f)}T(W)\text{ is free over the chain complex }V\xrightarrow{f}W.\] Proof.: We define maps \(i:W\to T(W)\) and \(j:V\to T(W)\otimes V\otimes T(W)\) by \[i(w)=w\ \text{ and }\ j(v)=1\otimes v\otimes 1,\text{ for }w\in W,v\in V.\] Let \(M\xrightarrow{P}A\) be any relative averaging algebra and \((\varphi,\psi)\) be a morphism of complexes from \(V\xrightarrow{f}W\) to \(M\xrightarrow{P}A\). We define maps \(\widetilde{\varphi}:T(W)\to A\) and \(\widetilde{\psi}:T(W)\otimes V\otimes T(W)\to M\) by \[\widetilde{\varphi}(w_{1}\cdots w_{n})=\varphi(w_{1})\cdots\varphi(w_{n}),\] \[\widetilde{\psi}(w_{-m}\cdots w_{-1}\otimes v_{0}\otimes w_{1}\cdots w_{n})= \big{(}\varphi(w_{-m})\cdots\varphi(w_{-1})\big{)}\cdot_{M}\psi(v_{0})\cdot_{ M}\big{(}\varphi(w_{1})\cdots\varphi(w_{n})\big{)},\] for \(w_{1}\cdots w_{n}\in T(W)\) and \(w_{-m}\cdots w_{-1}\otimes v_{0}\otimes w_{1}\cdots w_{n}\in T(W)\otimes V \otimes T(W)\). Then it is easy to see that the pair \((\widetilde{\varphi},\widetilde{\psi})\) is a morphism of relative averaging algebras from \(T(W)\otimes V\otimes T(W)\xrightarrow{\mathcal{P}(f)}T(W)\) to \(M\xrightarrow{P}A\) and satisfies the universal condition. ### Functorial relations with diassociative algebras **3.17 Proposition**.: _(i) Let \(M\xrightarrow{P}A\) be a relative averaging algebra. Then the vector space \(M\) carries a diassociative algebra structure with the bilinear operations_ \[u\dashv_{P}v:=u\cdot_{M}P(v)\ \text{ and }\ u\vdash_{P}v:=P(u)\cdot v,\text{ for }u,v\in M. \tag{9}\] _We denote this diassociative algebra simply by \(M_{P}\)._ _(ii) Let \(M\xrightarrow{P}A\) and \(M^{\prime}\xrightarrow{P^{\prime}}A^{\prime}\) be two relative averaging algebras and \((\varphi,\psi)\) be a morphism between them. Then \(\psi:M\to M^{\prime}\) is a morphism between induced diassociative algebras (from \(M_{P}\) to \(M_{P^{\prime}}^{\prime}\))._ Proof.: (i) Since \(P:M\to A\) is a relative averaging operator, it follows from Proposition 3.12 that \(\mathrm{Gr}(P)\) is a subalgebra of the diassociative algebra \(A\oplus_{\mathrm{Diss}}M\). The inherited diassociative structure on \(\mathrm{Gr}(P)\) is given by \[(P(u),u)\dashv(P(v),v)=\big{(}P(u)\cdot P(v),u\cdot_{M}P(v)\big{)}\ \ \text{and}\ \ (P(u),u)\vdash(P(v),v)=\big{(}P(u)\cdot P(v),P(u)\cdot_{M}v\big{)},\] for \(u,v\in M\). As the vector space \(M\) is isomorphic to \(\operatorname{Gr}(P)\) via \(u\rightsquigarrow(P(u),u)\), for \(u\in M\), we have a diassociative algebra structure on \(M\) which is precisely given by (9). (ii) For any \(u,v\in M\), we have \[\psi(u\dashv_{P}v) =\psi(u\cdot_{M}P(v))=\psi(u)\cdot_{M^{\prime}}^{A^{\prime}}\varphi P (v)=\psi(u)\cdot_{M^{\prime}}^{A^{\prime}}P^{\prime}(\psi(v))=\psi(u)\dashv_{P^ {\prime}}\psi(v),\] \[\psi(u\vdash_{P}v) =\psi(P(u)\cdot_{M}v)=\varphi P(u)\cdot_{M^{\prime}}^{A^{\prime}} \psi(v)=P^{\prime}(\psi(u))\cdot_{M^{\prime}}^{A^{\prime}}\psi(v)=\psi(u) \vdash_{P^{\prime}}\psi(v).\] This proves that \(\psi:M_{P}\to M_{P^{\prime}}^{\prime}\) is a morphism of diassociative algebras. The above proposition shows that there is a functor \(\mathcal{F}:\mathbf{rAvg}\to\mathbf{Diass}\) from the category of relative averaging algebras to the category of diassociative algebras. In the following, we will construct a functor in the other direction. Let \((D,\dashv,\vdash)\) be a diassociative algebra. 
Let \(D_{\operatorname{Ass}}\) be the quotient of \(D\) by the ideal generated by the elements \(a\dashv b-a\vdash b\), for \(a,b\in D\). Then \(D_{\operatorname{Ass}}\) is an associative algebra, where the product is given by \([a]\cdot[b]:=[a\dashv b]=[a\vdash b]\), for \([a],[b]\in D_{\operatorname{Ass}}\). Moreover, the vector space \(D\) is a \(D_{\operatorname{Ass}}\)-bimodule, where the left and right \(D_{\operatorname{Ass}}\)-actions on \(D\) are respectively given by \[[a]\cdot_{D}b=a\vdash b\ \ \text{and}\ \ b\cdot_{D}[a]=b\dashv a,\ \text{for}\ [a]\in D_{ \operatorname{Ass}},b\in D.\] With these notations, the quotient map \(q:D\to D_{\operatorname{Ass}}\) is a relative averaging operator as \[q(a)\cdot q(b)=[a]\cdot[b]=\begin{cases}=[a\vdash b]=[[a]\cdot_{D}b]=q\big{(} q(a)\cdot_{D}b\big{)},\\ =[a\dashv b]=[a\cdot_{D}[b]]=q\big{(}a\cdot_{D}q(b)\big{)},\end{cases}\] for \(a,b\in D\). Thus, \(D\xrightarrow{q}D_{\operatorname{Ass}}\) is a relative averaging algebra. Moreover, the induced diassociative algebra structure on \(D\) coincides with the given one, as \[a\dashv_{q}b=a\cdot_{D}q(b)=a\dashv b\ \ \text{and}\ \ a\vdash_{q}b=q(a)\cdot_{D}b=a \vdash b,\ \text{for}\ a,b\in D.\] Let \((D,\dashv,\vdash)\) and \((D^{\prime},\dashv,\vdash^{\prime})\) be two diassociative algebras and \(\psi:D\to D^{\prime}\) be a morphism between them. Then it is easy to verify that the pair \((\varphi,\psi)\) is a morphism of relative averaging algebras from \(D\xrightarrow{q}D_{\operatorname{Ass}}\) to \(D^{\prime}\xrightarrow{q^{\prime}}D^{\prime}_{\operatorname{Ass}}\), where \(\varphi:D_{\operatorname{Ass}}\to D^{\prime}_{\operatorname{Ass}}\) is given by \(\varphi([a])=[\psi(a)]\), for \([a]\in D_{\operatorname{Ass}}\). This construction yields a functor \(\mathcal{G}:\mathbf{Diass}\to\mathbf{rAvg}\) from the category of diassociative algebras to the category of relative averaging algebras. **3.18 Proposition**.: _The functor \(\mathcal{G}:\mathbf{Diass}\to\mathbf{rAvg}\) is left adjoint to the functor \(\mathcal{F}:\mathbf{rAvg}\to\mathbf{Diass}\). More precisely, for any diassociative algebra \((D,\dashv,\vdash)\) and a relative averaging algebra \(M\xrightarrow{P}A\), we have_ \[\operatorname{Hom}_{\mathbf{Diass}}(D,M_{P})\ \cong\ \operatorname{Hom}_{\mathbf{rAvg}}(D \xrightarrow{q}D_{\operatorname{Ass}},M\xrightarrow{P}A).\] Proof.: Let \(\psi\in\operatorname{Hom}_{\mathbf{Diass}}(D,M_{P}).\) We define a map \(\varphi^{\psi}:D_{\operatorname{Ass}}\to A\) by \(\varphi^{\psi}([a])=P(\psi(a))\), for \([a]\in D_{\operatorname{Ass}}\). Then it is easy to see that \(\varphi^{\psi}\) is an algebra morphism. Moreover, we have \[\psi([a]\cdot_{D}b) =\psi(a\vdash b)=\psi(a)\vdash_{P}\psi(b)=P\psi(a)\cdot_{M}\psi(b) =\varphi^{\psi}([a])\cdot_{M}\psi(b),\] \[\psi(b\cdot_{D}[a]) =\psi(b\dashv a)=\psi(b)\dashv_{P}\psi(a)=\psi(b)\cdot_{M}P\psi(a) =\psi(b)\cdot_{M}\varphi^{\psi}([a]),\] for \([a]\in D_{\operatorname{Ass}}\), \(b\in D\). Further, \(\varphi^{\psi}\circ q=P\circ\psi\). Thus, \((\varphi^{\psi},\psi)\in\operatorname{Hom}_{\mathbf{rAvg}}(D\xrightarrow{q}D_{ \operatorname{Ass}},M\xrightarrow{P}A)\). On the other hand, if \((\varphi,\psi)\in\operatorname{Hom}_{\mathbf{rAvg}}(D\xrightarrow{q}D_{ \operatorname{Ass}},M\xrightarrow{P}A)\), then \(\psi\in\operatorname{Hom}_{\mathbf{Diass}}(D,M_{P})\). Finally, the above two correspondences are inverses to each other. **Bimodules over relative averaging algebras.** Here we introduce bimodules over relative averaging algebras. 
We show that a bimodule over a relative averaging algebra gives two representations of the induced diassociative algebra. **3.19 Definition**.: Let \(M\xrightarrow{P}A\) be a relative averaging algebra. A **bimodule** over it consists of a tuple \((N\xrightarrow{Q}B,l,r)\) in which \(N\xrightarrow{Q}B\) is a \(2\)-term chain complex with both \(B\) and \(N\) are \(A\)-bimodules, and \(l:M\times B\to N\) and \(r:B\times M\to N\) are bilinear maps (called the pairing maps) satisfying \[l(a\cdot_{M}u,b)=a\cdot_{N}l(u,b), l(u\cdot_{M}a,b)=l(u,a\cdot_{B}b), l(u,b\cdot_{B}a)=l(u,b)\cdot_{N}a, \tag{11}\] \[r(a\cdot_{B}b,u)=a\cdot_{N}r(b,u), r(b\cdot_{B}a,u)=r(b,a\cdot_{M}u), r(b,u\cdot_{M}a)=r(b,u)\cdot_{N}a, \tag{10}\] and \[P(u)\cdot_{B}Q(n)=Q\big{(}P(u)\cdot_{N}n\big{)}=Q\big{(}l(u,Q(n ))\big{)}, \tag{13}\] \[Q(n)\cdot_{B}P(u)=Q\big{(}r(Q(n),u)\big{)}=Q\big{(}n\cdot_{N}P( u)\big{)}, \tag{12}\] for \(a\in A\), \(b\in B\), \(u\in M\) and \(n\in N\). Sometimes we denote a bimodule as above by the complex \(N\xrightarrow{Q}B\) when the bilinear maps \(l\) and \(r\) are clear from the context. **3.20 Example**.: (Adjoint bimodule) Let \(M\xrightarrow{P}A\) be a relative averaging algebra. Then it is easy to see that the tuple \((M\xrightarrow{P}A,l_{\mathrm{ad}},r_{\mathrm{ad}})\) is a bimodule over the relative averaging algebra \(M\xrightarrow{P}A\), where the pairing maps \(l_{\mathrm{ad}}:M\times A\to M\) and \(r_{\mathrm{ad}}:A\times M\to M\) are respectively the given right and left \(A\)-actions on \(M\). This is called the adjoint bimodule. **3.21 Example**.: (Bimodule over an averaging algebra [41]) Let \((A,P)\) be an averaging algebra. A bimodule over it consists of a pair \((M,Q)\) in which \(M\) is an \(A\)-bimodule and \(Q:M\to M\) is a linear map satisfying for \(a\in A\), \(u\in M\), \[P(a)\cdot_{M}Q(u)=Q(P(a)\cdot_{M}u)=Q(a\cdot_{M}Q(u)),\] \[Q(u)\cdot_{M}P(a)=Q(Q(u)\cdot_{M}a)=Q(u\cdot_{M}P(a)).\] This is equivalent to the fact that the tuple \((M\xrightarrow{Q}M,\cdot_{M},\cdot_{M})\) is a bimodule over the relative averaging algebra \(A\xrightarrow{P}A\). Let \(A\) be an associative algebra. Given an element \(\mathbf{r}=\sum r_{(1)}\otimes r_{(2)}\in A\otimes A\), we consider the following three elements \[\mathbf{r}_{13}\mathbf{r}_{12}= \sum r_{(1)}\cdot\widetilde{r}_{(1)}\otimes\widetilde{r}_{(2)} \otimes r_{(2)},\qquad\mathbf{r}_{12}\mathbf{r}_{23}=\sum r_{(1)}\otimes r_{ (2)}\cdot\widetilde{r}_{(1)}\otimes\widetilde{r}_{(2)}\] \[\text{and}\ \ \mathbf{r}_{23}\mathbf{r}_{13}=\sum r_{(1)} \otimes\widetilde{r}_{(1)}\otimes\widetilde{r}_{(2)}\cdot r_{(2)}\ \text{ of}\ \ A\otimes A\otimes A.\] Here \(\sum\widetilde{r}_{(1)}\otimes\widetilde{r}_{(2)}\) is another copy of \(\mathbf{r}\). An element \(\mathbf{r}\in A\otimes A\) is called an **averaging element** if it satisfies \[\mathbf{r}_{13}\mathbf{r}_{12}=\mathbf{r}_{12}\mathbf{r}_{23}=\mathbf{r}_{23} \mathbf{r}_{13}. \tag{14}\] Let \(r=\sum r_{(1)}\otimes r_{(2)}\in A\otimes A\) be an averaging element. Then the map \(P:A\to A\) defined by \(P(a)=\sum r_{(1)}\cdot a\cdot r_{(2)}\), for \(a\in A\), is an averaging operator on \(A\). 
To see this, we observe that \[P(a)\cdot P(a^{\prime})= \sum r_{(1)}\cdot a\cdot r_{(2)}\cdot\widetilde{r}_{(1)}\cdot a^ {\prime}\cdot\widetilde{r}_{(2)}\] \[= \begin{cases}=\sum r_{(1)}\cdot\widetilde{r}_{(1)}\cdot a\cdot \widetilde{r}_{(2)}\cdot a^{\prime}\cdot r_{(2)}&\text{(since $\mathbf{r}_{13}\mathbf{r}_{12}=\mathbf{r}_{12} \mathbf{r}_{23}$)}\ =P(P(a)\cdot a^{\prime}),\\ =\sum r_{(1)}\cdot a\cdot\widetilde{r}_{(1)}\cdot a^{\prime}\cdot\widetilde{r}_ {(2)}\cdot r_{(2)}&\text{(since $\mathbf{r}_{12}\mathbf{r}_{23}=\mathbf{r}_{23} \mathbf{r}_{13}$)}\ =P(a\cdot P(a^{\prime})),\end{cases}\] for \(a,a^{\prime}\in A\). In other words, \((A,P)\) is an averaging algebra. If \(M\) is any \(A\)-bimodule, we define a linear map \(Q:M\to M\) by \(Q(u)=\sum r_{(1)}\cdot_{M}u\cdot_{M}r_{(2)}\), for \(u\in M\). Then it is easy to verify that \((M,Q)\) is a bimodule over the averaging algebra \((A,P)\). **3.22 Example**.: Let \(M\xrightarrow{P}A\) and \(M^{\prime}\xrightarrow{P^{\prime}}A^{\prime}\) be two relative averaging algebras, and let \((\varphi,\psi)\) be a morphism between them (see Definition 3.2). Then the tuple \((M^{\prime}\xrightarrow{P^{\prime}}A^{\prime},l,r)\) is a bimodule over the relative averaging algebra \(M\xrightarrow{P}A\), where the \(A\)-bimodule structure on \(A^{\prime}\) is induced by the algebra morphism \(\varphi:A\to A^{\prime}\), and the \(A\)-bimodule structure on \(M^{\prime}\) is given by \(a\cdot_{M^{\prime}}m^{\prime}=\varphi(a)\cdot_{M^{\prime}}^{A^{\prime}}m^{\prime}\) and \(m^{\prime}\cdot_{M^{\prime}}a=m^{\prime}\cdot_{M^{\prime}}^{A^{\prime}}\varphi(a)\), for \(a\in A\), \(m^{\prime}\in M^{\prime}\). Moreover, the pairing maps \(l:M\times A^{\prime}\to M^{\prime}\) and \(r:A^{\prime}\times M\to M^{\prime}\) are respectively given by \[l(u,a^{\prime})=\psi(u)\cdot_{M^{\prime}}^{A^{\prime}}a^{\prime}\quad\text{ and }\quad r(a^{\prime},u)=a^{\prime}\cdot_{M^{\prime}}^{A^{\prime}}\psi(u),\text{ for }u\in M,a^{\prime}\in A^{\prime}.\] Note that any bimodule over an associative algebra can be dualized. More generally, if \(A\) is an associative algebra and \(M\) is an \(A\)-bimodule then the dual space \(M^{*}\) can be equipped with an \(A\)-bimodule structure with left and right \(A\)-actions given by \[(a\cdot_{M^{*}}f)(u)=f(u\cdot_{M}a)\text{ \ and \ }(f\cdot_{M^{*}}a)(u)=f(a \cdot_{M}u),\text{ for }a\in A,\ f\in M^{*},\ u\in M.\] In the following result, we give the dual construction of a bimodule over a relative averaging algebra. **3.23 Proposition**.: _Let \(M\xrightarrow{P}A\) be a relative averaging algebra and \((N\xrightarrow{Q}B,l,r)\) be a bimodule over it. 
Then \((B^{*}\xrightarrow{Q^{*}}N^{*},l^{*},r^{*})\) is also a bimodule, where \(B^{*},N^{*}\) are equipped with dual \(A\)-bimodule structures and the pairings \(l^{*}:M\times N^{*}\to B^{*}\) and \(r^{*}:N^{*}\times M\to B^{*}\) are respectively given by_ \[l^{*}(u,f_{N})(b)=f_{N}(r(b,u))\text{ \ and \ }r^{*}(f_{N},u)(b)=f_{N}(l(u,b)), \text{ \ for }u\in M,\ f_{N}\in N^{*},\ b\in B.\] Proof.: For any \(a\in A\), \(u\in M\), \(f_{N}\in N^{*}\) and \(b\in B\), we first observe that \[l^{*}(a\cdot_{M}u,f_{N})(b)=f_{N}(r(b,a\cdot_{M}u))\stackrel{{( \ref{eq:M})}}{{=}}f_{N}(r(b\cdot_{B}a,u))=l^{*}(u,f_{N})(b\cdot_{B}a)=(a\cdot_ {B^{*}}l^{*}(u,f_{N}))(b),\] \[l^{*}(u\cdot_{M}a,f_{N})(b)=f_{N}(r(b,u\cdot_{M}a))\stackrel{{( \ref{eq:M})}}{{=}}f_{N}(r(b,u)\cdot_{N}a)=(a\cdot_{N^{*}}f_{N})(r(b,u))=l^{*} (u,a\cdot_{N^{*}}f_{N})(b),\] \[l^{*}(u,f_{N}\cdot_{N^{*}}a)(b)=f_{N}(a\cdot_{N}r(b,u))\stackrel{{ (\ref{eq:M})}}{{=}}f_{N}(r(a\cdot_{B}b,u))=l^{*}(u,f_{N})(a\cdot_{B}b)=(l^{*} (u,f_{N})\cdot_{B^{*}}a)(b).\] This shows that the identities in (10) hold for the dual structure. Similarly, one can verify the identities in (11) for the dual structure. Finally, for any \(u\in M\), \(f_{B}\in B^{*}\) and \(n\in N\), we have \[\big{(}P(u)\cdot_{N^{*}}Q^{*}(f_{B})\big{)}(n) =Q^{*}(f_{B})\big{(}n\cdot_{N}P(u)\big{)}\] \[=f_{B}\big{(}Q(n\cdot_{N}P(u))\big{)}\] \[=\begin{cases}=f_{B}\big{(}Q(n)\cdot_{B}P(u)\big{)}=\big{(}P(u) \cdot_{B^{*}}f_{B}\big{)}(Q(n))=Q^{*}\big{(}P(u)\cdot_{B^{*}}f_{B}\big{)}(n), \\ =f_{B}\big{(}Q\circ r(Q(n),u)\big{)}=l^{*}\big{(}u,Q^{*}(f_{B})\big{)}(Q(n))=Q^{ *}\big{(}l^{*}(u,Q^{*}(f_{B}))\big{)}(n).\end{cases}\] Similarly, we have \[\big{(}Q^{*}(f_{B})\cdot_{N^{*}}P(u)\big{)}(n) =Q^{*}(f_{B})\big{(}P(u)\cdot_{N}n\big{)}\] \[=f_{B}\big{(}Q(P(u)\cdot_{N}n)\big{)}\] \[=\begin{cases}=f_{B}\big{(}Q\circ l(u,Q(n))\big{)}=r^{*}\big{(}Q^ {*}(f_{B}),u\big{)}(Q(n))=Q^{*}\big{(}r^{*}(Q^{*}(f_{B}),u)\big{)}(n),\\ =f_{B}\big{(}P(u)\cdot_{B}Q(n)\big{)}=\big{(}f_{B}\cdot_{B^{*}}P(u)\big{)}(Q(n))= Q^{*}\big{(}f_{B}\cdot_{B^{*}}P(u)\big{)}(n).\end{cases}\] This shows that the identities in (12) and (13) also hold for the dual structure. Hence \((B^{*}\xrightarrow{Q^{*}}N^{*},l^{*},r^{*})\) is a bimodule over the relative averaging algebra \(M\xrightarrow{P}A\). Let \(M\xrightarrow{P}A\) be a relative averaging algebra. Then \((A^{*}\xrightarrow{P^{*}}M^{*},l^{*},r^{*})\) is a bimodule, where the pairings \(l^{*}:M\times M^{*}\to A^{*}\) and \(r^{*}:M^{*}\times M\to A^{*}\) are respectively given by \[l^{*}(u,f_{M})(a)=f_{M}(a\cdot_{M}u)\text{ \ and \ }r^{*}(f_{M},u)(a)=f_{M}(u \cdot_{M}a),\text{ \ for }u\in M,\ f_{M}\in M^{*},\ a\in A.\] Note that this bimodule is dual to the adjoint bimodule given in Example 3.20. Let \(M\xrightarrow{P}A\) be a relative averaging algebra and \((N\xrightarrow{Q}B,l,r)\) be a bimodule over it. 
Since \(B\) is an \(A\)-bimodule, one can consider the semidirect product algebra \(A\oplus B\) with the product \[(a,b)\cdot_{\ltimes}(a^{\prime},b^{\prime})=\big{(}a\cdot a^{\prime},a\cdot_{B}b^{\prime}+b\cdot_{B}a^{\prime}\big{)},\text{ for }(a,b),(a^{\prime},b^{\prime})\in A\oplus B.\] It has been shown in [9] that the vector space \(M\oplus N\) carries a bimodule structure over the semidirect product algebra \(A\oplus B\) with left and right \((A\oplus B)\)-actions respectively given by \[(a,b)\triangleright(u,n)=(a\cdot_{M}u,a\cdot_{N}n+r(b,u))\text{ \ and \ }(u,n)\triangleleft(a,b)=(u\cdot_{M}a,l(u,b)+n\cdot_{N}a), \tag{15}\] for \((a,b)\in A\oplus B\) and \((u,n)\in M\oplus N\). With these notations, we have the following result. **3.24 Theorem**.: _(Semidirect product) Let \(M\xrightarrow{P}A\) be a relative averaging algebra and \((N\xrightarrow{Q}B,l,r)\) be a bimodule over it. Then \(M\oplus N\xrightarrow{P\oplus Q}A\oplus B\) is a relative averaging algebra._ Proof.: We have already seen that \(A\oplus B\) is an associative algebra (with the semidirect product structure) and \(M\oplus N\) is an \((A\oplus B)\)-bimodule with left and right actions given by (15). Next, for any \((u,n),(u^{\prime},n^{\prime})\in M\oplus N\), we observe that \[(P\oplus Q)(u,n)\cdot_{\ltimes}(P\oplus Q)(u^{\prime},n^{\prime}) =(P(u),Q(n))\cdot_{\ltimes}(P(u^{\prime}),Q(n^{\prime}))\] \[=\big{(}P(u)\cdot P(u^{\prime}),P(u)\cdot_{B}Q(n^{\prime})+Q(n)\cdot_{B}P(u^{\prime})\big{)}\] \[=\big{(}P(P(u)\cdot_{M}u^{\prime}),Q(P(u)\cdot_{N}n^{\prime})+Q(r(Q(n),u^{\prime}))\big{)}\] \[=(P\oplus Q)\big{(}P(u)\cdot_{M}u^{\prime},P(u)\cdot_{N}n^{\prime}+r(Q(n),u^{\prime})\big{)}\] \[=(P\oplus Q)\big{(}((P\oplus Q)(u,n))\triangleright(u^{\prime},n^{\prime})\big{)}.\] Also, we have \[(P\oplus Q)(u,n)\cdot_{\ltimes}(P\oplus Q)(u^{\prime},n^{\prime}) =\big{(}P(u)\cdot P(u^{\prime}),P(u)\cdot_{B}Q(n^{\prime})+Q(n)\cdot_{B}P(u^{\prime})\big{)}\] \[=\big{(}P(u\cdot_{M}P(u^{\prime})),Q(l(u,Q(n^{\prime})))+Q(n\cdot_{N}P(u^{\prime}))\big{)}\] \[=(P\oplus Q)\big{(}u\cdot_{M}P(u^{\prime}),l(u,Q(n^{\prime}))+n\cdot_{N}P(u^{\prime})\big{)}\] \[=(P\oplus Q)\big{(}(u,n)\triangleleft((P\oplus Q)(u^{\prime},n^{\prime}))\big{)}.\] This proves that \(P\oplus Q:M\oplus N\to A\oplus B\) is a relative averaging operator. In other words, \(M\oplus N\xrightarrow{P\oplus Q}A\oplus B\) is a relative averaging algebra. **3.25 Proposition**.: _Let \(M\xrightarrow{P}A\) be a relative averaging algebra and \((N\xrightarrow{Q}B,l,r)\) be a bimodule over it. Then the vector space \(N\) carries a representation of the induced diassociative algebra \(M_{P}\) with the action maps given by_ \[\begin{cases}\dashv:M_{P}\otimes N\to N,&u\dashv n=l(u,Q(n)),\\ \vdash:M_{P}\otimes N\to N,&u\vdash n=P(u)\cdot_{N}n,\\ \dashv:N\otimes M_{P}\to N,&n\dashv u=n\cdot_{N}P(u),\\ \vdash:N\otimes M_{P}\to N,&n\vdash u=r(Q(n),u).\end{cases} \tag{16}\] Proof.: To prove the result, we consider the semidirect product relative averaging algebra \(M\oplus N\xrightarrow{P\oplus Q}A\oplus B\) given in Theorem 3.24. 
Then it follows that the vector space \(M\oplus N\) carries a diassociative algebra structure (denoted by \((M\oplus N)_{P\oplus Q}\)) with the operations \[(u,n)\dashv_{P\oplus Q}(u^{\prime},n^{\prime}) =(u,n)\triangleleft\big{(}P(u^{\prime}),Q(n^{\prime})\big{)}=\big{(}u\dashv_{P}u^{\prime},l(u,Q(n^{\prime}))+n\cdot_{N}P(u^{\prime})\big{)},\] \[(u,n)\vdash_{P\oplus Q}(u^{\prime},n^{\prime}) =\big{(}P(u),Q(n)\big{)}\triangleright(u^{\prime},n^{\prime})=\big{(}u\vdash_{P}u^{\prime},P(u)\cdot_{N}n^{\prime}+r(Q(n),u^{\prime})\big{)},\] for \((u,n),(u^{\prime},n^{\prime})\in M\oplus N\). This shows that the diassociative algebra \(M_{P}\) has a representation on \(N\) with the structure maps (16), and the diassociative algebra \((M\oplus N)_{P\oplus Q}\) is nothing but the semidirect product of the diassociative algebra \(M_{P}\) with the representation \(N\). **3.26 Proposition**.: _Let \(M\xrightarrow{P}A\) be a relative averaging algebra and \((N\xrightarrow{Q}B,l,r)\) be a bimodule over it. Then the vector space \(B\) can be given a representation of the induced diassociative algebra \(M_{P}\) with the action maps given by_ \[\begin{cases}\dashv:M_{P}\otimes B\to B,&u\dashv b=P(u)\cdot_{B}b-Q\big{(}l(u,b)\big{)},\\ \vdash:M_{P}\otimes B\to B,&u\vdash b=P(u)\cdot_{B}b,\\ \dashv:B\otimes M_{P}\to B,&b\dashv u=b\cdot_{B}P(u),\\ \vdash:B\otimes M_{P}\to B,&b\vdash u=b\cdot_{B}P(u)-Q\big{(}r(b,u)\big{)}.\end{cases} \tag{17}\]
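As a quick consistency check of (17) (this specialization is the one used in Proposition 4.3 below), take the adjoint bimodule of Example 3.20, so that \(B=A\), \(N=M\), \(Q=P\), \(l(u,a)=u\cdot_{M}a\) and \(r(a,u)=a\cdot_{M}u\); then the maps (17) read \[u\dashv a=P(u)\cdot a-P(u\cdot_{M}a),\qquad u\vdash a=P(u)\cdot a,\qquad a\dashv u=a\cdot P(u),\qquad a\vdash u=a\cdot P(u)-P(a\cdot_{M}u),\] for \(u\in M\) and \(a\in A\).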
## 4. Maurer-Cartan characterization and cohomology of relative averaging operators

In this section, we show that relative averaging operators can be characterized as Maurer-Cartan elements of a suitable graded Lie algebra, and we use this characterization to define the cohomology of a relative averaging operator. Let \(A\) be an associative algebra and \(M\) be an \(A\)-bimodule, and let \(\Delta\) denote the element of \(CY^{2}(A\oplus M,A\oplus M)\) determined by the multiplication of \(A\) together with the left and right \(A\)-actions on \(M\) (its explicit description is recalled in (21)). Using \(\Delta\) and the partial composition operations, one obtains a graded Lie bracket \(\llbracket\,\ \rrbracket\) on the graded space \(\oplus_{n=1}^{\infty}CY^{n}(M,A)=\oplus_{n=1}^{\infty}\operatorname{Hom}(\mathbf{k}[Y_{n}]\otimes M^{\otimes n},A)\), defined for \(f\in CY^{m}(M,A)\) and \(g\in CY^{n}(M,A)\). 
In terms of \(\circ_{i}\) operations (see (6)), the above bracket is \[\llbracket f,g\rrbracket =\sum_{i=1}^{m}(-1)^{(i-1)n}f\circ_{i}(\Delta\circ_{1}g)-\sum_{i=1 }^{m}(-1)^{in}f\circ_{i}(\Delta\circ_{2}g)\] \[\quad-(-1)^{mn}\big{\{}\sum_{i=1}^{n}(-1)^{(i-1)m}g\circ_{i}( \Delta\circ_{1}f)-\sum_{i=1}^{n}(-1)^{im}g\circ_{i}(\Delta\circ_{2}f)\big{\}}\] \[\quad+(-1)^{mn}(\Delta\circ_{1}f)\circ_{m+1}g-(\Delta\circ_{1}g) \circ_{n+1}f.\] Explicitly, the bracket is given by \[\llbracket f,g\rrbracket(y;u_{1},\ldots,u_{m+n})\] \[=\sum_{i=1}^{m}(-1)^{(i-1)n}f\bigg{(}R_{0}^{m;i,n+1}(y);u_{1}, \ldots,u_{i-1},\] \[\qquad\qquad\qquad\qquad\qquad\Delta\big{(}R_{0}^{2;1,n}R_{i}^{m; i,n+1}(y);g\big{(}R_{1}^{2;1,n}R_{i}^{m;i,n+1}(y);u_{i},\ldots,u_{i+n-1}\big{)},u_{ i+n}\big{)},u_{i+n+1},\ldots,u_{m+n}\bigg{)}\] \[-\sum_{i=1}^{m}(-1)^{in}f\bigg{(}R_{0}^{m;i,n+1}(y);u_{1},\ldots,u _{i-1},\] \[\qquad\qquad\qquad\qquad\qquad\Delta\big{(}R_{0}^{2;2,n}R_{i}^{m; i,n+1}(y);u_{i},g\big{(}R_{2}^{2;2,n}R_{i}^{m;i,n+1}(y);u_{i+1},\ldots,u_{i+n}\big{)} \big{)},u_{i+n+1},\ldots,u_{m+n}\bigg{)}\] \[-(-1)^{mn}\sum_{i=1}^{n}(-1)^{(i-1)m}g\bigg{(}R_{0}^{n;i,m+1}(y);u _{1},\ldots,u_{i-1},\] \[\qquad\qquad\qquad\qquad\Delta\big{(}R_{0}^{2;1,m}R_{i}^{n;i,m+1 }(y);f\big{(}R_{1}^{2;1,m}R_{i}^{n;i,m+1}(y);u_{i},\ldots,u_{i+m-1}\big{)},u_{ i+m}\big{)},u_{i+m+1},\ldots,u_{m+n}\bigg{)}\] \[+(-1)^{mn}\sum_{i=1}^{n}(-1)^{im}g\bigg{(}R_{0}^{n;i,m+1}(y);u_{1},\ldots,u_{i-1},\] \[\qquad\qquad\qquad\qquad\Delta\big{(}R_{0}^{2;2,m}R_{i}^{n;i,m+1 }(y);u_{i},f\big{(}R_{1}^{2;2,m}R_{i}^{n;i,m+1}(y);u_{i+1},\ldots,u_{i+m}\big{)} \big{)},u_{i+m+1},\ldots,u_{m+n}\bigg{)}\] \[+(-1)^{mn}\Delta\bigg{(}R_{0}^{2;1,m}R_{0}^{m+1;m+1,n}(y);f\big{(} R_{1}^{2;1,m}R_{0}^{m+1;m+1,n}(y);u_{1},\ldots,u_{m}\big{)},g\big{(}R_{m+1}^{m+ 1;m+1,n}(y);u_{m+1},\ldots,u_{m+n}\big{)}\bigg{)}\] \[-\Delta\bigg{(}R_{0}^{2;1,n}R_{0}^{n+1;n+1,m}(y);g\big{(}R_{1}^{2 ;1,n}R_{0}^{n+1;n+1,m}(y);u_{1},\ldots,u_{n}\big{)},f\big{(}R_{n+1}^{n+1;n+1,m }(y);u_{n+1},\ldots,u_{m+n}\big{)}\bigg{)}, \tag{19}\] for \(y\in Y_{m+n}\) and \(u_{1},\ldots,u_{m+n}\in M.\) This graded Lie bracket can be extended to the graded space \(CY^{\bullet}(M,A)=\oplus_{n=0}^{\infty}CY^{n}(M,A)=\oplus_{n=0}^{\infty}\text{ Hom}(\Bbbk[Y_{n}]\otimes M^{\otimes n},A)\) by the following rules \[\llbracket f,a\rrbracket(y;u_{1},\ldots,u_{m}) =\;\sum_{i=1}^{m}f\big{(}y;u_{1},\ldots,u_{i-1},a\cdot_{M}u_{i}- u_{i}\cdot_{M}a,u_{i+1},\ldots,u_{m}\big{)}\] \[\quad+f(y;u_{1},\ldots,u_{m})\cdot a-a\cdot f(y;u_{1},\ldots,u_{m}),\] \[\llbracket a,b\rrbracket =a\cdot b-b\cdot a,\text{ for }f\in CY^{m}(M,A),\ y\in Y_{m}\text{ and }a,b\in A=CY^{0}(M,A).\] With all the above notations, we have the following interesting result. **4.1 Theorem**.: _Let \(A\) be an associative algebra and \(M\) be an \(A\)-bimodule. Then the pair \(\big{(}CY^{\bullet}(M,A),\llbracket\,\ \rrbracket\big{)}\) is a graded Lie algebra. Moreover, a linear map \(P:M\to A\) is a relative averaging operator if and only if \(P\in CY^{1}(M,A)\) is a Maurer-Cartan element in the graded Lie algebra \(\big{(}CY^{\bullet}(M,A),\llbracket\,\ \rrbracket\big{)}\)._ Proof.: The first part follows from the previous discussion. To prove the second part, we first observe that any linear map \(P:M\to A\) can be identified with an element (denoted by the same notation) \(P\in CY^{1}(M,A)\), where \(P(\big{\nearrow};u)=P(u)\), for \(u\in M\) and the unique tree \(\big{\nearrow}\in Y_{1}\). 
With this identification, it follows from (19) that, for \(u,v\in M\) and any \(2\)-tree \(y\in Y_{2}\), \[\llbracket P,P\rrbracket(y;u,v)=2\Big{(}P\big{(}\Delta(y;P(u),v)\big{)}+P\big{(}\Delta(y;u,P(v))\big{)}-\Delta\big{(}y;P(u),P(v)\big{)}\Big{)}.\] Evaluating \(\Delta\) on the two trees in \(Y_{2}\), this equals \(2\big{(}P(u\cdot_{M}P(v))-P(u)\cdot P(v)\big{)}\) for one tree and \(2\big{(}P(P(u)\cdot_{M}v)-P(u)\cdot P(v)\big{)}\) for the other. Hence \(P\) is a Maurer-Cartan element (i.e. \(\llbracket P,P\rrbracket=0\)) if and only if \(P\) is a relative averaging operator. Let \(A\) be an associative algebra and \(M\) be an \(A\)-bimodule. In the previous theorem, we have seen that any relative averaging operator \(P:M\to A\) can be considered as a Maurer-Cartan element in the graded Lie algebra \(\big{(}CY^{\bullet}(M,A),\llbracket\,\ \rrbracket\big{)}\). Hence a relative averaging operator \(P\) induces a differential \[d_{P}:=\llbracket P,-\rrbracket:CY^{n}(M,A)\to CY^{n+1}(M,A),\ \text{for}\ n\geq 0,\] which makes \(\{CY^{\bullet}(M,A),d_{P}\}\) into a cochain complex. The corresponding cohomology is called the **cohomology** of the relative averaging operator \(P\), and the \(n\)-th cohomology group is denoted by \(H^{n}_{P}(M,A)\). Moreover, the differential \(d_{P}\) makes the triple \(\big{(}CY^{\bullet}(M,A),d_{P},\llbracket\,\ \rrbracket\big{)}\) into a differential graded Lie algebra. This differential graded Lie algebra controls the deformations of the relative averaging operator \(P\) (see the theorem below). For this reason, we call the differential graded Lie algebra \(\{CY^{\bullet}(M,A),d_{P}\}\) the **controlling algebra** for the operator \(P\). **4.2 Theorem**.: _Let \(P:M\to A\) be a relative averaging operator. For any linear map \(P^{\prime}:M\to A\), the sum \(P+P^{\prime}\) is also a relative averaging operator if and only if \(P^{\prime}\) is a Maurer-Cartan element in the differential graded Lie algebra \((CY^{\bullet}(M,A),d_{P},\llbracket\,\ \rrbracket)\)._ Proof.: Note that the sum \(P+P^{\prime}\) is a relative averaging operator if and only if \(\llbracket P+P^{\prime},P+P^{\prime}\rrbracket=0\), equivalently, \[\llbracket P,P^{\prime}\rrbracket+\llbracket P^{\prime},P\rrbracket+\llbracket P^{\prime},P^{\prime}\rrbracket=0.\] This holds if and only if \(d_{P}(P^{\prime})+\frac{1}{2}\llbracket P^{\prime},P^{\prime}\rrbracket=0\), which is equivalent to the fact that \(P^{\prime}\) is a Maurer-Cartan element in the differential graded Lie algebra \((CY^{\bullet}(M,A),d_{P},\llbracket\,\ \rrbracket)\). Next, we show that the cohomology of a relative averaging operator \(P:M\to A\) can be seen as the cohomology of the induced diassociative algebra \(M_{P}\) with coefficients in a suitable representation on \(A\). We start with the following result, which is a particular case of Proposition 3.26. **4.3 Proposition**.: _Let \(P:M\to A\) be a relative averaging operator. 
Then there is a representation of the induced diassociative algebra \(M_{P}\) on the vector space \(A\) with the action maps_ \[\dashv M_{P}\times A\to A,\quad u\dashv a=P(u)\cdot a-P(u\cdot_{M}a),\] \[\dashv M_{P}\times A\to A,\quad u\vdash a=P(u)\cdot a,\] \[\dashv A\times M_{P}\to A,\quad a\dashv u=a\cdot P(u),\] \[\dashv A\times M_{P}\to A,\quad a\vdash u=a\cdot P(u)-P(a\cdot_{M }u).\] It follows from the above proposition that one may define the cohomology of the induced diassociative algebra \(M_{P}\) with coefficients in the above representation on \(A\). More precisely, we consider the cochain complex \(\{CY^{\bullet}(M_{P},A),\delta^{P}_{\text{\rm{Bims}}}\}\), where \(CY^{n}(M_{P},A):=\text{\rm{Hom}}(\mathbf{k}[Y_{n}]\otimes M^{\otimes n},A)\) for \(n\geq 0\), and the coboundary map \(\delta^{P}_{\text{\rm{Bims}}}:CY^{n}(M_{P},A)\to CY^{n+1}(M_{P},A)\) given by \[\delta^{P}_{\text{\rm{Bims}}}(f) (y;u_{1},\dots,u_{n+1})=u_{1}\star_{0}^{y}f(d_{0}y;u_{2},\dots,u_{ n+1})\] \[+\sum_{i=1}^{n}(-1)^{i}\ f(d_{i}y;u_{1},\dots,u_{i}(u_{i}^{y})_{ P}u_{i+1},\dots,u_{n+1})+(-1)^{n+1}\ f(d_{n+1}y;u_{1},\dots u_{n})\star_{n+1}^{y}u_{n+1},\] for \(f\in CY^{n}(M_{P},A)\), \(y\in Y_{n+1}\) and \(u_{1},\ldots,u_{n+1}\in M\). Here \((\star_{i}^{y})_{P}\) represents the product \(\dashv_{P}\) or \(\vdash_{P}\) accordingly when \(\star_{i}^{y}\) is given by \(\dashv\) or \(\vdash\). We denote the \(n\)-th cohomology group of cochain complex \(\{CY^{\bullet}(M_{P},A),\delta_{\rm{Diss}}^{P}\}\) by \(H_{\rm{Diss}}^{n}(M_{P},A)\). **4.4 Proposition**.: _Let \(P:M\to A\) be a relative averaging operator. Then the coboundary operators \(d_{P}\) and \(\delta_{\rm{Diss}}^{P}\) are related by_ \[d_{P}(f)=(-1)^{n}\ \delta_{\rm{Diss}}^{P}(f),\text{ for }f\in CY^{n}(M,A).\] Proof.: For any \(y\in Y_{n+1}\) and \(u_{1},\ldots,u_{n+1}\in M\), we have \[\big{(}d_{P}(f)\big{)}(y;u_{1},\ldots,u_{n+1})\] \[=[P,f](y;u_{1},\ldots,u_{n+1})\] \[=P\big{(}\Delta\big{(}R_{0}^{2,1,n}(y);f\big{(}R_{1}^{2;1,n}(y);u _{1},\ldots,u_{n}\big{)},u_{n+1}\big{)}\big{)}\] \[\qquad-(-1)^{n}P\big{(}\Delta\big{(}R_{0}^{2,2,n}(y);u_{1},f \big{(}R_{2}^{2;2,n}(y);u_{2},\ldots,u_{n+1}\big{)}\big{)}\big{)}\] \[\qquad-(-1)^{n}\sum_{i=1}^{n}(-1)^{i-1}f\big{(}R_{0}^{n;i,2}(y);u _{1},\ldots,u_{i-1},\Delta\big{(}R_{i}^{n;i,2}(y);P(u_{i}),u_{i+1}\big{)},u_{i +2},\ldots,u_{n+1}\big{)}\] \[\qquad+(-1)^{n}\sum_{i=1}^{n}(-1)^{i}f\big{(}R_{0}^{n;i,2}(y);u_{ 1},\ldots,u_{i-1},\Delta\big{(}R_{i}^{n;i,2}(y);u_{i},P(u_{i+1})\big{)},u_{i +2},\ldots,u_{n+1}\big{)}\] \[\qquad+(-1)^{n}\Delta\big{(}R_{0}^{2,2,n}(y);P(u_{1}),f\big{(}R_ {2}^{2;2,n}(y),u_{2},\ldots,u_{n+1}\big{)}\big{)}\] \[\qquad-\Delta\big{(}R_{0}^{2;1,n}(y);f\big{(}R_{1}^{2;1,n}(y);u_{ 1},\ldots,u_{n}\big{)},P(u_{n+1})\big{)}\] \[=(-1)^{n}\bigg{\{}\Delta\big{(}R_{0}^{2;2,n}(y);P(u_{1}),f\big{(} R_{2}^{2;2,n}(y),u_{2},\ldots,u_{n+1}\big{)}\big{)}\] \[\qquad\qquad-P\big{(}\Delta\big{(}R_{0}^{2;2,n}(y);u_{1},f\big{(} R_{2}^{2;2,n}(y);u_{2},\ldots,u_{n+1}\big{)}\big{)}\big{)}\] \[+\sum_{i=1}^{n}(-1)^{i}f\big{(}R_{0}^{n;i,2}(y);u_{1},\ldots,u_{i -1},\underbrace{\Delta\big{(}R_{i}^{n;i,2}(y);P(u_{i}),u_{i+1}\big{)}+\Delta \big{(}R_{i}^{n;i,2}(y);u_{i},P(u_{i+1})\big{)}}_{=u_{i}(\star_{i}^{y})_{P}u _{i+1}},u_{i+2},\ldots,u_{n+1}\big{)}\] \[\qquad\qquad+(-1)^{n+1}\Delta\big{(}R_{0}^{2;1,n}(y);f\big{(}R_{1 }^{2;1,n}(y);u_{1},\ldots,u_{n}\big{)},P(u_{n+1})\big{)}\] \[\qquad\qquad-(-1)^{n+1}P\big{(}\Delta\big{(}R_{0}^{2;1,n}(y);f \big{(}R_{1}^{2;1,n}(y);u_{1},\ldots,u_{n}\big{)},u_{n+1}\big{)}\big{)}\bigg{\}}\] 
\[=(-1)^{n}\bigg{\{}u_{1}\star_{0}^{y}f(d_{0}(y);u_{2},\ldots,u_{n+ 1})+\sum_{i=1}^{n}(-1)^{i}f(R_{0}^{n;i,2}(y);u_{1},\ldots,u_{i-1},u_{i}(\star_{ i}^{y})_{P}u_{i+1},\ldots,u_{n+1})\] \[\qquad\qquad+(-1)^{n+1}f(d_{n+1}(y);u_{1},\ldots,u_{n})\star_{n+1} ^{y}u_{n+1}\bigg{\}}\quad(\text{as }R_{2}^{2;2,n}(y)=d_{0}(y)\text{ and }R_{1}^{2;1,n}(y)=d_{n+1}(y))\] \[=(-1)^{n}(\delta_{\rm{Diss}}^{P}(f))(u_{1},\ldots,u_{n+1}).\] This completes the proof. It follows from the above proposition that the cohomology of a relative averaging operator \(P\) is isomorphic to the cohomology of the diassociative algebra \(M_{P}\) with coefficients in the representation \(A.\) That is, \[H_{P}^{\bullet}(M,A)\cong H_{\rm{Diss}}^{\bullet}(M_{P},A).\] **4.5 Remark**.: The cohomology of a relative averaging operator \(P\) is useful to study deformations of the operator \(P\) by keeping the underlying algebra and bimodule intact. See [8] for the similar deformation theory of relative Rota-Baxter operators. ### Cohomological relation with diassociative algebras Let \(P:M\to A\) be a relative averaging operator. In the following, we find the relation between the cohomology of the relative averaging operator \(P\) and the cohomology of the induced diassociative algebra \(M_{P}\) with coefficients in the adjoint representation. To do this, we define a collection \(\{\Theta_{n}\}_{n=0}^{\infty}\) of maps \[\Theta_{n}:CY^{n}(M,A)\to CY^{n+1}(M_{P},M_{p})\] by \[\Theta_{n}(f)(y;u_{1},\dots,u_{n+1})=\begin{cases}(-1)^{n+1}\ u_{1}\cdot_{M}f(y _{1};u_{2},\dots,u_{n+1})&\text{ if $y=|\lor y_{1}$ for some $n$-tree $y_{1}$},\\ f(y_{1};u_{1},\dots,u_{n})\cdot_{M}u_{n+1}&\text{ if $y=y_{1}\lor|$ for some $n$-tree $y_{1}$},\\ 0&\text{ otherwise,}\end{cases}\] for \(u_{1},\dots,u_{n+1}\in M.\) Then we have the following. **4.6 Lemma**.: _For \(f\in CY^{m}(M,A)\) and \(g\in CY^{n}(M,A)\), we have_ \[[\Theta_{m}(f),\Theta_{n}(g)]_{\sf MM}=\Theta_{m+n}\big{(}[\![f,g]\!]\big{)}.\] _In other words, the collection \(\{\Theta_{n}\}_{n=0}^{\infty}\) defines a morphism of graded Lie algebras from \((CY^{\bullet}(M,A),[\![\,\ ]\!])\) to \((CY^{\bullet+1}(M_{P},M_{P}),[\,\ ]_{\sf MM}).\)_ Proof.: Let \(y\in Y_{m+n+1}\) be an \((m+n+1)\)-tree and \(u_{0},u_{1},\dots,\dots,u_{m+n}\) be elements of \(M\). 
If \(y=|\lor y_{1}\), for some \((m+n)\)-tree \(y_{1}\), then \[[\Theta_{m}(P),\Theta_{n}(Q)]_{\sf MM}\big{(}y;u_{0},u_{1},\dots,u _{m+n}\big{)}\] \[=\bigg{(}\sum_{i=1}^{m+1}(-1)^{(i-1)n}\ \Theta_{m}(P)\circ_{i}\Theta_{n}(Q)-(-1)^{mn}\sum_{i=1}^{n+1}(-1)^{(i-1)m}\ \Theta_{n}(Q)\circ_{i}\Theta_{m}(P)\bigg{)}\big{(}y;u_{0},u_{1},\dots,u_{m+n} \big{)}\] \[=\Theta_{m}(P)\big{(}R_{0}^{m+1;1,n+1}(y);\Theta_{n}(Q)\big{(}R_{ 1}^{m+1;1,n+1}(y);u_{0},\dots,u_{n}\big{)},u_{n+1},\dots,u_{m+n}\big{)}\] \[+\sum_{i=1}^{m}(-1)^{in}\ \Theta_{m}(P)\big{(}R_{0}^{m+1;i+1,n+1}( y);u_{0},\dots,u_{i-1},\Theta_{n}(Q)\big{(}R_{i+1}^{m+1;i+1,n+1}(y);u_{i}, \dots,u_{i+n}\big{)},u_{i+n+1},\dots,u_{m+n}\big{)}\] \[-(-1)^{mn}\bigg{\{}\Theta_{n}(Q)\big{(}R_{0}^{n+1;1,m+1}(y); \Theta_{m}(P)\big{(}R_{1}^{n+1;1,m+1}(y);u_{0},\dots,u_{m}\big{)},u_{m+1}, \dots,u_{m+n}\big{)}\] \[+\sum_{i=1}^{n}(-1)^{im}\ \Theta_{n}(Q)\big{(}R_{0}^{n+1;i+1,m+1}( y);u_{o},\dots,u_{i-1},\Theta_{m}(P)\big{(}R_{i+1}^{n+1;i+1,m+1}(y);u_{i}, \dots,u_{i+m}\big{)},u_{i+m+1},\dots,u_{m+n}\big{)}\bigg{\}}\] \[=(-1)^{m+n+1}\ u_{0}\cdot_{M}\big{(}[\![P,Q]\!](y_{1};u_{1},\dots, u_{m+n})\big{)}=\big{(}\Theta_{m+n}[\![P,Q]\!]\big{)}(y;u_{0},u_{1},\dots,u_{m+n} ).\] On the other hand, if \(y=y_{1}\lor|\), for some \((m+n)\)-tree \(y_{1}\), and \(u_{1},\dots,u_{m+n+1}\) are elements of \(M\), then \[[\Theta_{m}(P),\Theta_{n}(Q)]_{\sf MM}(y;u_{1},u_{2},\dots,u_{m+n +1})\] \[=\bigg{(}\sum_{i=1}^{m+1}(-1)^{(i-1)n}\ \Theta_{m}(P)\circ_{i}\Theta_{n}(Q)-(-1)^{mn} \sum_{i=1}^{n+1}(-1)^{(i-1)m}\ \Theta_{n}(Q)\circ_{i}\Theta_{m}(P)\bigg{)}\big{(}y;u_{0},u_{1},\dots,u_{m+n} \big{)}\] \[=\sum_{i=1}^{m}(-1)^{(i-1)n}\ \Theta_{m}(P)\big{(}R_{0}^{m+1;i,n+1}( y);u_{1},\dots,u_{i-1},\Theta_{n}(Q)\big{(}R_{i}^{m+1;i,n+1}(y);u_{i},\dots,u_{i+n} \big{)},\dots,u_{m+n+1}\big{)}\] \[+(-1)^{mn}\Theta_{m}(P)\big{(}R_{0}^{m+1;m+1,n+1}(y);u_{1},\dots, u_{m},\Theta_{n}(Q)\big{(}R_{m+1}^{m+1;m+1,n+1}(y);u_{m+1},\dots,u_{m+n+1} \big{)}\big{)}\] \[-(-1)^{mn}\bigg{\{}\sum_{i=1}^{n}(-1)^{(i-1)m}\ \Theta_{n}(Q)\big{(}R_{0}^{n+1;i,m+1}( y);u_{1},\dots,u_{i-1},\Theta_{m}(P)\big{(}R_{i}^{n+1;i,m+1}(y);u_{i}, \dots,u_{i+m}\big{)},\dots,u_{m+n+1}\big{)}\] \[+(-1)^{mn}\Theta_{n}(Q)\big{(}R_{0}^{n+1;n+1,m+1}(y);u_{1},\dots, u_{n},\Theta_{m}(P)\big{(}R_{n+1}^{n+1;n+1,m+1}(y);u_{n+1},\dots,u_{m+n+1} \big{)}\big{)}\bigg{\}}\] \[=\big{(}[\![P,Q]\!](y;u_{1},\dots,u_{m+n})\big{)}\cdot_{M}u_{m+n+1 }=\big{(}\Theta_{m+n}[\![P,Q]\!]\big{)}(y;u_{1},\dots,u_{m+n+1}).\] Finally, for any other \(y\)'s in \(Y_{m+n+1}\) (that are not of the form \(|\lor y_{1}\) or \(y_{1}\lor|\)), one can easily verify from the partial compositions (6) that \[[\Theta_{m}(P),\Theta_{n}(Q)](y;u_{0},u_{1},\dots,u_{m+n})=0=(\Theta_{m+n}[\![P,Q] \!])(y;u_{0},u_{1},\dots,u_{m+n}).\] This concludes the proof. Let \(\pi_{P}\in CY^{2}(M_{P},M_{P})\) be the Maurer-Cartan element corresponding to the induced diassociative algebra \(M_{P}\). In other words, \(\pi_{P}\) is given by Then it follows from the above lemma that the following diagram commutes As a consequence, we get the following result. **4.7 Theorem**.: _Let \(P:M\to A\) be a relative averaging operator. Then there is a morphism_ \[\Theta_{\bullet}:H_{P}^{\bullet}(M,A)\to H_{\rm{Diss}}^{\bullet+1}(M_{P},M_{P})\] _from the cohomology of the relative averaging operator \(P\) to the cohomology of the induced diassociative algebra \(M_{P}\) with coefficients in the adjoint representation._ ## 5. 
The controlling algebra and cohomology for relative averaging algebras

In this section, we first construct a \(L_{\infty}\)-algebra whose Maurer-Cartan elements are precisely relative averaging algebras. Next, given a relative averaging algebra, we construct the corresponding controlling \(L_{\infty}\)-algebra. Finally, we define the cohomology of a relative averaging algebra with coefficients in a given bimodule. \(L_{\infty}\)**-algebras.** The notion of \(L_{\infty}\)-algebras (also known as strongly homotopy Lie algebras) first appeared in the work of Lada and Stasheff [22]. In this paper, we follow the equivalent definition by a degree shift [21]. **5.1 Definition**.: A \(L_{\infty}\)**-algebra** is a pair \((L,\{l_{k}\}_{k=1}^{\infty})\) consisting of a graded vector space \(L=\oplus_{i\in\mathbb{Z}}L_{i}\) equipped with a collection \(\{l_{k}:L^{\otimes k}\to L\}_{k=1}^{\infty}\) of degree 1 graded linear maps that are graded symmetric in the sense that \[l_{k}(x_{\sigma(1)},\ldots,x_{\sigma(k)})=\epsilon(\sigma)l_{k}(x_{1},\ldots,x_{k}),\text{ for }k\geq 1\text{ and }\sigma\in\mathbb{S}_{k},\] and satisfy the following higher Jacobi identities: \[\sum_{i+j=n+1}\sum_{\sigma\in\mathbb{S}_{(i,n-i)}}\epsilon(\sigma)\ l_{j}\big{(}l_{i}(x_{\sigma(1)},\ldots,x_{\sigma(i)}),x_{\sigma(i+1)},\ldots,x_{\sigma(n)}\big{)}=0,\] for all \(n\geq 1\) and homogeneous elements \(x_{1},\ldots,x_{n}\in L\). Here \(\epsilon(\sigma)\) is the Koszul sign that appears in the graded context. Throughout the paper, we assume that all \(L_{\infty}\)-algebras are weakly filtered [15] (see also [24]). Thus, certain infinite sums in \(L\) are always convergent. **5.2 Definition**.: Let \((L,\{l_{k}\}_{k=1}^{\infty})\) be a \(L_{\infty}\)-algebra. An element \(\alpha\in L_{0}\) is said to be a **Maurer-Cartan element** of the \(L_{\infty}\)-algebra if \(\alpha\) satisfies \[l_{1}(\alpha)+\frac{1}{2!}l_{2}(\alpha,\alpha)+\cdots+\frac{1}{n!}\ l_{n}(\alpha,\ldots,\alpha)+\cdots=0\quad\big{(}\text{i.e. }\sum_{k=1}^{\infty}\frac{1}{k!}l_{k}(\alpha,\ldots,\alpha)=0\big{)}.\] If \((L,\{l_{k}\}_{k=1}^{\infty})\) is a \(L_{\infty}\)-algebra and \(\alpha\in L_{0}\) is a Maurer-Cartan element of it, then one can construct a new \(L_{\infty}\)-algebra \((L,\{l_{k}^{\alpha}\}_{k=1}^{\infty})\) on the same graded vector space \(L\) with the structure maps given by \[l_{k}^{\alpha}(x_{1},\ldots,x_{k})=l_{k}(x_{1},\ldots,x_{k})+l_{1+k}(\alpha,x_{1},\ldots,x_{k})+\cdots+\frac{1}{n!}\ l_{n+k}(\underbrace{\alpha,\ldots,\alpha}_{n\text{ copies}},x_{1},\ldots,x_{k})+\cdots,\text{ for }k\geq 1.\] This is called the \(L_{\infty}\)-algebra obtained from \((L,\{l_{k}\}_{k=1}^{\infty})\) twisted by the Maurer-Cartan element \(\alpha\). **5.3 Remark**.: ([15]) Let \(\alpha\) be a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((L,\{l_{k}\}_{k=1}^{\infty})\). Then for any \(\alpha^{\prime}\in L_{0}\), the sum \(\alpha+\alpha^{\prime}\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((L,\{l_{k}\}_{k=1}^{\infty})\) if and only if \(\alpha^{\prime}\) is a Maurer-Cartan element of the twisted \(L_{\infty}\)-algebra \((L,\{l_{k}^{\alpha}\}_{k=1}^{\infty})\). There is a well-known construction of a \(L_{\infty}\)-algebra given by Voronov [39]. 
Let \((\mathfrak{g},\mathfrak{a},p,\Delta)\) be a quadruple consists of a graded Lie algebra \(\mathfrak{g}\) (with the graded Lie bracket \([\,\ ]\)), an abelian graded Lie subalgebra \(\mathfrak{a}\subset\mathfrak{g}\), a projection map \(p:\mathfrak{g}\rightarrow\mathfrak{g}\) with \(\operatorname{im}(p)=\mathfrak{a}\) and \(\ker(p)\subset\mathfrak{g}\) a graded Lie subalgebra, and an element \(\Delta\in\ker(p)_{1}\) that satisfies \([\Delta,\Delta]=0\). Such a quadruple is called a \(V\)**-data**. **5.4 Theorem**.: _Let \((\mathfrak{g},\mathfrak{a},p,\Delta)\) be a \(V\)-data._ _(i) Then the graded vector space \(\mathfrak{a}\) can be equipped with a \(L_{\infty}\)-algebra with the structure maps_ \[l_{k}(a_{1},\ldots,a_{k})=p[\cdots[[\Delta,a_{1}],a_{2}],\ldots,a_{k}],\text{ for }k\geq 1.\] _(ii) Let \(\mathfrak{h}\subset\mathfrak{g}\) be a graded Lie subalgebra that satisfies \([\Delta,\mathfrak{h}]\subset\mathfrak{h}\). Then the graded vector space \(s^{-1}\mathfrak{h}\oplus\mathfrak{a}\) can be given a \(L_{\infty}\)-algebra with the structure maps_ \[l_{1}\big{(}(s^{-1}x,a)\big{)} =\big{(}-s^{-1}[\Delta,x],p(x+[\Delta,a])\big{)},\] \[l_{2}\big{(}(s^{-1}x,0),(s^{-1}y,0)\big{)} =\big{(}(-1)^{|x|}s^{-1}[x,y],0\big{)},\] \[l_{k}\big{(}(s^{-1}x,0),(0,a_{1}),\ldots,(0,a_{k-1})\big{)} =\big{(}0,p[\cdots[[x,a_{1}],a_{2}],\ldots,a_{k-1}]\big{)},\ k \geq 2,\] \[l_{k}\big{(}(0,a_{1}),\ldots,(0,a_{k})\big{)} =\big{(}0,p[\cdots[[\Delta,a_{1}],a_{2}],\ldots,a_{k}]\big{)},\ k \geq 2,\] _for homogeneous elements \(x,y\in\mathfrak{h}\) (which are considered as elements \(s^{-1}x,s^{-1}y\in s^{-1}\mathfrak{h}\) by a degree shift) and homogeneous elements \(a_{1},\ldots,a_{k}\in\mathfrak{a}\). Up to permutations of the above inputs, all other maps vanish._ Maurer-Cartan characterization of relative averaging algebras.Let \(A\) and \(M\) be two vector spaces. Consider the graded Lie algebra \[\mathfrak{g}=\big{(}\oplus_{n=0}^{\infty}CY^{n+1}(A\oplus M,A\oplus M),[\,\ ]_{ \sf MM}\big{)}\] associated to the vector space \(A\oplus M\). For any \(k,l\geq 0\), let \(\mathcal{A}^{k,l}\) be the direct sum of all possible \((k+l)\) tensor powers of \(A\) and \(M\) in which \(A\) appears \(k\) times and \(M\) appears \(l\) times. For instance, \[\mathcal{A}^{2,0}=A\otimes A,\quad\mathcal{A}^{0,2}=M\otimes M\ \text{ and }\ \mathcal{A}^{1,1}=(A\otimes M)\oplus(M\otimes A).\] Then for any \(n\geq 1\), there is an isomorphism \((A\oplus M)^{\otimes n}\cong\oplus_{k+l=n}\mathcal{A}^{k,l}\) of vector spaces. A linear map \(f\in CY^{n+1}(A\oplus M,A\oplus M)\) is said to have **bidegree**\(k|l\) with \(k+l=n\) if \[f(\mathbf{k}[Y_{n+1}]\otimes\mathcal{A}^{k+1,l})\subset A,\quad f(\mathbf{k}[Y _{n+1}]\otimes\mathcal{A}^{k,l+1})\subset M\ \text{ and }\ f=0\text{ otherwise}.\] We denote the set of all linear maps of bidegree \(k|l\) by \(CY^{k|l}(A\oplus M,A\oplus M)\). Note that there are natural isomorphisms \[CY^{k|0}(A\oplus M,A\oplus M) \cong\text{Hom}(\mathbf{k}[Y_{k+1}]\otimes A^{\otimes k+1},A) \oplus\text{Hom}(\mathbf{k}[Y_{k+1}]\otimes\mathcal{A}^{k,1},M),\] \[CY^{-1|l}(A\oplus M,A\oplus M) \cong\text{Hom}(\mathbf{k}[Y_{l}]\otimes M^{\otimes l},A).\] Moreover, we have the following interesting result. 
**5.5 Proposition**.: _For \(f\in CY^{k_{f}|l_{f}}(A\oplus M,A\oplus M)\) and \(g\in CY^{k_{g}|l_{g}}(A\oplus M,A\oplus M)\), we have_ \[[f,g]_{\sf MM}\in CY^{k_{f}+k_{g}|l_{f}+l_{g}}(A\oplus M,A\oplus M).\] Proof.: Let \(f\in CY^{m+1}(A\oplus M,A\oplus M)\) and \(g\in CY^{n+1}(A\oplus M,A\oplus M).\) Then we have \(k_{f}+l_{f}=m\) and \(k_{g}+l_{g}=n\). For any \(y\in Y_{m+n+1}\), \(1\leq i\leq m+1\) and \(x_{1}\otimes\cdots\otimes x_{m+n+1}\in\mathcal{A}^{k_{f}+k_{g}+1,l_{f}+l_{g}}\), we have \[(f\circ_{i}g)\big{(}y;x_{1},\ldots,x_{m+n+1}\big{)}=f\big{(}R_{0}^{m+1;i,n+1}(y );x_{1},\ldots,x_{i-1},g\big{(}R_{i}^{m+1;i,n+1}(y);x_{i},\ldots,x_{i+n}\big{)}, \ldots,x_{m+n+1}\big{)}. \tag{20}\] Note that the term \(g\big{(}R_{i}^{m+1;i,n+1}(y);x_{i},\ldots,x_{i+n}\big{)}\) is nonvanishing only when the tensor product \(x_{i}\otimes x_{i+1}\otimes\cdots\otimes x_{i+n}\) lies in \(\mathcal{A}^{k_{g}+1,l_{g}}\) or lies in \(\mathcal{A}^{k_{g},l_{g}+1}\). Case 1. (Let \(x_{i}\otimes x_{i+1}\otimes\cdots\otimes x_{i+n}\in\mathcal{A}^{k_{g}+1,l_{g}}\).) In this case, \(g\big{(}R_{i}^{m+1;i,n+1}(y);x_{i},\ldots,x_{i+n}\big{)}\in A\). Hence the tensor product \[x_{1}\otimes\cdots\otimes x_{i-1}\otimes g\big{(}R_{i}^{m+1;i,n+1}(y);x_{i}, \ldots,x_{i+n}\big{)}\otimes x_{i+n+1}\otimes\cdots\otimes x_{m+n+1}\ \in\mathcal{A}^{k_{f}+1,l_{f}}.\] As a consequence, the term (20) lies in \(A\). Case 2. (Let \(x_{i}\otimes x_{i+1}\otimes\cdots\otimes x_{i+n}\in\mathcal{A}^{k_{g},l_{g}+1}\).) In this case, \(g\big{(}R_{i}^{m+1;i,n+1}(y);x_{i},\ldots,x_{i+n}\big{)}\in M\). Hence the tensor product \[x_{1}\otimes\cdots\otimes x_{i-1}\otimes g\big{(}R_{i}^{m+1;i,n+1}(y);x_{i}, \ldots,x_{i+n}\big{)}\otimes x_{i+n+1}\otimes\cdots\otimes x_{m+n+1}\ \in\mathcal{A}^{k_{f}+1,l_{f}}.\] As a consequence, the term (20) also lies in \(A\). Therefore, we always have \((f\circ_{i}g)(\mathcal{A}^{k_{f}+k_{g}+1,l_{f}+l_{g}})\subset A\). Similarly, we can show that \[(f\circ_{i}g)(\mathcal{A}^{k_{f}+k_{g},l_{f}+l_{g}+1})\subset M\quad\text{ and }\quad f\circ_{i}g=0\ \text{ otherwise}.\] By interchanging the roles of \(f\) and \(g\), we get similar results for \(g\circ_{i}f\). Therefore, it follows from (5) that \[[f,g]_{\sf MM}(\mathcal{A}^{k_{f}+k_{g}+1,l_{f}+l_{g}})\subset A,\quad[f,g]_{ \sf MM}(\mathcal{A}^{k_{f}+k_{g},l_{f}+l_{g}+1})\subset M\ \text{ and }\ [f,g]_{\sf MM}=0\ \text{ otherwise}.\] Hence we get that \([f,g]_{\sf MM}\in CY^{k_{f}+k_{g}|l_{f}+l_{g}}(A\oplus M,A\oplus M)\). As a consequence of the previous proposition, we get the following. **5.6 Proposition**.: _Let \(A\) and \(M\) be two vector spaces. Then_ _(i) \(\mathfrak{h}=CY^{\bullet|0}(A\oplus M,A\oplus M)=\oplus_{n=0}^{\infty}CY^{n |0}(A\oplus M,A\oplus M)\subset\mathfrak{g}\) is a graded Lie subalgebra;_ _(ii) \(\mathfrak{a}=CY^{-1|\bullet+1}(A\oplus M,A\oplus M)=\oplus_{n=0}^{\infty}CY^ {-1|n+1}(A\oplus M,A\oplus M)\subset\mathfrak{g}\) is an abelian subalgebra._ Next, we construct a \(V\)-data as follows. Let \(\mathfrak{g}=\big{(}\oplus_{n=0}^{\infty}CY^{n+1}(A\oplus M,A\oplus M),[\,\ ]_{\sf MM} \big{)}\) be the graded Lie algebra associated to the vector space \(A\oplus M\). Consider the abelian Lie subalgebra \(\mathfrak{a}=\oplus_{n=0}^{\infty}CY^{-1|n+1}(A\oplus M,A\oplus M)\), and let \(p:\mathfrak{g}\rightarrow\mathfrak{g}\) be the projection onto the subspace \(\mathfrak{a}\). Then the quadruple \((\mathfrak{g},\mathfrak{a},p,\overline{\Delta}=0)\) is a \(V\)-data. 
Moreover, it follows from Proposition 5.6 that \(\mathfrak{h}=\oplus_{n=0}^{\infty}CY^{n|0}(A\oplus M,A\oplus M)\) is a graded Lie subalgebra of \(\mathfrak{g}\) that obviously satisfies \([\overline{\Delta},\mathfrak{h}]_{\sf MM}\subset\mathfrak{h}\). Hence by applying Theorem 5.4, we obtain the following. **5.7 Theorem**.: _Let \(A\) and \(M\) be two vector spaces. Then there is a \(L_{\infty}\)-algebra structure on the graded vector space \(s^{-1}\mathfrak{h}\oplus\mathfrak{a}\) with the structure maps \(\{l_{k}\}_{k=1}^{\infty}\) are given by_ \[l_{2}((s^{-1}f,0),(s^{-1}g,0)) =((-1)^{|f|}\ s^{-1}[f,g]_{\sf MM},0),\] \[l_{k}((s^{-1}f,0),(0,h_{1}),\ldots,(0,h_{k-1})) =(0,p[\cdots[[f,h_{1}]_{\sf MM},h_{2}]_{\sf MM},\ldots,h_{k-1}]_ {\sf MM}),\ k\geq 2,\] _for homogeneous elements \(f,g\in\mathfrak{h}\) (considered as elements \(s^{-1}f,s^{-1}g\in s^{-1}\mathfrak{h}\)) and homogeneous elements \(h_{1},\ldots,h_{k-1}\in\mathfrak{a}\). Up to permutations of the above entries, all other maps vanish._ Let \(A\) and \(M\) be two vector spaces. Suppose there are maps \[\mu\in\operatorname{Hom}(A^{\otimes 2},A),\ l_{M}\in\operatorname{Hom}(A\otimes M,M), \ r_{M}\in\operatorname{Hom}(M\otimes A,M)\ \text{ and }\ P\in\operatorname{Hom}(M,A).\] We define an element \(\Delta\in\mathfrak{h}_{1}=CY^{1|0}(A\oplus M,A\oplus M)=\operatorname{Hom}( \mathbf{k}[Y_{2}]\otimes A^{\otimes 2},A)\oplus\operatorname{Hom}(\mathbf{k}[Y_{2}] \otimes\mathcal{A}^{1,1},M)\) by \[\Delta\big{(}\big{\}};(a,u),(b,v)\big{)}=(\mu(a,b),r_{M}(u,b))\ \text{ and }\ \Delta\big{(}\big{\}};(a,u),(b,v)\big{)}=(\mu(a,b),l_{M}(a,v)), \tag{21}\] for \((a,u),(b,v)\in A\oplus M\). Note that \(\Delta\) can be regarded as an element \(s^{-1}\Delta\in(s^{-1}\mathfrak{h})_{0}\). **5.8 Theorem**.: _With the above notations, \(A_{\mu}:=(A,\mu)\) is an associative algebra, \(M_{l_{M},r_{M}}:=(M,l_{M},r_{M})\) is an \(A_{\mu}\)-bimodule and \(P:M\to A\) is a relative averaging operator (in short, \(M_{l_{M},r_{M}}\xrightarrow{P}A_{\mu}\) is a relative averaging algebra) if and only if \(\alpha=(s^{-1}\Delta,P)\in(s^{-1}\mathfrak{h}\oplus\mathfrak{a})_{0}\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((s^{-1}\mathfrak{h}\oplus\mathfrak{a},\{l_{k}\}_{k=1}^{\infty})\)._ Proof.: First observe that \(l_{1}((s^{-1}\Delta,P))=0\). Moreover, it follows from Proposition 5.5 that \[[\Delta,P]_{\mathsf{MM}}\in CY^{0|1}(A\oplus M,A\oplus M),\quad[[ \Delta,P]_{\mathsf{MM}},P]_{\mathsf{MM}}\in CY^{-1|2}(A\oplus M,A\oplus M)\] \[\text{and}\ \ [[[\Delta,P]_{\mathsf{MM}},P]_{\mathsf{MM}},P]_{ \mathsf{MM}}\in CY^{-2|3}(A\oplus M,A\oplus M).\] Since the space \(CY^{-2|3}(A\oplus M,A\oplus M)\) is trivial, we have \([[[\Delta,P]_{\mathsf{MM}},P]_{\mathsf{MM}},P]_{\mathsf{MM}}=0.\) As a consequence, we have \(l_{k}\big{(}(s^{-1}\Delta,P),\dots,(s^{-1}\Delta,P)\big{)}=0\) for \(k\geq 4.\) Hence \[\sum_{k=1}^{\infty}\frac{1}{k!}\ l_{k}\big{(}(s^{-1}\Delta,P), \dots,(s^{-1}\Delta,P)\big{)}\] \[=\frac{1}{2!}l_{2}\big{(}(s^{-1}\Delta,P),(s^{-1}\Delta,P)\big{)} \ +\ \frac{1}{3!}l_{3}\big{(}(s^{-1}\Delta,P),(s^{-1}\Delta,P),(s^{-1}\Delta,P) \big{)} \tag{22}\] \[=\big{(}-\frac{1}{2}s^{-1}[\Delta,\Delta]_{\mathsf{MM}},\ \frac{1}{2}[[\Delta,P]_{\mathsf{MM}},P]_{ \mathsf{MM}}\big{)}.\] Observe that \[[\Delta,\Delta]_{\mathsf{MM}} =0\ \ \text{if and only if $A_{\mu}$ is an associative algebra and $M_{l_{M},r_{M}}$ is an $A_{\mu}$-bimodule,}\] \[[[\Delta,P]_{\mathsf{MM}},P]_{\mathsf{MM}} =0\ \ \text{if and only if $P$ is a relative averaging operator (cf. 
Theorem 4.1).}\] Thus, it follows from (22) that \(\alpha=(s^{-1}\Delta,P)\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((s^{-1}\mathfrak{h}\oplus\mathfrak{a},\{l_{k}\}_{k=1}^{\infty})\) if and only if \(M_{l_{M},r_{M}}\xrightarrow{P}A_{\mu}\) is a relative averaging algebra. Let \(M_{l_{M},r_{M}}\xrightarrow{P}A_{\mu}\) be a given relative averaging algebra. Here \(\mu\) denotes the associative multiplication on \(A\), and \(l_{M},r_{M}\) respectively denote the left and right \(A\)-actions on \(M\). We have seen in the previous theorem that \(\alpha=(s^{-1}\Delta,P)\in(s^{-1}\mathfrak{h}\oplus\mathfrak{a})_{0}\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((s^{-1}\mathfrak{h}\oplus\mathfrak{a},\{l_{k}\}_{k=1}^{\infty})\), where \(\Delta\) is given by (21). Therefore, we can consider the \(L_{\infty}\)-algebra \(\big{(}s^{-1}\mathfrak{h}\oplus\mathfrak{a},\{l_{k}^{(s^{-1}\Delta,P)}\}_{k=1}^{\infty}\big{)}\) twisted by the Maurer-Cartan element \(\alpha=(s^{-1}\Delta,P)\). Then by following Remark 5.3, we get the next result. **5.9 Theorem**.: _Let \(M_{l_{M},r_{M}}\xrightarrow{P}A_{\mu}\) be a given relative averaging algebra with the corresponding Maurer-Cartan element \(\alpha=(s^{-1}\Delta,P)\in(s^{-1}\mathfrak{h}\oplus\mathfrak{a})_{0}\). Suppose there are maps_ \[\mu^{\prime}\in\operatorname{Hom}(A^{\otimes 2},A),\ l_{M}^{\prime}\in\operatorname{Hom}(A\otimes M,M),\ r_{M}^{\prime}\in\operatorname{Hom}(M\otimes A,M)\ \ \text{and}\ \ P^{\prime}\in\operatorname{Hom}(M,A).\] _Then \(M_{l_{M}+l_{M}^{\prime},r_{M}+r_{M}^{\prime}}\xrightarrow{P+P^{\prime}}A_{\mu+\mu^{\prime}}\) is a relative averaging algebra if and only if \(\alpha^{\prime}=(s^{-1}\Delta^{\prime},P^{\prime})\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \(\big{(}s^{-1}\mathfrak{h}\oplus\mathfrak{a},\{l_{k}^{(s^{-1}\Delta,P)}\}_{k=1}^{\infty}\big{)}\), where \(\Delta^{\prime}\) is defined similarly to (21)._ The above theorem shows that the \(L_{\infty}\)-algebra \(\big{(}s^{-1}\mathfrak{h}\oplus\mathfrak{a},\{l_{k}^{(s^{-1}\Delta,P)}\}_{k=1}^{\infty}\big{)}\) controls the deformations of the relative averaging algebra \(M_{l_{M},r_{M}}\xrightarrow{P}A_{\mu}\). For this reason, the \(L_{\infty}\)-algebra \(\big{(}s^{-1}\mathfrak{h}\oplus\mathfrak{a},\{l_{k}^{(s^{-1}\Delta,P)}\}_{k=1}^{\infty}\big{)}\) is called the **controlling algebra** for the given relative averaging algebra \(M_{l_{M},r_{M}}\xrightarrow{P}A_{\mu}\). **5.10 Remark**.: Let \(M\xrightarrow{P}A\) be a relative averaging algebra. Since the corresponding controlling algebra \(\big{(}s^{-1}\mathfrak{h}\oplus\mathfrak{a},\{l_{k}^{(s^{-1}\Delta,P)}\}_{k=1}^{\infty}\big{)}\) is a \(L_{\infty}\)-algebra, it follows that \((l_{1}^{(s^{-1}\Delta,P)})^{2}=0\). We will use this fact in the construction of the cochain complex of the relative averaging algebra \(M\xrightarrow{P}A\). **Cohomology of relative averaging algebras (with adjoint bimodule).** Here we will define the cohomology of a relative averaging algebra \(M\xrightarrow{P}A\) (with coefficients in the adjoint bimodule). 
For each \(n\geq 0\), we define an abelian group \(C^{n}_{\mathrm{rAvg}}(M\xrightarrow{P}A)\) by \[C^{n}_{\mathrm{rAvg}}(M\xrightarrow{P}A)=\begin{cases}0&\text{if }n=0,\\ \operatorname{Hom}(A,A)\oplus\operatorname{Hom}(M,M)&\text{if }n=1,\\ \operatorname{Hom}(A^{\otimes n},A)\oplus\operatorname{Hom}(\mathcal{A}^{n-1,1},M)\oplus\operatorname{Hom}(\mathbf{k}[Y_{n-1}]\otimes M^{\otimes n-1},A)&\text{if }n\geq 2.\end{cases}\] Before we define the coboundary map \(\delta_{\mathrm{rAvg}}:C^{n}_{\mathrm{rAvg}}(M\xrightarrow{P}A)\to C^{n+1}_{\mathrm{rAvg}}(M\xrightarrow{P}A)\), we observe the following. First, there is an embedding \(\operatorname{Hom}((A\oplus M)^{\otimes n},A\oplus M)\hookrightarrow\operatorname{Hom}(\mathbf{k}[Y_{n}]\otimes(A\oplus M)^{\otimes n},A\oplus M),\ f\mapsto\widetilde{f}\), where \(\widetilde{f}\) is given by \[\widetilde{f}(y;x_{1},\dots,x_{n})=f(x_{1},\dots,x_{n}),\text{ for all }y\in Y_{n}\text{ and }x_{1},\dots,x_{n}\in A\oplus M.\] With this, the classical Gerstenhaber bracket \([\,\ ]_{\mathsf{G}}\) on the graded space \(\oplus_{n=1}^{\infty}\mathrm{Hom}((A\oplus M)^{\otimes n},A\oplus M)\) embeds into the Majumdar-Mukherjee bracket \([\,\ ]_{\mathsf{MM}}\). When we restrict the above embedding, we obtain embeddings \[\operatorname{Hom}(A^{\otimes n},A)\hookrightarrow\operatorname{Hom}(\mathbf{k}[Y_{n}]\otimes A^{\otimes n},A),\ f\mapsto\widetilde{f},\] \[\operatorname{Hom}(\mathcal{A}^{n-1,1},M)\hookrightarrow\operatorname{Hom}(\mathbf{k}[Y_{n}]\otimes\mathcal{A}^{n-1,1},M),\ g\mapsto\widetilde{g}.\] Note that an element \((f,g)\in C^{1}_{\mathrm{rAvg}}(M\xrightarrow{P}A)=\operatorname{Hom}(A,A)\oplus\operatorname{Hom}(M,M)\) can be identified with the element \((s^{-1}(\widetilde{f}+\widetilde{g}),0)\in(s^{-1}\mathfrak{h}\oplus\mathfrak{a})_{-1}\). Here we assume that \(\mathfrak{a}_{-1}=0\). Similarly, an element \((f,g,\gamma)\in C^{n\geq 2}_{\mathrm{rAvg}}(M\xrightarrow{P}A)\) can be identified with the element \((s^{-1}(\widetilde{f}+\widetilde{g}),\gamma)\in(s^{-1}\mathfrak{h}\oplus\mathfrak{a})_{n-2}\). Using the above identifications, we now define a map \(\delta_{\mathrm{rAvg}}:C^{n}_{\mathrm{rAvg}}(M\xrightarrow{P}A)\to C^{n+1}_{\mathrm{rAvg}}(M\xrightarrow{P}A)\) by \[\delta_{\mathrm{rAvg}}((f,g))= -l_{1}^{(s^{-1}\Delta,P)}(s^{-1}(\widetilde{f}+\widetilde{g}),0),\text{ for }(f,g)\in C^{1}_{\mathrm{rAvg}}(M\xrightarrow{P}A),\] \[\delta_{\mathrm{rAvg}}((f,g,\gamma))= (-1)^{n-2}l_{1}^{(s^{-1}\Delta,P)}(s^{-1}(\widetilde{f}+\widetilde{g}),\gamma),\text{ for }(f,g,\gamma)\in C^{n\geq 2}_{\mathrm{rAvg}}(M\xrightarrow{P}A).\] It follows from Remark 5.10 that \((\delta_{\mathrm{rAvg}})^{2}=0\). In other words, \(\{C^{\bullet}_{\mathrm{rAvg}}(M\xrightarrow{P}A),\delta_{\mathrm{rAvg}}\}\) is a cochain complex. The corresponding cohomology is called the **cohomology** of the relative averaging algebra \(M\xrightarrow{P}A\). We denote the corresponding \(n\)-th cohomology group by \(H^{n}_{\mathrm{rAvg}}(M\xrightarrow{P}A)\). 
Note that \[\delta_{\mathrm{rAvg}}((f,g,\gamma))\] \[=(-1)^{n-2}l_{1}^{(s^{-1}\Delta,P)}(s^{-1}(\widetilde{f}+ \widetilde{g}),\gamma)\] \[=(-1)^{n-2}\sum_{k=0}^{\infty}\frac{1}{k!}k_{k+1}\big{(}\underbrace {(s^{-1}\Delta,P),\dots,(s^{-1}\Delta,P)}_{\text{$k$ times}},(s^{-1}(\widetilde{f}+ \widetilde{g}),\gamma)\big{)}\] \[=(-1)^{n-2}\bigg{\{}l_{2}\big{(}(s^{-1}\Delta,0),(s^{-1}( \widetilde{f}+\widetilde{g}),0)\big{)}+l_{3}\big{(}(s^{-1}\Delta,0),(0,P),(0, \gamma)\big{)}\] \[\qquad\qquad+\frac{1}{n!}l_{n+1}((s^{-1}(\widetilde{f}+ \widetilde{g}),0),\underbrace{(0,P),\dots,(0,P)}_{\text{$n$ times}}\big{)} \bigg{\}}\quad\text{(as the other terms get vanished)}\] \[=(-1)^{n-2}\bigg{(}-s^{-1}[\Delta,\widetilde{f}+\widetilde{g}]_{ \mathsf{MM}}\,\ [[\Delta,P]_{\mathsf{MM}},\gamma]_{\mathsf{MM}}+\frac{1}{n!} \underbrace{[\cdots[\widetilde{f}+\widetilde{g},P]_{\mathsf{MM}},P]_{\mathsf{ MM}},\dots,P]_{\mathsf{MM}}}_{\text{$n$ times}}\bigg{)}. \tag{23}\] Using the above identifications, the term (23) (which lies is \((s^{-1}\mathfrak{h}\oplus\mathfrak{a})_{n-1}\)) can be identified with the element \[\big{(}(-1)^{n-1}[\mu,f]_{\mathfrak{G}}\,\ (-1)^{n-1}[\mu+l_{M}+r_{M},f+g]_{ \mathfrak{G}}\,\ \delta^{P}_{\mathrm{Diss}}(\gamma)+h_{P}(f,g)\big{)}\in C^{n+1}_{r\mathrm{Avg }}(M\xrightarrow{P}A).\] Here the first component \((-1)^{n-1}[\mu,f]_{\mathfrak{G}}\) is nothing but \(\delta_{\mathrm{Hoch}}(f)\), where \(\delta_{\mathrm{Hoch}}\) is the Hochschild coboundary operator of the associative algebra \(A\) with coefficients in the adjoint \(A\)-bimodule. We denote the second component \((-1)^{n-1}[\mu+l_{M}+r_{M},f+g]_{\mathfrak{G}}\in\mathrm{Hom}(\mathcal{A}^{n,1 },M)\) by the notation \(\delta^{f}_{\mathrm{Hoch}}(g)\) and it is given by \[\big{(}\delta^{f}_{\mathrm{Hoch}}(g)\big{)} (a_{1},\dots,a_{n+1})=a_{1}\cdot_{M}(f+g)(a_{2},\dots,a_{n+1})\] \[+\sum_{i=1}^{n}(-1)^{i}g(a_{1},\dots,a_{i-1},(\mu+\cdot_{M})(a_{i},a_{i+1}),\dots,a_{n+1})+(-1)^{n+1}(f+g)(a_{1},\dots,a_{n})\cdot_{M}a_{n+1},\] for \(a_{1}\otimes\dots\otimes a_{n+1}\in\mathcal{A}^{n,1}\) (i.e. all \(a_{i}\)'s are from \(A\) except one, which is from \(M\)). Finally, to better understand the term \(h_{P}(f,g)\), we first realize an element of \(\mathrm{Hom}(\mathbf{k}[Y_{i}]\otimes(A\oplus M)^{\otimes l},A\oplus M)\) as a degree \((l-1)\) coderivation on the free dendriform coalgebra \(\oplus_{n=1}^{\infty}\mathbf{k}[Y_{n}]\otimes(s^{-1}A\oplus s^{-1}M)^{\otimes n}\). See [38] for details. With this identification, the Majumdar-Mukherjee bracket can be seen as the commutator bracket of coderivations on the dendriform coalgebra \(\oplus_{n=1}^{\infty}\mathbf{k}[Y_{n}]\otimes(s^{-1}A\oplus s^{-1}M)^{\otimes n}\). Hence, for any \(y\in Y_{n}\) (say \(y=y_{1}\lor y_{2}\) for some unique \((i-1)\)-tree \(y_{1}\in Y_{i-1}\) and \((n-i)\)-tree \(y_{2}\in Y_{n-i}\)) and \(u_{1},\dots,u_{n}\in M\), \[(h_{P}(f,g))(y;u_{1},\dots,u_{n})\] \[=\frac{(-1)^{n}}{n!}\bigg{\{}n!\big{(}P(u_{1}),\dots,P(u_{n}) \big{)}-p\big{(}P(u_{1}),\dots,P(u_{i-1}),u_{i},P(u_{i+1}),\dots,P(u_{n}) \big{)}\bigg{\}}.\] Hence the coboundary map \(\delta_{\mathrm{FAvg}}\) is given by \(\delta_{\mathrm{FAvg}}((f,g,\gamma))=\big{(}\delta_{\mathrm{Hoch}}(f),\delta^ {f}_{\mathrm{Hoch}}(g),\delta^{P}_{\mathrm{Diss}}(\gamma)+h_{P}(f,g)\big{)}\), for \((f,g,\gamma)\in C^{n}_{r\mathrm{Avg}}(M\xrightarrow{P}A)\). Let \(M\xrightarrow{P}A\) be a relative averaging algebra. 
In the following, we construct a long exact sequence that connects the cohomology of the operator \(P\) and the cohomology of the full relative averaging algebra \(M\xrightarrow{P}A.\) We first consider a new cochain complex \(\{C^{\bullet}_{\mathrm{AssBimod}}({}^{A}M^{A},{}^{A}M^{A}),\delta_{\mathrm{AssBimod}}\}\), where \[C^{0}_{\mathrm{AssBimod}}({}^{A}M^{A},{}^{A}M^{A})=0\ \ \text{and}\ \ C^{n\geq 1}_{\mathrm{AssBimod}}({}^{A}M^{A},{}^{A}M^{A})=\mathrm{Hom}(A^{\otimes n},A)\oplus\mathrm{Hom}(\mathcal{A}^{n-1,1},M).\] The coboundary map \(\delta_{\mathrm{AssBimod}}\) is given by \[\delta_{\mathrm{AssBimod}}((f,g))=(\delta_{\mathrm{Hoch}}(f),\delta^{f}_{\mathrm{Hoch}}(g)),\ \text{for}\ (f,g)\in C^{n\geq 1}_{\mathrm{AssBimod}}({}^{A}M^{A},{}^{A}M^{A}).\] We denote the \(n\)-th cohomology of this complex by \(H^{n}_{\mathrm{AssBimod}}({}^{A}M^{A},{}^{A}M^{A})\). Since this cohomology captures precisely the information of the associative algebra \(A\) and the \(A\)-bimodule \(M\), we call this cohomology the cohomology of the associative bimodule \({}^{A}M^{A}\) (i.e. associative algebra \(A\) together with the \(A\)-bimodule \(M\)). **5.11 Theorem**.: _Let \(M\xrightarrow{P}A\) be a relative averaging algebra. Then there is a long exact sequence_ \[\dots\to H^{n-1}_{P}(M,A)\to H^{n}_{r\mathrm{Avg}}(M\xrightarrow{P}A)\to H^{n}_{\mathrm{AssBimod}}({}^{A}M^{A},{}^{A}M^{A})\to H^{n}_{P}(M,A)\to\dots \tag{24}\] Proof.: Note that there is a short exact sequence of cochain complexes \[0\to\{CY^{\bullet-1}(M_{P},A),\delta^{P}_{\mathrm{Diss}}\}\to\{C^{\bullet}_{r\mathrm{Avg}}(M\xrightarrow{P}A),\delta_{r\mathrm{Avg}}\}\to\{C^{\bullet}_{\mathrm{AssBimod}}({}^{A}M^{A},{}^{A}M^{A}),\delta_{\mathrm{AssBimod}}\}\to 0\] with obvious maps between complexes. This short exact sequence induces the long exact sequence (24) on the cohomology groups. **Cohomology of an averaging algebra (with coefficients in the adjoint bimodule).** Let \(A\xrightarrow{P}A\) be an averaging algebra. For each \(n\geq 0\), we define the space \(C^{n}_{\mathrm{Avg}}(A\xrightarrow{P}A)\) of \(n\)-cochains by \[C^{n}_{\mathrm{Avg}}(A\xrightarrow{P}A)=\begin{cases}0&\text{if $n=0$},\\ \mathrm{Hom}(A,A)&\text{if $n=1$},\\ \mathrm{Hom}(A^{\otimes n},A)\oplus\mathrm{Hom}(\mathbf{k}[Y_{n-1}]\otimes A^{\otimes n-1},A)&\text{if $n\geq 2$}.\end{cases}\] Then there is an embedding \(i:C^{n}_{\mathrm{Avg}}(A\xrightarrow{P}A)\hookrightarrow C^{n}_{\mathrm{rAvg}}(A\xrightarrow{P}A)\) given by \[i(f)=(f,f),\text{ for }f\in C^{1}_{\mathrm{Avg}}(A\xrightarrow{P}A),\] \[i(f,\gamma)=(f,f,\gamma),\text{ for }(f,\gamma)\in C^{n\geq 2}_{\mathrm{Avg}}(A\xrightarrow{P}A).\] Let \((f,\gamma)\in C^{n}_{\mathrm{Avg}}(A\xrightarrow{P}A)\). Here we assume that \(\gamma=0\) when \(n=1\). Then \[\delta_{\mathrm{rAvg}}(i(f,\gamma))=\delta_{\mathrm{rAvg}}\big{(}(f,f,\gamma)\big{)}=\big{(}\delta_{\mathrm{Hoch}}(f),\underbrace{\delta^{f}_{\mathrm{Hoch}}(f)}_{=\delta_{\mathrm{Hoch}}(f)},\delta^{P}_{\mathrm{Diss}}(\gamma)+h_{P}(f,f)\big{)}\in\mathrm{im}(i).\] This shows that the map \(\delta_{\mathrm{rAvg}}:C^{n}_{\mathrm{rAvg}}(A\xrightarrow{P}A)\to C^{n+1}_{\mathrm{rAvg}}(A\xrightarrow{P}A)\) restricts to a map \[\delta_{\mathrm{Avg}}:C^{n}_{\mathrm{Avg}}(A\xrightarrow{P}A)\to C^{n+1}_{\mathrm{Avg}}(A\xrightarrow{P}A)\] that satisfies \(\delta_{\mathrm{rAvg}}\circ i=i\circ\delta_{\mathrm{Avg}}\). 
Explicitly, the map \(\delta_{\mathrm{Avg}}\) is given by \[\delta_{\mathrm{Avg}}((f,\gamma))=(\delta_{\mathrm{Hoch}}(f),\delta^{P}_{ \mathrm{Diss}}(\gamma)+h_{P}(f,f)),\text{ for }(f,\gamma)\in C^{n}_{\mathrm{ Avg}}(A\xrightarrow{P}A).\] It follows from the condition \((\delta_{\mathrm{rAvg}})^{2}=0\) that the map \(\delta_{\mathrm{Avg}}\) is also a differential (i.e. \((\delta_{\mathrm{Avg}})^{2}=0\)). Hence \(\{C^{\bullet}_{\mathrm{Avg}}(A\xrightarrow{P}A),\delta_{\mathrm{Avg}}\}\) is a cochain complex. The corresponding cohomology is called the **cohomology** of the averaging algebra \(A\xrightarrow{P}A\). We denote the \(n\)-th cohomology group by \(H^{n}_{\mathrm{Avg}}(A\xrightarrow{P}A)\). The next result shows that the cohomology of an averaging algebra fits into a long exact sequence. This is a particular case of Theorem 5.11. **5.12 Theorem**.: _Let \(A\xrightarrow{P}A\) be an averaging algebra. Then there is a long exact sequence_ \[\ldots\to H^{n-1}_{P}(A,A)\to H^{n}_{\mathrm{Avg}}(A\xrightarrow{P}A)\to H^ {n}_{\mathrm{Hoch}}(A,A)\to H^{n}_{P}(A,A)\to\cdots.\] _Here \(H^{n}_{P}(A,A)\) is the \(n\)-th cohomology group of the averaging operator \(P\) and \(H^{n}_{\mathrm{Hoch}}(A,A)\) is the \(n\)-th Hochschild cohomology group of the associative algebra \(A\)._ **Cohomology of relative averaging algebras (with arbitrary bimodule).** Here we will introduce the cohomology of a relative averaging algebra with coefficients in a bimodule. We will use this cohomology in Section 7 to study abelian extensions. Let \(M\xrightarrow{P}A\) be a relative averaging algebra and \((N\xrightarrow{Q}B,l,r)\) be a bimodule over it. For each \(n\geq 0\), we define the space of \(n\)-cochains \(C^{n}_{\mathrm{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\) by \[C^{n}_{\mathrm{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B)=\begin{cases}0&\text {if $n=0$},\\ \mathrm{Hom}(A,B)\oplus\mathrm{Hom}(M,N)&\text{if $n=1$},\\ \mathrm{Hom}(A^{\otimes n},B)\oplus\mathrm{Hom}(\mathcal{A}^{n-1,1},N)\oplus \mathrm{Hom}(\mathbf{k}[Y_{n-1}]\otimes M^{\otimes n-1},B)&\text{if $n\geq 2$}.\end{cases}\] To define the coboundary map, we first consider the cochain complex \(\{C^{\bullet}_{\mathrm{rAvg}}(M\oplus N\xrightarrow{P\oplus Q}A\oplus B), \delta_{\mathrm{rAvg}}\}\) of the semidirect product relative averaging algebra \(M\oplus N\xrightarrow{P\oplus Q}A\oplus B\) (given in Theorem 3.24) with coefficients in the adjoint bimodule. Then for each \(n\geq 0\), there is an obvious inclusion \[C^{n}_{\mathrm{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\hookrightarrow C^{n} _{\mathrm{rAvg}}(M\oplus N\xrightarrow{P\oplus Q}A\oplus B).\] Moreover, the map \(\delta_{\mathrm{rAvg}}:C^{n}_{\mathrm{rAvg}}(M\oplus N\xrightarrow{P\oplus Q}A \oplus B)\to C^{n+1}_{\mathrm{rAvg}}(M\oplus N\xrightarrow{P\oplus Q}A\oplus B)\) restricts to a map (denoted by the same notation) \(\delta_{\mathrm{rAvg}}:C^{n}_{\mathrm{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q} B)\to C^{n+1}_{\mathrm{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\). Hence \(\{C^{\bullet}_{\mathrm{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B),\delta_{ \mathrm{rAvg}}\}\) becomes a cochain complex. Note that the restricted map \(\delta_{\mathrm{rAvg}}\) is explicitly given by \[\delta_{\mathrm{rAvg}}((f,g,\gamma))=\big{(}\delta_{\mathrm{Hoch}}(f),\delta^{ f}_{\mathrm{Hoch}}(g),\delta^{P}_{\mathrm{Diss}}(\gamma)+h_{P,Q}(f,g)\big{)},\] for \((f,g,\gamma)\in C^{n}_{\mathrm{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\). 
Here \(\delta_{\mathrm{Hoch}}\) is the Hochschild coboundary operator of the associative algebra \(A\) with coefficients in the \(A\)-bimodule \(B\), and for any \(f\in\mathrm{Hom}(A^{\otimes n},B)\), the map \(\delta^{f}_{\mathrm{Hoch}}:\mathrm{Hom}(\mathcal{A}^{n-1,1},N)\to\mathrm{Hom}(\mathcal{A}^{n,1},N)\) is given by
\[\big{(}\delta^{f}_{\mathrm{Hoch}}(g)\big{)}(a_{1},\dots,a_{n+1})=(l+\cdot_{N})(a_{1},(f+g)(a_{2},\dots,a_{n+1}))\]
\[\quad+\sum_{i=1}^{n}(-1)^{i}g\big{(}a_{1},\dots,a_{i-1},(\mu+l_{M}+r_{M})(a_{i},a_{i+1}),\dots,a_{n+1}\big{)}\]
\[\quad+(-1)^{n+1}(r+\cdot_{N})((f+g)(a_{1},\dots,a_{n}),a_{n+1}),\]
for \(g\in\mathrm{Hom}(\mathcal{A}^{n-1,1},N)\) and \(a_{1}\otimes\dots\otimes a_{n+1}\in\mathcal{A}^{n,1}\). The map \(\delta^{P}_{\mathrm{Diss}}\) is the coboundary operator of the induced diassociative algebra \(M_{P}\) with coefficients in the representation \(B\) (given in Proposition 3.26). Finally, the map \(h_{P,Q}(f,g)\) is given by
\[(h_{P,Q}(f,g))(y;u_{1},\dots,u_{n})=(-1)^{n}\big{(}f(P(u_{1}),\dots,P(u_{n}))-Qg(P(u_{1}),\dots,u_{i},\dots,P(u_{n}))\big{)},\]
for \(y\in Y_{n}\) (which can be uniquely written as \(y=y_{1}\lor y_{2}\) for some \((i-1)\)-tree \(y_{1}\) and \((n-i)\)-tree \(y_{2}\)) and \(u_{1},\dots,u_{n}\in M\). The cohomology of the complex \(\{C^{\bullet}_{\mathrm{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B),\delta_{\mathrm{rAvg}}\}\) is called the **cohomology** of the relative averaging algebra \(M\xrightarrow{P}A\) with coefficients in the bimodule \((N\xrightarrow{Q}B,l,r).\) We denote the \(n\)-th cohomology group by \(H^{n}_{\mathrm{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\).

### Remark
In Example 3.21 we have seen that a bimodule over an averaging algebra can be seen as a bimodule over the corresponding relative averaging algebra. With this view, one can define the cohomology of an averaging algebra with coefficients in a bimodule over it.

## 6. Deformations of relative averaging algebras

In this section, we study formal and infinitesimal deformations of a relative averaging algebra in terms of the cohomology theory. In particular, we show that the set of all equivalence classes of infinitesimal deformations of a relative averaging algebra \(M\xrightarrow{P}A\) is in bijection with the second cohomology group \(H^{2}_{\mathrm{rAvg}}(M\xrightarrow{P}A)\). Let \(\mathsf{R}\) be a commutative ring with unity \(1_{\mathsf{R}}\). An augmentation of \(\mathsf{R}\) is a homomorphism \(\varepsilon:\mathsf{R}\to\mathbf{k}\) satisfying \(\varepsilon(1_{\mathsf{R}})=1_{\mathbf{k}}.\) Throughout this section, we assume that \(\mathsf{R}\) is a commutative unital ring with an augmentation \(\varepsilon\). Given such an \(\mathsf{R}\), one may always define the notion of an \(\mathsf{R}\)-relative averaging algebra similar to Definition 3.1(ii) by replacing the vector spaces and linear maps by \(\mathsf{R}\)-modules and \(\mathsf{R}\)-linear maps. In other words, an \(\mathsf{R}\)-relative averaging algebra is a relative averaging algebra in the category of \(\mathsf{R}\)-modules. Morphisms between \(\mathsf{R}\)-relative averaging algebras can be defined similarly. Note that any relative averaging algebra \(M\xrightarrow{P}A\) can be regarded as an \(\mathsf{R}\)-relative averaging algebra, where the \(\mathsf{R}\)-module structures on \(A\) and \(M\) are respectively given by \(r\cdot a=\varepsilon(r)a\) and \(r\cdot u=\varepsilon(r)u\), for \(r\in\mathsf{R}\), \(a\in A\), \(u\in M\).
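For instance (this is only an illustration of the augmentation condition, not part of the construction above), the two rings considered below carry the evident augmentations obtained by evaluating the constant term:
\[\varepsilon:\mathbf{k}[[t]]\to\mathbf{k},\quad\varepsilon\Big{(}\sum_{i\geq 0}c_{i}t^{i}\Big{)}=c_{0},\qquad\qquad\varepsilon:\mathbf{k}[[t]]/(t^{2})\to\mathbf{k},\quad\varepsilon(c_{0}+c_{1}t)=c_{0}.\]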
### Definition
An \(\mathsf{R}\)**-deformation** of a relative averaging algebra \(M\xrightarrow{P}A\) consists of a quadruple \((\mu_{\mathsf{R}},l_{\mathsf{R}},r_{\mathsf{R}},P_{\mathsf{R}})\) of \(\mathsf{R}\)-bilinear maps
\[\mu_{\mathsf{R}}:(\mathsf{R}\otimes_{\mathbf{k}}A)\times(\mathsf{R}\otimes_{\mathbf{k}}A)\to\mathsf{R}\otimes_{\mathbf{k}}A,\qquad l_{\mathsf{R}}:(\mathsf{R}\otimes_{\mathbf{k}}A)\times(\mathsf{R}\otimes_{\mathbf{k}}M)\to\mathsf{R}\otimes_{\mathbf{k}}M,\]
\[r_{\mathsf{R}}:(\mathsf{R}\otimes_{\mathbf{k}}M)\times(\mathsf{R}\otimes_{\mathbf{k}}A)\to\mathsf{R}\otimes_{\mathbf{k}}M\ \text{ and an $\mathsf{R}$-linear map }P_{\mathsf{R}}:\mathsf{R}\otimes_{\mathbf{k}}M\to\mathsf{R}\otimes_{\mathbf{k}}A\]
such that the following conditions hold:

(i) \((\mathsf{R}\otimes_{\mathbf{k}}A,\mu_{\mathsf{R}})\) is an \(\mathsf{R}\)-associative algebra, \((\mathsf{R}\otimes_{\mathbf{k}}M,l_{\mathsf{R}},r_{\mathsf{R}})\) is a bimodule over it, and the \(\mathsf{R}\)-linear map \(P_{\mathsf{R}}:\mathsf{R}\otimes_{\mathbf{k}}M\to\mathsf{R}\otimes_{\mathbf{k}}A\) is a relative averaging operator. In other words, \(\mathsf{R}\otimes_{\mathbf{k}}M\xrightarrow{P_{\mathsf{R}}}\mathsf{R}\otimes_{\mathbf{k}}A\) is an \(\mathsf{R}\)-relative averaging algebra with the above structures on \(\mathsf{R}\otimes_{\mathbf{k}}A\) and \(\mathsf{R}\otimes_{\mathbf{k}}M\).

(ii) The pair \((\varepsilon\otimes_{\mathbf{k}}\mathrm{id}_{A},\varepsilon\otimes_{\mathbf{k}}\mathrm{id}_{M}):(\mathsf{R}\otimes_{\mathbf{k}}M\xrightarrow{P_{\mathsf{R}}}\mathsf{R}\otimes_{\mathbf{k}}A)\rightsquigarrow(M\xrightarrow{P}A)\) is a morphism of \(\mathsf{R}\)-relative averaging algebras.

### Definition
Let \(M\xrightarrow{P}A\) be a relative averaging algebra. Two \(\mathsf{R}\)-deformations \((\mu_{\mathsf{R}},l_{\mathsf{R}},r_{\mathsf{R}},P_{\mathsf{R}})\) and \((\mu^{\prime}_{\mathsf{R}},l^{\prime}_{\mathsf{R}},r^{\prime}_{\mathsf{R}},P^{\prime}_{\mathsf{R}})\) are said to be **equivalent** if there exists an isomorphism of \(\mathsf{R}\)-relative averaging algebras
\[(\Phi,\Psi):(\mathsf{R}\otimes_{\mathbf{k}}M\xrightarrow{P_{\mathsf{R}}}\mathsf{R}\otimes_{\mathbf{k}}A)\rightsquigarrow(\mathsf{R}\otimes_{\mathbf{k}}M\xrightarrow{P^{\prime}_{\mathsf{R}}}\mathsf{R}\otimes_{\mathbf{k}}A)\]
satisfying \((\varepsilon\otimes_{\mathbf{k}}\mathrm{id}_{A})\circ\Phi=(\varepsilon\otimes_{\mathbf{k}}\mathrm{id}_{A})\) and \((\varepsilon\otimes_{\mathbf{k}}\mathrm{id}_{M})\circ\Psi=(\varepsilon\otimes_{\mathbf{k}}\mathrm{id}_{M})\).

We will now consider the cases when \(\mathsf{R}=\mathbf{k}[[t]]\) (the ring of formal power series) and \(\mathsf{R}=\mathbf{k}[[t]]/(t^{2})\) (the local Artinian ring of dual numbers). In the first case, an \(\mathsf{R}\)-deformation is called a formal deformation, and in the second case, an \(\mathsf{R}\)-deformation is called an infinitesimal deformation. A more precise description of formal deformations is given by the following.

### Definition
(i) Let \(M_{l_{M},r_{M}}\xrightarrow{P}A_{\mu}\) be a given relative averaging algebra.
A **formal deformation** of it consists of a quadruple \((\mu_{t},l_{t},r_{t},P_{t})\) of formal sums
\[\mu_{t}=\sum_{i=0}^{\infty}t^{i}\mu_{i},\qquad l_{t}=\sum_{i=0}^{\infty}t^{i}l_{i},\qquad r_{t}=\sum_{i=0}^{\infty}t^{i}r_{i}\ \ \text{and}\ \ P_{t}=\sum_{i=0}^{\infty}t^{i}P_{i} \tag{25}\]
(where \(\mu_{i}:A\times A\to A\), \(l_{i}:A\times M\to M\), \(r_{i}:M\times A\to M\) and \(P_{i}:M\to A\) are bilinear/linear maps, for \(i\geq 0\), with \(\mu_{0}=\mu\), \(l_{0}=l_{M}\), \(r_{0}=r_{M}\) and \(P_{0}=P\)) such that \(A[[t]]=(A[[t]],\mu_{t})\) is an associative algebra over \(\mathbf{k}[[t]]\), \(M[[t]]=(M[[t]],l_{t},r_{t})\) is a bimodule over the algebra \(A[[t]]\), and the \(\mathbf{k}[[t]]\)-linear map \(P_{t}:M[[t]]\to A[[t]]\) is a relative averaging operator. In other words, \(M[[t]]\xrightarrow{P_{t}}A[[t]]\) is a relative averaging algebra over \(\mathbf{k}[[t]]\).

(ii) Two formal deformations \((\mu_{t},l_{t},r_{t},P_{t})\) and \((\mu^{\prime}_{t},l^{\prime}_{t},r^{\prime}_{t},P^{\prime}_{t})\) are **equivalent** if there exists a pair \((\varphi_{t},\psi_{t})\) of formal sums
\[\varphi_{t}=\sum_{i=0}^{\infty}t^{i}\varphi_{i}\quad\text{ and }\quad\psi_{t}=\sum_{i=0}^{\infty}t^{i}\psi_{i}\]
(where \(\varphi_{i}:A\to A\) and \(\psi_{i}:M\to M\) are linear maps, for \(i\geq 0\), with \(\varphi_{0}=\mathrm{id}_{A}\) and \(\psi_{0}=\mathrm{id}_{M}\)) such that
\[(\varphi_{t},\psi_{t}):(M[[t]]\xrightarrow{P_{t}}A[[t]])\rightsquigarrow(M[[t]]\xrightarrow{P^{\prime}_{t}}A[[t]])\]
is an isomorphism of relative averaging algebras over \(\mathbf{k}[[t]]\). Then we write \((\mu_{t},l_{t},r_{t},P_{t})\sim(\mu^{\prime}_{t},l^{\prime}_{t},r^{\prime}_{t},P^{\prime}_{t})\).

It follows from the above definition that a quadruple \((\mu_{t},l_{t},r_{t},P_{t})\) given by (25) is a formal deformation of the relative averaging algebra \(M\xrightarrow{P}A\) if the following system of equations holds:
\[\sum_{i+j=n}\mu_{i}(\mu_{j}(a,b),c)=\sum_{i+j=n}\mu_{i}(a,\mu_{j}(b,c)), \tag{26}\]
\[\sum_{i+j=n}l_{i}(\mu_{j}(a,b),u)=\sum_{i+j=n}l_{i}(a,l_{j}(b,u)), \tag{27}\]
\[\sum_{i+j=n}r_{i}(l_{j}(a,u),b)=\sum_{i+j=n}l_{i}(a,r_{j}(u,b)), \tag{28}\]
\[\sum_{i+j=n}r_{i}(r_{j}(u,a),b)=\sum_{i+j=n}r_{i}(u,\mu_{j}(a,b)), \tag{29}\]
\[\sum_{i+j+k=n}\mu_{i}\big{(}P_{j}(u),P_{k}(v)\big{)}=\sum_{i+j+k=n}P_{i}\big{(}l_{j}(P_{k}(u),v)\big{)}=\sum_{i+j+k=n}P_{i}\big{(}r_{j}(u,P_{k}(v))\big{)}, \tag{30}\]
for all \(a,b,c\in A\), \(u,v\in M\) and \(n\geq 0\). These are called the deformation equations. Note that the deformation equations hold for \(n=0\) as \(M_{l_{M},r_{M}}\xrightarrow{P}A_{\mu}\) is a relative averaging algebra. For \(n=1\), it follows from (26) that
\[\mu_{1}(a\cdot b,c)+\mu_{1}(a,b)\cdot c=\mu_{1}(a,b\cdot c)+a\cdot\mu_{1}(b,c),\text{ for }a,b,c\in A,\]
which is equivalent to \(\delta_{\mathrm{Hoch}}(\mu_{1})=0\). To summarize the identities (27), (28), (29) for \(n=1\), we define an element \(\beta_{1}\in\mathrm{Hom}(\mathcal{A}^{1,1},M)\) by
\[\beta_{1}(a,u)=l_{1}(a,u)\ \text{ and }\ \beta_{1}(u,a)=r_{1}(u,a),\text{ for }a\in A,u\in M. \tag{31}\]
Then we get that \(\delta_{\mathrm{Hoch}}^{\mu_{1}}(\beta_{1})=0\).
Finally, the identity (30) for \(n=1\) is equivalent to
\[\big{(}\delta_{\mathrm{Diass}}^{P}(P_{1})+h_{P}(\mu_{1},\beta_{1})\big{)}(y;u,v)=0,\ \text{for all }y\in Y_{2}\text{ and }u,v\in M.\]
In other words, \((\mu_{1},\beta_{1},P_{1})\in Z^{2}_{\mathrm{rAvg}}(M\xrightarrow{P}A)\) is a \(2\)-cocycle. This leads to the result announced at the beginning of this section.

**Theorem**.: _Let \(M\xrightarrow{P}A\) be a relative averaging algebra. Then the set of equivalence classes of infinitesimal deformations of \(M\xrightarrow{P}A\) is in bijective correspondence with the second cohomology group \(H^{2}_{\mathrm{rAvg}}(M\xrightarrow{P}A)\)._

Proof.: By the computations above, the linear terms \((\mu_{1},\beta_{1},P_{1})\) of an infinitesimal deformation form a \(2\)-cocycle. Moreover, equivalent infinitesimal deformations give rise to cohomologous \(2\)-cocycles. Hence, there is a well-defined map
\[\Gamma:(\text{infinitesimal deformations of }M\xrightarrow{P}A)/\sim\ \to\ H_{\text{rAvg}}^{2}(M\xrightarrow{P}A).\]
To obtain a map in the other direction, we first consider a \(2\)-cocycle \((\mu_{1},\beta_{1},P_{1})\in Z_{\text{rAvg}}^{2}(M\xrightarrow{P}A)\). Then it is easy to see that the \(2\)-cocycle \((\mu_{1},\beta_{1},P_{1})\) induces an infinitesimal deformation
\[(\mu_{t}=\mu+t\mu_{1},\ l_{t}=l_{M}+tl_{1},\ r_{t}=r_{M}+tr_{1},\ P_{t}=P+tP_{1})\]
of the relative averaging algebra \(M\xrightarrow{P}A\), where the maps \(l_{1},r_{1}\) are defined from \(\beta_{1}\) by (31). Let \((\mu_{1}^{\prime},\beta_{1}^{\prime},P_{1}^{\prime})\) be another \(2\)-cocycle cohomologous to \((\mu_{1},\beta_{1},P_{1})\), i.e. \((\mu_{1},\beta_{1},P_{1})-(\mu_{1}^{\prime},\beta_{1}^{\prime},P_{1}^{\prime})=\delta_{\operatorname{rAvg}}((\varphi_{1},\psi_{1}))\), for some \((\varphi_{1},\psi_{1})\in C^{1}_{\operatorname{rAvg}}(M\xrightarrow{P}A)\). Then it is easy to verify that the corresponding infinitesimal deformations \((\mu_{t},l_{t},r_{t},P_{t})\) and \((\mu_{t}^{\prime},l_{t}^{\prime},r_{t}^{\prime},P_{t}^{\prime})\) are equivalent via the pair \((\varphi_{t}=\operatorname{id}_{A}+t\varphi_{1},\psi_{t}=\operatorname{id}_{M}+t\psi_{1})\). As a consequence, we obtain a map
\[\Theta:H_{\operatorname{rAvg}}^{2}(M\xrightarrow{P}A)\ \to\ (\text{infinitesimal deformations of }M\xrightarrow{P}A)/\sim.\]
Finally, it is a routine task to check that the maps \(\Gamma\) and \(\Theta\) are inverses of each other. This completes the proof.

## 7. Abelian extensions of relative averaging algebras

Our aim in this section is to study abelian extensions of a relative averaging algebra \(M\xrightarrow{P}A\) by a bimodule \((N\xrightarrow{Q}B,l,r)\) over it. We show that the isomorphism classes of such abelian extensions are in bijective correspondence with the second cohomology group \(H^{2}_{\mathrm{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\). Let \(M\xrightarrow{P}A\) be a relative averaging algebra and \(N\xrightarrow{Q}B\) be a \(2\)-term chain complex (not necessarily a bimodule). Note that \(N\xrightarrow{Q}B\) can be regarded as a relative averaging algebra with the trivial associative multiplication on \(B\) and the trivial \(B\)-bimodule structure on \(N\). With this consideration, we have the following definition.

### Definition
An **abelian extension** of a relative averaging algebra \(M\xrightarrow{P}A\) by a \(2\)-term chain complex \(N\xrightarrow{Q}B\) is a relative averaging algebra \(\widehat{M}\xrightarrow{\widehat{P}}\widehat{A}\) together with a short exact sequence of relative averaging algebras of the form
\[0\to(N\xrightarrow{Q}B)\xrightarrow{(i,\overline{i})}(\widehat{M}\xrightarrow{\widehat{P}}\widehat{A})\xrightarrow{(p,\overline{p})}(M\xrightarrow{P}A)\to 0, \tag{32}\]
where \(i:B\to\widehat{A}\), \(\overline{i}:N\to\widehat{M}\), \(p:\widehat{A}\to A\) and \(\overline{p}:\widehat{M}\to M\). Sometimes, we denote an abelian extension as above by the relative averaging algebra \(\widehat{M}\xrightarrow{\widehat{P}}\widehat{A}\) when the exact sequence is understood. A section of the abelian extension (32) is given by a pair \((s,\overline{s})\) of linear maps \(s:A\to\widehat{A}\) and \(\overline{s}:M\to\widehat{M}\) satisfying \(p\circ s=\operatorname{id}_{A}\) and \(\overline{p}\circ\overline{s}=\operatorname{id}_{M}\).
Given any section \((s,\overline{s})\), we define two bilinear maps (both denoted by the same notation) \(\cdot_{B}:A\times B\to B\) and \(\cdot_{B}:B\times A\to B\) by \[a\cdot_{B}b=s(a)\cdot_{\widehat{A}}i(b)\ \text{ and }\ b\cdot_{B}a=i(b)\cdot_{ \widehat{A}}s(a),\text{ for }a\in A,b\in B.\] These two maps make \(B\) into an \(A\)-bimodule. Similarly, there are two bilinear maps (both denoted by the same notation) \(\cdot_{N}:A\times N\to N\) and \(\cdot_{N}:N\times A\to N\) given by \[a\cdot_{N}n=s(a)\cdot_{\widehat{M}}\overline{i}(n)\ \text{ and }\ n\cdot_{N}a= \overline{i}(n)\cdot_{\widehat{M}}s(a),\text{ for }a\in A,n\in N.\] Here \(\cdot_{\widetilde{M}}\) denotes both the left and right \(\widehat{A}\)-actions on \(\widehat{M}\). These two maps make \(N\) into an \(A\)-bimodule. Finally, we define bilinear maps \(l:M\times B\to N\) and \(r:B\times M\to N\) by \[l(u,b)=\overline{s}(u)\cdot_{\widetilde{M}}i(b)\ \ \text{and}\ \ r(b,u)=i(b)\cdot_{ \widetilde{M}}\overline{s}(u),\ \text{for}\ u\in M,b\in B.\] It is straightforward to see that the maps \(l,r\) satisfy the identities (10) and (11). Finally, for any \(u\in M\) and \(n\in N\), \[P(u)\cdot_{B}Q(n)=sP(u)\cdot_{\widehat{A}}iQ(n)=\widehat{P} \overline{s}(u)\cdot_{\widehat{A}}\widehat{P}\overline{i}(n)= \begin{cases}=\widehat{P}\big{(}\widehat{P}(\overline{s}(u))\cdot_{ \widetilde{M}}\overline{i}(n)\big{)}\\ =\widehat{P}\big{(}\overline{s}(u)\cdot_{\widetilde{M}}\widehat{P}\overline{ i}(n)\big{)}\end{cases}\] \[=\begin{cases}=\widehat{P}\big{(}sP(u)\cdot_{\widetilde{M}} \overline{i}(n)\big{)}\ =\ Q\big{(}P(u)\cdot_{N}n\big{)},\\ =\widehat{P}\big{(}\overline{s}(u)\cdot_{\widetilde{M}}iQ(n)\big{)}\ =\ Q \big{(}l(u,Q(n))\big{)}.\end{cases}\] Similarly, one can show that \[Q(n)\cdot_{B}P(u)=iQ(n)\cdot_{\widehat{A}}sP(u)=\widehat{P} \overline{i}(n)\cdot_{\widehat{A}}\widehat{P}\overline{s}(u)= \begin{cases}=\widehat{P}\big{(}\widehat{P}\overline{i}(n)\cdot_{ \widetilde{M}}\overline{s}(u)\big{)}\\ =\widehat{P}\big{(}\overline{i}(n)\cdot_{\widetilde{M}}\widehat{P}\overline{ s}(u)\big{)}\end{cases}\] \[=\begin{cases}=\widehat{P}\big{(}iQ(n)\cdot_{\widetilde{M}} \overline{s}(u)\big{)}\ =\ Q\big{(}r(Q(n),u)\big{)},\\ =\widehat{P}\big{(}\overline{i}(n)\cdot_{\widetilde{M}}sP(u)\big{)}\ =\ Q \big{(}n\cdot_{N}P(u)\big{)}.\end{cases}\] Combining all these, we get that \((N\xrightarrow{Q}B,l,r)\) is a bimodule over the relative averaging algebra \(M\xrightarrow{P}A\). This is called the induced bimodule structure starting from the abelian extension (32). Note that this bimodule structure is independent of the choice of section. To see this, let \((s^{\prime},\overline{s}^{\prime})\) be any other section of (32). Then we observe that \(s(a)-s^{\prime}(a)\in\text{ker}(p)=\text{im}(i)\) and \(\overline{s}(u)-\overline{s}^{\prime}(u)\in\text{ker}(\overline{p})=\text{im} (\overline{i})\), for \(a\in A\) and \(u\in M\). Let \(\cdot_{B}^{\prime}\), \(\cdot_{N}^{\prime}\) and \(l^{\prime},r^{\prime}\) be the maps induced by the section \((s^{\prime},\overline{s}^{\prime})\). 
Then we have \[a\cdot_{B}b-a\cdot_{B}^{\prime}b=\big{(}s(a)-s^{\prime}(a)\big{)} \cdot_{\widehat{A}}i(b)=0\ \ \text{and}\ \ b\cdot_{B}a-b\cdot_{B}^{\prime}a=i(b)\cdot_{\widehat{A}}\big{(}s(a)-s^{ \prime}(a)\big{)}=0,\] \[a\cdot_{N}n-a\cdot_{N}^{\prime}n=\big{(}s(a)-s^{\prime}(a) \big{)}\cdot_{\widehat{M}}\overline{i}(n)=0\ \ \text{and}\ \ n\cdot_{N}a-n\cdot_{N}^{\prime}a=\overline{i}(n)\cdot_{\widehat{M}} \big{(}s(a)-s^{\prime}(a)\big{)}=0,\] \[l(u,b)-l^{\prime}(u,b)=\big{(}\overline{s}(u)-\overline{s}^{ \prime}(u)\big{)}\cdot_{\widehat{M}}i(b)=0\ \ \text{and}\ \ r(b,u)-r^{\prime}(b,u)=i(b)\cdot_{\widehat{M}}\big{(} \overline{s}(u)-\overline{s}^{\prime}(u)\big{)}=0.\] Hence our claim follows. **7.2 Definition**.: Let \(M\xrightarrow{P}A\) be a relative averaging algebra and \(N\xrightarrow{Q}B\) be a 2-term chain complex. Two abelian extensions \(\widehat{M}\xrightarrow{\widehat{P}}\widehat{A}\) and \(\widehat{M}^{\prime}\xrightarrow{\widehat{P}^{\prime}}\widehat{A}^{\prime}\) are said to be **isomorphic** if there is an isomorphism \((\varphi,\psi):(\widehat{M}\xrightarrow{\widehat{P}}\widehat{A})\rightsquigarrow( \widehat{M}^{\prime}\xrightarrow{\widehat{P}^{\prime}}\widehat{A}^{\prime})\) of relative averaging algebras that makes the following diagram commutative (33) Let \(\widehat{M}\xrightarrow{\widehat{P}}\widehat{A}\) and \(\widehat{M}^{\prime}\xrightarrow{\widehat{P}^{\prime}}\widehat{A}^{\prime}\) be two isomorphic abelian extensions as in the above definition. Then it is easy to see that the corresponding induced bimodules on the 2-term chain complex \(N\xrightarrow{Q}B\) are the same. **7.3 Notation**.: Let \(M\xrightarrow{P}A\) be a relative averaging algebra and \((N\xrightarrow{Q}B,l,r)\) be a given bimodule over it. We denote by \(\text{Ext}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\) the set of all isomorphism classes of abelian extensions of \(M\xrightarrow{P}A\) by the 2-term complex \(N\xrightarrow{Q}B\) so that the induced bimodule coincides with the prescribed one. In the following result, we parametrize the space \(\text{Ext}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\) by the second cohomology group of the relative averaging algebra. ### Theorem _Let \(M\xrightarrow{P}A\) be a relative averaging algebra and \((N\xrightarrow{Q}B,l,r)\) be a given bimodule over it. Then there is a bijective correspondence between \(\operatorname{Ext}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\) and the second cohomology group \(H^{2}_{\operatorname{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\)._ Proof.: Let \(\widehat{M}\xrightarrow{\widehat{P}}\widehat{A}\) be an abelian extension of the relative averaging algebra \(M\xrightarrow{P}A\) by the \(2\)-term complex \(N\xrightarrow{Q}B\) representing an element in \(\operatorname{Ext}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\). Let \((s,\overline{s})\) be a section. Then we define a triple \((\alpha,\beta,\gamma)\) of maps \[\alpha\in\operatorname{Hom}(A^{\otimes 2},B), \alpha(a,b)=s(a)\cdot_{\widehat{A}}s(b)-s(a\cdot b),\] \[\beta\in\operatorname{Hom}(\mathcal{A}^{1,1},N), \begin{cases}\beta(a,u)=s(a)\cdot_{\widehat{M}}\overline{s}(u)- \overline{s}(a\cdot_{M}u),\\ \beta(u,a)=\overline{s}(u)\cdot_{\widehat{M}}s(a)-\overline{s}(u\cdot_{M}a), \end{cases}\] \[\gamma\in\operatorname{Hom}(M,B), \gamma(u)=(\widehat{P}\circ\widehat{s}-s\circ P)(u),\] for \(a,b\in A\) and \(u\in M\). 
Then it is easy to verify that \((\alpha,\beta,\gamma)\in Z^{2}_{\operatorname{rAvg}}(M\xrightarrow{P}A;N \xrightarrow{Q}B)\) is a \(2\)-cocycle in the cohomology complex of the relative averaging algebra \(M\xrightarrow{P}A\) with coefficients in the bimodule \((N\xrightarrow{Q}B,l,r)\). Moreover, the corresponding cohomology class in \(H^{2}_{\operatorname{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\) doesn't depend on the choice of the section. Let \(\widehat{M}\xrightarrow{\widehat{P}}\widehat{A}\) and \(\widehat{M^{\prime}}\xrightarrow{\widehat{P^{\prime}}}\widehat{A^{\prime}}\) be two isomorphic abelian extensions. For any section \((s,\overline{s})\) of the first abelian extension, we have \[p^{\prime}\circ(\varphi\circ s)=p\circ s=\operatorname{id}_{A}\quad\text{ and }\quad\overline{p}^{\prime}\circ(\psi\circ\overline{s})=\overline{p}\circ \overline{s}=\operatorname{id}_{M}.\] Thus \((\varphi\circ s,\psi\circ\overline{s})\) is a section of the second abelian extension. If \((\alpha^{\prime},\beta^{\prime},\gamma^{\prime})\in Z^{2}_{\operatorname{rAvg} }(M\xrightarrow{P}A;N\xrightarrow{Q}B)\) is the \(2\)-cocycle corresponding to the second abelian extension and its section \((\varphi\circ s,\psi\circ\overline{s})\), then \[\alpha^{\prime}(a,b) =(\varphi\circ s)(a)\cdot_{\widehat{A}^{\prime}}(\varphi\circ s )(b)-(\varphi\circ s)(a\cdot b)\] \[=\varphi\big{(}s(a)\cdot_{\widehat{A}}s(b)-s(a\cdot b)\big{)}= \varphi(\alpha(a,b))=\alpha(a,b)\quad(\because\varphi|_{B}=\operatorname{id}_{B}).\] Similarly, one can show that \(\beta^{\prime}=\beta\) and \(\gamma^{\prime}=\gamma\). Thus, we obtain \((\alpha,\beta,\gamma)=(\alpha^{\prime},\beta^{\prime},\gamma^{\prime})\). As a consequence, we obtain a well-defined map \[\Lambda:\operatorname{Ext}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\ \to\ H^{2}_{ \operatorname{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B).\] To obtain a map in the other direction, we take a \(2\)-cocycle \((\alpha,\beta,\gamma)\in Z^{2}_{\operatorname{rAvg}}(M\xrightarrow{P}A;N \xrightarrow{Q}B)\). Take \(\widehat{A}=A\oplus B\) and \(\widehat{M}=M\oplus N\), and consider bilinear maps \[\mu_{\widehat{A}}:\widehat{A}\times\widehat{A}\to\widehat{A}, \mu_{\widehat{A}}\big{(}(a,b),(a^{\prime},b^{\prime})\big{)}=\big{(}a\cdot a ^{\prime},a\cdot_{B}b^{\prime}+b\cdot_{B}a^{\prime}+\alpha(a,a^{\prime}) \big{)},\] \[l_{\widehat{M}}:\widehat{A}\times\widehat{M}\to\widehat{M}, l_{\widehat{M}}\big{(}(a,b),(u,n)\big{)}=\big{(}a\cdot_{M}u,a\cdot_{N}n+r(b,u)+ \beta(a,u)\big{)},\] \[r_{\widehat{M}}:\widehat{M}\times\widehat{A}\to\widehat{M}, r_{\widehat{M}}\big{(}(u,n),(a,b)\big{)}=\big{(}u\cdot_{M}a,l(u,b)+n \cdot_{N}a+\beta(u,a)\big{)},\] for \((a,b),(a^{\prime},b^{\prime})\in\widehat{A}\) and \((u,n)\in\widehat{M}\). Then it is easy to see that \((\widehat{A},\mu_{\widehat{A}})\) is an associative algebra and \((\widehat{M},l_{\widehat{M}},r_{\widehat{M}})\) is a bimodule over it. Finally, we define a map \(\widehat{P}:\widehat{M}\to\widehat{A}\) by \[\widehat{P}((u,n))=\big{(}P(u),Q(n)+\gamma(u)\big{)},\text{ for }(u,n)\in \widehat{M}.\] Then \(\widehat{P}\) is a relative averaging operator. In other words, \(\widehat{M}\xrightarrow{\widehat{P}}\widehat{A}\) is a relative averaging algebra. This is an abelian extension of the relative averaging algebra \(M\xrightarrow{P}A\) by the \(2\)-term chain complex \(N\xrightarrow{Q}B\), and defines an element in \(\operatorname{Ext}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\). 
Finally, let \((\alpha,\beta,\gamma)\) and \((\alpha^{\prime},\beta^{\prime},\gamma^{\prime})\) be two cohomologous \(2\)-cocycles, say \((\alpha,\beta,\gamma)-(\alpha^{\prime},\beta^{\prime},\gamma^{\prime})=\delta_{\operatorname{rAvg}}((\kappa,\eta))\), for some \((\kappa,\eta)\in C^{1}_{\operatorname{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\). If \(\widehat{M}^{\prime}\xrightarrow{\widehat{P^{\prime}}}\widehat{A^{\prime}}\) is the relative averaging algebra induced by the \(2\)-cocycle \((\alpha^{\prime},\beta^{\prime},\gamma^{\prime})\), then the pair of maps
\[(\varphi,\psi):(\widehat{M}\xrightarrow{\widehat{P}}\widehat{A})\rightsquigarrow(\widehat{M}^{\prime}\xrightarrow{\widehat{P^{\prime}}}\widehat{A^{\prime}})\]
is an isomorphism of abelian extensions, where \(\varphi:\widehat{A}\to\widehat{A}^{\prime}\), \(\varphi(a,b)=(a,b+\kappa(a))\) and \(\psi:\widehat{M}\to\widehat{M}^{\prime}\), \(\psi(u,n)=(u,n+\eta(u))\). This shows that there is a well-defined map
\[\Upsilon:H^{2}_{\mathrm{rAvg}}(M\xrightarrow{P}A;N\xrightarrow{Q}B)\ \to\ \mathrm{Ext}(M\xrightarrow{P}A;N\xrightarrow{Q}B).\]
Finally, the maps \(\Lambda\) and \(\Upsilon\) are inverses of each other. This shows the required bijection.

## 8. Homotopy relative averaging algebras and homotopy diassociative algebras

In this section, we first consider \(Diass_{\infty}\)-algebras introduced by Loday. However, our definition is simpler to use. Next, we introduce homotopy relative averaging operators and homotopy relative averaging algebras. We show that a homotopy relative averaging algebra naturally induces a \(Diass_{\infty}\)-algebra structure. We first recall some basic definitions related to \(A_{\infty}\)-algebras [17].

### Definition
An \(A_{\infty}\)-**algebra** is a pair \((A,\{\mu_{k}\}_{k=1}^{\infty})\) consisting of a graded vector space \(A=\oplus_{i\in\mathbb{Z}}A_{i}\) together with a collection \(\{\mu_{k}\}_{k=1}^{\infty}\) of degree \(1\) graded linear maps \(\mu_{k}:A^{\otimes k}\to A\), for \(k\geq 1\), satisfying the following identities (called higher associativities)
\[\sum_{k+l=n+1}\sum_{i=1}^{n-l+1}(-1)^{|a_{1}|+\cdots+|a_{i-1}|}\ \mu_{k}\big{(}a_{1},\ldots,a_{i-1},\mu_{l}(a_{i},\ldots,a_{i+l-1}),a_{i+l},\ldots,a_{n}\big{)}=0, \tag{34}\]
for all \(n\geq 1\) and homogeneous elements \(a_{1},\ldots,a_{n}\in A\). Let \(A=\oplus_{i\in\mathbb{Z}}A_{i}\) be a graded vector space. Let \(\overline{T}(A)=\oplus_{n=1}^{\infty}A^{\otimes n}\) be the free tensor algebra over the graded vector space \(A\). For each \(n\in\mathbb{Z}\), let \(C^{n}(A,A):=\mathrm{Hom}_{n}(\overline{T}(A),A)\) be the space of all degree \(n\) graded linear maps from the graded vector space \(\overline{T}(A)\) to \(A\). Thus, an element \(\mu\in C^{n}(A,A)\) is given by a sum \(\mu=\sum_{k=1}^{\infty}\mu_{k}\), where \(\mu_{k}:A^{\otimes k}\to A\) is a degree \(n\) linear map, for \(k\geq 1\).
For \(\mu=\sum_{k=1}^{\infty}\mu_{k}\in C^{m}(A,A)\) and \(\nu=\sum_{l=1}^{\infty}\nu_{l}\in C^{n}(A,A)\), we define a bracket \([\mu,\nu]\in C^{m+n}(A,A)\) by
\[[\mu,\nu]:=\sum_{s=1}^{\infty}\sum_{k+l=s+1}\big{(}\mu_{k}\circ\nu_{l}-(-1)^{mn}\ \nu_{l}\circ\mu_{k}\big{)},\ \text{where}\]
\[(\mu_{k}\circ\nu_{l})(a_{1},\ldots,a_{s})=\sum_{i=1}^{s-l+1}(-1)^{|a_{1}|+\cdots+|a_{i-1}|}\ \mu_{k}\big{(}a_{1},\ldots,a_{i-1},\nu_{l}(a_{i},\ldots,a_{i+l-1}),a_{i+l},\ldots,a_{s}\big{)}.\]
The graded vector space \(\oplus_{n\in\mathbb{Z}}C^{n}(A,A)\) with the above bracket is a graded Lie algebra. An element \(\mu=\sum_{k=1}^{\infty}\mu_{k}\in C^{1}(A,A)\) is a Maurer-Cartan element of the graded Lie algebra \((\oplus_{n\in\mathbb{Z}}C^{n}(A,A),[\ ,\ ])\) if and only if the pair \((A,\{\mu_{k}\}_{k=1}^{\infty})\) is an \(A_{\infty}\)-algebra.

Let \((A,\{\mu_{k}\}_{k=1}^{\infty})\) be an \(A_{\infty}\)-algebra. A **representation** of this \(A_{\infty}\)-algebra is given by a pair \((M,\{\eta_{k}\}_{k=1}^{\infty})\) that consists of a graded vector space \(M=\oplus_{i\in\mathbb{Z}}M_{i}\) with a collection \(\{\eta_{k}:\mathcal{A}^{k-1,1}\to M\}_{k=1}^{\infty}\) of degree \(1\) linear maps satisfying the identities (34) when exactly one of the variables \(a_{1},\ldots,a_{n}\) comes from \(M\) and the corresponding linear operations \(\mu_{k}\) or \(\mu_{l}\) are replaced by \(\eta_{k}\) or \(\eta_{l}\). Like the ungraded case, here \(\mathcal{A}^{k-1,1}\) denotes the direct sum of all possible tensor powers of \(A\) and \(M\) in which \(A\) appears \(k-1\) times (and hence \(M\) appears exactly once). Note that any \(A_{\infty}\)-algebra \((A,\{\mu_{k}\}_{k=1}^{\infty})\) can be realized as a representation of itself, where \(\eta_{k}=\mu_{k}\), for \(k\geq 1\).

### Definition
A \(Diass_{\infty}\)-**algebra** (also called a **strongly homotopy diassociative algebra**) is a pair \((D,\{\pi_{k}\}_{k=1}^{\infty})\) consisting of a graded vector space \(D=\oplus_{i\in\mathbb{Z}}D_{i}\) equipped with a collection of degree \(1\) graded linear maps \(\{\pi_{k}:\mathbf{k}[Y_{k}]\otimes D^{\otimes k}\to D\}_{k=1}^{\infty}\) satisfying the following set of identities
\[\sum_{k+l=n+1}\sum_{i=1}^{n-l+1}(-1)^{|a_{1}|+\cdots+|a_{i-1}|}\pi_{k}\big{(}R_{0}^{k;i,l}(y);a_{1},\ldots,a_{i-1},\pi_{l}\big{(}R_{i}^{k;i,l}(y);a_{i},\ldots,a_{i+l-1}\big{)},a_{i+l},\ldots,a_{n}\big{)}=0, \tag{35}\]
for all \(n\geq 1\), \(y\in Y_{n}\) and homogeneous elements \(a_{1},\ldots,a_{n}\in D\). The maps \(R_{0}^{k;i,l}\) and \(R_{i}^{k;i,l}\) are described in (3), (4). Note that any diassociative algebra can be realized as a \(Diass_{\infty}\)-algebra concentrated in degree \(-1\). More precisely, if \((D,\dashv,\vdash)\) is a diassociative algebra then \(s^{-1}D\) (considered as a graded vector space with \((s^{-1}D)_{-1}=D\) and \((s^{-1}D)_{i}=0\) for \(i\neq-1\)) can be given a \(Diass_{\infty}\)-algebra structure with the operations \(\{\pi_{k}:\mathbf{k}[Y_{k}]\otimes(s^{-1}D)^{\otimes k}\to s^{-1}D\}_{k=1}^{\infty}\) given by
\[\pi_{2}(y_{\dashv};s^{-1}a,s^{-1}b)=s^{-1}(a\dashv b),\ \ \ \pi_{2}(y_{\vdash};s^{-1}a,s^{-1}b)=s^{-1}(a\vdash b)\ \ \text{and}\ \pi_{k}=0\ \text{for}\ k\neq 2,\]
where \(y_{\dashv}\) and \(y_{\vdash}\) denote the two \(2\)-trees in \(Y_{2}\).

**8.3 Remark**.: Let \((D,\{\pi_{k}\}_{k=1}^{\infty})\) be any \(Diass_{\infty}\)-algebra.
Using the higher diassociative identities (35) and mathematical induction on \(k\), we can show that
\[\pi_{k}(y;a_{1},\ldots,a_{k})=\pi_{k}(y^{\prime};a_{1},\ldots,a_{k}),\ \text{ for }a_{1},\ldots,a_{k}\in D,\]
whenever both \(y,y^{\prime}\in Y_{k}\) can be written as the grafting of an \((i-1)\)-tree and a \((k-i)\)-tree.

**8.4 Proposition**.: _Let \((A,\{\mu_{k}\}_{k=1}^{\infty})\) be an \(A_{\infty}\)-algebra and \((M,\{\eta_{k}\}_{k=1}^{\infty})\) be a representation of it. Then the graded vector space \(A\oplus M\) can be equipped with a \(Diass_{\infty}\)-algebra structure with the operations \(\{\pi_{k}:\mathbf{k}[Y_{k}]\otimes(A\oplus M)^{\otimes k}\to A\oplus M\}_{k=1}^{\infty}\) given by_
\[\pi_{k}\big{(}y;(a_{1},u_{1}),\ldots,(a_{k},u_{k})\big{)}=\big{(}\mu_{k}(a_{1},\ldots,a_{k}),\eta_{k}(a_{1},\ldots,a_{i-1},u_{i},a_{i+1},\ldots,a_{k})\big{)}, \tag{36}\]
_for \(k\geq 1\), \(y\in Y_{k}\) (which can be uniquely written as \(y=y_{1}\lor y_{2}\) for some \((i-1)\)-tree \(y_{1}\in Y_{i-1}\) and \((k-i)\)-tree \(y_{2}\in Y_{k-i}\)) and \((a_{1},u_{1}),\ldots,(a_{k},u_{k})\in A\oplus M.\)_

We denote the above \(Diass_{\infty}\)-algebra simply by \(A\oplus_{Diass_{\infty}}M\). Note that \(A\oplus_{Diass_{\infty}}M\) generalizes the diassociative algebra of Proposition 3.11 in the homotopy context. It is important to mention that the converse of the above proposition is also true. More precisely, let \(A=\oplus_{i\in\mathbb{Z}}A_{i}\) and \(M=\oplus_{i\in\mathbb{Z}}M_{i}\) be two graded vector spaces equipped with two collections \(\{\mu_{k}:A^{\otimes k}\to A\}_{k=1}^{\infty}\) and \(\{\eta_{k}:\mathcal{A}^{k-1,1}\to M\}_{k=1}^{\infty}\) of degree \(1\) graded linear maps. Then \((A,\{\mu_{k}\}_{k=1}^{\infty})\) is an \(A_{\infty}\)-algebra and \((M,\{\eta_{k}\}_{k=1}^{\infty})\) is a representation if and only if \((A\oplus M,\{\pi_{k}\}_{k=1}^{\infty})\) is a \(Diass_{\infty}\)-algebra, where the maps \(\pi_{k}\)'s are given in (36).

In the following, we construct a graded Lie algebra whose Maurer-Cartan elements correspond to \(Diass_{\infty}\)-algebra structures on a given graded vector space. Let \(D=\oplus_{i\in\mathbb{Z}}D_{i}\) be a graded vector space. For each \(n\in\mathbb{Z}\), we define the space \(CY^{n}(D,D):=\operatorname{Hom}_{n}(\mathbf{k}[\overline{Y}]\otimes\overline{T}(D),D)\) whose elements are of the form \(\pi=\sum_{k=1}^{\infty}\pi_{k}\), where \(\pi_{k}:\mathbf{k}[Y_{k}]\otimes D^{\otimes k}\to D\) is a degree \(n\) linear map. For \(\pi=\sum_{k=1}^{\infty}\pi_{k}\in CY^{m}(D,D)\) and \(\varpi=\sum_{l=1}^{\infty}\varpi_{l}\in CY^{n}(D,D)\), we define an element \(\{\![\pi,\varpi]\!\}\in CY^{m+n}(D,D)\), defined analogously to the bracket \([\ ,\ ]\) above but now taking the tree arguments into account. With this bracket, \(\oplus_{n\in\mathbb{Z}}CY^{n}(D,D)\) becomes a graded Lie algebra, and an element \(\pi=\sum_{k=1}^{\infty}\pi_{k}\in CY^{1}(D,D)\) is a Maurer-Cartan element of this graded Lie algebra if and only if \((D,\{\pi_{k}\}_{k=1}^{\infty})\) is a \(Diass_{\infty}\)-algebra.

Given an \(A_{\infty}\)-algebra and a representation of it, we will now introduce the notion of a homotopy relative averaging operator. Let \((A,\{\mu_{k}\}_{k=1}^{\infty})\) be an \(A_{\infty}\)-algebra and \((M,\{\eta_{k}\}_{k=1}^{\infty})\) be a representation of it.
Consider the graded Lie algebra
\[\mathfrak{g}=\big{(}\oplus_{n\in\mathbb{Z}}CY^{n}(A\oplus M,A\oplus M)=\oplus_{n\in\mathbb{Z}}\mathrm{Hom}_{n}(\mathbf{k}[\overline{Y}]\otimes\overline{T}(A\oplus M),A\oplus M),\{\![\,\ ]\!\}\big{)}\]
associated to the graded vector space \(A\oplus M.\) Then it is easy to see that the graded subspace \(\mathfrak{a}=\oplus_{n\in\mathbb{Z}}CY^{n}(M,A)=\oplus_{n\in\mathbb{Z}}\mathrm{Hom}_{n}(\mathbf{k}[\overline{Y}]\otimes\overline{T}(M),A)\) is an abelian Lie subalgebra of \(\mathfrak{g}\). Let \(p:\mathfrak{g}\to\mathfrak{g}\) be the projection map onto the subspace \(\mathfrak{a}\). On the other hand, since \(A\oplus_{Diass_{\infty}}M=(A\oplus M,\{\pi_{k}\}_{k=1}^{\infty})\) is a \(Diass_{\infty}\)-algebra, it defines a Maurer-Cartan element \(\pi=\sum_{k=1}^{\infty}\pi_{k}\in CY^{1}(A\oplus M,A\oplus M)\) of the graded Lie algebra \(\mathfrak{g}\) (i.e. \(\{\![\pi,\pi]\!\}=0\)). Further, the element \(\pi\in\ker(p)_{1}\). Hence we obtain a \(V\)-data \((\mathfrak{g},\mathfrak{a},p,\pi)\). Therefore, by Theorem 5.4 (i), the graded vector space \(\mathfrak{a}\) inherits an \(L_{\infty}\)-algebra structure with the operations \(\{l_{k}:\mathfrak{a}^{\otimes k}\to\mathfrak{a}\}_{k=1}^{\infty}\) given by
\[l_{k}(\gamma_{1},\ldots,\gamma_{k})=p\{\![\cdots\{\![\{\![\pi,\gamma_{1}]\!\},\gamma_{2}]\!\},\ldots,\gamma_{k}]\!\},\]
for homogeneous \(\gamma_{1},\ldots,\gamma_{k}\in\mathfrak{a}\). This \(L_{\infty}\)-algebra can be seen as the homotopy analogue of the graded Lie algebra given in Theorem 4.1. Our next definition is motivated by the Maurer-Cartan characterization of a relative averaging operator given in Theorem 4.1.

### Definition
A **homotopy relative averaging operator** on \((M,\{\eta_{k}\}_{k=1}^{\infty})\) over the \(A_{\infty}\)-algebra \((A,\{\mu_{k}\}_{k=1}^{\infty})\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((\mathfrak{a},\{l_{k}\}_{k=1}^{\infty})\).

It follows from the above definition that a homotopy relative averaging operator is an element \(P=\sum_{k=1}^{\infty}P_{k}\in\mathrm{Hom}_{0}(\mathbf{k}[\overline{Y}]\otimes\overline{T}(M),A)\) that satisfies
\[\sum_{k=1}^{\infty}\frac{1}{k!}l_{k}(P,\ldots,P)=0. \tag{38}\]
In other words, \(P\) must satisfy \(\sum_{k=1}^{\infty}\frac{1}{k!}p\{\![\cdots\{\![\{\![\pi,P]\!\},P]\!\},\ldots,P]\!\}=0\), which is equivalent to the condition that \(p(e^{\{\![-,P]\!\}}\pi)=0\). Note that a homotopy relative averaging operator can be equivalently described by a collection \(P=\{P_{k}:\mathbf{k}[Y_{k}]\otimes M^{\otimes k}\to A\}_{k=1}^{\infty}\) of degree \(0\) linear maps satisfying \(p(e^{\{\![-,P]\!\}}\pi)=0\).

### Definition
A **homotopy relative averaging algebra** is a triple \((A,M,P)\) consisting of an \(A_{\infty}\)-algebra \(A=(A,\{\mu_{k}\}_{k=1}^{\infty})\), a representation \(M=(M,\{\eta_{k}\}_{k=1}^{\infty})\) and a homotopy relative averaging operator \(P=\{P_{k}\}_{k=1}^{\infty}\). We often denote a homotopy relative averaging algebra as above by \(M\xrightarrow{\{P_{k}\}_{k=1}^{\infty}}A\).

**8.8 Proposition**.: _Let \(M\xrightarrow{\{P_{k}\}_{k=1}^{\infty}}A\) be a homotopy relative averaging algebra.
Then \((M,\{\pi_{k}^{P}\}_{k=1}^{\infty})\) is a \(Diass_{\infty}\)-algebra, where_
\[\pi_{k}^{P}(y;u_{1},\ldots,u_{k})=(e^{\{\![-,P]\!\}}\pi)(y;u_{1},\ldots,u_{k}),\text{ for }k\geq 1,\ y\in Y_{k}\text{ and }u_{1},\ldots,u_{k}\in M.\]

Proof.: Note that
\[\{\![e^{\{\![-,P]\!\}}\pi,e^{\{\![-,P]\!\}}\pi]\!\}=e^{\{\![-,P]\!\}}\{\![\pi,\pi]\!\}=0\ \ (\text{as }\{\![\pi,\pi]\!\}=0).\]
This shows that \(e^{\{\![-,P]\!\}}\pi\) is a Maurer-Cartan element of the graded Lie algebra \(\mathfrak{g}\). Hence the collection of maps \(\{\pi_{k}\}_{k=1}^{\infty}\) defines a \(Diass_{\infty}\)-algebra structure on \(M\), where \(\pi_{k}=(e^{\{\![-,P]\!\}}\pi)|_{\mathbf{k}[Y_{k}]\otimes M^{\otimes k}}\), for \(k\geq 1\). This completes the proof.

A homotopy relative averaging operator \(\{P_{k}\}_{k=1}^{\infty}\) is said to be **strict** if \(P_{k}=0\) for \(k\neq 1\). It follows from (38) that a strict homotopy relative averaging operator is a degree \(0\) linear map \(P:M\to A\) that satisfies
\[\mu_{k}\big{(}P(u_{1}),\ldots,P(u_{k})\big{)}=P\big{(}\eta_{k}(P(u_{1}),\ldots,u_{i},\ldots,P(u_{k}))\big{)},\text{ for }k\geq 1\text{ and }1\leq i\leq k.\]
A strict homotopy relative averaging algebra is a triple that consists of an \(A_{\infty}\)-algebra, a representation and a strict homotopy relative averaging operator. In this case, Proposition 8.8 reads as follows.

**Lemma 8.9**.: _Let \(M\xrightarrow{P}A\) be a strict homotopy relative averaging algebra. Then \((M,\{\pi_{k}^{P}\}_{k=1}^{\infty})\) is a \(Diass_{\infty}\)-algebra, where_
\[\pi_{k}^{P}(y;u_{1},\ldots,u_{k}):=\eta_{k}(P(u_{1}),\ldots,u_{i},\ldots,P(u_{k})),\]
_for \(k\geq 1\), \(y\in Y_{k}\) (which can be uniquely written as \(y=y_{1}\lor y_{2}\) for some \((i-1)\)-tree \(y_{1}\in Y_{i-1}\) and \((k-i)\)-tree \(y_{2}\in Y_{k-i}\)) and \(u_{1},\ldots,u_{k}\in M\)._

In the following, we show that any \(Diass_{\infty}\)-algebra is always induced from a strict homotopy relative averaging algebra. Let \((D,\{\pi_{k}\}_{k=1}^{\infty})\) be a given \(Diass_{\infty}\)-algebra. Consider the graded vector space \(D/I\) obtained as the quotient of \(D\) by the homogeneous ideal \(I\) generated by the set
\[\{\pi_{k}(y;a_{1},\ldots,a_{k})-\pi_{k}(y^{\prime};a_{1},\ldots,a_{k})\mid k\geq 1,\ y,y^{\prime}\in Y_{k}\ \text{and}\ a_{1},\ldots,a_{k}\in D\}.\]
It is easy to see that the graded vector space \(D/I\) carries an \(A_{\infty}\)-algebra structure with the operations \(\{\mu_{k}:(D/I)^{\otimes k}\to D/I\}_{k=1}^{\infty}\) given by
\[\mu_{k}([a_{1}],\ldots,[a_{k}])=[\pi_{k}(y;a_{1},\ldots,a_{k})],\ \text{for}\ k\geq 1\ \text{and}\ [a_{1}],\ldots,[a_{k}]\in D/I.\]
We denote this \(A_{\infty}\)-algebra structure simply by \(D_{\text{Ass}_{\infty}}\). It is also easy to check that the \(A_{\infty}\)-algebra \(D_{\text{Ass}_{\infty}}\) has a representation on the graded vector space \(D\) with the action maps
\[\eta_{k}([a_{1}],\ldots,[a_{i-1}],a_{i},[a_{i+1}],\ldots,[a_{k}])=\pi_{k}(y;a_{1},\ldots,a_{k}),\]
for \(k\geq 1\), \([a_{1}],\ldots,[a_{i-1}],[a_{i+1}],\ldots,[a_{k}]\in D_{\text{Ass}_{\infty}}\) and \(a_{i}\in D\). Here \(y\in Y_{k}\) is any \(k\)-tree which is the grafting of some \((i-1)\)-tree and \((k-i)\)-tree. Moreover, \(D\xrightarrow{q}D_{\text{Ass}_{\infty}}\) is a strict homotopy relative averaging algebra, where \(q\) is the quotient map. Further, the induced \(Diass_{\infty}\)-algebra structure on \(D\) coincides with the given one.
**Data availability statement.** Data sharing does not apply to this article as no datasets were generated or analysed during the current study. **Acknowledgements.** The author would like to thank Indian Institute of Technology (IIT) Kharagpur for providing the beautiful academic environment where the research has been carried out.
2309.12863
Domain Adaptation for Arabic Machine Translation: The Case of Financial Texts
Neural machine translation (NMT) has shown impressive performance when trained on large-scale corpora. However, generic NMT systems have demonstrated poor performance on out-of-domain translation. To mitigate this issue, several domain adaptation methods have recently been proposed which often lead to better translation quality than generic NMT systems. While there has been some continuous progress in NMT for English and other European languages, domain adaptation in Arabic has received little attention in the literature. The current study, therefore, aims to explore the effectiveness of domain-specific adaptation for Arabic MT (AMT) in a yet unexplored domain: financial news articles. To this end, we carefully developed a parallel corpus for Arabic-English (AR-EN) translation in the financial domain for benchmarking different domain adaptation methods. We then fine-tuned several pre-trained NMT and Large Language models, including ChatGPT-3.5 Turbo, on our dataset. The results showed that the fine-tuning was successful using just a few well-aligned in-domain AR-EN segments. The quality of ChatGPT translation was superior to that of the other models based on automatic and human evaluations. To the best of our knowledge, this is the first work on fine-tuning ChatGPT towards financial domain transfer learning. To contribute to research in domain translation, we made our datasets and fine-tuned models available at https://huggingface.co/asas-ai/.
Emad A. Alghamdi, Jezia Zakraoui, Fares A. Abanmy
2023-09-22T13:37:19Z
http://arxiv.org/abs/2309.12863v1
# Domain Adaptation for Arabic Machine Translation: The Case of Financial Texts

###### Abstract

Neural machine translation (NMT) has shown impressive performance when trained on large-scale corpora. However, generic NMT systems have demonstrated poor performance on out-of-domain translation. To mitigate this issue, several domain adaptation methods have recently been proposed which often lead to better translation quality than generic NMT systems. While there has been some continuous progress in NMT for English and other European languages, domain adaptation in Arabic has received little attention in the literature. The current study, therefore, aims to explore the effectiveness of domain-specific adaptation for Arabic MT (AMT) in a yet unexplored domain: financial news articles. To this end, we carefully developed a parallel corpus for Arabic-English (AR-EN) translation in the financial domain for benchmarking different domain adaptation methods. We then fine-tuned several pre-trained NMT and Large Language models, including ChatGPT-3.5 Turbo, on our dataset. The results showed that the fine-tuning was successful using just a few well-aligned in-domain AR-EN segments. The quality of ChatGPT translation was superior to that of the other models based on automatic and human evaluations. To the best of our knowledge, this is the first work on fine-tuning ChatGPT towards financial domain transfer learning. To contribute to research in domain translation, we made our datasets and fine-tuned models available at [https://huggingface.co/asas-ai/](https://huggingface.co/asas-ai/).

Machine Translation Arabic MT Domain Adaptation Financial Domain

### Introduction

In recent years, the rapid advancement of deep learning techniques and their application to machine translation has led to great strides in many translation tasks. Neural Machine Translation (NMT) systems, trained on large-scale corpora, have demonstrated impressive performance in translating generic language. However, NMT models tend to perform poorly on out-of-domain data [1], especially if the target domain has a distinctive style and vocabulary [2]. An NMT model trained exclusively on medical texts is unlikely to achieve accurate performance on financial or news data. To address this problem, researchers have proposed different domain adaptation approaches and techniques which seem to improve the quality of NMT systems on out-of-domain data [3, 4, 5]. While there are many MT models, systems, and tools for translating Arabic texts in the literature, the quality of the translation is poor, especially for out-of-domain texts [6]. A key technical challenge related to AMT arises from the lack of available bilingual datasets for out-of-domain texts that can be used as a standard benchmark to conduct unified experiments. In fact, researchers tend to collect datasets according to their specific domains and try to resolve the linguistic issues for Arabic based on custom datasets, such as in the news domain [7, 8], thereby ignoring many other domains. Other technical issues such as out-of-vocabulary (OOV) words and very long sentences also make MT more challenging [1]. To address these challenges, researchers have proposed different techniques, including, for example, BPE [9], a character-level BPE variant [10], hybrid techniques [11], and mixed fine-tuning [6]. However, domain robustness remains an unsolved problem and there is a need for further research in this area [12]. This is especially true for the Arabic language.
Existing domain adaptation research has only focused on the news [13] and medical [14] domains; no prior study, to the best of our knowledge, has been conducted on the financial domain. To alleviate the issue of translation mismatch related to out-of-domain texts, the authors in [13] studied the performance of NMT systems under morphology-based and frequency-based tokenization schemes and BPE on in-domain data. They evaluated their best-performing models on out-of-domain data, yielding significant improvements of 37.96% in BLEU score [15]. The latter work [14] proposed a method for domain-specific data augmentation for MT to tackle the issue of small bilingual datasets. They employed mixed fine-tuning to train models that significantly improve translation of in-domain texts. Their method achieved improvements of approximately 5-6 BLEU and 2-3 BLEU, respectively, on the Arabic-to-English and English-to-Arabic language pairs. While a lot of research on domain adaptation in MT exists for other language pairs [6, 16], focusing on synthetic data generation and multiple other techniques such as checkpoint averaging [6], only one work [14] investigated the same for AMT, and only for the medical domain. This research aims to fill this gap by creating different MT settings and investigating the domain of financial texts, with the potential to extend to other domains. Our contributions are the following:

* We introduce the first AR-EN parallel corpus in the financial domain.
* We compare the effectiveness of different adaptation methods and data augmentation approaches for limited domain data.
* We fine-tuned several models and made them publicly available to the research community.
* Our work is the first to fine-tune the GPT-3.5 model and evaluate its capability for domain adaptation.

## 1 Background

### Neural machine translation

NMT models based on deep neural networks (DNN) have been proposed in early NMT research [17]. A DNN-based NMT model employs a neural network system to perform the required machine translation tasks using an encoder-decoder network [18]. The encoder neural network takes a source-language sentence as input and encodes it into a fixed-length vector through its hidden states. Then, given the final hidden state of the encoder, the decoder does the reverse work by transforming the hidden state vector into the target sentence word by word. The model assigns a translation probability to a target sentence given a source sentence. Given a source sentence \(S\)= \(\left\{s_{1},s_{2},..s_{n}\right\}\) and a target sentence \(T\)= \(\left\{t_{1},t_{2},..t_{n}\right\}\), the encoder encodes all the words from the source sentence \(S\) into a set of hidden states \(\left(h_{1},h_{2},..h_{n}\right)\) and passes the fixed-size vector \(v\), which represents the source sentence, to the decoder. The translation probability with a single neural network is given by the following formula [19]:

\[P(T\mid S)=\prod_{i=1}^{n}P(t_{i}\mid t_{<i},S) \tag{1}\]

where \(t_{<i}\) stands for the sequence preceding the \(i\)th target word. Hence each predicted word \(t_{i}\) depends on the previously predicted word \(t_{i-1}\) and the previous hidden state \(h_{i-1}\). However, when sentences become long, the performance deteriorates. This limitation is due to the limited representational capacity of a fixed-length vector [17]. To overcome this issue and to provide additional word alignment information in translating long sentences, Bahdanau et al. [20] introduced the idea of the attention mechanism.
Concretely, the attention mechanism is an intermediate component between the encoder and decoder, which helps to determine word alignment dynamically. The decoder pays attention to the input, or to any part of the input sentence. Attention is calculated using each encoder output and the current hidden state, resulting in a vector of the same length as the input sequence, computed using score functions [20]. There are mainly three different architectures for constructing NMT, namely the recurrent neural network (RNN), the convolutional neural network (CNN), and the self-attention-based Transformer. The use of RNN-based models has demonstrated good-quality translation results. This type of network is composed of an encoder and a decoder, working in the manner of sequence-to-sequence learning. Variants of RNN architectures include LSTM [21], BiLSTM [20], and GRU [22]. The second approach develops NMT systems based on the convolutional neural network (CNN) architecture. Work using CNNs has generally reported good results, especially for word-based MT [23]. This work applied a convolution layer below the recurrent layer, which hinders performance. The bottleneck was handled by implementing the fully convolutional model suggested by [24]. The performance and accuracy were improved with a number of models: word-based [25], character-based [10], and, recently, attention-based [26]. Recently, the use of Transformers has resulted in well-performing machine translation systems. The Transformer is a sequence-to-sequence model [27], which consists of a stack of layers. Each layer first utilizes self-attention to extract information from the whole sentence, followed by a point-wise feed-forward network to provide non-linearity. The novel idea of self-attention is to extend the mechanism to the processing of input sequences and output sentences as well. In general form, the Transformer attention function uses three vectors: queries (Q), keys (K), and values (V). The output is a weighted sum of values, where weights are computed by a similarity score between \(n\) query vectors and \(m\) keys [27]. The attention is defined as follows:

\[\mathrm{Attention}(Q,K,V)=\mathrm{softmax}(\mathrm{score}(Q,K))V \tag{2}\]

where \(\mathrm{score}(Q,K)\) is an \(n\times m\) matrix of similarity scores. A straightforward choice for \(\mathrm{score}(Q,K)\) proposed by Luong et al. [28] is the dot product, i.e. \(\mathrm{score}(Q,K)=QK^{\top}\). The softmax function normalizes over the columns of that matrix so that the weights for each query vector sum up to one. There are many variants of attention-based models, which are classified into two broad categories, global and local attention, discussed in detail in this survey [17]. Current state-of-the-art NMT models [29] rely on the Transformer model [27] and multiple attention mechanisms [20]. However, transformer-based language models such as Bidirectional Encoder Representations from Transformers (BERT) [30] expand the function of attention to encompass the main task. They use self-attention, which is applied to two states within the same sequence, as the foundation for sequence representations rather than an RNN. For the Arabic language, two transformer-based language models have been developed so far, notably AraBERT [31] and GigaBERT [32]. Both models aim at solving a masked language-modelling task in order to correctly predict a masked word from its context. Besides, these models address a next-sentence prediction task, deciding whether two sentences are consecutive or not.
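To make the attention function in Eq. (2) concrete, the following is a minimal NumPy sketch of dot-product attention. The function name, tensor shapes, and the optional scaling by \(\sqrt{d_k}\) are illustrative choices rather than details taken from the cited systems.

```python
import numpy as np

def dot_product_attention(Q, K, V, scale=True):
    """Toy illustration of Eq. (2): softmax(score(Q, K)) V.

    Q: (n, d_k) query vectors, K: (m, d_k) key vectors, V: (m, d_v) values.
    With scale=False, score(Q, K) = Q K^T is the plain dot-product score of
    Luong et al.; with scale=True the scores are divided by sqrt(d_k), as in
    the Transformer.
    """
    scores = Q @ K.T                                   # (n, m) similarity matrix
    if scale:
        scores = scores / np.sqrt(Q.shape[-1])
    # Softmax so that the weights for each query vector sum to one.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                                 # (n, d_v) weighted sum of values

# Example: 2 queries attending over 4 key/value pairs of dimension 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(dot_product_attention(Q, K, V).shape)            # (2, 8)
```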
### Domain-specific MT

Domain translation is a challenging task due to the fact that language varies across different domains, genres, and styles. For example, texts in the financial domain often contain specific terminologies and jargon that may not be extensively used in the legal or health domains. Therefore, researchers have proposed different methods to improve the quality of translations in domains such as medical and biomedical [14, 33, 34], legal [35], and financial texts [36]. Several domain adaptation approaches have been proposed [for a more comprehensive survey, see 3]. Domain adaptation methods can intervene in various stages of NMT system design, training and use, and can be classified into three main categories: data-centric methods, architecture-centric adaptation methods, and inference schemes for adaptation [3]. In data-centric methods, the objective is to select or generate appropriate in-domain data. A large generic monolingual corpus can be filtered to select a domain-representative dataset based on some unique characteristics of the target domain. Selecting a small in-domain dataset may be more domain-relevant, but the impact of any deviation from the target domain will be magnified [37]. Another approach is to construct partially synthetic bilingual training corpora by forward- or back-translation. [38] observed that models trained exclusively on back-translations can perform similarly to models trained on natural data. Recently, the use of pre-trained large language models (LLMs) to generate large amounts of synthetic data at very low cost has emerged as an effective approach [14]. Architecture-centric adaptation typically involves adding trainable parameters to pre-trained models to avoid training models from scratch. A common approach is to fine-tune an existing well-performing NMT model on small in-domain data. Extensive fine-tuning can lead to catastrophic forgetting. [39] proposed mixed fine-tuning, which involves two steps: (1) training an NMT model on out-of-domain data until convergence and then (2) fine-tuning the NMT model from step 1 on a mix of in-domain and out-of-domain data (by oversampling the in-domain data) until convergence. Mixed fine-tuning approaches can be helpful to prevent two major issues, notably overlooking the specificity of each domain [1] and forgetting previously learned knowledge when exposed to new training examples, as reported in [40]. Lastly, inference schemes for adaptation develop a separate NMT model for each domain and combine them at inference time.

### Domain-adaptation in Arabic MT

The development of Arabic MT systems has gone through different stages, including rule-based systems [41, 42], statistical MT [43], and more recently neural MT systems [44]. [45] conducted a comprehensive survey of Arabic MT systems and the unique challenges in Arabic MT. Arabic is one of the six official languages of the United Nations and it is spoken by 400 million people in the Middle East, North Africa, and many other parts of the world. Arabic is a Semitic language and it is notoriously difficult for MT due to its linguistic characteristics [46, 45]. First, Arabic has a rich and complex morphology which is substantially different from that of English or other Western languages [47]. Second, Arabic has long and short vowels. While the long vowels are represented by letters, the short vowels are marked by diacritic signs placed above or below the letters.
However, the use of diacritic signs is not compulsory in Arabic, and hence they are rarely used in informal writing. It is therefore hard to identify the correct sense of a word, especially when sufficient context is not provided. Third, variation among different Arabic dialects has always been problematic for AMT. Furthermore, the Arabic used on social media varies considerably from Modern Standard Arabic (MSA). These aspects of the Arabic language pose serious challenges for Arabic MT. In addition to the aforementioned issues, there is a lack of high-quality parallel corpora of sufficient size for training or fine-tuning Arabic MT systems in different domains. It is commonly known that NMT systems do not perform well in domain-specific translation, especially for low-resource languages [1]. To address these challenges, some researchers have turned to domain adaptation methods to develop domain-specific Arabic MT systems. For example, [14] proposed the use of pre-trained LMs and back-translation for domain-specific data augmentation for MT. Furthermore, current Arabic MT research has primarily focused on the translation of limited domains such as news and official texts, while few attempts target domain-specific translation such as the medical domain [48]. Specifically, most of the parallel data available to researchers has been limited to texts produced by international organizations and parliamentary debates [33]. Unfortunately, existing single-domain AMT methods do not work well across multiple domains; multi-domain NMT approaches are therefore in demand to tackle this limitation. To recap, previous research has shown that domain adaptation leads to better translation quality than general-purpose NMT. Since there is relatively little work on Arabic domain adaptation, the primary objective of this research is to explore the effectiveness of different domain-translation methods in a yet unexplored domain, the financial domain. To this end, this work fine-tunes several Transformer NMT models and an LLM and performs cross-domain testing and evaluation to gain insights into model robustness against domain changes. ## 2 Methodology This section gives an overview of the methods and algorithms for AMT domain adaptation using LLMs. First, we describe the collected bilingual dataset, which we refer to as the authentic dataset; then our approach is presented; and lastly, the metrics we used for evaluation are described. ### Approach In this work, we investigate two main methods to augment our in-domain data for the financial news domain and propose approaches to leverage pre-trained LLMs for domain-specific data generation for this MT task. Concerning domain-specific data generation, we start with synthetic data generation to augment our authentic sentences in Arabic. Then, to obtain the parallel data in English, we apply back-translation from the Arabic synthetic sentences. In our case, we leverage a pipeline of different models. We start with AraGPT2 [49] and GPT-2 [50] as text-generation models for Arabic and English to create synthetic pairs for (AR-EN) and (EN-AR), respectively. For Arabic, we use only the titles from the collected authentic dataset as text prompts to generate corresponding long-form text with AraGPT2. Then, we use a summarization model to summarize the generated texts into short summaries that serve as generated titles. Finally, we back-translate the long-form text as well as the generated summaries to serve as article and title.
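A minimal sketch of this generation-and-back-translation pipeline using Hugging Face pipelines is given below. The AraGPT2 and OPUS AR-EN checkpoints are assumptions based on the models named in this paper; the summarization checkpoint is a placeholder, since the exact summarizer is not specified.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="aubmindlab/aragpt2-base")   # AraGPT2 (assumed checkpoint)
summarizer = pipeline("summarization", model="YOUR_ARABIC_SUMMARIZER")     # hypothetical placeholder
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ar-en")   # AR -> EN back-translation

def augment(title_ar: str):
    """Turn one authentic Arabic title into a synthetic (AR, EN) article/title pair."""
    article_ar = generator(title_ar, max_new_tokens=200)[0]["generated_text"]   # long-form text
    title_gen_ar = summarizer(article_ar, max_length=30)[0]["summary_text"]     # generated title
    # very long articles may need to be split into sentences before translation
    article_en = translator(article_ar)[0]["translation_text"]                  # back-translated article
    title_en = translator(title_gen_ar)[0]["translation_text"]                  # back-translated title
    return (article_ar, article_en), (title_gen_ar, title_en)
```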
The same pipeline applies to English as the target language. Figure 1 shows the case of augmenting the authentic dataset with AR-EN pairs using this method.

Figure 1: Data augmentation pipeline

#### 2.1.1 Synthetic data generation Data augmentation has been used in domain translation due to the scarcity of domain-specific datasets that are suitable for training large models. A common approach to augmenting domain data is the use of back-translation when there is abundant data in the target domain [38, 1]. [14] proposed the use of state-of-the-art large language models to generate unlimited new sentences in the source language and then back-translate them into the target language. Recent studies have explored the use of ChatGPT for generating new parallel sentences. However, in [51] the authors showed that ChatGPT performs worse on Arabic than the fine-tuned AraT5. #### 2.1.2 Back-translation We use a pre-trained machine translation model [52] for back-translation. Back-translation is applied to both the generated long-form texts (which serve as articles) and the generated summaries (which serve as titles), translating them into the respective target language. ### Experiment setup #### 2.2.1 Datasets For fine-tuning domain-specific MT models, we collected a dataset from different online resources for the AR-EN pair. As shown in Table 1, most of the data is collected from the Capital Markets Authority (CMA); in total, the dataset contains 7560 AR-EN pairs. Note that we consider titles (3780 AR-EN pairs) and articles (3780 AR-EN pairs). Additionally, we augmented our dataset with synthetic data as well as back-translated data. This step augmented the authentic dataset by 12,318 and 12,000 AR-EN sentence pairs of synthetic and back-translated data, respectively. Table 2 shows the breakdown of the segments in our dataset. We randomly sampled 1000 segments from the authentic dataset to serve as test data for all models. Additionally, we randomly sampled 1000 segments to build the development set for OPUS (bt-big) and NLLB. For fine-tuning ChatGPT, we randomly sampled 2000 pairs for each setup.

\begin{table} \begin{tabular}{l r r r} \hline \hline **Source** & **Articles** & **Titles** & **Sentences** \\ \hline Tadawul & 569 & 569 & 2544 \\ Capital Markets Authority & 2320 & 2320 & 8351 \\ Eye of Riyadh & 891 & 891 & 1877 \\ Total & 3780 & 3780 & 15771 \\ \hline \hline \end{tabular} \end{table} Table 1: Authentic dataset statistics

#### 2.2.2 NMT pre-trained models Our generic pre-trained NMT models use different Transformer architectures; however, we implemented the fine-tuning procedure using the Hugging Face sequence-to-sequence interface of the Transformers library. For inference, we use beam size 4 and batch size 16 on a T4-15GB GPU (Google Colab). Further, we use ChatGPT as a zero-shot baseline. OPUS (bt-big): We use OPUS [53] models from the Tatoeba-Challenge, specifically the models augmented with back-translated Wikimedia content and trained with the Transformer-Big architecture. Here we picked the _Helsinki-NLP/opus-mt-ar-en_ checkpoint. For tokenization, we instantiate our tokenizer, which is based on SentencePiece [54], with the _AutoTokenizer.from_pretrained_ method. This ensures that the tokenizer corresponds to the model architecture we want to use.
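For illustration, the fine-tuning described here can be set up with the Transformers Seq2SeqTrainer roughly as follows; the column names and the hyperparameters marked as assumptions are not reported in the paper and are only indicative.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

checkpoint = "Helsinki-NLP/opus-mt-ar-en"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)      # SentencePiece-based tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Replace these toy rows with the authentic train/dev splits; the column
# names "ar"/"en" are an assumption about how the data is stored.
train_ds = Dataset.from_dict({"ar": ["..."], "en": ["..."]})
dev_ds = Dataset.from_dict({"ar": ["..."], "en": ["..."]})

def preprocess(batch):
    return tokenizer(batch["ar"], text_target=batch["en"],
                     truncation=True, max_length=128)

args = Seq2SeqTrainingArguments(
    output_dir="opus-ar-en-finance",
    per_device_train_batch_size=16,
    num_train_epochs=5,              # assumption: the paper trains to convergence
    learning_rate=2e-5,              # assumption: not reported
    evaluation_strategy="epoch",
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds.map(preprocess, batched=True, remove_columns=["ar", "en"]),
    eval_dataset=dev_ds.map(preprocess, batched=True, remove_columns=["ar", "en"]),
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```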
NLLB: No-Language-Left-Behind (NLLB) [55] is a multilingual model that supports 200 languages with a massive Transformer. Fine-tuning is carried out on NLLB using its distilled version, the _facebook/nllb-200-distilled-600M_ checkpoint. For tokenization, we instantiate the multilingual tokenizer provided by NLLB with the _NllbTokenizerFast.from_pretrained_ method. This ensures that the tokenizer corresponds to the model architecture we are using. ChatGPT-3.5: We use the gpt-3.5-turbo model via its official API 1, which powers ChatGPT. Here we prepare our dataset in the format accepted by the API. In particular, we convert the AR-EN pairs into the prompt template recommended in the OpenAI playground for the sentence-level translation task. In order to avoid errors, we truncate all sentence pairs to a maximum size of 4,290 characters before sending the request. Moreover, we cap the total number of tokens at about 378,460 due to rate limits and cost. For this model, we formatted the requests with the system message _'You are a professional translator in the financial domain. Translate the following Arabic sentence: ar_en into English'_, followed by user content messages, where ar_en represents the AR-EN pairs. Footnote 1: https://chat.openai.com Footnote 2: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-eng Before starting the experiments, we considered the following three setups for fine-tuning the models on the domain-specific dataset. Section 3 will discuss the results and findings. Setup #1: Baseline models. We consider pre-trained NMT models evaluated on our cleaned authentic test split containing 1000 AR-EN sentence pairs. Our baseline NMT models are OPUS (bt-big) 3 [52], NLLB 600M 4 [55], and ChatGPT-3.5 5. Footnote 3: https://huggingface.co/facebook/nllb-200-distilled-600M Footnote 4: https://github.com/mjpost/sacreBLEU Footnote 5: https://platform.openai.com/docs/guides/gpt/chat-completions-api Footnote 6: https://github.com/Unbabel/COMET Setup #2: Fine-tuning with authentic data. For fine-tuning, we initialized the Transformer models with the trained weights of the baselines. We use our authentic dataset with the splits shown in Table 2 and keep all hyperparameters identical. The models were fine-tuned until convergence on the validation set. At test time, the respective test set from the authentic dataset is used for this setup as well. Again, all metrics are reported. Setup #3: Fine-tuning with augmented data. Similar to the previous setup, we initialized the Transformer models with the trained weights of the baselines. However, here we use our authentic dataset augmented with the respective data, with the splits shown in Table 2. Essentially, we augment the authentic dataset with back-translated data and shuffle it; the same applies to the synthetic data. This step yields two versions of fine-tuning, one using the former and one using the latter. The models were fine-tuned until convergence on the validation set. At test time, the test set from the authentic dataset is used for this setup as well, and all metrics are reported.
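As a reference point for the ChatGPT-3.5 baseline described above, a minimal sketch of one zero-shot translation request is shown below (the OpenAI Python client v1 interface is assumed; the prompt follows the template quoted earlier, and the temperature setting is an assumption rather than a reported choice).

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate(ar_sentence: str) -> str:
    """Zero-shot AR -> EN translation with gpt-3.5-turbo via the chat-completions API."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a professional translator in the financial domain."},
            {"role": "user",
             "content": f"Translate the following Arabic sentence into English: {ar_sentence}"},
        ],
        temperature=0,   # assumption: deterministic decoding for evaluation
    )
    return response.choices[0].message.content.strip()
```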
### Metrics As performance measures, we report the spBLEU score [15], which uses a SentencePiece tokenizer, chrF [56], and TER [57], all implemented in sacrebleu 5. Additionally, we compute COMET [58], which was proposed recently and takes advantage of cross-lingual pre-trained LMs to use knowledge from both the source and target languages. COMET produces a prediction score that correlates with human judgement [58]. For our experiments, we use the official COMET implementation 6. Specifically, we use the reference-based estimation model _wmt20-comet-da_, which was trained on Direct Assessment (DA) data and is used for quality estimation (QE). Another score that correlates with human evaluation, BERTScore [59], is also computed. Including different metrics in the evaluation allows us to test the models on metrics different from those used for training. ## 3 Results and Discussion This section elaborates on our automatic and human evaluations and discusses the results. We also provide a preliminary comparison of the models' performance on domain-specific MT as baseline models and as fine-tuned models, and report whether they perform robustly on domain-specific or even noisy sentences from our collected dataset. Specifically, we focus on the robustness of the models on the translation of Arabic financial news. Table 3 shows the main results over the respective test set. The \(\uparrow\) and \(\downarrow\) symbols in the table indicate which values are better. We analyze the translation outputs by comparing the MT evaluation metrics in each setup. ### Automatic evaluation In setup #1, OPUS and NLLB perform equally poorly, at around 14 BLEU and 42 chrF points. The TER score, which is the ratio of the number of edits to the average number of words in the reference, is high for the two models, indicating translations of poor quality. In terms of COMET, both models obtain very poor results, which suggests that reference-based COMET may lose information from the source, translation-output, or reference embeddings; ChatGPT-3.5 is the exception. The BERTScores for all three models, however, are high, which means they do not correlate with the COMET scores; indeed, BERTScore and COMET differ significantly in their scores. In contrast, ChatGPT-3.5 performs competitively better (BLEU 26.13) than the OPUS and NLLB models. We are not surprised by this fact, which is in line with related research [60, 51, 61]. However, these findings are not consistent with a previous study [62], in which the authors evaluated ChatGPT and GPT on 4,000 Arabic-English pairs and found that SoTA models like AraT5 [63] outperform ChatGPT by 19 BLEU points. As for translation robustness, the results from setup #1 suggest that ChatGPT-3.5 outperforms these models on financial news by a significant margin. When we analyze setup #2, as expected, fine-tuning all models on authentic data generally improves the BLEU scores and the other metrics as well. This finding is also in line with other research [54, 14]. However, [60] noticed that for domain-specific translation (e.g., in the biomedical field), ChatGPT's performance degrades considerably. We attribute this behaviour to the observation that ChatGPT is better able to translate our sentences than the terminology-heavy sentences of the biomedical domain, a very specific domain.
Furthermore, we clearly see that the BLEU scores increase from 14.58 to 48.83, from 14.38 to 43.43, and from 26.13 to 51.15 for OPUS, NLLB, and ChatGPT-3.5, respectively. In terms of COMET and BERTScore, the two metrics correlate, indicating acceptable translation outputs. Concerning ChatGPT, even though only 2000 AR-EN sentence pairs were used for fine-tuning, it outperforms all other models, which means the MT quality of ChatGPT can easily be improved with a little additional data for the language pair. This had not previously been confirmed for related approaches, since this is the first work that assesses the performance of fine-tuned ChatGPT models for the AR-EN MT task. Nevertheless, for English, [64] has shown that ChatGPT has robust translation capabilities relative to related SoTA MT models. Our experimental results confirm the latter finding and show that, with a carefully prepared amount of fine-tuning data, this model is capable of producing acceptable translations. As for translation robustness, the results from setup #2 suggest that ChatGPT-3.5 performs competitively well on financial news. Regarding the human evaluation, all models in this setup reached possible or acceptable translations. We conclude that providing in-domain examples to ChatGPT achieves results comparable to a SoTA model in terms of both automatic and human evaluation. In setup #3 we fine-tune the baseline models with the augmented data in two versions, one using back-translated data and the other using synthetic data. We observe that both lexical metrics (BLEU and chrF) show consistent degradation for all models; the same applies to the TER score. For instance, for ChatGPT, the BLEU score decreased dramatically from 51.15 to 34.67 when fine-tuned on synthetic data, while it maintained an acceptable score (BLEU 45.38) when fine-tuned on back-translated data. We observe that the COMET score degraded much more for ChatGPT than for OPUS and NLLB. One explanation could be that the synthetic data contains many generated tokens that are grammatically correct but semantically meaningless, as is common for current generative text models; this could indicate that the translations are not close in embedding space to the source and reference. In contrast, BERTScore maintained a good value in both versions for all models. In this setup, OPUS (bt-big) FT (back MT) is the best model, providing reasonably good translations, but it still lags behind the OPUS model fine-tuned on authentic data by at least 1.3 BLEU points. Generally, the drop in performance for all models in this setup is not consistent with other research. For instance, the authors of [14] used synthetic data in the healthcare domain and achieved improvements on the in-domain test set. In comparison with this work, those authors applied synthetic data generation using mGPT 7, a multilingual language model. We argue that this model might produce generated tokens with better (lower) perplexity compared to AraGPT2. To the best of our knowledge, no research work has investigated the performance of the two models with regard to Arabic; we will further investigate this issue in future work. However, there are several general reasons for the behaviour of OPUS, NLLB and ChatGPT in domain-specific MT, especially when the dataset is augmented with synthetic data.
One explanation is that the use of synthetic data may cause incorrect token choices, grammatical errors, or unnatural sentence structures to propagate into the translation outputs, leading to suboptimal translations. Indeed, the results of this study demonstrate the models' robust translation capabilities for in-domain adaptation: they perform well when fine-tuned on authentic data. However, we observe a discrepancy between COMET and BERTScore. For instance, ChatGPT-3.5 performs worse on augmented data, yielding a lower COMET score (23.03) while still having a high BERTScore (0.91). This behaviour seems uncommon. One possible explanation is that reference-based COMET fails to find closeness across all three embeddings (source, translation, and reference), whereas BERTScore is able to find closeness in the similarity between an MT output and a reference translation. This behaviour motivates us to carry out human evaluation, a much-needed check for trustworthiness. Regarding the human scores, we observe that in setups #1 and #2, where ChatGPT-3.5 was the best model in terms of lexical and semantic metrics, the human evaluation supports this result with the highest score of 3.1. However, in setup #3, even though the automatic metrics degrade for all models except BERTScore, the human evaluation shows that the translation quality of all models is comparable. Thus, we find that BERTScore correlates with human judgment more than COMET, which has recently been reported to reach a new state-of-the-art level of correlation with human judgment. This finding opens an interesting direction for future investigation into whether, and to what extent, semantic metrics correlate with human judgment, in particular when ChatGPT-3.5 is applied. Figure 2 summarizes all the automatic evaluation and human results.

\begin{table} \begin{tabular}{l l c c c c c c} \hline \hline **Setups** & **Model** & **spBLEU \(\uparrow\)** & **chrF \(\uparrow\)** & **TER \(\downarrow\)** & **COMET \(\uparrow\)** & **BERTScore \(\uparrow\)** & **Human \(\uparrow\)** \\ \hline \multirow{3}{*}{1} & OPUS (bt-big 8) & 14.58 & 43.93 & 79.59 & 3.89 & 0.89 & - \\ & NLLB 600M 9 & 14.38 & 42.17 & 77.58 & 2.98 & 0.89 & - \\ & ChatGPT-3.5 & **26.13** & **60.98** & **66.83** & **33.7** & **0.91** & - \\ \hline \multirow{3}{*}{2} & OPUS (bt-big) FT 10 & 48.83 & 65.11 & 53.18 & 51.12 & **0.95** & 2.7 \\ & NLLB 600M FT 11 & 43.43 & 61.01 & 54.65 & **52.10** & 0.94 & 2.81 \\ & ChatGPT-3.5 FT & **51.15** & **71.28** & **46.47** & 42.90 & 0.94 & 3.1 \\ \hline \multirow{6}{*}{3} & OPUS (bt-big) FT (back MT) 12 & **47.56** & 64.53 & **54.30** & **57.21** & **0.95** & 2.94 \\ & OPUS (bt-big) FT (synthetic) 13 & 40.67 & 57.87 & 60.46 & 49.71 & 0.94 & 2.67 \\ & NLLB 600M FT (back MT) 14 & 43.38 & 60.92 & 54.63 & 52.77 & 0.94 & 2.67 \\ & NLLB 600M FT (synthetic) 15 & 40.77 & 58.26 & 57.48 & 49.44 & 0.94 & 2.85 \\ & ChatGPT-3.5 FT (back MT) & 45.07 & **67.64** & 55.07 & 33.55 & 0.93 & 2.93 \\ & ChatGPT-3.5 FT (synthetic) & 34.67 & 62.93 & 70.29 & 23.03 & 0.91 & 2.77 \\ \hline \hline \multicolumn{8}{l}{FT = Fine-tuned} \\ \end{tabular} \end{table} Table 3: MT evaluation scores and human evaluation for the AR-EN test dataset (1000 pairs). The best scores are in **bold**.

Figure 2: Plotting the performance of the three models across different setups

### Human evaluation In addition to the automatic evaluations reported above, we assessed the quality of our models' translations using human evaluation. To this end, we recruited three native speakers and domain experts (post-graduate students in finance) to rate the acceptability of 50 randomly selected sentences from the test set. Similar to [14], we conducted a bilingual evaluation, whereby the evaluators rated both the original source sentences and the translations generated by the MT models. The human evaluators were asked to rate each sentence on the scale proposed by [65], ranging from 1 to 4 and outlined as follows: * 4 = Ideal: Not necessarily a perfect translation, but grammatically correct, with all information accurately transferred. * 3 = Acceptable: Not perfect (stylistically or grammatically odd), but definitely comprehensible, AND with accurate transfer of all important information. * 2 = Possibly Acceptable: Possibly comprehensible (given enough context and/or time to work it out); some information transferred accurately. * 1 = Unacceptable: Absolutely not comprehensible and/or little or no information is accurately transferred. We first asked the three human evaluators to rate one model's output and then conducted an inter-rater reliability analysis on their ratings. The result of the weighted Cohen's Kappa is X. Then, we asked each rater to rate the models' outputs and to provide a justification when their responses were "Ideal" or "Unacceptable." The mean of the raters' scores was computed for each system, as shown in Table 3. ## 4 Conclusion In this paper, we conducted several experiments to assess the performance of pre-trained NMT models and an LLM, GPT-3.5, using data augmentation in the domain of Arabic financial news articles. Generally, the results obtained from these experiments are very promising. While ChatGPT shows good results using few pairs, the other models need more examples and still show lower performance. We explored the effectiveness of all models using data augmentation in the financial domain and found that the MT quality decreased for all models. Here ChatGPT shows inferior performance, while OPUS still performs better on back-translated data than on synthetic data. Many directions for future work follow from the findings of this study. Firstly, we would like to explore new techniques and methods to enhance the translation outputs beyond the data augmentation approach. Secondly, we think it is valuable to integrate into the comparison more high-performance automatic metrics that take semantics into consideration in a better way than COMET and BERTScore. Finally, we will explore novel approaches to integrate additional models or even incorporate domain-specific models for improved translation performance. ### Acknowledgements This work is supported by a research grant from the Saudi Ministry of Culture.
2303.17940
Per-Example Gradient Regularization Improves Learning Signals from Noisy Data
Gradient regularization, as described in \citet{barrett2021implicit}, is a highly effective technique for promoting flat minima during gradient descent. Empirical evidence suggests that this regularization technique can significantly enhance the robustness of deep learning models against noisy perturbations, while also reducing test error. In this paper, we explore the per-example gradient regularization (PEGR) and present a theoretical analysis that demonstrates its effectiveness in improving both test error and robustness against noise perturbations. Specifically, we adopt a signal-noise data model from \citet{cao2022benign} and show that PEGR can learn signals effectively while suppressing noise. In contrast, standard gradient descent struggles to distinguish the signal from the noise, leading to suboptimal generalization performance. Our analysis reveals that PEGR penalizes the variance of pattern learning, thus effectively suppressing the memorization of noises from the training data. These findings underscore the importance of variance control in deep learning training and offer useful insights for developing more effective training approaches.
Xuran Meng, Yuan Cao, Difan Zou
2023-03-31T10:08:23Z
http://arxiv.org/abs/2303.17940v1
# Per-Example Gradient Regularization Improves Learning Signals from Noisy Data ###### Abstract Gradient regularization, as described in Barrett and Dherin (2021), is a highly effective technique for promoting flat minima during gradient descent. Empirical evidence suggests that this regularization technique can significantly enhance the robustness of deep learning models against noisy perturbations, while also reducing test error. In this paper, we explore the per-example gradient regularization (PEGR) and present a theoretical analysis that demonstrates its effectiveness in improving both test error and robustness against noise perturbations. Specifically, we adopt a signal-noise data model from Cao et al. (2022) and show that PEGR can learn signals effectively while suppressing noise. In contrast, standard gradient descent struggles to distinguish the signal from the noise, leading to suboptimal generalization performance. Our analysis reveals that PEGR penalizes the variance of pattern learning, thus effectively suppressing the memorization of noises from the training data. These findings underscore the importance of variance control in deep learning training and offer useful insights for developing more effective training approaches. ## 1 Introduction Regularization in deep learning refers to a set of techniques aimed at improving the performance of the model (Kukacka et al., 2017). Gradient regularization, a recently proposed regularization technique (Barrett and Dherin, 2021), corresponds to a modified gradient flow driven by \(L(\mathbf{W})+(\lambda/4)\|\nabla L(\mathbf{W})\|^{2}\), which combines the original loss function \(L(\mathbf{W})\) with an implicit regularizer that penalizes the Euclidean norm of the gradient. Barrett and Dherin (2021) analyzed the impact of finite learning rates on the iterates of gradient descent and discovered that such regularization improves test accuracy and is closely tied to sharpness-aware minimization, with some minor differences. It is worth noting that the full-batch gradient norm lacks a stochastic approximation, which means that it cannot be directly integrated with commonly used stochastic gradient descent (SGD) methods (Keskar et al., 2017; Smith et al., 2020). To bridge this gap, Smith et al. (2021) proposed a per-example gradient regularization (PEGR) that replaces the regularization term \(\|\nabla L(\mathbf{W})\|^{2}\) with \(\frac{1}{m}\sum_{i=1}^{m}\|\nabla L_{i}(\mathbf{W})\|^{2}\), where \(\|\nabla L_{i}(\mathbf{W})\|\) denotes the Euclidean norm of the gradient of the loss on the \(i\)-th example in the minibatch. This form of gradient regularization is also connected to sharpness-aware minimization (Foret et al., 2021; Geiping et al., 2021). Surprisingly, it has been empirically observed that this regularization method achieves significantly better generalization performance than full-batch gradient regularization (Andriushchenko and Flammarion, 2022). While PEGR has been observed to perform well, the theory behind why the per-example form should behave differently from full-batch gradient regularization remains unclear. A more fundamental and comprehensive understanding of the gradient dynamics under such regularization is therefore needed. However, understanding gradient regularization in neural networks is challenging, primarily due to nonconvexity: the activation and loss functions make the analysis of the training dynamics a highly nonconvex optimization problem (Dherin et al., 2022).
Previous work has not provided a comprehensive theoretical analysis of the improvement in testing accuracy, and there is no theoretical explanation for the behavior of gradient regularization, especially its role during model training. In this work, we investigate the algorithmic behavior of PEGR. In particular, we present an algorithmic analysis of learning two-layer convolutional neural networks (CNNs) with fixed second-layer parameters of \(+1\)'s and \(-1\)'s and a square ReLU activation function, i.e., \(\sigma(z)=\max\{0,z\}^{2}\). We consider a setting where the input data comprise label-dependent signals and label-independent noises, and utilize a signal-noise decomposition of the CNN filters to precisely characterize how PEGR affects the signal learning and noise memorization during the model training. We then prove a separation between the generalization performances when the model training is performed with and without PEGR. Our paper makes significant contributions in the following ways: 1. We identify that per-example gradient regularization (PEGR) can effectively suppress noise memorization while promoting signal learning in over-parameterized neural networks. Specifically, we demonstrate that when certain conditions are met, CNN models trained using gradient descent with PEGR prioritize learning the signal over memorizing the noise. Furthermore, by appropriately closing the regularization at the right time, these models achieve convergence of training gradient and exhibit lower test error. 2. Additionally, we present a negative result that demonstrates how CNN models trained using gradient descent without PEGR are prone to memorize noise instead of learning the signal. Taken together with our previous finding, this result provides clear evidence of the importance of PEGR in promoting effective learning in over-parameterized neural networks. 3. Our theoretical analysis suggests that the advantages of PEGR in promoting signal learning are most pronounced in the early stages of training. As the network continues to learn and the signal becomes sufficiently strong, we also provide a theoretical framework for determining an appropriate time to close the gradient regularization, which ensures gradient convergence while avoiding over-regularization of the signal. To summarize, Our results demonstrate that PEGR under gradient descent can effectively learn signals while suppressing noise and provide guidance on choosing hyperparameters. The remainder of this paper is organized as follows. First, we provide some additional references and notations below. In Section 2, we introduce the problem settings. Next, in Section 3, we present the theoretical analysis of the efficacy of PEGR under gradient descent. Section 4 shows the experimental results which support our theories. Section 5 provides a brief overview of the proof of the main theorems. Finally, we conclude the paper in Section 6 and discuss some related questions for future investigation. ### Additional related work In this section, we will discuss in detail some of the related work briefly mentioned before. **Sharpness aware minimization.** The study on the connection between sharpness and generalization can be traced back to Hochreiter and Schmidhuber (1997). Keskar et al. (2017) observed a positive correlation between the batch size, the generalization error, and the sharpness of the loss landscape when changing the batch size. Jastrzebski et al. 
(2017) extended this by finding a correlation between the sharpness and the ratio between learning rate to batch size. Jiang et al. (2020) performed a large-scale empirical study on various generalization measures and show that sharpness-based measures have the highest correlation with generalization. Foret et al. (2021) introduced a novel Sharpness-Aware Minimization (SAM) procedure for simultaneously minimizing loss value and loss sharpness to improve model generalization ability, which results in state-of-the-art performance on several benchmark datasets and models, as well as providing robustness to label noise. Zhao et al. (2022) shows that penalizing the gradient norm of the loss function during optimization is a method to improve the generalization performance of deep neural networks, which leads to better flat minima, and achieves state-of-the-art results on various datasets. Wen et al. (2023) clarifies the mechanism and rigorously defines the sharpness notion that Sharpness-Aware Minimization (SAM) regularization technique is based on, revealing the alignment between gradient and top eigenvector of Hessian as the key mechanism behind its effectiveness. **Neural network training techniques.** Besides the gradient regularization we previously discussed, a series of recent works have also studied some other training techniques. Blanc et al. (2020) found that networks trained with perturbed training labels exhibit an implicit regularization term that drives them towards simpler models, regardless of their architecture or activation function. Inspired by Martin and Mahoney (2021)'s study of weight matrix spectra in deep neural networks, Meng and Yao (2023) introduced a spectral criterion to identify the presence of heavy tails, a sign of regularization in DNNs, enabling early stopping of the training process without testing data to avoid overfitting while preserving generalization ability. Zou et al. (2023) provided a theoretical explanation for Mixup's efficacy in improving neural network performance by showing its ability to learn rare features through a signal-noise data model, and suggests early stopping as a useful technique for Mixup training. Allen-Zhu and Li (2023) described a study on how ensemble of independently trained neural networks with the same architecture can improve test accuracy and how this superiority can be distilled into a single model using knowledge distillation, based on a theory that when data has a structure called " multi-view". **Overfitting in over-parameterized regime.** We introduce several works related to the benign/harmful overfitting in over-parameterized regime. Hastie et al. (2019); Wu and Xu (2020) investigated a scenario where the dimension and sample size increase while maintaining a fixed ratio between them, and observed a double descent risk curve in relation to this ratio. Bartlett et al. (2020) established upper and lower risk bounds for the over-parameterized minimum norm interpolator and demonstrated that benign overfitting can occur under certain conditions on the data covariance spectrum. Zou et al. (2021) examined the generalization performance of constant stepsize stochastic gradient descent with iterate averaging or tail averaging in the over-parameterized regime. Mei and Montanari (2022); Meng et al. (2022) studied the generalization of random feature models under the setting where the data dimension, data size, and feature dimension tend to infinity proportionally, and observe the phenomenon of multiple descent. Cao et al. 
(2022) examines benign overfitting in training two-layer CNNs and finds a sharp phase transition between benign and harmful overfitting, determined by the signal-to-noise ratio, where a CNN trained by gradient descent achieves small training and test loss under certain conditions and only achieves a constant-level test loss otherwise. Kou et al. (2023) investigates the phenomenon of benign overfitting in ReLU neural networks and provides algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise, demonstrating a sharp transition between benign and harmful overfitting under different conditions on the data distribution. ### Notation Given two sequences \(x_{n}\) and \(y_{n}\), we denote \(x_{n}=O(y_{n})\) if there exist absolute constants \(C_{1}>0\) and \(N>0\) such that \(|x_{n}|\leq C_{1}|y_{n}|\) for all \(n\geq N\). Similarly, we denote \(x_{n}=\Omega(y_{n})\) if there exist \(C_{2}>0\) and \(N>0\) such that \(|x_{n}|\geq C_{2}|y_{n}|\) for all \(n>N\). We say \(x_{n}=\Theta(y_{n})\) if \(x_{n}=O(y_{n})\) and \(x_{n}=\Omega(y_{n})\) both hold. We use \(\widetilde{O}(\cdot)\), \(\widetilde{\Omega}(\cdot)\), and \(\widetilde{\Theta}(\cdot)\) to hide logarithmic factors in these notations, respectively. Moreover, we denote \(x_{n}=\mathrm{poly}(y_{n})\) if \(x_{n}=O(y_{n}^{D})\) for some positive constant \(D\), and \(x_{n}=\mathrm{polylog}(y_{n})\) if \(x_{n}=O(\log^{D}(y_{n}))\) for some positive constant \(D\). Finally, for two scalars \(a\) and \(b\), we denote \(a\lor b=\max\{a,b\}\). ## 2 Problem Setting In this section, we discuss the data generation model and the convolutional neural network (CNN) that we utilize in our paper. Our research focuses on binary classification. In order to establish the context for our work, we introduce the data distribution \(\mathcal{D}\) that we consider in the following definition. **Definition 2.1** (Data model): _Let \(\mathbf{\mu}\in\mathbb{R}^{d}\) denote a fixed vector representing the signal contained in each data point. Each data point \((\mathbf{x},y)\), where \(\mathbf{x}=[\mathbf{x}^{(1)\top},\mathbf{x}^{(2)\top}]^{\top}\in\mathbb{R}^{2d}\) and \(y\in\{\pm 1\}\), is generated from the following distribution \(\mathcal{D}\):_ 1. _The label_ \(y\) _is generated as a Rademacher random variable._ 2. _A noise vector_ \(\mathbf{\xi}\) _is generated from the Gaussian distribution_ \(\mathcal{N}(\mathbf{0},\sigma_{p}^{2}\cdot(\mathbf{I}-\mathbf{\mu}\mathbf{\mu}^{\top}/\|\mathbf{\mu}\|^{2}))\)_._ 3. _One of_ \(\mathbf{x}^{(1)}\) _or_ \(\mathbf{x}^{(2)}\) _is given as_ \(y\cdot\mathbf{\mu}\)_, which represents the signal, while the other is given by_ \(\mathbf{\xi}\)_, which represents noise._ Similar data models have been widely studied in recent works (Frei et al., 2022; Cao et al., 2022; Shen et al., 2022; Zou et al., 2023a, b). The data generation model draws inspiration from image data, where the input is composed of various patches, with only a subset of these patches related to the image's class label. Specifically, we designate the patch \(y\cdot\mathbf{\mu}\) as the signal patch that correlates with the data's label, while \(\mathbf{\xi}\) represents the noise patch, which is unrelated to the label and therefore irrelevant for prediction.
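A minimal sketch of sampling from this data model is given below (an illustration of Definition 2.1, not the authors' code). Following the notation used later in the paper, the signal patch is always placed first, and the noise is drawn with the stated covariance by projecting out the \(\boldsymbol{\mu}\) direction.

```python
import torch

def sample_data(n, d, mu, sigma_p, generator=None):
    """Sample n points (x, y) from the data model of Definition 2.1.

    Returns X of shape (n, 2d) with the signal patch y*mu first and the noise
    patch second, and labels y in {+1, -1}.  E||xi||^2 is roughly sigma_p^2 * d.
    """
    y = torch.randint(0, 2, (n,), generator=generator) * 2 - 1     # Rademacher labels
    xi = sigma_p * torch.randn(n, d, generator=generator)          # isotropic Gaussian noise
    mu_hat = mu / mu.norm()
    xi = xi - (xi @ mu_hat)[:, None] * mu_hat                      # covariance sigma_p^2 (I - mu mu^T / ||mu||^2)
    signal = y[:, None] * mu                                       # label-dependent signal patch
    return torch.cat([signal, xi], dim=1), y
```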
In order to clearly distinguish the role of gradient regularization in learning signals and noises, we assume that the noise patch is generated from a Gaussian distribution \(\mathcal{N}(\mathbf{0},\sigma_{p}^{2}\cdot(\mathbf{I}-\mathbf{\mu}\mathbf{\mu}^{\top}/\|\mathbf{\mu}\|^{2}))\), which ensures, for simplicity, that the noise vector is orthogonal to the signal vector \(\mathbf{\mu}\). **Two-layer CNN** We investigate a two-layer convolutional neural network that applies its filters to the two patches, \(\mathbf{x}^{(1)}\) and \(\mathbf{x}^{(2)}\), separately. Additionally, we fix the second-layer parameters of the network as \(+1/m\) and \(-1/m\), respectively. Then the network can be written as \(f(\mathbf{W},\mathbf{x})=\sum_{j\in\{\pm 1\}}j\cdot F_{j}(\mathbf{W}_{j},\mathbf{x})\), where \[F_{j}(\mathbf{W}_{j},\mathbf{x})=\frac{1}{m}\sum_{r=1}^{m}\Big(\sigma(\langle\mathbf{w}_{j,r},\mathbf{x}^{(1)}\rangle)+\sigma(\langle\mathbf{w}_{j,r},\mathbf{x}^{(2)}\rangle)\Big).\] Here, \(\sigma(z)=\mathtt{ReLU}^{2}(z)\) is the activation function; \(\mathbf{w}_{j,r}\), \(j\in\{\pm 1\}\) and \(r\in[m]\), are the first-layer parameter vectors with positive and negative second-layer parameters, respectively; and \(\mathbf{x}=[\mathbf{x}^{(1)\top},\mathbf{x}^{(2)\top}]^{\top}\in\mathbb{R}^{2d}\). Note that polynomial activations are commonly used in studying feature learning in deep learning models; our analysis can also be applied to \(\sigma(z)=\mathtt{ReLU}^{q}(z)\) with \(q>2\) with some additional treatment. We denote by \(\mathbf{W}\in\mathbb{R}^{2m\times d}\) the collection of all \(\mathbf{w}_{j,r}\), \(j\in\{\pm 1\}\) and \(r\in[m]\). For each data point \(\mathbf{x}_{i}=[y_{i}\boldsymbol{\mu}^{\top},\boldsymbol{\xi}_{i}^{\top}]^{\top}\), \(\boldsymbol{\mu}\) is the signal and \(\boldsymbol{\xi}_{i}\) is the noise. Given \(n\) training data points \((\mathbf{x}_{i},y_{i})\), \(i\in[n]\), we define the empirical cross-entropy loss as \[L_{S}(\mathbf{W})=\frac{1}{n}\sum_{i=1}^{n}\ell[y_{i}\cdot f(\mathbf{W},\mathbf{x}_{i})],\] where \(\ell(z)=\log(1+\exp(-z))\). The loss function with gradient regularization is then given by \[\widetilde{L}(\mathbf{W})=L_{S}(\mathbf{W})+\frac{\lambda}{2n}\cdot\sum_{i=1}^{n}\|\nabla_{\mathbf{W}}\ell[y_{i}\cdot f(\mathbf{W},\mathbf{x}_{i})]\|_{F}^{2},\] where \(\lambda\) is the regularization parameter. It is worth noting that the gradient regularization is calculated by averaging the per-example gradient norms over the training data points, which can also be understood as a kind of variance-control penalty over the training data. We consider training \(f(\mathbf{W},\mathbf{x})\) by minimizing \(\widetilde{L}(\mathbf{W})\) with gradient descent \(\mathbf{w}_{j,r}^{(t+1)}=\mathbf{w}_{j,r}^{(t)}-\eta\nabla_{\mathbf{w}_{j,r}^{(t)}}\widetilde{L}(\mathbf{W})\), and we are particularly interested in analyzing the test error differences between the case with gradient regularization (\(\lambda>0\)) and the case without regularization (\(\lambda=0\)). However, note that a direct comparison between methods with and without regularization may not be fair. Therefore we compare the following two cases: * Implementing gradient regularization in the first phase of training: with some appropriately chosen \(T_{1}\), we use \(\lambda>0\) for iterations \(0\leq t\leq T_{1}\), and set \(\lambda=0\) for iterations \(t>T_{1}\). * Using no regularization throughout training: we use \(\lambda=0\) for all iterations \(t\geq 0\).
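Both training modes optimize this same objective (with \(\lambda\) set to zero in the second case). A minimal PyTorch sketch of the CNN and of \(\widetilde{L}(\mathbf{W})\) is given below; it is an illustration of the definitions above, not the authors' implementation, and the per-example gradients are obtained with a simple (slow) loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerCNN(nn.Module):
    """f(W, x) = F_{+1}(W_{+1}, x) - F_{-1}(W_{-1}, x) with sigma(z) = ReLU(z)^2."""
    def __init__(self, d, m, sigma0=0.01):
        super().__init__()
        self.W_pos = nn.Parameter(sigma0 * torch.randn(m, d))   # filters with second-layer weight +1/m
        self.W_neg = nn.Parameter(sigma0 * torch.randn(m, d))   # filters with second-layer weight -1/m

    def forward(self, x):                                        # x: (2d,) -> scalar
        x1, x2 = x.chunk(2)
        def branch(W):                                           # (1/m) * sum_r [sigma(<w_r,x1>) + sigma(<w_r,x2>)]
            return (torch.relu(W @ x1) ** 2 + torch.relu(W @ x2) ** 2).mean()
        return branch(self.W_pos) - branch(self.W_neg)

def pegr_loss(model, X, y, lam):
    """L_S(W) + (lam / 2n) * sum_i ||grad_W l(y_i f(W, x_i))||_F^2."""
    params = [p for p in model.parameters() if p.requires_grad]
    total, penalty = 0.0, 0.0
    for xi, yi in zip(X, y):
        li = F.softplus(-yi * model(xi))                         # l(z) = log(1 + exp(-z))
        gi = torch.autograd.grad(li, params, create_graph=True)  # per-example gradient
        penalty = penalty + sum((g ** 2).sum() for g in gi)
        total = total + li
    n = len(X)
    return total / n + lam / (2 * n) * penalty

# one full-batch gradient-descent step on the regularized objective
model = TwoLayerCNN(d=400, m=10)
X = torch.randn(20, 800)                                         # placeholder batch of (2d,)-vectors
y = torch.randint(0, 2, (20,)) * 2 - 1
loss = pegr_loss(model, X, y, lam=0.01)
loss.backward()
with torch.no_grad():
    for p in model.parameters():
        p -= 0.02 * p.grad
```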
We aim to show that both methods above can minimize the training loss \(L_{S}(\mathbf{W})\) while achieving different prediction accuracies on test data. Note that we consider gradient descent starting from Gaussian initialization, where each entry of \(\mathbf{W}_{+1}\) and \(\mathbf{W}_{-1}\) is sampled from a Gaussian distribution \(\mathcal{N}(0,\sigma_{0}^{2})\), with \(\sigma_{0}^{2}\) the variance. ## 3 Main Results In this section, we present our main theoretical results, which rely on a signal-noise decomposition of the filters in the CNN trained by gradient descent. Our results are based on the following conditions on the dimension \(d\), sample size \(n\), neural network width \(m\), learning rate \(\eta\), and initialization scale \(\sigma_{0}\). **Condition 3.1**: _Define a small constant \(0<\alpha<0.001\). Suppose that_ 1. _Dimension_ \(d\) _is large:_ \(d\geq\widetilde{\Omega}(m^{2}n^{(2+2\alpha)})\)_._ 2. _The signal and noise levels satisfy_ \((\|\boldsymbol{\mu}\|+\|\boldsymbol{\mu}\|^{4})\ll\sigma_{p}\sqrt{d}\) _and_ \(\sigma_{p}\sqrt{d}\to+\infty\)_._ 3. _The standard deviation of the Gaussian initialization_ \(\sigma_{0}\) _is sufficiently small. We assume_ \(\sigma_{0}\leq O\Big(\frac{1}{\sigma_{p}^{2}d\cdot(nm)^{2\alpha}}\Big)\)_._ 4. _The learning rate_ \(\eta=\widetilde{O}\big(\frac{nm}{\sigma_{p}^{2}d}\big)\)_._ 5. _The training sample size_ \(n\) _and neural network width_ \(m\) _satisfy_ \(m,n=\Omega(\mathrm{polylog}(d,\sigma_{0}^{-1}))\)_._ We set \(\lambda=\sigma_{p}^{-1}d^{-1/2}\). Below, we provide some remarks on Condition 3.1. The condition on \(d\) ensures that the learning takes place in a sufficiently over-parameterized setting, and similar conditions have been imposed in (Chatterji and Long, 2021; Cao et al., 2021). Additionally, the condition on the signal and noise levels in the data assumes that the signal level is considerably smaller than the noise level. This is to ensure that we are focusing on a relatively difficult learning problem for which learning methods without gradient regularization may fail. Moreover, the conditions on \(\sigma_{0}\) and \(\eta\) are technical assumptions we make to ensure the convergence of gradient descent. Finally, we only require that the sample size \(n\) and neural network width \(m\) be at least \(\mathrm{polylog}(d,\sigma_{0}^{-1})\), which are very mild assumptions. **Theorem 3.2**: _Consider implementing gradient regularization in the first phase of training. Specifically, set \(\lambda=\sigma_{p}^{-1}d^{-1/2}\) for iterations \(0\leq t\leq T_{1}\) and \(\lambda=0\) for iterations \(t>T_{1}\), where \(T_{1}=\frac{m}{\eta\|\boldsymbol{\mu}\|^{2}}\log\Big(\frac{4}{\sqrt{2\log(8m/\delta)\sigma_{0}\|\boldsymbol{\mu}\|\cdot\log{(n)}}}\Big)\). Then under Condition 3.1, for any \(\varepsilon>0\), there exists \(T_{1}\leq t\leq T_{1}+\widetilde{\Omega}(\frac{2nm\sigma_{p}^{2}d}{\eta\varepsilon\|\boldsymbol{\mu}\|^{2}})\) such that:_ 1. _The training loss and its gradient converge below_ \(\varepsilon\)_:_ \(L_{S}(\mathbf{W}^{(t)})\leq\varepsilon,\|\nabla_{\mathbf{W}}L_{S}(\mathbf{W})|_{\mathbf{W}=\mathbf{W}^{(t)}}\|_{F}^{2}\leq\varepsilon\)_._ 2. _The test error converges to_ \(0\)_: for any new data_ \((\mathbf{x},y)\)_,_ \(\mathbb{P}(yf(\mathbf{W}^{(t)},\mathbf{x})<0)\leq\frac{1}{\mathrm{poly}(n)}\)_._ Theorem 3.2 characterizes the case of signal learning.
It shows that if we add gradient regularization at the beginning of training and turn it off at an appropriate time, the neural network can learn the signal and then achieve a small test error and a small training gradient. To demonstrate the benefits of gradient regularization, we also present a theorem for the case in which there is no gradient regularization during the whole training process. **Theorem 3.3**: _Consider using no regularization throughout training, i.e., \(\lambda=0\) for all \(t\geq 0\). Then under Condition 3.1, for any \(\varepsilon>0\), there exists \(0\leq t\leq\widetilde{O}\big(\frac{nm}{\eta\sigma_{p}^{2}d}+\eta^{-1}\varepsilon^{-1}m^{3}n\big)\) such that:_ 1. _The training loss and its gradient converge below_ \(\varepsilon\)_:_ \(L_{S}(\mathbf{W}^{(t)})\leq\varepsilon,\|\nabla_{\mathbf{W}}L_{S}(\mathbf{W})|_{\mathbf{W}=\mathbf{W}^{(t)}}\|_{F}^{2}\leq\varepsilon\)_._ 2. _The test error is large: for any new data_ \((\mathbf{x},y)\)_,_ \(\mathbb{P}(yf(\mathbf{W}^{(t)},\mathbf{x})<0)\geq\frac{1}{2.01}\)_._ Clearly, Theorem 3.3 is the case without gradient regularization. In this case, the CNN trained by gradient descent mainly memorizes the noise in the training data and does not learn enough of the signal. This, together with Theorem 3.2, gives a clear statement: * By modifying the learning algorithm, per-example gradient regularization can improve the learning of significant patterns within the data (often referred to as the "signal") while simultaneously discouraging the memorization of irrelevant or random variation (known as "noise"). This approach enhances the model's ability to extract relevant features and generalize well to new data. In this study, we examine per-example gradient regularization (PEGR), which calculates the gradient of the loss function for each training data point. The full version of gradient regularization (FGR) considers interactions between different training data points, which can make the dynamics more complex. We show in the experiments that full gradient regularization is unable to improve signal learning while preventing the memorization of noise. **Remark 3.4**: _Based on the analysis in Section 5.1, we see that the addition of PEGR to the loss function gives a suppressing term for both noise memorization and signal learning. This suppression term is particularly effective in the presence of high levels of noise, reducing the impact of noise memorization. Conversely, the subtracted term in signal learning is relatively small, allowing signal learning to progress without significant interference._ ## 4 Experiments In this section, we conduct numerical experiments and real-data experiments, in Subsections 4.1 and 4.2, respectively. Both show that PEGR improves the test accuracy. ### Numerical experiments In this section, we perform numerical experiments on several synthetic data sets, which take different values of \(\sigma_{p}\), to verify our theoretical results. The synthetic data is generated according to Definition 2.1. In particular, we set \(\left\|\boldsymbol{\mu}\right\|^{2}=1\), data dimension \(d=400\), neural network width \(m=10\), and training data size \(n=20\). We train on all the data sets for a total of 1500 epochs with learning rate \(\eta=0.02\). In the case of Theorem 3.2, we turn off PEGR at epoch 800. The tuning parameter \(\lambda\) is set to \(0.01\). Note that the value of \(\sigma_{0}\) we set is based on the value of \(\sigma_{p}\).
When \(\sigma_{p}=0.5\) or \(\sigma_{p}=1\), we set \(\sigma_{0}=0.01\); when \(\sigma_{p}=1.5\), we set \(\sigma_{0}=0.001\). We present the results of our experiments in Figures 1-4. We compare the performance of per-example gradient regularization (PEGR), full gradient regularization (FGR), and standard training through numerical experiments. FGR is represented by the following formula: \[\widehat{L}(\mathbf{W})=L_{S}(\mathbf{W})+\lambda\bigg\|\frac{1}{n}\sum_{i=1}^{n}\nabla_{\mathbf{W}}\ell(y_{i}\cdot f(\mathbf{W},\mathbf{x}_{i}))\bigg\|_{F}^{2}.\] Standard training corresponds to \(\lambda=0\). **Training Loss:** The training loss is denoted by \(L_{S}(\mathbf{W}^{(t)})\). It is easy to observe that when training begins, the training loss is approximately \(\log(2)\approx 0.69\). Theorems 3.2 and 3.3 prove the convergence of the training loss. As depicted in Figure 1, once the regularization is removed, all training losses converge to \(0\). Figure 1: Training loss under different algorithms. (a) gives the training loss curve under per-example gradient regularization; (b) gives the training loss curve under full gradient regularization; (c) gives the training loss curve under standard training. **Signal Learning:** We define signal learning as \(\text{Signal}=\max_{j,r}|\langle\mathbf{w}_{j,r},\boldsymbol{\mu}\rangle|.\) The results shown in Figure 2 demonstrate that PEGR effectively promotes signal learning, while FGR and standard training fail to achieve the same results. It is important to note that when \(\sigma_{p}=0.1\), signal learning increases in all cases due to the lower level of noise compared to \(\sigma_{p}=0.5,1,1.5\). In scenarios where the data SNR is high, standard training or FGR can effectively learn the signal. Additionally, it is worth noting that PEGR does not suppress the signal even when it is strong. Figure 2: Signal learning, where the \(y\)-axis is \(\max_{j,r}|\langle\mathbf{w}_{j,r},\boldsymbol{\mu}\rangle|\), under different algorithms. (a) gives the signal learning curve under per-example gradient regularization; (b) gives the signal learning curve under full gradient regularization; (c) gives the signal learning curve under standard training. **Noise Memorization:** We define noise memorization as \(\text{Noise}=\max_{j,r,i}\langle\mathbf{w}_{j,r},\boldsymbol{\xi}_{i}\rangle.\) The findings depicted in Figure 3 demonstrate that PEGR is highly effective at suppressing noise memorization, while FGR and standard training fail to produce the same outcomes. It is noteworthy that when \(\sigma_{p}=0.1\), noise memorization increases in all cases, likely due to the low level of noise. Nevertheless, noise memorization does not impede signal learning when \(\sigma_{p}=0.1\). In situations where the data SNR is low, FGR or standard training may not be capable of effectively learning the signal. In such scenarios, PEGR suppresses noise and facilitates signal learning. **Test Accuracy:** PEGR undoubtedly achieves the highest test accuracy by enhancing signal learning and suppressing noise memorization. Figure 4 shows that the test accuracies under PEGR tend to 1 for the different values of \(\sigma_{p}\), as proven in Theorems 3.2 and 3.3. It is worth noting that, while all algorithms have a test accuracy tending to 1 when \(\sigma_{p}=0.1\) due to the high SNR, only PEGR has the test accuracy tend to 1 in all settings when compared with FGR and standard training.
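For reference, the two quantities tracked above can be read off directly from the filters and the training noise; a minimal sketch is given below (the input shapes are assumptions about how the experiment is organized).

```python
import torch

def tracked_quantities(W_pos, W_neg, mu, Xi):
    """Signal = max_{j,r} |<w_{j,r}, mu>|,  Noise = max_{j,r,i} <w_{j,r}, xi_i>.

    W_pos, W_neg: (m, d) filter matrices; mu: (d,) signal; Xi: (n, d) noise patches.
    """
    W = torch.cat([W_pos, W_neg], dim=0)      # all 2m filters
    signal = (W @ mu).abs().max().item()
    noise = (W @ Xi.T).max().item()
    return signal, noise
```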
In summary, the numerical experiments show that per-example gradient regularization (PEGR) enhances signal learning while suppressing noise memorization compared to full gradient regularization (FGR) and standard training. Furthermore, the training loss of all methods converges to zero. Importantly, the test accuracy achieved by PEGR is significantly higher than the test accuracy obtained by the other algorithms. These findings provide strong empirical evidence in support of our theoretical results, indicating that PEGR holds promise as an effective approach for improving learning performance in neural networks. Figure 3: Noise memorization, where the \(y\)-axis is \(\max_{j,r,i}\langle\mathbf{w}_{j,r},\boldsymbol{\xi}_{i}\rangle\), under different algorithms. (a) gives the noise memorization curve under per-example gradient regularization; (b) gives the noise memorization curve under full gradient regularization; (c) gives the noise memorization curve under standard training. Figure 4: Test accuracy under different algorithms. (a) gives the test accuracy curve under per-example gradient regularization; (b) gives the test accuracy curve under full gradient regularization; (c) gives the test accuracy curve under standard training. ### Real data experiments In this section, we report real-data experiments conducted on the MNIST dataset using LeNet-5 as the model architecture, to assess the effectiveness of PEGR in suppressing noise memorization and enhancing signal learning. To achieve this, we added zero-mean noise to the training data and varied its standard deviation \(\sigma_{p}\) over 0, 5, 10, and 15. The test data was kept free of noise in order to evaluate how well the signal was learned. Our experiments were conducted with the following parameters: a learning rate \(\eta\) of 0.01, initialization \(\sigma_{0}\) of 0.1, batch size of 64, PEGR turned off after 25 epochs, and a tuning parameter \(\lambda=0.1\). The training loss and test accuracy results are presented below. **Training loss:** As depicted in Figure 5, the training loss for both the PEGR and standard training approaches shows a consistent decrease with increasing iterations, implying that the training process is stable and allowing us to evaluate the efficacy of PEGR in real-data training. The observed stable training status allows for a reliable comparison of the two approaches and enables us to assess the potential of PEGR for enhancing signal learning in real-world data sets. Figure 5: Training loss under different \(\sigma_{p}\). (a) gives the training loss curve under \(\sigma_{p}=0\); (b) gives the training loss curve under \(\sigma_{p}=5\); (c) gives the training loss curve under \(\sigma_{p}=10\); (d) gives the training loss curve under \(\sigma_{p}=15\). Figure 6: Test accuracy under different \(\sigma_{p}\). (a) gives the test accuracy curve under \(\sigma_{p}=0\); (b) gives the test accuracy curve under \(\sigma_{p}=5\); (c) gives the test accuracy curve under \(\sigma_{p}=10\); (d) gives the test accuracy curve under \(\sigma_{p}=15\). **Test accuracy:** As shown in Figure 6, PEGR demonstrates its ability to enhance signal learning during real-data training by yielding higher test accuracy than standard training under the different values of \(\sigma_{p}\). The decrease in test accuracy for both approaches
as the value of \(\sigma_{p}\) increases indicates that noise obscures the signal, but PEGR is still able to distinguish the signal from the noise and gradually increase the test accuracy. Moreover, as shown in Figure 6 (a), PEGR does not prevent signal learning when there is no noise. Hence, these results suggest that PEGR can be effectively applied to improve the generalization of deep neural networks in real-world settings where training data sets contain significant noise. In conclusion, our real-data experiments confirm the effectiveness of PEGR in enhancing signal learning. As the level of noise in the training data increases, both the PEGR and standard training approaches achieve a small training loss. However, PEGR is able to separate the signal from the noise better than standard training, resulting in higher test accuracy. These results, combined with our theoretical guarantees in Theorems 3.2 and 3.3, suggest that the PEGR technique can be widely applied when training data sets contain significant noise and the signal is obscured. ## 5 Overview of Proof Technique In this section, we present the main techniques used in the study of CNN training under our setting. The maximum number of admissible iterations we set in the paper is \(T^{*}=\eta^{-1}\text{poly}(n,m,d,\varepsilon^{-1},\sigma_{0}^{-1},\sigma_{p}\sqrt{d})\). The complete proofs of all the results are given in the appendix. ### Why PEGR works better than standard training In this section, we give the expression for the gradient and briefly discuss why PEGR works better than standard training. We further denote \(\zeta_{i}^{(t)}=\sum_{j^{\prime}}\sum_{r^{\prime}=1}^{m}\big(\texttt{ReLU}^{2}(\langle\mathbf{w}_{j^{\prime},r^{\prime}}^{(t)},y_{i}\boldsymbol{\mu}\rangle)\|\boldsymbol{\mu}\|^{2}+\texttt{ReLU}^{2}(\langle\mathbf{w}_{j^{\prime},r^{\prime}}^{(t)},\boldsymbol{\xi}_{i}\rangle)\|\boldsymbol{\xi}_{i}\|^{2}\big)\), \(\ell_{i}^{\prime(t)}=\ell^{\prime}[y_{i}\cdot f(\mathbf{W}^{(t)},\mathbf{x}_{i})]\), and \(\ell_{i}^{\prime\prime(t)}=\ell^{\prime\prime}[y_{i}\cdot f(\mathbf{W}^{(t)},\mathbf{x}_{i})]\), and state the update rule in the following lemma. **Lemma 5.1**: _Given \(\widetilde{L}(\mathbf{W})\) above, we have_ \[\mathbf{w}_{j,r}^{(t+1)} =\mathbf{w}_{j,r}^{(t)}-\frac{2\eta}{nm}\sum_{i=1}^{n}\ell_{i}^{\prime(t)}\bigg(1+\frac{4\lambda\zeta_{i}^{(t)}}{m}\cdot\ell_{i}^{\prime\prime(t)}\bigg)\cdot jy_{i}\big(\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},y_{i}\boldsymbol{\mu}\rangle)y_{i}\boldsymbol{\mu}+\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{i}\rangle)\boldsymbol{\xi}_{i}\big)\] \[\quad-\frac{4\lambda\eta}{nm^{2}}\sum_{i=1}^{n}\ell_{i}^{\prime 2(t)}\big(\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},y_{i}\boldsymbol{\mu}\rangle)\|\boldsymbol{\mu}\|^{2}y_{i}\boldsymbol{\mu}+\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{i}\rangle)\|\boldsymbol{\xi}_{i}\|^{2}\boldsymbol{\xi}_{i}\big).\] The proof of Lemma 5.1 can be found in Appendix A.1. With the update rule established, we can now proceed to analyze the disparity in test error between the cases with and without gradient regularization. Using Lemma 5.1, we can observe how per-example gradient regularization affects the learning of both the noise and the features. Specifically, in the early stages of training where \(\|\mathbf{w}_{j,r}^{(t)}\|\ll 1\), we can ignore the high-order terms \(\zeta_{i}^{(t)}\), and we can approximate \(\ell_{i}^{\prime(t)}\approx-0.5\) and \(\ell_{i}^{\prime 2(t)}\approx 1/4\).
Moreover, we assume that \(\boldsymbol{\xi}_{i}\) is nearly orthogonal to all other noise vectors. These simplifications allow us to discuss the training with PEGR and the standard training method separately. In standard training, we set \(\lambda=0\), and the update rule becomes \[\mathbf{w}_{j,r}^{(t+1)}=\mathbf{w}_{j,r}^{(t)}-\frac{2\eta}{nm}\sum_{i=1}^{n}\ell_{i}^{\prime(t)}\big{(}\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},y_{i}\boldsymbol{\mu}\rangle)j\boldsymbol{\mu}+\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{i}\rangle)jy_{i}\boldsymbol{\xi}_{i}\big{)}.\] If the noise is significantly stronger than the signal, the signal and noise can grow at rates of \(\Theta(\eta\|\boldsymbol{\mu}\|^{2})\) and \(\Theta(\eta\|\boldsymbol{\xi}_{i}\|^{2}/n)\) respectively, indicating a faster rate of noise memorization. As training progresses, the noise memorization grows significantly faster than the signal learning, leading to a rapid decrease in the loss function. Consequently, the signal learning will no longer improve despite the small loss. Similarly, in PEGR training we also ignore the higher-order terms \(\zeta_{i}^{(t)}\), and the update rule can be approximated by \[\mathbf{w}_{j,r}^{(t+1)} \approx\mathbf{w}_{j,r}^{(t)}-\frac{2\eta}{nm}\sum_{i=1}^{n}\ell_{i}^{\prime(t)}\big{(}\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},y_{i}\boldsymbol{\mu}\rangle)j\boldsymbol{\mu}+\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{i}\rangle)jy_{i}\boldsymbol{\xi}_{i}\big{)}\] \[-\frac{4\lambda\eta}{nm^{2}}\sum_{i=1}^{n}\ell_{i}^{\prime 2(t)}\big{(}\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},y_{i}\boldsymbol{\mu}\rangle)\|\boldsymbol{\mu}\|^{2}y_{i}\boldsymbol{\mu}+\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{i}\rangle)\|\boldsymbol{\xi}_{i}\|^{2}\boldsymbol{\xi}_{i}\big{)}.\] The per-example gradient regularization suppresses the learning of both features and noise, driving them back to zero at suppression speeds of approximately \(\lambda\cdot\Theta(\eta\|\boldsymbol{\mu}\|^{4})\) and \(\lambda\cdot\Theta(\eta\|\boldsymbol{\xi}_{i}\|^{4}/n)\), respectively. Recalling the growth rates \(\Theta(\eta\|\boldsymbol{\mu}\|^{2})\) and \(\Theta(\eta\|\boldsymbol{\xi}_{i}\|^{2}/n)\), we can tune \(\lambda\) (for instance \(\lambda\asymp 1/\|\boldsymbol{\xi}_{i}\|\)) to make the suppression speeds \(\Theta(\eta\|\boldsymbol{\mu}\|^{4}/\|\boldsymbol{\xi}_{i}\|)\) and \(\Theta(\eta\|\boldsymbol{\xi}_{i}\|^{3}/n)\), which indicates that the regularization imposes a stronger suppression on noise memorization while still promoting signal learning. The analysis presented above suggests that feature learning can outperform noise memorization when the signal-to-noise ratio (SNR) is small. In contrast, standard training is likely to fail in such a low-SNR regime. By appropriately adjusting the regularization parameter, we can enhance the suppression of noise memorization and promote the learning of features, which can lead to improved performance in low-SNR scenarios. In fact, the reason behind this is that PEGR implicitly controls the variance of the gradients across different training data points. As a consequence, the network is encouraged to make use of the **signal component** rather than the **noise component** to fit the training data, since the signal component, which is shared by all data in the same class, has a significantly smaller variance than that of the noise components, which vary for different data. ### Analysis within the maximum admissible iterations In this section, we give the analysis within the maximum number of admissible iterations \(T^{*}\).
Since the vectors \(\boldsymbol{\mu}\) and \(\boldsymbol{\xi}_{i}\), where \(i\in[n]\), are linearly independent with probability \(1\), we can introduce the following definition based on the gradient descent update rule in Lemma 5.1. **Definition 5.2**: _Let \(\mathbf{w}_{j,r}^{(t)}\) for \(j\in\{\pm 1\}\), \(r\in[m]\) be the convolution filter of the CNN at the \(t\)-th iteration of gradient descent. Then there exist unique coefficients \(\gamma_{j,r}^{(t)}\) and \(\rho_{j,r,i}^{(t)}\) such that_ \[\mathbf{w}_{j,r}^{(t)}=\mathbf{w}_{j,r}^{(0)}+j\cdot\gamma_{j,r}^{(t)}\frac{\boldsymbol{\mu}}{\|\boldsymbol{\mu}\|^{2}}+\sum_{i=1}^{n}\rho_{j,r,i}^{(t)}\frac{\boldsymbol{\xi}_{i}}{\|\boldsymbol{\xi}_{i}\|^{2}}. \tag{5.1}\] It is easy to see that \(\gamma_{j,r}^{(0)}=\rho_{j,r,i}^{(0)}=0\). We refer to (5.1) as the signal-noise decomposition of \(\mathbf{w}_{j,r}^{(t)}\). The uniqueness can be found in the proof of Lemma A.2 in Appendix A. In this decomposition, \(\gamma_{j,r}^{(t)}\) characterizes the progress of learning the signal vector \(\boldsymbol{\mu}\), and \(\rho_{j,r,i}^{(t)}\) depicts the degree of noise memorization by the filter. Based on the decomposition, if some \(\gamma^{(t)}_{j,r}\) is large enough while \(|\rho^{(t)}_{j,r,i}|\) is small, then the CNN will achieve a small training gradient and a small test error; if instead some \(|\rho^{(t)}_{j,r,i}|\) is large enough while all \(\gamma^{(t)}_{j,r}\) are small, then the CNN will achieve a small training gradient but a large test error. Thus, Definition 5.2 provides a method for us to study the convergence of the training procedure and the property of the test error with or without gradient regularization under SGD. Note that the dynamic analysis is related to a non-convex optimization problem, and the key technique is the signal-noise decomposition in Definition 5.2, which is further investigated in the following lemma. **Lemma 5.3**: _The coefficients \(\gamma^{(t)}_{j,r},\rho^{(t)}_{j,r,i}\) in Definition 5.2 satisfy the following equations:_ \[\zeta^{(t)}_{i} =\sum_{j^{\prime}}\sum_{r^{\prime}=1}^{m}\big{(}\texttt{ReLU}^{2}(\langle\mathbf{w}^{(t)}_{j^{\prime},r^{\prime}},y_{i}\boldsymbol{\mu}\rangle)\|\boldsymbol{\mu}\|^{2}+\texttt{ReLU}^{2}(\langle\mathbf{w}^{(t)}_{j^{\prime},r^{\prime}},\boldsymbol{\xi}_{i}\rangle)\|\boldsymbol{\xi}_{i}\|^{2}\big{)},\] \[\gamma^{(t+1)}_{j,r} =\gamma^{(t)}_{j,r}-\frac{2\eta}{nm}\sum_{i=1}^{n}\bigg{[}\ell^{\prime(t)}_{i}\bigg{(}1+\frac{4\lambda\zeta^{(t)}_{i}}{m}\cdot\ell^{\prime\prime(t)}_{i}\bigg{)}\texttt{ReLU}(\langle\mathbf{w}^{(t)}_{j,r},y_{i}\cdot\boldsymbol{\mu}\rangle)\|\boldsymbol{\mu}\|^{2}\] \[\quad+\frac{2\lambda}{m}\ell^{\prime 2(t)}_{i}\texttt{ReLU}(\langle\mathbf{w}^{(t)}_{j,r},y_{i}\cdot\boldsymbol{\mu}\rangle)\|\boldsymbol{\mu}\|^{4}\cdot jy_{i}\bigg{]},\] \[\rho^{(t+1)}_{j,r,i} =\rho^{(t)}_{j,r,i}-\frac{2\eta}{nm}\bigg{[}\ell^{\prime(t)}_{i}\bigg{(}1+\frac{4\lambda\zeta^{(t)}_{i}}{m}\cdot\ell^{\prime\prime(t)}_{i}\bigg{)}\texttt{ReLU}(\langle\mathbf{w}^{(t)}_{j,r},\boldsymbol{\xi}_{i}\rangle)\|\boldsymbol{\xi}_{i}\|^{2}\cdot jy_{i}\] \[\quad+\frac{2\lambda}{m}\ell^{\prime 2(t)}_{i}\texttt{ReLU}(\langle\mathbf{w}^{(t)}_{j,r},\boldsymbol{\xi}_{i}\rangle)\|\boldsymbol{\xi}_{i}\|^{4}\bigg{]},\] _with the initialization condition \(\gamma^{(0)}_{j,r}=\rho^{(0)}_{j,r,i}=0\)._ Lemma 5.3 gives the update rule of the coefficients \(\gamma^{(t)}_{j,r}\) and \(\rho^{(t)}_{j,r,i}\), which enables us to further analyze the training process.
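For intuition, the coefficients in Definition 5.2 can be recovered numerically from a trained filter by projecting onto the linearly independent vectors \(\boldsymbol{\mu},\boldsymbol{\xi}_{1},\dots,\boldsymbol{\xi}_{n}\); this is also how the signal-learning and noise-memorization curves of Section 4 could be monitored. The following is a small illustrative sketch (our code; the sign \(j\) is absorbed into the reported signal coefficient).

```python
import numpy as np

def signal_noise_coefficients(w_t, w_0, mu, xis):
    # Solve  w_t - w_0 = c_0 * mu/||mu||^2 + sum_i c_i * xi_i/||xi_i||^2
    # for the coefficients; since mu and the xi_i are linearly independent,
    # the least-squares solution is exact.  Here c_0 corresponds to j*gamma
    # and c_i to rho_i in the decomposition (5.1).
    basis = np.column_stack(
        [mu / np.linalg.norm(mu) ** 2]
        + [xi / np.linalg.norm(xi) ** 2 for xi in xis]
    )
    coeffs, *_ = np.linalg.lstsq(basis, w_t - w_0, rcond=None)
    return coeffs[0], coeffs[1:]   # (signal coefficient, noise coefficients)
```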
We further define \[\overline{\rho}^{(t)}_{j,r,i}=\rho^{(t)}_{j,r,i}\mathbf{1}\{j=y_{i}\},\quad\underline{\rho}^{(t)}_{j,r,i}=\rho^{(t)}_{j,r,i}\mathbf{1}\{j\neq y_{i}\}.\] Our proof then focuses on a careful assessment of \(\gamma^{(t)}_{j,r}\), \(\overline{\rho}^{(t)}_{j,r,i}\), and \(\underline{\rho}^{(t)}_{j,r,i}\) throughout training. To prepare for the more detailed proof, we give the update rule of \(\gamma^{(t)}_{j,r}\), \(\overline{\rho}^{(t)}_{j,r,i}\), and \(\underline{\rho}^{(t)}_{j,r,i}\) with \(\lambda=0\). \[\gamma^{(t+1)}_{j,r} =\gamma^{(t)}_{j,r}-\frac{2\eta}{nm}\sum_{i=1}^{n}\ell^{\prime(t)}_{i}\texttt{ReLU}\bigg{(}\langle\mathbf{w}^{(0)}_{j,r},y_{i}\boldsymbol{\mu}\rangle+jy_{i}\cdot\gamma^{(t)}_{j,r}\bigg{)}\|\boldsymbol{\mu}\|^{2}, \tag{5.2}\] \[\overline{\rho}^{(t+1)}_{j,r,i} =\overline{\rho}^{(t)}_{j,r,i}-\frac{2\eta}{nm}\ell^{\prime(t)}_{i}\texttt{ReLU}\bigg{(}\langle\mathbf{w}^{(0)}_{j,r},\boldsymbol{\xi}_{i}\rangle+\sum_{i^{\prime}=1}^{n}\overline{\rho}^{(t)}_{j,r,i^{\prime}}\frac{\langle\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{i^{\prime}}\rangle}{\|\boldsymbol{\xi}_{i^{\prime}}\|^{2}}+\sum_{i^{\prime}=1}^{n}\underline{\rho}^{(t)}_{j,r,i^{\prime}}\frac{\langle\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{i^{\prime}}\rangle}{\|\boldsymbol{\xi}_{i^{\prime}}\|^{2}}\bigg{)}\|\boldsymbol{\xi}_{i}\|^{2},\] \[\underline{\rho}^{(t+1)}_{j,r,i} =\underline{\rho}^{(t)}_{j,r,i}+\frac{2\eta}{nm}\ell^{\prime(t)}_{i}\texttt{ReLU}\bigg{(}\langle\mathbf{w}^{(0)}_{j,r},\boldsymbol{\xi}_{i}\rangle+\sum_{i^{\prime}=1}^{n}\overline{\rho}^{(t)}_{j,r,i^{\prime}}\frac{\langle\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{i^{\prime}}\rangle}{\|\boldsymbol{\xi}_{i^{\prime}}\|^{2}}+\sum_{i^{\prime}=1}^{n}\underline{\rho}^{(t)}_{j,r,i^{\prime}}\frac{\langle\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{i^{\prime}}\rangle}{\|\boldsymbol{\xi}_{i^{\prime}}\|^{2}}\bigg{)}\|\boldsymbol{\xi}_{i}\|^{2}.\] With (5.2) above, we give the following proposition, which holds throughout the whole training process, regardless of whether gradient regularization is used or not. **Proposition 5.4**: _Under Condition 3.1, if \(\gamma^{(t)}_{j,r},\overline{\rho}^{(t)}_{j,r,i}\) and \(\underline{\rho}^{(t)}_{j,r,i}\) satisfy the update rule (5.2), \(|\gamma^{(0)}_{j,r}|=O(1)\), and \(|\rho^{(0)}_{j,r,i}|\leq 8\sqrt{\log(8mn/\delta)}\cdot\sigma_{0}\sigma_{p}\sqrt{d}\), then it holds that_ \[0\leq\gamma^{(t)}_{j,r},\overline{\rho}^{(t)}_{j,r,i}\leq 4\log(T^{*}),\] \[\underline{\rho}^{(t)}_{j,r,i}\geq-\beta-64n\sqrt{\frac{\log(4n^{2}/\delta)}{d}}\log(T^{*})\geq-4\log(T^{*})\] _for any \(t\in[T^{*}]\), where \(\beta=2\max_{i,j,r}\{|\langle\mathbf{w}^{(0)}_{j,r},\boldsymbol{\mu}\rangle|,|\langle\mathbf{w}^{(0)}_{j,r},\boldsymbol{\xi}_{i}\rangle|\}\)._ The initialization conditions for \(\gamma^{(t)}_{j,r}\) and \(\rho^{(t)}_{j,r,i}\) in Proposition 5.4 ensure that the inequalities in the proposition will always hold, regardless of whether we are dealing with Theorem 3.2 or Theorem 3.3. We will prove that when we turn off the gradient regularization in the case of Theorem 3.2, \(\gamma^{(t)}_{j,r}\) and \(\rho^{(t)}_{j,r,i}\) still satisfy the initialization conditions specified in Proposition 5.4. After we prove that Proposition 5.4 holds in both cases, we have the following lemma, which gives an upper bound on the training gradient. **Lemma 5.5**: _Under Condition 3.1, for \(0\leq t\leq T^{*}\), where \(T^{*}=\eta^{-1}\mathrm{poly}(n,m,d,\varepsilon^{-1},\sigma_{0}^{-1},\sigma_{p}\sqrt{d})\) is the maximum number of admissible iterations,
the following result holds:_ \[\|\nabla_{\mathbf{W}}L_{S}(\mathbf{W})|_{\mathbf{W}=\mathbf{W}^{(t)}}\|_{F}^{2}\leq 72\sigma_{p}^{2}dL_{S}(\mathbf{W}^{(t)}).\] With this upper bound, it is clear that if we provide a sharp bound for \(L_{S}(\mathbf{W}^{(t)})\) at some iteration \(t\), the training gradient will be bounded. ### A Two-Stage Analysis in Theorem 3.2 We utilize a two-stage analysis to decouple the complicated relation among the coefficients \(\gamma^{(t)}_{j,r}\), \(\overline{\rho}^{(t)}_{j,r,i}\), and \(\underline{\rho}^{(t)}_{j,r,i}\). In the first stage of the training process, we set the initial neural network weights to be small enough such that \(\ell^{\prime(0)}_{i}\approx-1/2\), and we can approximate \(\ell^{\prime(t)}_{i}=\ell^{\prime}(y_{i}f(\mathbf{W}^{(t)},\mathbf{x}_{i}))\approx-1/2\) for all \(i\in[n]\). We then show that there exists a significant scale difference among the values of \(\max_{r}\gamma^{(t)}_{j,r}\), \(\max_{r}\overline{\rho}^{(t)}_{j,r,i}\), and \(\max_{r}|\underline{\rho}^{(t)}_{j,r,i}|\) at the final iteration of the first stage. Based on these findings, we proceed to the second stage of the training process, where the exact loss derivatives are taken into account. **Stage 1** It can be shown that when \(\gamma^{(t)}_{j,r}\) and \(\rho^{(t)}_{j,r,i}\) reach the order of \(1/\mathrm{polylog}(n)\), the value of \(\ell^{\prime(t)}_{i}\) remains around \(-1/2\). This observation allows us to simplify the dynamics of the coefficients in (5.2) by using upper and lower bounds in place of \(\ell^{\prime(t)}_{i}\). Based on these findings, we can summarize our main conclusion in the first stage of training with gradient regularization as follows: **Proposition 5.6**: _Under the same conditions as Theorem 3.2, define \(T_{1}\) by_ \[T_{1}=\frac{m}{\eta\|\boldsymbol{\mu}\|^{2}}\log\Big{(}\frac{4}{\sigma_{0}\|\boldsymbol{\mu}\|\cdot\log{(n)}\sqrt{2\log(8m/\delta)}}\Big{)}.\] _Then the following facts hold:_ 1. _For any_ \(i\in[n]\)_,_ \(|\Upsilon^{(t)}_{i}|=|\ell^{\prime(t)}_{i}+\frac{1}{2}|=O(\frac{1}{\log^{2}{(n)}})\)_._ 2. \(\gamma_{j,r}^{(t)}\leq 5/\log(n)\) _for all_ \(j\in\{\pm 1\}\)_,_ \(r\in[m]\) _and_ \(t\in[T_{1}]\)_._ 3. _For each_ \(j\)_, there exists_ \(c_{1}>0\) _such that_ \(\max_{r}\gamma_{j,r}^{(T_{1})}\geq 1/(\sqrt{2\log(8m/\delta)}\log(n))\)_. Moreover, for_ \(\rho_{j,r,i}\) _we have_ \[0\geq\rho_{j,r,i}^{(T_{1})}\geq-8\sqrt{\log(8mn/\delta)}\cdot\sigma_{0}\sigma_{p}\sqrt{d}.\] Proposition 5.6 demonstrates that the CNN can effectively capture the signal under training with gradient regularization. At the end of this stage, \(\max_{r}\gamma_{j,r}^{(t)}\) reaches a value on the order of \(1/\mathrm{polylog}(n)\), which is sufficiently larger than \(\rho_{j,r,i}^{(t)}\). In the next stage, we show that reaching this threshold for \(\max_{r}\gamma_{j,r}^{(t)}\) is sufficient to prove that the test error and training gradient are small. **Stage 2** In this stage, we take the exact definition of \(\ell^{\prime(t)}\) fully into account, and show that the training loss can be driven below any \(\varepsilon>0\). We give the following proposition, which directly shows the results in Theorem 3.2.
**Proposition 5.7**: _Under Condition 3.1, for any \(\varepsilon>0\), define \(\varepsilon_{0}=1-e^{-\varepsilon}\) and_ \[T_{2}=\frac{2nm}{\eta\varepsilon_{0}\|\boldsymbol{\mu}\|^{2}}\log\big{(}\sqrt{2\log(8m/\delta)}\cdot\log(n)d\big{)};\] _then there exists \(t\in[T_{1},T_{1}+T_{2}]\) such that_ \[L_{S}(\mathbf{W}^{(t)})\leq\varepsilon.\] We prove Proposition 5.7 by contradiction. With Proposition 5.4, Proposition 5.6 and Proposition 5.7, we can prove Theorem 3.2. Details can be found in Appendix D.3. ### Implications for full gradient regularization (FGR) In this section, we provide a brief discussion on the performance of full gradient regularization (FGR). Our goal is to train the function \(f(\mathbf{W},\mathbf{x})\) by minimizing the cross-entropy loss with gradient regularization. Specifically, we seek to minimize the function \[\widetilde{L}(\mathbf{W})=L_{S}(\mathbf{W})+\frac{\lambda}{2}\|\nabla_{\mathbf{W}}L_{S}(\mathbf{W})\|_{F}^{2}\] using gradient descent. Although we do not present a rigorous analysis of FGR, for the sake of clarity, we make the simplifying assumption that \(\langle\boldsymbol{\mu},\boldsymbol{\xi}_{i}\rangle\) and \(\langle\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{i^{\prime}}\rangle\) are all equal to \(0\) for \(i\neq i^{\prime}\). With this assumption, we derive the following update rule. **Lemma 5.8**: _Under the full gradient regularization, suppose that \(\langle\boldsymbol{\mu},\boldsymbol{\xi}_{i}\rangle=\langle\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{i^{\prime}}\rangle=0\) for all \(i\neq i^{\prime}\); then the update rule is_ \[\mathbf{w}_{j,r}^{(t+1)} =\mathbf{w}_{j,r}^{(t)}-\frac{2\eta}{nm}\sum_{i=1}^{n}\ell_{i}^{\prime(t)}\big{(}\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},y_{i}\boldsymbol{\mu}\rangle)j\boldsymbol{\mu}+\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},jy_{i}\boldsymbol{\xi}_{i}\rangle)\boldsymbol{\xi}_{i}\big{)}\] \[-\frac{4\lambda\eta}{n^{2}m^{2}}\sum_{j^{\prime}}\sum_{r^{\prime}=1}^{m}\bigg{(}\sum_{i=1}^{n}\ell_{i}^{\prime(t)}\texttt{ReLU}(\langle\mathbf{w}_{j^{\prime},r^{\prime}}^{(t)},y_{i}\boldsymbol{\mu}\rangle)\bigg{)}\cdot\bigg{(}\sum_{i=1}^{n}\ell_{i}^{\prime\prime(t)}\Big{(}\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},y_{i}\boldsymbol{\mu}\rangle)j\boldsymbol{\mu}+\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{i}\rangle)jy_{i}\boldsymbol{\xi}_{i}\Big{)}\] \[-\frac{4\lambda\eta}{n^{2}m^{2}}\sum_{j^{\prime}}\sum_{r^{\prime}=1}^{m}\sum_{i=1}^{n}\ell_{i}^{\prime(t)}\ell_{i}^{\prime\prime(t)}\cdot\Big{(}\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{i}\rangle)j\boldsymbol{\mu}+\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{i}\rangle)jy_{i}\boldsymbol{\xi}_{i}\Big{)}\cdot\texttt{ReLU}^{2}(\langle\mathbf{w}_{j^{\prime},r^{\prime}}^{(t)},\boldsymbol{\xi}_{i}\rangle)\|\boldsymbol{\xi}_{i}\|^{2}.\] The proof of Lemma 5.8 can be found in Appendix A.2. To provide a high-level discussion on the performance of FGR, we omit all the high-order terms in Lemma 5.8 and focus on the dominant ones.
Accordingly, we simplify the update rule as follows: \[\mathbf{w}_{j,r}^{(t+1)} \approx\mathbf{w}_{j,r}^{(t)}-\frac{2\eta}{nm}\sum_{i=1}^{n}\ell_{i}^{\prime(t)}\big{(}\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},y_{i}\boldsymbol{\mu}\rangle)j\boldsymbol{\mu}+\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},jy_{i}\boldsymbol{\xi}_{i}\rangle)\boldsymbol{\xi}_{i}\big{)}\] \[-\frac{4\lambda\eta}{n^{2}m^{2}}\bigg{(}\sum_{i=1}^{n}\ell_{i}^{\prime(t)}\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},y_{i}\boldsymbol{\mu}\rangle)\bigg{)}\cdot\bigg{(}\sum_{i=1}^{n}\ell_{i}^{\prime(t)}\mathbf{1}(\langle\mathbf{w}_{j,r}^{(t)},y_{i}\boldsymbol{\mu}\rangle>0)y_{i}\boldsymbol{\mu}\bigg{)}\|\boldsymbol{\mu}\|^{2}\] \[-\frac{4\lambda\eta}{n^{2}m^{2}}\sum_{i=1}^{n}\ell_{i}^{\prime 2(t)}\texttt{ReLU}(\langle\mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{i}\rangle)\|\boldsymbol{\xi}_{i}\|^{2}\boldsymbol{\xi}_{i}.\] As discussed in Lemma 5.1, we can approximate \(\ell_{i}^{\prime(t)}\sim-0.5\), \(\ell_{i}^{\prime\prime(t)}\sim 1/4\), and \(\ell_{i}^{\prime 2(t)}\sim 1/4\). Furthermore, we assume that \(\boldsymbol{\xi}_{i}\) is orthogonal to all other noise vectors. Employing the previously mentioned update rule, we note that regularization propels signal learning and noise memorization back to zero at suppression speeds of approximately \(\lambda\cdot\Theta(\eta\|\boldsymbol{\mu}\|^{4})\) and \(\lambda\cdot\Theta(\eta\|\boldsymbol{\xi}_{i}\|^{4}/n^{2})\), respectively. In contrast, as discussed in Section 5.1, PEGR has a stronger suppression speed for noise memorization, i.e., \(\lambda\cdot\Theta(\eta\|\boldsymbol{\xi}_{i}\|^{4}/n)\). Additionally, recalling that the growth rates under standard training are \(\Theta(\eta\|\boldsymbol{\mu}\|^{2})\) and \(\Theta(\eta\|\boldsymbol{\xi}_{i}\|^{2}/n)\), if we adjust \(\lambda\) and establish the suppression speeds as \(\Theta(\eta\|\boldsymbol{\mu}\|^{4}/\|\boldsymbol{\xi}_{i}\|)\) and \(\Theta(\eta\|\boldsymbol{\xi}_{i}\|^{3}/n^{2})\), more stringent conditions (compared to PEGR) will be required to inhibit noise memorization and encourage signal learning. Additionally, the intricate dynamics under FGR may also hinder the successful learning of the signal, as demonstrated by the numerical experiments in Section 4.1. Consequently, in practical applications, we advocate for the utilization of PEGR over FGR to enhance signal learning. ## 6 Conclusion and Future Work In this paper, we employ a signal-noise decomposition framework to investigate the impact of gradient regularization on the training of a two-layer convolutional neural network (CNN). Specifically, we provide precise conditions under which the CNN will prioritize learning signals over memorizing noise, and demonstrate the benefits of gradient regularization in inhibiting noise memorization while encouraging signal learning. Our results theoretically demonstrate the effectiveness of gradient regularization in facilitating signal learning. As a next step, we aim to extend our analysis to deep convolutional neural networks and investigate the signal learning dynamics when using mini-batch stochastic gradients, which represents a critical area of ongoing research. ## Acknowledgement We would like to thank Yuan Hua for a helpful discussion about the experiments.
2309.12888
The algebra of symmetric tensors on smooth projective varieties
We discuss in this note the algebra H^0(X, Sym*TX) for a smooth complex projective variety X . We compute it in some simple examples, and give a sharp bound on its Krull dimension. Then we propose a conjectural characterization of non-uniruled projective manifolds with pseudo-effective tangent bundle.
Arnaud Beauville, Jie Liu
2023-09-22T14:21:25Z
http://arxiv.org/abs/2309.12888v4
# The algebra of symmetric tensors on smooth projective varieties ###### Abstract. We discuss in this note the \(\mathbb{C}\)-algebra \(H^{0}(X,\mathsf{S}^{\bullet}T_{X})\) for a smooth complex projective variety \(X\). We compute it in some simple examples, and give a sharp bound on its Krull dimension. Then we propose a conjectural characterization of non-uniruled projective manifolds with pseudo-effective tangent bundle. A. Beauville is indebted to Feng Shao for several useful comments and references. J. Liu would like to thank S. Druel for useful communication. J. Liu is supported by the National Key Research and Development Program of China (No. 2021YFA1002300), the NSFC grant (No. 12288201) and the CAS Project for Young Scientists in Basic Research (No. YSBR-033). ## 2. Some examples ### Abelian varieties We start with a trivial case: if \(X\) is an abelian variety of dimension \(n\), we have \(T_{X}\cong\mathscr{O}_{X}^{n}\), hence \(S(X)\) is a polynomial algebra in \(n\) variables. ### Projective space Let \(V\) be a vector space. We let \(I\in V\otimes V^{*}\) be the image of the identity by the isomorphism \(\operatorname{End}(V)\xrightarrow{\,\sim\,}V\otimes V^{*}\). **Proposition 1**.: _The graded algebra \(S(\mathbb{P}(V))\) is isomorphic to the quotient of \(\bigoplus_{d\geq 0}(\mathsf{S}^{d}V\otimes\mathsf{S}^{d}V^{*})\) by the ideal generated by \(I\)._ Proof.: The projective cotangent bundle \(\mathbb{P}T_{\mathbb{P}(V)}^{*}\) can be identified with the incidence hypersurface \(Z\subset\mathbb{P}(V)\times\mathbb{P}(V^{*})\) consisting of pairs \((x,H)\) with \(x\in H\); the tautological line bundle \(\mathscr{O}_{Z}(1)\) is induced by \(\mathscr{O}_{\mathbb{P}(V)}(1)\boxtimes\mathscr{O}_{\mathbb{P}(V^{*})}(1)\). The Proposition follows from the exact sequence \[0\to\mathscr{O}_{\mathbb{P}(V)}(d-1)\boxtimes\mathscr{O}_{\mathbb{P}(V^{*})}(d-1)\xrightarrow{\times I}\mathscr{O}_{\mathbb{P}(V)}(d)\boxtimes\mathscr{O}_{\mathbb{P}(V^{*})}(d)\to\mathscr{O}_{Z}(d)\to 0\,.\qed\] ### Rational homogeneous manifolds In this section we will use some general facts about nilpotent orbits, which can be found for example in [Fu]. Let \(X=G/P\), where \(G\) is a reductive algebraic group and \(P\) a parabolic subgroup. We denote by \(\mathfrak{g}\) and \(\mathfrak{p}\) their Lie algebras, and by \(\mathfrak{n}\) the nilradical of \(\mathfrak{p}\). The Killing form of \(\mathfrak{g}\) provides an isomorphism of \(G\)-modules \(\mathfrak{n}\xrightarrow{\,\sim\,}(\mathfrak{g}/\mathfrak{p})^{*}\); using this we identify the cotangent bundle \(T^{*}(G/P)\) with the homogeneous bundle \(G\times^{P}\mathfrak{n}\). Associating to a pair \((g,N)\) in \(G\times\mathfrak{n}\) the element \(\operatorname{Ad}(g)\cdot N\) of \(\mathfrak{g}\) defines a generically finite, \(\mathbb{C}^{*}\)-equivariant map \(\pi:T^{*}(G/P)\to\mathfrak{g}\), whose image \(\mathscr{N}\) is the closure of a nilpotent orbit. We will consider the case where the induced map \(\bar{\pi}:T^{*}(G/P)\to\mathscr{N}\) is birational. In this case \(\bar{\pi}\) is a resolution of the normalization \(\tilde{\mathscr{N}}\) of \(\mathscr{N}\), and we have \(S(X)=\mathscr{O}(\tilde{\mathscr{N}})\). For \(G=\operatorname{GL}(n)\) all parabolic subgroups have this property, and \(\mathscr{N}\) is normal, so \(S(X)=\mathscr{O}(\mathscr{N})\). In the other classical cases there is a precise description of the parabolic subgroups for which \(\bar{\pi}\) is birational [Fu, 3.3]; we will content ourselves with the example of quadrics.
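Note that when \(\bar{\pi}\) is birational, the Krull dimension of \(S(G/P)=\mathscr{O}(\tilde{\mathscr{N}})\) can be read off immediately: \[\dim S(G/P)=\dim\tilde{\mathscr{N}}=\dim\mathscr{N}=\dim T^{*}(G/P)=2\dim(G/P)\,,\] in agreement with the fact, recalled in Section 4 below, that the tangent bundle of a rational homogeneous variety is big.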
### Flag varieties, Grassmannians Let \(V\) be a vector space, and let \((0)=V_{0}\subset V_{1}\subset\ldots\subset V_{s+1}=V\) be a (partial) flag in \(V\). The stabilizer \(P\) of this flag is a parabolic subgroup of \(\operatorname{GL}(V)\), and all parabolics are obtained in this way. The variety \(G/P\) is the variety of flags \((0)=F_{0}\subset F_{1}\subset\ldots\subset F_{s+1}=V\) with \(\dim F_{i}=\dim V_{i}\). The Lie algebra \(\mathfrak{p}\) is the stabilizer of \((V_{i})\) in \(\operatorname{End}(V)\), and its nilradical \(\mathfrak{n}\) is the subspace of \(u\in\operatorname{End}(V)\) satisfying \(u(V_{i+1})\subset V_{i}\) for \(0\leq i\leq s\). Therefore \(\mathscr{N}\) is the subvariety of endomorphisms \(u\in\operatorname{End}(V)\) for which there exists a flag \((F_{i})\) in \(G/P\) with \(u(F_{i+1})\subset F_{i}\) for \(0\leq i\leq s\). Let us spell this out in the case of the Grassmannian \(\mathbb{G}:=\mathbb{G}(r,V)\) of \(r\)-dimensional subspaces of \(V\). We put \(n:=\dim V\). **Proposition 2**.: \(S(\mathbb{G}(r,V))=\mathscr{O}(\mathscr{N})\)_, where \(\mathscr{N}\subset\operatorname{End}(V)\) is the subvariety of endomorphisms \(u\) satisfying \(u^{2}=0\) and \(\operatorname{rk}u\leq\min\{r,n-r\}\)._ Proof.: Since \(\mathbb{G}(r,V)\cong\mathbb{G}(n-r,V)\), we can assume \(r\leq n/2\). By the previous discussion, \(\mathscr{N}\) consists of endomorphisms \(u\) for which there exists an \(r\)-dimensional subspace \(W\subset V\) with \(u(V)\subset W\) and \(u(W)=0\), that is, \(\operatorname{Im}u\subset W\subset\operatorname{Ker}u\). This implies \(u^{2}=0\) and \(\operatorname{rk}u\leq r\); conversely, if this is satisfied, we have \(\operatorname{Im}u\subset\operatorname{Ker}u\) and \(\dim\operatorname{Ker}u=n-\operatorname{rk}u\geq n-r\geq r\), so any \(r\)-dimensional subspace \(W\) with \(\operatorname{Im}u\subset W\subset\operatorname{Ker}u\) does the job. _Remarks_.- 1) Taking \(r=1\) we recover Proposition 1. 2) If \(r=\lfloor\frac{n}{2}\rfloor\) the condition \(u^{2}=0\) implies \(\operatorname{rk}u\leq r\), so \(\mathscr{N}\) is simply the variety of square-zero endomorphisms of \(V\). ### Quadrics Let \(V\) be a vector space, and let \(q\) be a non-degenerate quadratic form on \(V\), defining a quadric \(Q:=V(q)\) in \(\mathbb{P}(V)\). **Proposition 3**.: \(S(Q)\) _is isomorphic to the quotient of the homogeneous coordinate ring of \(\mathbb{G}(2,V)\subset\mathbb{P}(\bigwedge^{2}V)\) by the ideal generated by \(\wedge^{2}q\)._ Proof.: Let \(\ell\) be an isotropic line in \(V\) and let \(P\) be the stabilizer of \(\ell\), so that \(Q=\operatorname{O}(V)/P\). The Lie algebra \(\mathfrak{o}(V)\) consists of endomorphisms of \(V\) which are skew-symmetric (with respect to \(q\)), and \(\mathfrak{p}\) is the stabilizer of \(\ell\) in \(\mathfrak{o}(V)\). The nilradical \(\mathfrak{n}\) of \(\mathfrak{p}\) consists of skew-symmetric endomorphisms \(u\) such that \(u(\ell^{\perp})\subset\ell\) and \(u(\ell)=0\). Such a map is of the form \[x\,\mapsto\,q(w,x)v-q(v,x)w,\text{ where }v\in\ell\text{ and }w\in\ell^{\perp}\,. \tag{1}\] Varying \(\ell\), we see that \(\mathscr{N}\) consists of the maps of the form (1) such that the restriction of \(q\) to \(\langle v,w\rangle\) has rank \(\leq 1\). Such maps correspond bijectively to decomposable bivectors \(v\wedge w\in\bigwedge^{2}V\), and the condition on \(q\) can be written \(\wedge^{2}q(v\wedge w)=0\). This implies the Proposition.
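For instance, one can check the dimensions directly in this case: write \(\dim V=n+2\), so that \(\dim Q=n\). The cone over \(\mathbb{G}(2,V)\subset\mathbb{P}(\bigwedge^{2}V)\) has dimension \(2(n+2-2)+1=2n+1\), and cutting it by the single equation \(\wedge^{2}q=0\) (which does not vanish identically on the cone, since \(q\) is non-degenerate) leaves \[\dim S(Q)=2n=2\dim Q=\dim T^{*}Q\,,\] as expected from the birationality of \(\bar{\pi}\).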
### Intersection of two quadrics The following result is proved in [BEHLV]: **Proposition 4**.: _Let \(X\subset\mathbb{P}^{n+2}\) be a smooth complete intersection of two quadrics. Then \(S(X)\) is a polynomial algebra in \(n\) variables of degree \(2\)._ It is somewhat surprising that the answer is much simpler in this case than for a single quadric. ### Completely integrable systems Let \(V\) be a graded vector space, endowed with the associated \(\mathbb{C}^{*}\)-action. Suppose that we have a \(\mathbb{C}^{*}\)-equivariant morphism \(\Phi:T^{*}X\to V\) whose general fiber is of the form \(Y\smallsetminus Z\), where \(Y\) is a complete variety and \(Z\) a closed subvariety of codimension \(\geq 2\). Then the functions on \(T^{*}X\) are constant on the fibers of \(\Phi\), hence the homomorphism \(\Phi^{*}:\mathscr{O}(V)=\mathsf{S}^{\bullet}V^{*}\to\mathscr{O}(T^{*}X)=S(X)\) is an isomorphism of graded algebras. A famous example of this situation is given by the _Hitchin fibration_ [Hi]. Let \(C\) be a curve of genus \(g\geq 2\). We fix coprime integers \(r,d\geq 1\), and consider the moduli space \(\mathscr{M}\) of stable vector bundles on \(C\) of rank \(r\) and degree \(d\). It is a smooth projective variety. By deformation theory the tangent space \(T_{E}(\mathscr{M})\) at a point \(E\) of \(\mathscr{M}\) identifies with \(H^{1}(C,\mathscr{E}nd(E))\); by Serre duality, its dual \(T^{*}_{E}\mathscr{M}\) identifies with \(\operatorname{Hom}(E,E\otimes K_{C})\). Let \(V\) be the graded vector space \(\bigoplus_{i=1}^{r}H^{0}(C,K_{C}^{i})\) (with \(\deg H^{0}(C,K_{C}^{i})=i\)). For \(u\in\operatorname{Hom}(E,E\otimes K_{C})\), we have \(\operatorname{Tr}\wedge^{i}u\in H^{0}(C,K_{C}^{i})\). Associating to \(u\) the vector \(\operatorname{Tr}u+\ldots+\operatorname{Tr}\wedge^{r}u\) gives a \(\mathbb{C}^{*}\)-equivariant map \(\Phi:T^{*}\mathscr{M}\to V\). **Proposition 5**.: _The homomorphism \(\Phi^{*}:\mathscr{O}(V)=\mathsf{S}^{\bullet}V^{*}\to\mathscr{O}(T^{*}\mathscr{M})=S(\mathscr{M})\) is an isomorphism._ Proof.: \(T^{*}\mathscr{M}\) admits an open embedding into the moduli space \(\mathscr{H}\) of stable Higgs bundles (of rank \(r\) and degree \(d\)), and \(\Phi\) extends to a proper map \(\bar{\Phi}:\mathscr{H}\to V\) [Hi]. The codimension of \(\mathscr{H}\smallsetminus T^{*}\mathscr{M}\) is \(\geq 2\) [Fa, Theorem II.6], hence \(\operatorname{codim}(\bar{\Phi}^{-1}(v)\smallsetminus\Phi^{-1}(v))\geq 2\) for \(v\) general in \(V\). By the previous remarks this implies the result. There are a number of variations on this theme. First of all, one can fix a line bundle \(L\) of degree \(d\) on \(C\) and consider the subspace \(\mathscr{M}_{L}\) of \(\mathscr{M}\) parameterizing the vector bundles \(E\) with \(\det E=L\); then \(\Phi\) maps \(T^{*}\mathscr{M}_{L}\) onto the graded subspace \(V_{0}:=\bigoplus_{i=2}^{r}H^{0}(C,K_{C}^{i})\) of \(V\), and we get as before an isomorphism of \(S(\mathscr{M}_{L})\) with \(\mathsf{S}^{\bullet}V_{0}^{*}\). Note that in the case \(g=r=2\), \(\mathscr{M}_{L}\) is a complete intersection of two quadrics in \(\mathbb{P}^{5}\), so we recover the case \(n=3\) of Proposition 4. We can also consider the moduli space \(\mathscr{M}_{\mathrm{par}}\) of stable parabolic vector bundles on \(C\) of rank \(r\), degree \(d\) and weights \(\alpha\), with a parabolic structure along a divisor \(D=p_{1}+\ldots+p_{s}\) -- we refer for instance to [BGL] for the precise definitions.
For generic weights \(\mathscr{M}_{\mathrm{par}}\) is smooth and projective; the Hitchin map \(\Phi:T^{*}\mathscr{M}_{\mathrm{par}}\to V_{\mathrm{par}}\) takes its values in the vector space \(V_{\mathrm{par}}:=\bigoplus_{i=1}^{r}H^{0}(C,K_{C}^{i}((i-1)D))\). It extends to a proper map from the moduli space \(\mathscr{H}_{\mathrm{par}}\) of parabolic Higgs bundles to \(V_{\mathrm{par}}\), and \(\mathscr{H}_{\mathrm{par}}\smallsetminus T^{*}\mathscr{M}_{\mathrm{par}}\) has codimension \(\geq 2\) provided \(g\geq 4\), or \(g=3\) and \(r\geq 3\), or \(g=2\) and \(r\geq 5\) [BGL, Proposition 5.10]. If this holds, we get as before an isomorphism \(\mathsf{S}^{\bullet}V_{\mathrm{par}}^{*}\stackrel{{\sim}}{{\longrightarrow}}S(\mathscr{M}_{\mathrm{par}})\). ### An example: ruled surfaces Contrary to what the previous examples might suggest, \(S(X)\) is _not_ invariant under deformation of \(X\); a typical example is provided by ruled surfaces. Let \(C\) be a curve of genus \(\geq 2\), and \(E\) a stable rank 2 vector bundle on \(C\) with trivial determinant1. We put \(X=\mathbb{P}_{C}(E)\). Footnote 1: Such a bundle is isomorphic to its dual, so we will not bother to distinguish them. **Proposition 6**.: _For general \(E\) we have \(S(X)=\mathbb{C}\)._ Proof.: Denote by \(p:X\to C\) the structure map and by \(\mathscr{O}_{X}(1)\) the tautological line bundle. The exact sequence \[0\to\mathscr{O}_{X}(2)\to T_{X}\to p^{*}T_{C}\to 0\] gives rise to exact sequences \[0\to\mathscr{O}_{X}(2p)\to\mathsf{S}^{p}T_{X}\to\mathsf{S}^{p-1}T_{X}\otimes p^{*}T_{C}\to 0\,. \tag{2}\] We claim that \(H^{0}(X,\mathsf{S}^{p-1}T_{X}\otimes p^{*}T_{C})=0\). Indeed we get from (2) exact sequences \[0\to\mathscr{O}_{X}(2q)\otimes p^{*}T_{C}^{r}\to\mathsf{S}^{q}T_{X}\otimes p^{*}T_{C}^{r}\to\mathsf{S}^{q-1}T_{X}\otimes p^{*}T_{C}^{r+1}\to 0\,.\] We have \(H^{0}(X,\mathscr{O}_{X}(2q)\otimes p^{*}T_{C}^{r})=H^{0}(C,\mathsf{S}^{2q}E\otimes T_{C}^{r})=0\) for \(r\geq 1\), because \(\mathsf{S}^{2q}E\) is semi-stable [Ha, ch. I, Theorem 10.5] and \(\deg T_{C}<0\). Since \(H^{0}(C,T_{C}^{r+1})=0\), we get by induction \(H^{0}(X,\mathsf{S}^{q}T_{X}\otimes p^{*}T_{C})=0\), hence (2) gives isomorphisms \[H^{0}(X,\mathsf{S}^{p}T_{X})\cong H^{0}(X,\mathscr{O}_{X}(2p))\cong H^{0}(C,\mathsf{S}^{2p}E)\,. \tag{3}\] Now for general \(E\) the bundles \(\mathsf{S}^{q}E\) are stable [Ha, _loc. cit._], so \(H^{0}(X,\mathsf{S}^{p}T_{X})=0\) for \(p>0\). For special bundles \(E\) the algebra \(S(X)\) can be quite nontrivial. If \(E\) is _unstable_ the tangent bundle \(T_{X}\) is big [Ki], hence \(S(X)\) has Krull dimension \(3\). This does not hold if \(E\) is stable, but one can get interesting algebras of dimension 2. Let \(V\) be a 2-dimensional Hermitian space, and let \(G\) be a finite subgroup of \(\operatorname{SU}(V)\), acting irreducibly on \(V\). Recall that \(G\) is the pull-back by the covering map \(\operatorname{SU}(2)\to\operatorname{SO}(3)\) of a group \(\bar{G}\) isomorphic to the dihedral group \(D_{n}\) or to \(\mathfrak{A}_{4},\mathfrak{S}_{4}\) or \(\mathfrak{A}_{5}\). Given a Galois covering \(\pi:\tilde{C}\to C\) with group \(G\), the vector bundle \(E_{\pi}:=\tilde{C}\times^{G}V\) on \(C\) is stable, of rank 2, with trivial determinant. The space \(H^{0}(C,\mathsf{S}^{p}E_{\pi})\) is canonically isomorphic to the \(G\)-invariant subspace of \(\mathsf{S}^{p}V\). Note that this is zero if \(p\) is odd, since \(G\) contains the element \(-1_{V}\).
Therefore it follows from (3) that \(S(X)\) _is isomorphic to the graded algebra of invariants_ \((\mathsf{S}^{\bullet}V)^{G}\), the algebra of regular functions on the quotient variety \(V/G\). The determination of \((\mathsf{S}^{\bullet}V)^{G}\) goes back to Klein [Kl, Ch. II]. It is generated by 3 homogeneous elements \(x,y,z\), subject to one weighted homogeneous relation \(F(x,y,z)=0\). Putting \(\mathbf{d}=(\deg x,\deg y,\deg z)\), we have: \(\bullet\) For \(\bar{G}=D_{n}\), \(\mathbf{d}=(2n+2,2n,4)\), \(F=x^{2}+y^{2}z+z^{n+1}\). \(\bullet\) For \(\bar{G}=\mathfrak{A}_{4}\), \(\mathbf{d}=(12,8,6)\), \(F=x^{2}+y^{3}+z^{4}\). \(\bullet\) For \(\bar{G}=\mathfrak{S}_{4}\), \(\mathbf{d}=(18,12,8)\), \(F=x^{2}+y^{3}+yz^{3}\). \(\bullet\) For \(\bar{G}=\mathfrak{A}_{5}\), \(\mathbf{d}=(30,20,12)\), \(F=x^{2}+y^{3}+z^{5}\). ## 3. Cases with \(S(X)=\mathbb{C}\) ### Varieties with \(c_{1}(X)=0\) The following result, proved in [Ko], is a direct consequence of Yau's theorem: **Proposition 7**.: _Let \(X\) be a compact Kähler variety with \(c_{1}(X)=0\) in \(H^{2}(X,\mathbb{Q})\), and \(\pi_{1}(X)\) finite. Then \(S(X)=\mathbb{C}\)._ With no assumption on \(\pi_{1}(X)\), we know that \(X\) is the quotient of a product \(A\times Y\), where \(A\) is a complex torus and \(Y\) is simply connected, by a finite group \(G\) acting freely [B2]. It follows that \(S(X)\) _is isomorphic to the invariant subring \((\mathsf{S}^{\bullet}T_{0}(A))^{G}\)_. ### Varieties of general type **Proposition 8**.: _Let \(X\) be a variety of general type. Then \(S(X)=\mathbb{C}\)._ This is a consequence of the stronger result that \(T_{X}\) is not pseudo-effective [HP2, Proposition 4.11]. ### Hypersurfaces The following result is proved in [HLS]: **Proposition 9**.: _Let \(X\) be a smooth hypersurface of degree \(d\geq 3\) and dimension \(\geq 2\). Then \(S(X)=\mathbb{C}\)._ In fact the authors prove the stronger result \(H^{0}(X,\mathsf{S}^{p}T_{X}(d-3))=0\), and also that \(T_{X}\) is not pseudo-effective. ## 4. The Krull dimension of \(S(X)\) A complete description of the ring \(S(X)\) is in general intractable, but we can still ask for some of its properties, for instance its Krull dimension. It is equal to \(1+\kappa(\mathscr{O}_{\mathbb{P}T^{*}X}(1))\), where \(\kappa\) denotes the _Iitaka dimension_ (see for instance [La, Ch. 2]). We have \(0\leq\dim S(X)\leq 2\dim X\), and all cases can occur. In particular, \[\dim S(X)=2\dim X\iff\mathscr{O}_{\mathbb{P}T^{*}X}(1)\text{ big }\Longleftrightarrow\ T_{X}\text{ big}\,.\] This property holds for toric varieties [Hs], and also for all rational homogeneous varieties [GW, Corollary 4.4]. The paper [Li] contains a number of other examples of varieties with a group action whose tangent bundle is big. Though the most interesting cases occur when the Kodaira dimension \(\kappa(X)\) is \(-\infty\), one may ask what can be said when \(\kappa(X)\geq 0\). The condition \(S(X)\neq\mathbb{C}\), or the weaker condition that \(T_{X}\) is pseudo-effective, imposes strong restrictions on \(X\) -- see [HP2, Proposition 4.11]. The following bound is the main result of this section: **Proposition 10**.: \(\dim S(X)\leq\dim X-\kappa(X)\)_. Equality holds if and only if \(X\) admits a finite etale covering of the form \(A\times Y\), where \(A\) is an abelian variety and \(Y\) a variety of general type._ It follows in particular that \(\dim S(X)>\dim X\) implies \(\kappa(X)=-\infty\).
Let us first show that the equality holds when there exists an etale covering \(A\times Y\to X\) with \(A\) abelian and \(Y\) of general type. This follows from (2.1), Proposition 8, and the following lemma: **Lemma 1**.: _Let \(X,Y\) be smooth projective varieties._ \(1)\) _We have \(S(X\times Y)\cong S(X)\otimes S(Y)\)._ \(2)\) _If \(\pi:X\to Y\) is an etale morphism, \(\dim S(X)=\dim S(Y)\)._ Proof.: 1) Let \(p_{X},p_{Y}\) be the projections of \(X\times Y\) onto \(X\) and \(Y\). We have \(T_{X\times Y}=p_{X}^{*}T_{X}\oplus p_{Y}^{*}T_{Y}\), hence \(\mathsf{S}^{\bullet}T_{X\times Y}=p_{X}^{*}\mathsf{S}^{\bullet}T_{X}\otimes p _{Y}^{*}\mathsf{S}^{\bullet}T_{Y}\). The result follows from the Kunneth formula. 2) \(\pi\) induces a finite etale morphism \(T^{*}X\to T^{*}Y\), hence \(S(X)=\mathscr{O}(T^{*}X)\) is a finite algebra over \(S(Y)\), thus \(\dim S(X)=\dim S(Y)\). For the rest of the proof, we will need some preliminary results. ### Slope and positivity of vector bundles We fix an ample divisor class \(H\) on \(X\). We will say that a vector bundle is stable if it is slope-stable with respect to \(H\) -- same for semi-stability and polystability. Let \(\mathscr{E}\) be a torsion free coherent sheaf of rank \(r\) on \(X\). Recall that the _slope_\(\mu(\mathscr{E})\) of \(\mathscr{E}\) is \(\frac{1}{r}(c_{1}(\mathscr{E})\cdot H^{n-1})\). We denote by \(\mu_{\max}(\mathscr{E})\) the maximum of \(\mu(\mathscr{F})\) for \(\mathscr{F}\subseteq\mathscr{E}\), \(\mathscr{F}\neq 0\). **Lemma 2**.: _Let \(E\) and \(F\) be two vector bundles on \(X\)._ \(1)\)_\(\mu_{\max}(E\otimes F)=\mu_{\max}(E)+\mu_{\max}(F)\)._ \(2)\)_\(\mu_{\max}(\mathsf{S}^{p}E)=p\,\mu_{\max}(E)\)._ _In particular, if \(E\) and \(F\) are semi-stable, then so are \(E\otimes F\) and \(\mathsf{S}^{q}E\) for any \(q\geq 1\)._ Proof.: 1) is proved in [CP, Corollary 5.5]. 2) Let \(\mathscr{F}\) be a subsheaf of \(E\) with \(\mu(\mathscr{F})=\mu_{\max}(E)\). Then \((\mathsf{S}^{p}\mathscr{F})^{**}\) is a subsheaf of \(\mathsf{S}^{p}E\), hence \(\mu_{\max}(\mathsf{S}^{p}E)\geq\mu((\mathsf{S}^{p}\mathscr{F})^{**})\geq p\, \mu(\mathscr{F})=p\,\mu_{\max}(E)\). On the other hand since \(\mathsf{S}^{p}E\) is a subsheaf of \(E^{\otimes p}\), we have \(\mu_{\max}(\mathsf{S}^{p}E)\leq\mu_{\max}(E^{\otimes p})=p\,\mu_{\max}(E)\) by 1), hence 2) holds. ### Symmetric algebra of vector bundles Let \(E\) be a vector bundle of rank \(r\) on \(X\). We will denote by \(S(E)\) the graded algebra \(H^{0}(X,\mathsf{S}^{\bullet}E)\). **Lemma 3**.: \(1)\) _Assume that \(E\) is polystable, and \(\mu(E)=0\). Then \(\dim S(E)\leq r\)._ \(2)\) _Assume \(E=F\oplus G\), where \(\mu_{\max}(F)\leq 0\) and \(\mu_{\max}(G)<0\). Then \(S(E)=S(F)\)._ Proof.: 1) If \(E\) is stable and \(h^{0}(E)\neq 0\), there is an injective homomorphism \(\mathscr{O}_{X}\to E\), which must be an isomorphism; hence \(h^{0}(E)\leq 1\). It follows that \(h^{0}(E)\leq r\) if \(E\) is polystable. Now \(\mathsf{S}^{q}E\) is also polystable [HL, Theorem 3.2.11], so \(h^{0}(\mathsf{S}^{q}E)\leq\operatorname{rk}\mathsf{S}^{q}E=\binom{q+r-1}{r-1}\), hence \(\kappa(\mathscr{O}_{\mathbb{P}(E)}(1))\leq r-1\) (see e.g. [La, Corollary 2.1.38]) and \(\dim S(E)\leq r\). 2) By Lemma 2 we have, for \(p,q\in\mathbb{N}\), \(q>0\): \[\mu_{\max}(\mathsf{S}^{p}F\otimes\mathsf{S}^{q}G)=p\,\mu_{\max}(F)+q\,\mu_{ \max}(G)<0\,,\text{ hence }\,H^{0}(\mathsf{S}^{p}F\otimes\mathsf{S}^{q}G)=0\,.\] Therefore \(H^{0}(\mathsf{S}^{p}E)=H^{0}(\mathsf{S}^{p}F)\), and \(S(E)=S(F)\). 
### Proof of Proposition 10 Without loss of generality, we may assume \(\kappa(X)\geq 0\) and \(\dim S(X)\geq 1\). In particular, the projective manifold \(X\) is not uniruled and \(T_{X}\) is pseudo-effective. Moreover, since \(\dim S(X)\) and \(\kappa(X)\) are invariant under finite etale covering (Lemma 1 and [Ue, Theorem 5.13]), we may replace \(X\) by any finite etale covering. Proposition 4.11 of [HP2] provides a decomposition \[T_{X}=F\oplus G \tag{4}\] where \(F\) and \(G\) are integrable subbundles, \(c_{1}(F)=0\), and the restriction of \(G^{*}\) to a general curve complete intersection of hypersurfaces in \(|mH|\), for \(m\gg 0\), is ample. Since a quotient of an ample bundle is ample, this implies \(\mu(\mathscr{F})<0\) for any nonzero subsheaf \(\mathscr{F}\subset G\), hence \(\mu_{\max}(G)<0\). Then by Lemma 3 the algebra \(S(X)\) is isomorphic to \(S(F)\). By [PT, Lemma 2.1], \(F\) is polystable, hence Lemma 3 implies \[\dim S(X)=\dim S(F)\leq\operatorname{rk}F\,.\] By [PT, Proposition 2.6], \(\det F\) is a torsion line bundle; passing to a finite etale covering we may assume \(\det F=\mathscr{O}_{X}\), so that \(\det G^{*}\cong K_{X}\). The natural inclusion \(G^{*}\subset\Omega^{1}_{X}\) induces an inclusion \(\det G^{*}\subset\Omega^{k}_{X}\), where \(k=\operatorname{rk}G\). Then the Bogomolov inequality ([Bo, Theorem 4]) gives \[\kappa(X)=\kappa(\det G^{*})\leq k=\operatorname{rk}G,\] hence \[\dim S(X)=\dim S(F)\leq\operatorname{rk}F=\dim X-\operatorname{rk}G\leq\dim X -\kappa(X),\] which proves our bound. Suppose that equality holds. Then \(\dim S(F)=\operatorname{rk}F\) and \(\kappa(\det G^{*})=k\). By [Bo, Lemma 12.4], the latter condition implies that there exists a rational map \(f:X\dashrightarrow Y\) to a \(k\)-dimensional projective manifold such that \(\det G^{*}\subset\Omega^{k}_{X}\) coincides with the saturation of the subsheaf \(f^{*}K_{Y}\subset\Omega^{k}_{X}\). This implies that the foliation \(F\subset T_{X}\) is induced by \(f\) and thus is a regular algebraically integrable foliation. Since \(\det F\cong\mathscr{O}_{X}\), by the global version of the Reeb stability theorem [D3, Theorem 8.1], after replacing \(X\) by a finite etale covering, we may assume that \(X\) is a product \(Z\times Y\), with \(F=\operatorname{pr}_{Z}^{*}T_{Z}\) and \(G\cong\operatorname{pr}_{Y}^{*}T_{Y}\). In particular, we obtain \[\dim(Y)=\kappa(X,\det G^{*})=\kappa(Y)\] hence \(Y\) is of general type. Finally we use the first condition \(\dim S(F)=\operatorname{rk}F\). Since \(S(F)\) is canonically isomorphic to \(S(Z)\), we get \(\dim S(Z)=\dim Z\). Since \(c_{1}(F)=0\), we have \(c_{1}(Z)=0\), hence \(Z\) admits a finite etale covering of the form \(A\times T\), where \(A\) is an abelian variety and \(T\) a simply connected smooth projective variety with \(c_{1}(T)=0\)[B1]. By Proposition 7 and Lemma 3 we have \(S(Z)\cong S(A)\), hence \(\dim Z=\dim S(Z)=\dim(A)\) (2.1), so that \(X=Z\times Y\) admits a finite etale covering by \(A\times Y\). ## 5. Pseudo-effective tangent bundle We discuss in this section the structure of non-uniruled projective manifolds \(X\) with pseudo-effective tangent bundle. **Lemma 4**.: \(1)\) _Let \(D\) be a big divisor on \(X\). 
A vector bundle \(E\) is pseudo-effective if and only if for any \(c>0\), there exist positive integers \(i\) and \(j\) such that \(i>cj\) and_ \[H^{0}(X,\mathsf{S}^{i}E\otimes\mathscr{O}_{X}(jD))\neq 0.\] \(2)\) _If \(E\) is a pseudo-effective vector bundle, then \(\mu_{\max}(E)\geq 0\) for any polarization \(H\)._ \(3)\) _Let \(F\to E\) be an injective map of vector bundles. If \(F\) is pseudo-effective, \(E\) is pseudo-effective._ \(4)\) _Let \(f:Y\to X\) be a surjective morphism between smooth projective varieties, and let \(E\) be a vector bundle on \(X\). Then \(E\) is pseudo-effective if and only if \(f^{*}E\) is pseudo-effective._ \(5)\) _Let \(X=Y\times Z\) be a product of smooth projective varieties. Then \(T_{X}\) is pseudo-effective if and only if one of \(T_{Y}\) and \(T_{Z}\) is pseudo-effective._ Proof.: 1) is proved in [HLS, Lemma 2.2]. 2) If \(H^{0}(X,\mathsf{S}^{i}E\otimes\mathscr{O}_{X}(jD))\neq 0\), there is an inclusion \(\mathscr{O}_{X}(-jD)\subset\mathsf{S}^{i}E\). By Lemma 2, we have \[\mu_{\max}(E)=\frac{1}{i}\mu_{\max}(\mathsf{S}^{i}E)\geq-\frac{1}{i}(jD\cdot H^{n-1})>-\frac{1}{c}(D\cdot H^{n-1}).\] As \(c\) is arbitrary, we obtain \(\mu_{\max}(E)\geq 0\), which proves 2). 3) follows from 1) and the natural inclusion \(\mathsf{S}^{i}F\otimes\mathscr{O}_{X}(jD)\subset\mathsf{S}^{i}E\otimes\mathscr{O}_{X}(jD)\). 4) Assume first \(\operatorname{rk}E=1\). We only need to show that if \(f^{*}E\) is pseudo-effective, then so is \(E\) itself. Indeed, assume the opposite. By [BDPP, Theorem 0.2], there exists a covering family \(\{C_{t}\}_{t\in T}\) of curves such that \((c_{1}(E)\cdot C_{t})<0\). Let \(\{C_{t^{\prime}}\}_{t^{\prime}\in T^{\prime}}\) be a covering family of curves on \(Y\) such that a general curve \(C_{t^{\prime}}\) is mapped onto some \(C_{t}\). Then we have \((c_{1}(f^{*}E)\cdot C_{t^{\prime}})<0\) by the projection formula, so \(f^{*}E\) is not pseudo-effective by [BDPP, Theorem 0.2]. If \(\operatorname{rk}E>1\), \(f\) induces a surjective morphism \(\bar{f}:\mathbb{P}(f^{*}E^{*})\to\mathbb{P}(E^{*})\) such that \(\bar{f}^{*}\mathcal{O}_{\mathbb{P}(E^{*})}(1)\cong\mathcal{O}_{\mathbb{P}(f^{*}E^{*})}(1)\); 4) follows from the previous result applied to \(\bar{f}\). 5) By 3) and 4), if \(T_{Y}\) or \(T_{Z}\) is pseudo-effective, so is \(T_{X}\). Assume that \(T_{X}\) is pseudo-effective. Let \(H_{Y}\) and \(H_{Z}\) be ample line bundles on \(Y\) and \(Z\), respectively. Then \(H:=H_{Y}\boxtimes H_{Z}\) is ample. By 1), for any \(c>0\), there exist positive integers \(i\) and \(j\) such that \(i>2cj\) and \[H^{0}(X,\mathsf{S}^{i}T_{X}\otimes H^{j})=H^{0}(X,\mathsf{S}^{i}(T_{Z}\boxtimes T_{Y})\otimes H^{j})\neq 0.\] By restricting to \(Y\times\{z\}\) and \(\{y\}\times Z\), for \(y,z\) general, it follows that there exist non-negative integers \(p\) and \(q\) such that \(p+q=i\), \(H^{0}(Y,\mathsf{S}^{p}T_{Y}\otimes H_{Y}^{j})\neq 0\) and \(H^{0}(Z,\mathsf{S}^{q}T_{Z}\otimes H_{Z}^{j})\neq 0\). Moreover, as \(p+q>2cj\), we also have either \(p>cj\) or \(q>cj\). Since \(c\) is arbitrary and \(H\) is ample, it follows from 1) that one of \(T_{Z}\) and \(T_{Y}\) is pseudo-effective. _Remark.-_ In general, if the tangent bundle \(T_{X}\) of a smooth projective variety \(X\) is pseudo-effective and splits into a direct sum \(F\oplus G\) of vector bundles, it is not clear to us whether one of \(F\) or \(G\) is pseudo-effective.
Indeed, the splitting of \(T_{X}\) in general does not imply the splitting of \(X\) itself, as simple abelian varieties or Hilbert modular varieties show. However, it is conjectured by the first author in [B3] that this splitting should come from a splitting of the universal cover of \(X\). Recall that a rank \(r\) vector bundle \(E\) on \(X\) is called _unitary flat_ if it is associated to an irreducible representation \(\pi_{1}(X)\to\operatorname{U}(r)\). _Conjecture 1_.: Let \(X\) be a non-uniruled projective manifold. Then \(T_{X}\) is pseudo-effective if and only if there exists a finite etale covering \(X^{\prime}\to X\) such that \(T_{X^{\prime}}\) contains a nonzero unitary flat subbundle. _Remarks.-_ 1) A unitary flat vector bundle is nef, hence pseudo-effective. So if \(T_{X^{\prime}}\) contains a nonzero unitary flat subbundle, it is pseudo-effective (Lemma 4, 3)), hence \(T_{X}\) is pseudo-effective (Lemma 4, 4)). 2) If the tangent bundle \(T_{X}\) of a non-uniruled projective manifold \(X\) contains a unitary flat subbundle \(F\), then \(F\) is actually a regular foliation with \(\det(F)\) torsion by [PT, Lemma 2.1 and Proposition 2.6]. We refer the reader to [PT] for more discussion on the structure of this kind of foliation. 3) Very recently J. Jia, Y. Lee and G. Zhong have studied in [JLZ] the non-uniruled smooth projective surfaces \(S\) with pseudo-effective tangent bundle. They prove that up to a finite etale covering, \(S\) is either an abelian surface or a product \(E\times C\) of an elliptic curve \(E\) and a curve \(C\) of genus \(\geq 2\). This solves Conjecture 1 in dimension two. In higher dimension, it is asked in [JLZ, Question 1.2] whether the pseudo-effectivity of the tangent bundle of an \(n\)-dimensional non-uniruled projective manifold \(X\) is equivalent to \(c_{n}(X)=0\) and \(\widehat{q}(X)>0\), where \(\widehat{q}(X)\) is the _augmented irregularity_ of \(X\). The answer is negative in general. For instance, let \(X=Y\times Z\) be the product of an irreducible simply connected Calabi-Yau variety \(Z\) with vanishing top Chern class2 and a variety \(Y\) of general type with \(q(Y)>0\). The tangent bundles of \(Z\) and \(Y\) are not pseudo-effective (see [HP1, Theorem 1.6] and [HP2, Proposition 4.11]). So Lemma 4 says that \(T_{X}\) itself is not pseudo-effective. Footnote 2: See for instance [KS, p. 1221] for the construction of threefolds with this property. Because of the decomposition (4), Conjecture 1 is closely related to the following conjecture proposed by J.V. Pereira and F. Touzet in [PT, § 6.5]: _Conjecture 2_.: Let \(X\) be a non-uniruled projective manifold, and let \(F\subsetneq T_{X}\) be a regular foliation such that \(F\) is stable for some polarization, \(c_{1}(F)=0\), and \(c_{2}(F)\neq 0\). Then \(F\) is algebraically integrable. **Proposition 11**.: _Assume that Conjecture 2 holds for \(\dim(X)\leq n\). Then Conjecture 1 holds for \(\dim(X)\leq n\)._ Proof.: Assume that \(T_{X}\) is pseudo-effective. Let \(T_{X}=F\oplus G\) be the decomposition (4). Then \(F\) is a regular foliation with \(c_{1}(F)=0\). By [D1, Theorem 6.9], there exist complex projective manifolds \(Y\) and \(Z\), a finite etale cover \(\pi:Y\times Z\to X\), and a regular foliation \(H\) on \(Y\) with \(c_{1}(H)=c_{2}(H)=0\) such that \(\pi^{*}F=p_{Y}^{*}H\oplus p_{Z}^{*}T_{Z}\). Since \(H\) is polystable [PT, Lemma 2.1], it is a direct sum of unitary flat bundles [UY, Corollary 8.1]. Therefore it suffices to prove that \(H\neq 0\).
Since \(c_{1}(F)=0\) and \(c_{1}(H)=0\), we get \(c_{1}(Z)=0\). Therefore there exists a finite etale covering \(A\times T\to Z\), where \(A\) is an abelian variety and \(T\) is a simply connected smooth projective variety with \(c_{1}(T)=0\) [B1]. Without loss of generality, we may assume that \(Z=A\times T\). Moreover, after replacing \(Y\) by \(A\times Y\), we may assume in addition that \(Z\) is simply connected. In particular, the tangent bundle \(T_{Z}\) is not pseudo-effective [HP1, Theorem 1.6], so \(T_{Y}\) is pseudo-effective by Lemma 4. Applying [PT, Theorem 2.2] to \(Y\) yields a regular foliation \(J\) on \(Y\) such that \(T_{Y}=H\oplus J\). Since \(\pi^{*}T_{X}=p_{Y}^{*}(H\oplus J)\oplus p_{Z}^{*}T_{Z}=\pi^{*}F\oplus p_{Y}^{*}J\), we have \(p_{Y}^{*}J\cong\pi^{*}G\). Since \(\mu_{\max}(G)<0\), \(J\) is not pseudo-effective (Lemma 4). Therefore \(H\neq 0\) and we are done. Conjecture 2 is wide open in general. It is known in the following cases, proved by F. Touzet and S. Druel ([To] and [D1]). **Proposition 12**.: _Conjecture 2 holds if \(\operatorname{rk}(F)\leq 3\) or \(\operatorname{rk}(F)=\dim(X)-1\). In particular, it holds for \(\dim(X)\leq 5\)._ Proof.: If \(\operatorname{rk}(F)\leq 3\), this is proved in [D1, Proposition 6.8]. Assume \(\operatorname{rk}(F)=\dim(X)-1\), and that \(F\) is not algebraically integrable. By [To, Theoreme 1.2], there exists an abelian variety \(A\), a smooth projective variety \(Y\) with \(c_{1}(Y)=0\), a finite etale covering \(\pi:A\times Y\to X\) and a linear foliation \(H\) on \(A\) such that \(\pi^{*}F=p_{A}^{*}H\oplus p_{Y}^{*}T_{Y}\). Since \(F\) is stable for some polarization, Proposition 8.1 of [D2] implies that \(Y\) is a point. Then \(\pi^{*}F=H\) is trivial, hence \(c_{2}(F)=0\), a contradiction. **Corollary**.: _Conjecture \(\mathbf{1}\) holds for \(\dim(X)\leq 5\)._
2310.00337
Quantization of Deep Neural Networks to facilitate self-correction of weights on Phase Change Memory-based analog hardware
In recent years, hardware-accelerated neural networks have gained significant attention for edge computing applications. Among various hardware options, crossbar arrays offer a promising avenue for efficient storage and manipulation of neural network weights. However, the transition from trained floating-point models to hardware-constrained analog architectures remains a challenge. In this work, we combine a quantization technique specifically designed for such architectures with a novel self-correcting mechanism. By utilizing dual crossbar connections to represent both the positive and negative parts of a single weight, we develop an algorithm to approximate a set of multiplicative weights. These weights, along with their differences, aim to represent the original network's weights with minimal loss in performance. We implement the models using IBM's aihwkit and evaluate their efficacy over time. Our results demonstrate that, when paired with an on-chip pulse generator, our self-correcting neural network performs comparably to those trained with analog-aware algorithms.
Arseni Ivanov
2023-09-30T10:47:25Z
http://arxiv.org/abs/2310.00337v1
Quantization of Deep Neural Networks to facilitate self-correction of weights on Phase Change Memory-based analog hardware ###### Abstract In recent years, hardware-accelerated neural networks have gained significant attention for edge computing applications. Among various hardware options, crossbar arrays offer a promising avenue for efficient storage and manipulation of neural network weights. However, the transition from trained floating-point models to hardware-constrained analog architectures remains a challenge. In this work, we combine a quantization technique specifically designed for such architectures with a novel self-correcting mechanism. By utilizing dual crossbar connections to represent both the positive and negative parts of a single weight, we develop an algorithm to approximate a set of multiplicative weights. These weights, along with their differences, aim to represent the original network's weights with minimal loss in performance. We implement the models using IBM's aihwkit and evaluate their efficacy over time. Our results demonstrate that, when paired with an on-chip pulse generator, our self-correcting neural network performs comparably to those trained with analog-aware algorithms. ## 1 Introduction An emerging area in neural network hardware is the analog compute paradigm. In order to get around the von-Neumann bottleneck, compute and memory are moved into a shared area, often implemented using crossbar arrays [11]. This allows us to reduce the computational complexity of certain operations, such as matrix-vector multiplication (MVM), from O(\(N^{2}\)) to O(1) by utilizing properties of analog electronics with Kirchhoff's laws. ### _Background and Challenges_ In all currently proposed variations of analog hardware, there is a trade-off between the device properties and the qualities required for a neural network implementation. Phase Change Memory (PCM) is a device variation which has shown promise in the field [1]. The main weakness of PCM devices is that they are susceptible to various kinds of noise. These are: write/programming noise, read noise, and weight/conductance drift. Write and read noise is applied when the respective action is performed on the analog weight, whilst the weight drift is tied to the inherent material properties of the PCM device. A concise description of a PCM device can be found in aihwkit's documentation [1]. "A PCM device consists of a small active volume of phase-change material sandwiched between two electrodes. In PCM, data is stored by using the electrical resistance contrast between a high-conductive crystalline phase and a low-conductive amorphous phase of the phase-change material. The phase-change material can be switched from low to high conductive state, and vice-versa, through applying electrical current pulses. The stored data can be retrieved by measuring the electrical resistance of the PCM device." These noise types can drive weights away from their intended values, leading to network inaccuracies. Existing techniques to counteract this include noise-aware training, differential weight representation, and global weight drift compensation. ### _Our Contribution_ We propose a solution that combines an extension of an existing technique for differential weight representation and weight quantization with a novel self-correcting mechanism. Our algorithm minimizes the error between the original and quantized weights by finding optimal quantization bins through simulated annealing.
The self-correcting mechanism further ensures long-term network stability.

## 2 Method

### _Theoretical setup_

We employ a two-element differential representation of each weight, which we can visualize in the simplified diagram in Figure 1. In reality, we will also need source lines and converters between analog and digital. This structure has previously been employed in analog neural networks [14] as it reduces the effects of weight drift/perturbations that affect the hardware. If all weights are shifted by 5 mV, a weight represented by a difference will stay the same. The inputs get sent to both the positive and negative weight for that input, which themselves accumulate onto the output line using Kirchhoff's laws. All weights in the system are represented with positive resistances, which means that we can subtract the accumulated output of the negative output line from that of the positive one. This lets us have negative weights represented by positive numbers in the system, which are often required for neural networks to work efficiently.

### _Network selection and training procedure_

Firstly, we need to select a problem and train a neural network to perform a task. For this experiment, we use the MNIST dataset and a simple convolutional neural network chosen from a known architecture that was previously successfully implemented on crossbar arrays [14]. We then impose some constraints during the training of the neural network. This includes adding weights below a value \(\epsilon\) to the loss function. This discourages weights \(w<\epsilon\), which would otherwise require either very small bins or a very small difference between two bins in our architecture. Both of these are unwanted as the noise will affect those weights in a much larger proportion to their size. We can visualize the effect of this constraint in Figure 2. We also add a constraint on large weights above a value \(\theta\). This is due to the conductance drift in the weights, which grows with the magnitude of the weight when using PCM-based crossbar arrays.

### _Simulated Annealing for Bin Optimization_

Then, we perform simulated annealing to find the best bins for the task. The constraints for the optimization are as follows:

* **Quantization Levels Constraint:** We should find two sets, one positive and one negative set. Each set should have \(N\) distinct quantization levels and together create a set of bins.
* **Bin Constraint:** The possible bins in any found quantization set are given by \(SQ=\{d_{\text{pos}},d_{\text{neg}},(d_{\text{pos}_{i}}-d_{\text{neg}_{j}}\,|\,i\in d_{\text{pos}},j\in d_{\text{neg}})\}\).
* **Divisibility Constraint:** Each quantization level in a set must be divisible by the smallest factor in the set. They do not, however, have to be linearly distributed.
* **SNR Constraint:** The step-multiple values \(d[0]_{pos}\) and \(d[0]_{neg}\) in the set should be larger than the write noise constraint \(\delta\), which depends on the hardware and the programming procedure.
* **Bin Difference Constraint:** The difference between the smallest positive and negative bins in the set (\(abs(d[0]_{pos}-d[0]_{neg})\)) must be larger than the read noise error threshold \(\epsilon\).

The quantities entering these constraints are:

* \(N\) - the number of distinct quantization levels in each set.
* \(d_{\text{pos}}\) - the set of positive quantization levels.
* \(d_{\text{neg}}\) - the set of negative quantization levels.
* \(\delta\) - the write noise constraint, which depends on the hardware and the programming procedure.
* \(\epsilon\) - the read noise error constraint, which depends on the trade-off between write noise and read noise.
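To make the bin constraint concrete, the following is a minimal NumPy sketch (our illustration, not the authors' implementation) of how a trained weight could be mapped onto the representable set \(SQ\) built from hypothetical positive and negative bin sets; all names and example values are assumptions for illustration only.

```python
import numpy as np

def build_sq(d_pos, d_neg):
    """Enumerate the representable values in SQ: a positive bin alone,
    a negative bin alone, or a positive-minus-negative difference."""
    sq = []
    for p in d_pos:
        sq.append((p, (p, 0.0)))        # realized by a positive bin only
    for n in d_neg:
        sq.append((-n, (0.0, n)))       # realized by a negative bin only (interpretation)
    for p in d_pos:
        for n in d_neg:
            sq.append((p - n, (p, n)))  # realized as a difference of bins
    return sq

def quantize(weights, d_pos, d_neg):
    """Map each weight to the closest value in SQ; also return the
    (positive, negative) conductance decomposition per weight."""
    sq = build_sq(d_pos, d_neg)
    values = np.array([v for v, _ in sq])
    quantized = np.empty_like(weights, dtype=float)
    decomposition = []
    for idx, w in np.ndenumerate(weights):
        k = int(np.argmin(np.abs(values - w)))
        quantized[idx] = sq[k][0]
        decomposition.append((idx, sq[k][1]))
    return quantized, decomposition

# Hypothetical multiple-based bin sets (N = 4 levels per set) and a toy weight matrix.
d_pos = [0.10 * m for m in range(1, 5)]   # 0.10, 0.20, 0.30, 0.40
d_neg = [0.12 * m for m in range(1, 5)]   # 0.12, 0.24, 0.36, 0.48
w = np.array([[0.05, -0.30], [0.22, 0.38]])
w_q, decomp = quantize(w, d_pos, d_neg)
```

The sketch treats a weight realized by a negative bin alone as a negative value, since the negative line is subtracted from the positive one; whether such single-negative-bin weights are allowed is a design choice of the actual optimizer.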
We provide details on the cost function, cooling schedule, and selection mechanism, showcasing how this approach leads to optimal bin selection. The goal of the algorithm is to minimize the error between the original weights and the weights quantized using a found quantization set combination. Below is a pseudocode implementation of the algorithm:

```
1: Input: Neural net weights \(W\), positive and negative parts \(W_{\text{pos}}\), \(W_{\text{neg}}\), number of bins \(N\)
2: Initialize \(d_{i}\) for \(W\in\{W_{\text{pos}},W_{\text{neg}}\}\)
3: Create quantization sets and calculate set \(SQ\)
4: Initialize current error, best error, and temperature \(T\)
5: for iteration \(i\) in range(iterations) do
6:   Update temperature \(T\)
7:   Perturb positive and negative bases
8:   Propose new positive and negative bins
9:   Compute error for proposed bins
10:  if proposed error \(<\) current error or random value \(<\exp\left(-\frac{\text{proposed error}-\text{current error}}{T}\right)\) then
11:    Update current positive base, negative base, and error
12:    if proposed error \(<\) best error then
13:      Update best positive base, negative base, and error
14:    end if
15:  end if
16: end for
17: Return best positive bins, negative bins
```
**Algorithm 1** Optimization of Bins Using Simulated Annealing

It is possible to choose, in step 7 of **Algorithm 1**, whether to enforce a linear constraint on the found bins such that any bin is a previous bin with the smallest factor \(N[0]\) added. A linear constraint can simplify the search, but might not find the best result.

### _Self-Correction Mechanism_

In our framework, we introduce a self-repairing mechanism that leverages the quantized weight levels to correct drifts in analog weight representations over time. The mechanism consists of four main components: an error threshold, a correction condition, a weight identification process, and an on-chip correction methodology.

#### 2.4.1 Error Threshold

To quantify the deviation in the network's state, we define an error threshold based on the modulus of the weight values. Specifically, if any weight value modulus grows beyond \(\frac{N}{3}\) of its initial quantized level, the weight is considered a candidate for adjustment. Here, \(N\) is the quantization level multiple that was used initially for that specific weight. The error threshold comes with a power/accuracy trade-off. If we wait too long before re-adjusting, a weight might drift to the point where its closest multiple is no longer the initial multiple. This leads to an irreversible degradation in the overall network performance for the remainder of its operational lifetime, as we will no longer be able to recover the initial network values until we reset the weights using a different mechanism.

#### 2.4.2 Correction Condition

The network-wide condition for triggering the self-correction mechanism is based on global error estimation. By periodically pulsing an identity matrix through the network and accumulating the outputs, we can compare the current state of each layer against a baseline recorded at \(t=0\). If the sum of the absolute differences across all weights exceeds a pre-defined global threshold, the self-correction mechanism is triggered.
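As an illustration of this correction condition, here is a minimal NumPy sketch (an assumption of ours, not the paper's code) of probing a linear layer with an identity matrix and comparing the accumulated output against the baseline recorded at \(t=0\); `layer_fn` and the threshold value are hypothetical.

```python
import numpy as np

def probe_layer(layer_fn, dim):
    """Pulse an identity matrix through a linear layer and accumulate the
    outputs; for an ideal linear layer this recovers its effective weight
    matrix (bias effects are ignored in this sketch)."""
    identity = np.eye(dim)
    return np.stack([layer_fn(identity[i]) for i in range(dim)])

def correction_needed(layer_fn, baseline, threshold):
    """Compare the current probe against the t = 0 baseline and report
    whether the accumulated absolute drift exceeds the global threshold."""
    current = probe_layer(layer_fn, baseline.shape[0])
    drift = np.abs(current - baseline).sum()
    return drift > threshold, drift

# Toy usage: a drifting 3x3 weight matrix behind a matrix-vector product.
weights = np.array([[0.2, 0.0, -0.1], [0.1, 0.3, 0.0], [0.0, -0.2, 0.4]])
layer = lambda x: weights.T @ x
baseline = probe_layer(layer, 3)
weights += 0.05                      # simulate a uniform drift
triggered, drift = correction_needed(layer, baseline, threshold=0.3)
```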
#### 2.4.3 Weight Identification

Once the correction condition is met, we proceed to identify the weights contributing most to the drift. This is done by selecting groups of weights, for example a layer of weights, and comparing the identity matrix output with its initial output at \(t=0\). If we have exceeded a layer-based drift difference threshold \(dt\), we move on to the correction. In some cases, it can be cheaper to just reprogram the entire network, but in other cases where we have noise-sensitive layers such as CNNs, it might be sufficient to only reprogram those.

Fig. 1: Simplified view of the two-element representation of two weights (\(w_{1}\) and \(w_{2}\)) and two inputs \(x_{1}\) and \(x_{2}\) creating a matrix multiplication output \(y_{mat}\) by using the difference between the positive and negative lines.

#### 2.4.4 On-Chip Correction Methodology

To correct the identified weights, we use short programming pulses to nudge them back to their original multiple-based states. The magnitudes and durations of these pulses are determined based on the difference between the current and target states of each weight, as well as the current magnitude of the weight. This can be performed by an on-chip pulse generator [22].

#### 2.4.5 Advantages and Applications

The self-correction mechanism enhances the network's resilience to hardware-induced drifts, thus making it more robust for long-term deployments in edge computing scenarios. Moreover, the mechanism opens the door to more aggressive quantization strategies, as minor errors introduced by quantization can be periodically corrected, further reducing the computational and storage overhead.

### Compression

Another benefit of the chosen multiple-quantization is that we can efficiently apply compression techniques, such as those used in weight clustering, to the weights. We can represent the positive and negative layers with integer matrices in the range [0, M], where M is the largest multiple-factor used. This allows us to use N-bit representations of the weights; more generally, an N-bit representation suffices whenever \(M\leq 2^{N}-1\), such as 4-bit weight representations if \(M<16\). The lower representation range of values yields more repetition in the weight matrices and allows for more aggressive compression of the weights.

### Testing methodology

The accuracy of the self-repairing and the hardware-awarely trained networks is tested in time steps of 5 minutes. During every step, noise is added to the weights. At every timestep, the self-repairing neural network is probed for repair if a threshold of the cumulative layer error is exceeded. We compare the networks over 20 timesteps and note the accuracies in Figure 3 and Figure 4.

## 3 Results

We train the candidate CNN network in a traditional fashion and achieve an F1 accuracy of 97.7% on the MNIST dataset. We then apply the quantization and visualize the distribution of the weights in Figure 2. We can see that, due to our constraints on the network weights enforced by the loss function, we find the first bins at a distance \(\epsilon\) away from 0. This quantization of weights keeps our initial accuracy of 97.7%. We then evaluate the network with 20 time-steps of 300 seconds of drift each. At every time-step, we let the self-repairing network adjust its weights to the closest positive and negative multiples. Alongside the self-repairing network, we train a hardware-aware analog neural network using the same network architecture and plot its performance over the same timespan in Figure 4. Both models were trained with the same analog noise configuration. The PCM noise configuration is given in Appendix A.
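The evaluation loop of the testing methodology can be sketched roughly as follows, assuming aihwkit's inference API (`convert_to_analog`, `program_analog_weights`, and the `drift_analog_weights` function referenced in the Discussion); the exact call signatures may differ between aihwkit versions, and `evaluate`, `global_drift_exceeded`, and `self_repair` are hypothetical helpers standing in for Sections 2.4.2-2.4.4.

```python
from aihwkit.nn.conversion import convert_to_analog

# `digital_model` is the trained torch model and `rpu_config` is the
# configuration from Appendix A (both assumed to be in scope).
analog_model = convert_to_analog(digital_model, rpu_config)
analog_model.eval()
analog_model.program_analog_weights()            # apply programming noise once

accuracies = []
for step in range(1, 21):                        # 20 time steps of 300 s each
    analog_model.drift_analog_weights(300.0 * step)
    acc = evaluate(analog_model)                 # hypothetical accuracy helper
    if global_drift_exceeded(analog_model):      # Sec. 2.4.2 correction condition
        self_repair(analog_model)                # Sec. 2.4.3-2.4.4 pulse correction
        acc = evaluate(analog_model)
    accuracies.append(acc)
```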
## 4 Discussion

We find that the self-repairing network manages to keep its accuracy stable once it corrects itself, but between corrections it has a much wider variance of accuracy compared to the analog-awarely trained network. The constraint that weights be larger than \(\delta\) allows us to represent small weights as a combination of a positive and a negative weight. This is useful as shown by [22], where the proposed on-chip pulse generator has a significantly larger pulse error for smaller pulses: pulses of size 100 nA have up to 6% average programming error, whilst pulses of 1.28 mA have a 0.2% average error. Note that since we are working with small numbers, a high enough read noise will, through propagation of uncertainty, lead to a much larger percentage error if a positive and a negative bin are close to each other. It is therefore important that we put a constraint on how close the positive and negative bin multiples are allowed to be.

### Layer-specific findings

We confirm the findings of [1], which claim that CNNs are more susceptible to noise in analog format. This was found via a larger loss of accuracy when drift was applied to the CNN layers compared to dense layers. We also find that there is inter-layer dependency between the layers given the type and amount of noise applied. aihwkit's **drift_analog_weights** function drifts weights equally if the same RPU config is given, so the layers often drift in a similar stochastic fashion. As a result, adjusting one single layer that has drifted beyond a threshold \(dt\) will often degrade the performance, as the inter-layer weight representation is dramatically changed instead of stochastically translated by the noise. An approach where the entire network is re-adjusted can therefore sometimes be better, given the network and the conditions.

Figure 3: Digital weights over time with drift applied every 300 seconds. The red points signify accuracy after drift, while the blue points after a dotted red line signify the accuracy after adjustment.

Figure 2: Scatter plot of the number of weights in each quantized bin in the set SQ with the best found quantization. Red dots signify negative weights, blue positive weights, and green the weights defined using combinations of a positive and a negative weight.

### Future research

In order to assess the methodology in practice, both techniques need to be implemented on hardware, and the results should be compared after periods of time. A more robust approach would be to investigate the feasibility of an algorithm that combines the two methodologies, meaning that we do hardware-aware training whilst keeping the weights constrained close to multiples. Another interesting area to explore is self-repair using bit-sliced network weights. This means that a network is represented with weights that are sliced into binary representations of 0s and 1s. This would make the weight adjustment scheme much simpler and more flexible to various weights, at the cost of more required hardware connections per weight. Lastly, it would be interesting to see how the methodology performs on other types of analog memory architectures, such as RRAM, which do not suffer from the same kinds of noise as PCM-based architectures.
## 5 Conclusion

We show that by using a constrained bin weight scheme, we can regain lost performance over time using a weight-multiple adjustment over a positive and negative part of the weight. We do, however, note that by not performing analog-aware training for PCM modules, the network becomes less stable. Despite recovering the accuracy, the drift affects the result between the resets more negatively than when using purely analog-awarely trained neural networks.

Figure 4: Analog-awarely trained weights over time with drift applied every 300 seconds.

## Appendix A Analog Noise Configuration

The following Python code snippet provides the configuration for the analog noise in the Phase-Change Memory (PCM) model. It sets up various parameters including weight noise, clip type, and drift compensation.

```python
# Import paths may vary between aihwkit versions.
from aihwkit.simulator.configs import InferenceRPUConfig
from aihwkit.simulator.configs.utils import (WeightClipType, WeightModifierType,
                                             WeightNoiseType)
from aihwkit.inference import PCMLikeNoiseModel, GlobalDriftCompensation

rpu_config = InferenceRPUConfig()
rpu_config.forward.out_res = -1.0  # Turn off (output) ADC discretization.
rpu_config.forward.w_noise_type = WeightNoiseType.ADDITIVE_CONSTANT
rpu_config.forward.w_noise = 0.02  # Short-term w-noise.

rpu_config.clip.type = WeightClipType.FIXED_VALUE
rpu_config.clip.fixed_value = 1.0
rpu_config.modifier.pdrop = 0.03  # Dropconnect.
rpu_config.modifier.type = WeightModifierType.ADD_NORMAL  # Fwd/bwd weight noise.
rpu_config.modifier.std_dev = 0.1
rpu_config.modifier.rel_to_actual_wmax = True

# Inference noise model.
rpu_config.noise_model = PCMLikeNoiseModel(g_max=25.0)

# Drift compensation.
rpu_config.drift_compensation = GlobalDriftCompensation()
```
2302.00126
Simulated sulfur K-edge X-ray absorption spectroscopy database of lithium thiophosphate solid electrolytes
X-ray absorption spectroscopy (XAS) is a premier technique for materials characterization, providing key information about the local chemical environment of the absorber atom. In this work, we develop a database of sulfur K-edge XAS spectra of crystalline and amorphous lithium thiophosphate materials based on the atomic structures reported in Chem. Mater., 34, 6702 (2022). The XAS database is based on simulations using the excited electron and core-hole pseudopotential approach implemented in the Vienna Ab initio Simulation Package. Our database contains 2681 S K-edge XAS spectra for 66 crystalline and glassy structure models, making it the largest collection of first-principles computational XAS spectra for glass/ceramic lithium thiophosphates to date. This database can be used to correlate S spectral features with distinct S species based on their local coordination and short-range ordering in sulfide-based solid electrolytes. The data is openly distributed via the Materials Cloud, allowing researchers to access it for free and use it for further analysis, such as spectral fingerprinting, matching with experiments, and developing machine learning models.
Haoyue Guo, Matthew R. Carbone, Chuntian Cao, Jianzhou Qu, Yonghua Du, Seong-Min Bak, Conan Weiland, Feng Wang, Shinjae Yoo, Nongnuch Artrith, Alexander Urban, Deyu Lu
2023-01-31T22:23:14Z
http://arxiv.org/abs/2302.00126v1
Simulated sulfur K-edge X-ray absorption spectroscopy database of lithium thiophosphate solid electrolytes

###### Abstract

X-ray absorption spectroscopy (XAS) is a premier technique for materials characterization, providing key information about the local chemical environment of the absorber atom. In this work, we develop a database of sulfur K-edge XAS spectra of crystalline and amorphous lithium thiophosphate materials based on the atomic structures reported in _Chem. Mater._, 34, 6702 (2022). The XAS database is based on simulations using the excited electron and core-hole pseudopotential approach implemented in the Vienna Ab initio Simulation Package. Our database contains 2681 S K-edge XAS spectra for 66 crystalline and glassy structure models, making it the largest collection of first-principles computational XAS spectra for glass/ceramic lithium thiophosphates to date. This database can be used to correlate S spectral features with distinct S species based on their local coordination and short-range ordering in sulfide-based solid electrolytes. The data is openly distributed via the Materials Cloud, allowing researchers to access it for free and use it for further analysis, such as spectral fingerprinting, matching with experiments, and developing machine learning models.

Department of Chemical Engineering, Columbia University, New York, New York 10027, USA
Computational Science Initiative, Brookhaven National Laboratory, Upton, New York 11973, USA
National Synchrotron Light Source II, Brookhaven National Laboratory, Upton, New York 11973, USA
Material Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, Maryland 20899, USA
Interdisciplinary Science Department, Brookhaven National Laboratory, Upton, New York 11973, USA
Columbia Center for Computational Electrochemistry, Columbia University, New York, New York 10027, USA
Materials Chemistry and Catalysis, Debye Institute for Nanomaterials Science, Utrecht University, 3584 CG Utrecht, The Netherlands
Columbia Electrochemical Energy Center, Columbia University, New York, New York 10027, USA
Center for Functional Nanomaterials, Brookhaven National Laboratory, Upton, New York 11973, USA

## 1 Background & Summary

The glass/ceramic lithium thiophosphates (_gc_-LPS) along the composition line Li\({}_{2}\)S-P\({}_{2}\)S\({}_{5}\) are considered promising electrolytes for solid-state batteries because of their superionic lithium conductivity at room temperature (>10\({}^{-3}\) S cm\({}^{-1}\)), soft mechanical properties, and low grain boundary resistance.[1, 2] Although _gc_-LPS lacks long-range atomic ordering, it exhibits characteristic short-ranged structural motifs that vary with the LPS composition and can affect the Li conductivity. **Figure 1** illustrates how the local coordination of S atoms with Li and P atoms in the crystalline phases of LPS changes with increasing Li\({}_{2}\)S content \(x\) in (Li\({}_{2}\)S)\({}_{x}\)(P\({}_{2}\)S\({}_{5}\))\({}_{1-x}\): P\({}_{2}\)-S and P-S-Li\({}_{2}\) in LiPS\({}_{3}\) ((Li\({}_{2}\)S)\({}_{0.5}\)(P\({}_{2}\)S\({}_{5}\))\({}_{0.5}\)); P\({}_{2}\)-S, P-S-Li\({}_{2}\), and P-S-Li\({}_{3}\) in Li\({}_{7}\)P\({}_{3}\)S\({}_{11}\) ((Li\({}_{2}\)S)\({}_{0.7}\)(P\({}_{2}\)S\({}_{5}\))\({}_{0.3}\)); P-S-Li\({}_{2}\) and P-S-Li\({}_{3}\) in Li\({}_{3}\)PS\({}_{4}\) ((Li\({}_{2}\)S)\({}_{0.75}\)(P\({}_{2}\)S\({}_{5}\))\({}_{0.25}\)); and P-S-Li\({}_{3}\), S-Li\({}_{3}\), and S-Li\({}_{2}\) in Li\({}_{7}\)PS\({}_{6}\) ((Li\({}_{2}\)S)\({}_{0.875}\)(P\({}_{2}\)S\({}_{5}\))\({}_{0.125}\)).
To understand the short-range ordering and its impact on Li conductivity in _gc_-LPS, several characterization tools have previously been employed, including X-ray diffraction (XRD) [3-18], Raman spectroscopy [10, 12-18], nuclear magnetic resonance (NMR) spectroscopy [19-30], X-ray photoelectron spectroscopy (XPS) [16, 22-34], and X-ray absorption spectroscopy (XAS) [16, 24, 33, 35-43]. In particular, S K-edge XAS measurements have revealed changes in the local coordination around S atoms and a red shift of the S K-edge.[24] Tender energy XAS spectroscopy is therefore a natural choice to probe the local geometric and electronic structures in _gc_-LPS. Sulfur is known to participate to a greater extent than phosphorus in interfacial reactions during cycling, forming Li\({}_{2}\)S at the negative electrode or other metal sulfides at the positive electrode (_e.g._, NiS).[33] In contrast, phosphorus is mostly bound in the center of PS\({}_{4}\) tetrahedra (as P\({}^{5+}\) species), except for the direct P-P bonding in P\({}_{2}\)S\({}_{6}^{4-}\) motifs (as P\({}^{4+}\) species). There is no direct Li-P bonding in _gc_-LPS and hence sulfur is more sensitive to the change in Li stoichiometry. Based on these considerations, sulfur K-edge spectroscopy can be expected to yield important insights into the electrochemical reactions in LPS-based solid-state batteries.
Commonly, XAS spectra are interpreted by comparison with characteristic features in spectra taken from reference materials; however, this approach is challenging when the composition and structure of the material cannot be readily identified.[35] In order to aid with the interpretation of XAS measurements and to understand the nature of the short-range ordering and its impact on properties such as Li conductivity and the electronic structure, first-principles XAS simulations have previously been conducted.[35, 36, 37, 38, 39, 40, 41, 42, 43] These simulations involve the modelling of the excitation of a core electron into the conduction bands, leaving behind a core hole. Within methods based on density functional theory (DFT) band structure, two approaches are commonly used to account for the core-hole final state effect: _(i)_ the excited electron and core-hole (XCH) method with self-consistent relaxation of valence electrons[44, 45, 46], which is implemented in, e.g., XSPECTRA[47] and the Vienna _Ab Initio_ Simulation Package (VASP)[48], and _(ii)_ many-body perturbation theory based on the Bethe-Salpeter equation treating the screening of valence electrons with linear response, which is implemented in, e.g., OCEAN[49] and EXCITING[50]. Generally, the many-body perturbation theory-based method is computationally more demanding.[35] In comparison, the XCH approach can provide reasonable accuracy that is sufficient to compare trends with experimental measurements at moderate computational cost[36, 44, 51, 52, 53, 54] and is therefore a good choice for the compilation of a large XAS database. In addition, Pascal _et al._ demonstrated that the XCH approach can reliably predict the features of the S K-edge of distinct coordination environments in Li-S batteries.[36] XAS simulations have so far been limited to crystalline LPS phases, and to our knowledge no XAS simulations of _gc_-LPS have been reported owing to the complexity of the glassy phases. We recently mapped the phase diagram of _gc_-LPS by combining DFT, artificial neural network (ANN) potentials, genetic-algorithm (GA) sampling, and _ab initio_ molecular dynamics (AIMD) simulations, to compile a database of stable and metastable _gc_-LPS atomic structures.[55] This _gc_-LPS phase diagram is the foundation for the herein reported database of simulated _gc_-LPS S K-edge XAS spectra. Here, we report the S K-edge XAS simulations for an extensive database of LPS/_gc_-LPS structures. A workflow for automated calculations using the XCH approach (**Figure 2**) was implemented using the open-source Pymatgen package[56] and VASP.[48] The final database contains 2681 simulated S K-edge XAS spectra for 66 crystalline and glassy structures. Where possible, the simulated spectra were benchmarked by comparison with tender energy XAS spectroscopy measurements. The database is distributed via the Materials Cloud repository,[57] enabling open access by other researchers for further exploration. The workflow is available as **Supporting Information**, providing a tool for researchers to construct their own XAS spectral databases.

## Methods

### Density Functional Theory Calculations

All DFT calculations were carried out within the projector-augmented-wave (PAW) approach[58, 59] as implemented in VASP[58, 60]. The simulation parameters were carefully tested to ensure numerical convergence with respect to the energy cut-off for the plane-wave basis set, the supercell size for XCH calculations, the density of the k-point meshes, and the number of unoccupied bands.
Different exchange-correlation functionals, pseudopotentials from the VASP library, and core-hole charges (full vs. half) were compared to determine the optimal VASP input parameters for XAS simulations. The following parameters yielded converged results that compared best with experimental reference spectra. Ground-state energy calculations and XAS simulations were performed with the local-density approximation (LDA) exchange-correlation functional[61] and the VASP GW pseudopotentials, which achieve a more accurate description of the post-edge region than regular LDA potentials because the GW pseudopotentials were optimized to yield more accurate scattering properties at high energies well above the Fermi level.

Figure 1: Schematic illustration of the local coordination of S atoms with Li and P atoms in selected (Li\({}_{2}\)S)\({}_{x}\)(P\({}_{2}\)S\({}_{5}\))\({}_{1-x}\) crystalline structures. Li: green; S: yellow; P: purple. LiPS\({}_{3}\): orange region; Li\({}_{7}\)P\({}_{3}\)S\({}_{11}\): red region; Li\({}_{3}\)PS\({}_{4}\): green region; Li\({}_{7}\)PS\({}_{6}\): blue region.

Figure 2: Flowchart with the workflow for building the sulfur K-edge XAS spectral database of crystalline and amorphous lithium thiophosphate materials.

A kinetic energy cut-off for the wavefunction of 400 eV, supercells with an edge length of at least 10-15 Å in each direction, and a full core hole were used for all calculations. Three times as many unoccupied bands as occupied bands were included in all calculations to ensure the convergence of the conduction band in the relevant energy range. Gaussian smearing with a width of 0.05 eV was used, and total energies were converged to better than 10\({}^{-5}\) eV/atom. The first Brillouin zone was sampled using \(\Gamma\)-centered k-point meshes with a resolution of 0.25 Å\({}^{-1}\) generated with VASP's automatic sampling method. In XCH simulations, a constant Lorentzian broadening of 0.05 eV was introduced. Additional broadening can be added during post-processing when comparing to experiment, as discussed below. In XAS simulations with the XCH approach, the final state is treated self-consistently subject to the presence of a core hole in the S 1s orbital [44, 53]. The XAS spectrum is calculated as the imaginary part of the frequency-dependent dielectric matrix averaged over the diagonal matrix elements within the PAW frozen-core approximation. For comparison with measured reference spectra, the simulated XAS spectra were convoluted with a Gaussian function with a full width at half maximum of 0.5 eV to simulate instrument broadening and with a Lorentzian function with an energy-dependent width of 0.59 eV + \(a~{}\times(E_{\mathrm{c}}-E_{\mathrm{cbm}})\) to simulate the core-hole lifetime broadening and quasiparticle lifetime broadening, where \(a\) is a fitting parameter and \(E_{\mathrm{c}}\) and \(E_{\mathrm{cbm}}\) are DFT energy levels of the conduction bands and the conduction band minimum, respectively. Each absorption edge was aligned using the excitation onset determined from the total energy difference between the final state and the initial state, following a previously reported procedure [54].
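For illustration, the following is a minimal NumPy sketch of the post-processing broadening just described (an energy-dependent Lorentzian followed by a fixed-width Gaussian for instrument resolution); the energy grid, normalization, and any parameter values not quoted in the text are assumptions of the sketch, not part of the distributed scripts.

```python
import numpy as np

def broaden(energy, intensity, e_cbm, a, fwhm_gauss=0.5, gamma0=0.59):
    """Broaden a stick-like spectrum: Lorentzian width gamma0 + a*(E - E_cbm),
    then a Gaussian of fixed FWHM (values in eV, as quoted in the text)."""
    grid = np.linspace(energy.min() - 5.0, energy.max() + 5.0, 4000)
    out = np.zeros_like(grid)
    # Energy-dependent Lorentzian (core-hole and quasiparticle lifetime).
    for e_i, mu_i in zip(energy, intensity):
        gamma = gamma0 + a * max(e_i - e_cbm, 0.0)
        out += mu_i * (gamma / (2.0 * np.pi)) / ((grid - e_i) ** 2 + (gamma / 2.0) ** 2)
    # Gaussian instrument broadening via convolution on the uniform grid.
    sigma = fwhm_gauss / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    de = grid[1] - grid[0]
    kx = np.arange(-5.0 * sigma, 5.0 * sigma + de, de)
    kernel = np.exp(-0.5 * (kx / sigma) ** 2)
    kernel /= kernel.sum()
    return grid, np.convolve(out, kernel, mode="same")
```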
### Structure selection

The structures for the XAS simulations were selected from the LPS structure library by Guo _et al._[55]. The _gc_-LPS structures in this dataset were generated by iterative manipulation of the known crystal structures along the (Li\({}_{2}\)S)\({}_{x}\)(P\({}_{2}\)S\({}_{5}\))\({}_{1-x}\) composition line (LiPS\({}_{3}\), Li\({}_{4}\)P\({}_{2}\)S\({}_{7}\), Li\({}_{7}\)P\({}_{3}\)S\({}_{11}\), \(\alpha\)-Li\({}_{3}\)PS\({}_{4}\), \(\beta\)-Li\({}_{3}\)PS\({}_{4}\), \(\gamma\)-Li\({}_{3}\)PS\({}_{4}\), and Li\({}_{7}\)PS\({}_{6}\)) using a previously established protocol [62, 63]. In short, (A) a supercell of a crystal structure was created, (B) either Li and S atoms were removed with a ratio of 2:1 (Li\({}_{2}\)S), or P and S atoms were removed with a ratio of 2:5 (P\({}_{2}\)S\({}_{5}\)), and (C) low-energy configurations of the new composition were determined with a genetic (evolutionary) algorithm using an artificial neural network (ANN) interatomic potential as implemented in the atomic energy network (ænet) package [64, 65, 66]. For further details we refer the reader to reference [55]. From this dataset, the (Li\({}_{2}\)S)\({}_{x}\)(P\({}_{2}\)S\({}_{5}\))\({}_{1-x}\) structures with the lowest formation energies relative to Li\({}_{2}\)S and P\({}_{2}\)S\({}_{5}\) at each composition were chosen for XAS simulations. In addition, the above crystalline LPS compounds and the crystal structures of the sulfur-deficient Li\({}_{2}\)PS\({}_{3}\) and Li\({}_{48}\)P\({}_{16}\)S\({}_{61}\) were included.

### Automated DFT workflow for constructing XAS database

On the basis of the determined parameters from the benchmark systems, a workflow was devised for automated XCH calculations for generating an XAS database (**Figure 2**). For each optimized LPS structure, the workflow automatically determines, based on symmetry, the inequivalent S sites and their respective weights. Our implementation makes use of the symmetry tools from the Pymatgen package [56]. Pymatgen functions were further used to create supercells and generate VASP input files for single-point LDA calculations to obtain the ground state energy, and for XCH calculations for all symmetrically distinct S atoms in the supercell. Raw data from completed DFT calculations are post-processed, which mainly involves two steps: 1) applying the peak alignment to distinct S atoms using the excitation onset determined from the total energy difference between the final state and the initial state, and 2) averaging the aligned spectra with the correct weights to compute the XAS spectrum of the whole system. Note that the data without averaging contains information about the XAS features of local atomic structures, which could be used for further exploration, _e.g._, machine-learning-assisted spectral interpretation.
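The symmetry step of this workflow can be sketched with Pymatgen's symmetry analyzer as follows; the function name, file input, and tolerance are illustrative assumptions and not taken from the distributed workflow scripts.

```python
from pymatgen.core import Structure
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer

def inequivalent_s_sites(structure_file, symprec=0.01):
    """Return one representative index and the multiplicity (weight) for each
    symmetry-inequivalent S site, i.e., one XCH calculation per entry."""
    structure = Structure.from_file(structure_file)
    sym_struct = SpacegroupAnalyzer(structure, symprec=symprec).get_symmetrized_structure()
    sites = []
    for group in sym_struct.equivalent_indices:
        representative = group[0]
        if structure[representative].specie.symbol == "S":
            sites.append((representative, len(group)))
    return sites

# Example usage on a hypothetical POSCAR file:
# for index, weight in inequivalent_s_sites("POSCAR"):
#     set_up_xch_calculation(index, weight)   # hypothetical helper
```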
#### Sample preparation and XAS measurements

The experimental S K-edge XAS spectra were measured at the 8-BM and 7-ID-2 beamlines at National Synchrotron Light Source II (NSLS-II). The P\({}_{2}\)S\({}_{5}\) spectrum was measured at 8-BM in fluorescence yield (FY) mode, and Li\({}_{2}\)S, NiS, and \(\beta\)-LPS were measured at 7-ID-2 in electron yield (EY) mode. We used an unfocused beam with spot sizes of 2.5 mm x 5 um and 1 mm x 1 mm at 8-BM and 7-ID-2, respectively. Prior to the measurements, the samples were pressed into pellets with 1 cm diameter. For the 8-BM measurement, the sample was sealed between Kapton tape and polypropylene film in an argon-filled glovebox, and then transferred into the helium chamber at the beamline. For the 7-ID-2 measurements, the samples were mounted on a sample bar and sealed in an aluminized polymer bag in the glovebox, and then transferred into the vacuum chamber at the beamline using an argon-filled transfer bag. The experimental XAS spectra were processed with the Athena software package [67].

### Data Records

The database contains 66 structures with between 12 and 162 atoms, 18 of which are crystalline and the rest are amorphous (see Table 1). For each structure, a ground state self-consistent field (SCF) calculation is computed and stored in a directory named input_SCF. For every symmetrically inequivalent S site (between 1 and 86 per structure), a core-hole calculation is performed using the S core-hole pseudopotential. For each individual VASP calculation, we provide all input files except the pseudopotentials (POTCAR files), since those are distributed with VASP: INCAR, POSCAR and KPOINTS. That way, calculations can be rerun after reconstructing the appropriate potential file for each calculation. Due to the large size of many VASP output files, we only keep those necessary for presenting and reproducing the spectral database. These include the INCAR, POSCAR and KPOINTS input files, and the OSZICAR output file (to demonstrate the convergence of the calculation). Additionally, we save the Fermi energy where relevant (efermi.txt) and post-process the XAS from the OUTCAR (mu.dat; note: most regions with zero intensity are discarded to save space). Each spectrum consists of four columns: the energy, and the three components of the XAS (corresponding to the three polarization directions along the Cartesian coordinates). VASP 6.2.1 was used with GPU acceleration, and no post-processing was performed, such that the database is essentially preserved exactly as output by VASP. We provide short post-processing scripts for extracting key observables, such as the energy and spectral intensity. The spectral data are stored in the Materials Cloud ([https://www.materialscloud.org](https://www.materialscloud.org)).

\begin{table} \begin{tabular}{|l l l l|} \hline **Database** & **Compositions** & **Structures** & **S Sites** \\ \hline Crystalline & 9 & 18 & 141 \\ \hline Glassy & 21 & 48 & 2540 \\ \hline Total & 28 & 66 & 2681 \\ \hline \end{tabular} \end{table} Table 1: The construction of the S K-edge XAS database of _gc_-LPS. Note that two compositions (Li\({}_{2}\)PS\({}_{4}\) and Li\({}_{2}\)P\({}_{2}\)S\({}_{7}\)) appear in both crystalline and glassy structure models, so the total number of compositions is 28 instead of 30.
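For users of the database, reading one of the mu.dat files and forming the weighted, aligned average of site spectra could look like the following minimal NumPy sketch; the file paths, onset shifts, and weights are assumptions of the sketch and do not reproduce the distributed post-processing scripts.

```python
import numpy as np

def site_spectrum(mu_dat_path, onset_shift=0.0):
    """Load one site's mu.dat (energy plus three polarization components),
    average over polarizations, and apply the edge-alignment shift derived
    from the final-state/initial-state total-energy difference."""
    data = np.loadtxt(mu_dat_path)
    energy = data[:, 0] + onset_shift
    mu = data[:, 1:4].mean(axis=1)
    return energy, mu

def average_spectrum(site_files, shifts, weights, grid):
    """Weight-average the aligned site spectra onto a common energy grid."""
    total = np.zeros_like(grid)
    for path, shift, w in zip(site_files, shifts, weights):
        e, mu = site_spectrum(path, shift)
        total += w * np.interp(grid, e, mu, left=0.0, right=0.0)
    return total / sum(weights)
```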
### Technical Validation

_Benchmark of the XAS simulations_

Our calculations started with the benchmark of the XAS simulations using reference sulfur compounds. Some of the most relevant compounds from _gc_-LPS/electrode interfacial degradation, including Li\({}_{2}\)S, P\({}_{2}\)S\({}_{5}\) and NiS, were selected as validation systems. To validate the VASP XAS simulations, the simulated spectra were compared against experimental measurements for three benchmark systems (Li\({}_{2}\)S, P\({}_{2}\)S\({}_{5}\) and NiS) as shown in **Figure 3**. The simulated spectra successfully reproduce the main features of the reference systems. It is known that Kohn-Sham DFT underestimates band gaps and concomitantly band widths, due to inaccurate estimates of quasiparticle (excitation) energies based solely on the Kohn-Sham eigenspectrum [36]. Therefore, the calculated XAS spectra may underestimate peak separations compared to experiments, as seen in the Li\({}_{2}\)S spectrum. The XCH simulations successfully reproduce the three main features in the Li\({}_{2}\)S spectrum at 2474, 2476.5, and 2484 eV, as well as the peak shoulder at 2480 eV, while the energy separation of the first two peaks was slightly underestimated. The relative intensities of these peaks are mostly successfully reproduced, with a slight underestimate of the intensity of the third peak at 2484 eV. The spectrum of P\({}_{2}\)S\({}_{5}\) exhibits a pronounced pre-edge feature. The structure of (P\({}_{2}\)S\({}_{5}\))\({}_{2}\) is composed of two types of S atoms: 4 terminal S and 6 bridging S, denoted as S(1) and S(2), respectively, in **Figure 3b** and indicated in structural motifs in **Figure 1**. The terminal S atom is coordinated with one P atom, and the P-S bond length is around 1.9 Å. In comparison, the bridging S atom is coordinated with two P atoms; the charge distribution over the bridging S is less negative than the terminal S, leading to a longer P-S bond length of 2.1 Å and a blueshift of the absorption onset as shown in Figure 3b. Our results are the first demonstration that the pre-edge and main edge of the S K-edge in P\({}_{2}\)S\({}_{5}\) can be attributed to two types of differently coordinated S atoms. This also demonstrates that XCH simulations can distinguish the inequivalent absorption sites. To study the interfacial reaction between LPS and Ni-based cathodes, we also computed the S K-edge XAS for NiS, a common degradation product of LPS in contact with Ni-based cathode materials, without and with a Hubbard \(U\) correction of 3.9 eV to account for the correlation of the Ni _d_-band electrons. As shown in **Figure 3c**, the Hubbard \(U\) correction does not change the main absorption edge, but leads to a broadened absorption edge and increased intensity in the post-edge region. While the \(U\) value is dependent on the species and materials and must be tested carefully, the overall excellent agreement between the XCH simulations and experiments on exemplary reference compounds demonstrates the robustness of our approach and the reliability of our dataset [41].

### Validation of the DFT-calculated S K-edge in crystalline \(\beta\)-Li\({}_{3}\)PS\({}_{4}\)

To further validate the simulated XAS spectra of the sampled _gc_-LPS phases, we also conducted experimental measurements of XAS reference data for LPS crystal structures. The S K-edge in crystalline \(\beta\)-Li\({}_{3}\)PS\({}_{4}\) was measured at NSLS-II. As shown in **Figure 3d**, our computed XAS spectra are in excellent agreement with the experimental data. The absorption edge is around 2471 eV, which is likely due to the S 1s to S 3p \(\sigma^{*}\) transition (dumbbell-shaped S\({}_{2}\)-).[24] The simulated XAS not only reproduces most features, but also yields a comparable peak splitting for the S K-edge in the \(\beta\)-Li\({}_{3}\)PS\({}_{4}\) crystal. In \(\beta\)-Li\({}_{3}\)PS\({}_{4}\), there are three inequivalent S sites (denoted as S\({}^{(3)}\), S\({}^{(4)}\), S\({}^{(5)}\) in **Figures 1 and 3d**), whose local coordination is shown in **Figure 1**. While the charge distribution at the three S sites is comparable, the P-S and Li-S bond lengths exhibit a sizable variation. In this case, the core-level chemical shift cannot be simply explained by the bond length and charge transfer.
It is an interesting future research direction to develop optimal structural and chemical descriptors for the interpretation of XAS spectral features in _gc_-LPS and especially the interphases at solid-state interfaces with Li metal anodes and Ni-based cathodes.

Figure 3: Benchmark results of XAS simulations of (a) Li\({}_{2}\)S, (b) P\({}_{2}\)S\({}_{5}\), (c) NiS, and (d) \(\beta\)-Li\({}_{3}\)PS\({}_{4}\). In each subfigure, the black curve indicates the experimental spectrum, and the red curve indicates the simulated spectrum. Spectra calculated for different distinct S sites are shown as dashed lines in (b) and (d). The experimental spectra in (a, c, d) were measured in electron yield mode at Beamline 7-ID-2 of NSLS-II. The experimental spectrum in (b) was measured in fluorescence yield (FY) mode at Beamline 8-BM of NSLS-II, where the peaks of the P\({}_{2}\)S\({}_{5}\) experimental spectrum are damped due to self-absorption in FY mode. DFT calculations with (orange) and without (red) a Hubbard \(U\) correction for Ni are shown in (c). The labels of different S sites in (d) correspond to the local motifs in Figure 1b.

## Usage Notes

See Code Availability.

### Code Availability

Short scripts used for extracting useful information from the VASP output files, such as the XAS and energies, are provided with the database.

### Data Availability

The data set can be obtained from the Materials Cloud (doi.org/10.24435/materialscloud:92-0a) and contains VASP input and output files, Python scripts, metadata, and processed data.

### Acknowledgements

We acknowledge financial support by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, Contract No. DE-SC0012704. The research used the theory and computational resources of the Center for Functional Nanomaterials and Beamlines 7-ID-2 and 8-BM of NSLS-II, which are the U.S. DOE Office of Science User Facilities, and the Scientific Data and Computing Center, a component of the Computational Science Initiative, at Brookhaven National Laboratory under the Contract No. DE-SC0012704. We also acknowledge computing resources from Columbia University's Shared Research Computing Facility project, which is supported by NIH Research Facility Improvement Grant 1G20RR030893-01, and associated funds from the New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010. Disclaimer: Commercial equipment, instruments, or materials are identified in this paper to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.

### Author contributions

H.G.: structure selection, DFT calculations, benchmarking and parameter optimization of the XCH simulations, workflow conception, writing - initial draft and editing. M.R.C.: workflow implementation, DFT and spectral calculations, analysis, writing - review and editing. C.C., Y.D., S.B., C.W.: experimental XAS data acquisition. J.Q.: workflow implementation. F.W.: project conception, supervision. N.A.: project conception, structure selection, workflow conception and implementation, writing - review and editing.
A.U.: conception, implementation of the workflow, writing - review and editing. D.L.: project conception, supervision, writing - review and editing. ### Competing interests The authors declare no competing interests.
2309.10047
A Modular Spatial Clustering Algorithm with Noise Specification
Clustering techniques have been the key drivers of data mining, machine learning and pattern recognition for decades. One of the most popular clustering algorithms is DBSCAN due to its high accuracy and noise tolerance. Many superior algorithms such as DBSCAN have input parameters that are hard to estimate. Therefore, finding those parameters is a time-consuming process. In this paper, we propose a novel clustering algorithm Bacteria-Farm, which balances the performance and ease of finding the optimal parameters for clustering. The Bacteria-Farm algorithm is inspired by the growth of bacteria in closed experimental farms - their ability to consume food and grow - which closely represents the ideal cluster growth desired in clustering algorithms. In addition, the algorithm features a modular design to allow the creation of versions of the algorithm for specific tasks / distributions of data. In contrast with other clustering algorithms, our algorithm also has a provision to specify the amount of noise to be excluded during clustering.
Akhil K, Srikanth H R
2023-09-18T18:05:06Z
http://arxiv.org/abs/2309.10047v1
# A Modular Spatial Clustering Algorithm with Noise Specification

###### Abstract

Clustering techniques have been the key drivers of data mining, machine learning and pattern recognition for decades. One of the most popular clustering algorithms is DBSCAN due to its high accuracy and noise tolerance. Many superior algorithms such as DBSCAN have input parameters that are hard to estimate. Therefore, finding those parameters is a time-consuming process. In this paper, we propose a novel clustering algorithm 'Bacteria-Farm', which balances the performance and ease of finding the optimal parameters for clustering. The Bacteria-Farm algorithm is inspired by the growth of bacteria in closed experimental farms - their ability to consume food and grow - which closely represents the ideal cluster growth desired in clustering algorithms. In addition, the algorithm features a modular design to allow the creation of versions of the algorithm for specific tasks / distributions of data. In contrast with other clustering algorithms, our algorithm also has a provision to specify the amount of noise to be excluded during clustering.

**Keywords - clustering algorithms; modular clustering; noise tolerance in clustering; spatial clustering**

## I Introduction

In recent times, clustering has been the centerpiece of major fields such as data science, machine learning, knowledge discovery, statistics and data mining. In the information age, due to the presence of a plethora of uncleaned, unlabeled data, extraction of insights from this data is essential in many applications. Clustering is the process of breaking down data into meaningful subdivisions called clusters based on the similarity between data points. The points in a cluster have a higher similarity to each other than to points in other clusters. There is always room for improvement in the clustering paradigm, where a newer algorithm is more efficient and effective for a certain distribution of data. One of the most important problems faced while designing a clustering algorithm is choosing the parameters of the algorithm. If the algorithm is sensitive to tiny changes in those parameters, the robustness of the algorithm is affected. Due to this problem, more time is often spent on selecting the ideal parameters for the given data than on the clustering itself. Algorithms such as DBSCAN [1] use parameters that are hard to estimate in a short period of time. _Partitioning clustering algorithms_ are the simplest kind of clustering algorithms. The idea is to break down the entire data set into arbitrary \(k\) clusters where the partitions optimize a given function. For every cluster, a representative in the form of a _centroid_, _medoid_, etc. is used to iteratively optimize the clusters with the addition of new data points into the cluster. The advantage of these algorithms lies in their linear-time efficiency. But, due to the reliance on the initial configuration of clusters, these algorithms lack robustness. Also, they are not suitable for non-convex data or data with noise. _Hierarchical clustering algorithms_ produce a nested structure of clustered data points. They come in two types: _top-down_ and _bottom-up_. In _top-down_ algorithms, initially, the entire data set is taken as a single cluster and it is sequentially broken down into smaller clusters until they are singleton clusters. On the other hand, _bottom-up_ algorithms consider every point as a singleton cluster and sequentially combine the data points into bigger clusters than in the previous level.
The advantages of using these algorithms lie in the flexibility of choosing the most appropriate number of clusters and their sizes from different levels of clusters. Like partitioning clustering algorithms, they are very sensitive to the presence of noise. Also, they might encounter difficulties in handling convex and large data. Hence, they can prove to be ineffective for real data. _Density based clustering algorithms_ group objects / data points based on the density of the locality rather than the proximity between data points. The high density regions are considered as clusters and low density ones as noise. With the advent of density based algorithms, clustering performance was boosted due to their ability to deal with noise and non-convex data. But, these algorithms are very sensitive to the input parameters, as small changes in the values of the parameters can completely shift the structure of clusters. Nevertheless, the performance of density based algorithms is generally greater than partitioning algorithms. _Distribution-based clustering algorithms_ group data based on the likelihood of data points belonging to a distribution (or cluster). Objects / data points which most likely belong to the same distribution are clustered together. Though their theoretical foundation is sound, they suffer from _overfitting_ as complex models are generated easily. Hence, estimation of the complexity of the model is difficult. Moreover, real data may not belong to a precise distribution model and the presence of such models will lead to poor performance in these algorithms [2]. However, distribution-based algorithms work well on complex, spatial data. With a plethora of clustering algorithms, each with its own advantages and disadvantages, a general algorithm is desired. In this paper, we introduce a modular design to our model to accommodate these various needs of clustering algorithms. In this design, we obtain hyper-parameters for the novel algorithm by pre-clustering a fraction of data with the best standard algorithm for that distribution and fine-tuning these results with our algorithm. Along with this design, the model contains a salient feature to specify the amount of noise to be excluded by the algorithm. This paper is organized as follows: related work on clustering and modular algorithms is briefly discussed in Section 2. In Section 3, the design and implementation of the new algorithm are comprehensively explained. In Section 4, the performance evaluation of the algorithm when compared to _k_-means and DBSCAN is presented. Section 5 concludes the paper and some ideas for future research are discussed.

## II Related Work and Definitions

### _Related Work_

One of the first kinds to enter the clustering paradigm were partitioning clustering algorithms. Reference [3] proposes a partitioning algorithm having \(k\) clusters. Each cluster is represented by a _medoid_ and the sum of distances within clusters serves as the optimization function. Reference [4] seeks a _local optimum_ instead of a _global optimum_ to enhance clustering performance. Reference [5] implements an efficient version of the _Lloyd's k-means_ algorithm to further improve performance. Reference [6] proposes a _k-d tree_ organization of data to efficiently find patterns in the data. Reference [7] proposes a _global k-means_ clustering algorithm which incorporates a deterministic global optimization method and employs the _k_-means algorithm as a local search method.
The main disadvantage of the work until then was the sensitivity of the algorithm to initial _centroid_ positions in the k-means algorithm. By using this method, the issue of randomly selecting initial cluster _centroids_ is eliminated and the algorithm proceeds in an incremental way to optimally add a new cluster center to the previous stage. Though this reduces the randomness involved in the k-means algorithm, the sequential addition of a cluster center affects the execution performance. Reference [8] proposes an improved _k_-means algorithm which requires some information on the required domain. With this prerequisite, the algorithm incorporates background information in the form of _instance-level_ constraints. Reference [9] proposes a method to reduce the _Euclidean_ distance calculations in the original _k_-means algorithm. Reference [10] proposes a _genetic k-means_ algorithm, a hybrid _genetic_ algorithm that replaces the _crossover_ operation with a _k-means operator_ to generate an efficient genetic algorithm for clustering. Reference [11] improves on the _genetic k-means_ algorithm by ensuring the convergence to a global optimum, among other improvements over its parent version. DBSCAN, a density based algorithm proposed in [1], improved performance drastically with noise handling and a design for spatial clustering. It set the benchmark for modern clustering. It introduces a sequential algorithm designed to discover clusters of arbitrary shape. Many versions of DBSCAN were proposed over the years with improvements in efficiency, accuracy and the 'power' of the algorithm. Reference [12] proposes a sampling-based DBSCAN which improves time efficiency without compromising accuracy. However, the sampled subset might sometimes not represent the population, and the clustering then deviates from the expected result. Reference [13] presents a hybrid DBSCAN algorithm called l-DBSCAN which uses two _prototypes_ to cluster at coarser and finer levels. With this setup, the algorithmic time efficiency and accuracy are greatly improved. Reference [14] introduces ST-DBSCAN which incorporates extensions of DBSCAN to discover clusters for spatial, non-spatial and temporal data as opposed to just spatial data by its parent algorithm. With all the above improvements, the disadvantages of the original DBSCAN were mitigated. Reference [15] uses rough-set theory to create a hybrid clustering technique to derive _prototypes_ using the _leader's clustering_ method and use the _prototypes_ to derive density based clusters. This split allows a reduction in time complexity from \(O(n^{2})\) to \(O(n)\). Reference [16] introduces MR-DBSCAN which uses the _MapReduce_ parallel programming platform to create an efficient implementation of DBSCAN. Reference [17] presents a revised version of DBSCAN that considerably improves DBSCAN's performance on dense adjacent clusters. Reference [18] presents G-DBSCAN which consists of a GPU-accelerated algorithm for density-based clustering. It is evident that the DBSCAN algorithm has evolved since its inception in 1996, but one of its core problems, the presence of parameters which are time-consuming to estimate, is yet to be solved. Other density based clustering algorithms have also found success, such as the one in [19]. Its algorithm, DBRS, incorporates random sampling and checks a point's neighborhood to decide whether a point belongs to a cluster or not.
Reference [20] presents DBCLASD, a non-parametric algorithm which can form clusters of arbitrary shape by analyzing the distance distributions between data points.

### _Definitions_

#### II-B1 Front-runners

Front-runners are defined as the points which are "active" during the course of the algorithm. They are the points which exist on the periphery of the cluster.

#### II-B2 Dormant points

Dormant points are points which are not "active". All the points in the cluster which are not _front-runners_ are considered as dormant points. They are called so because we don't calculate distances to dormant points during the clustering process.

Fig. 1: Flow of control in Bacteria-Farm

## III A New Modular Spatial Clustering Algorithm

### _Working of the Algorithm_

The model is divided into two phases. In the first phase, we sample a portion (typically 20 percent) randomly from the data and apply a standard clustering algorithm on it. This presents flexibility in our model, as every distribution of data has its own optimized algorithm which works well on it. Once the standard algorithm clusters the sample, we extract two parameters from the result - the clustering _centroids_ and the proportion of data points in each of the clusters. The proportions act as the threshold for each cluster used in the second phase of the model. _Figure 1_ shows the flow of control in our algorithm. In the second phase, the core algorithm is executed. It starts with the _centroid_ and expands outward. We have defined a parameter called _front-runners_, which are typically the surface points in a cluster. The distance between every point in the data and the _front-runners_ is calculated and the nearest point to the _front-runners_ (and hence, to the cluster, as they represent the cluster) is selected. This point is included into that cluster. Initially, when the number of points in the cluster is less than the number of _front-runners_ required, every new point in the cluster becomes a _front-runner_. In the later stages, when the number of points in the cluster exceeds the number of _front-runners_, **the _front-runner_ which is closest to the recently selected point goes dormant and is replaced by the new point as the new _front-runner_**. This ensures that the surface points stay as the _front-runners_ and the number of _front-runners_ stays constant. _Figure 2_ illustrates the growth of the cluster in Bacteria-Farm. Iteratively, new points are added to the cluster and the _front-runners_ are constantly updated till the exit condition - the number of points in the cluster is equal to the threshold of that cluster (calculated in the first phase) - is satisfied. Once both phases are completed, the clusters are separated from the data and the remaining points - which are noise - are discarded. With the flexibility to specify the number of points a cluster can include in itself, a unique property is observed: the difference between the total number of points in the data and the sum of the number of points clustered can be defined as the noise in the data. We use this property for noise specification in the model. When X percent of data is specified as noise to the model, the model excludes X percent of the total data when the proportions of points in each cluster are calculated. Suppose there are two clusters with 60 percent and 40 percent as their proportions of number of points. Assume that the noise specified for the model is 10 percent.
With the exclusion of noise, the new proportions are 60 percent and 40 percent of the remaining data, i.e., 54 percent and 36 percent of the total data. This restriction acts on the exit condition of the algorithm, and hence it is guaranteed (by design) that the specified noise is excluded only from the periphery of the cluster. _Figure 3_ illustrates this property of the Bacteria-Farm algorithm. It can be observed that as the noise specification increases, the peripheral points are labelled as noise instead of the inner points of a cluster.

Fig. 2: Illustration of the transition of _front-runners_ in the Bacteria-Farm algorithm.

Fig. 3: The Iris data set with 15 percent and 20 percent noise specification respectively.

### _Pseudo-code of the Algorithm_

In the following, we present the pseudo-code of the Bacteria-Farm algorithm. Important details are explained separately. The major functions required by Bacteria-Farm are expanded before the core algorithm.

```
Input: data set df, number of front-runners n_fr, noise factor n
Output: centroids, thresholds
  sample = take a sample of data from the data set df
  centroids, thresholds = retrieve-parameters(sample, n)
  return centroids, thresholds
```
**Algorithm 1** sampling

```
Input: sample s, noise factor n
Output: centroids, thresholds
  Use a standard clustering algorithm to fit the sample data and obtain labels for it.
  The data is divided into clusters by this algorithm.
  for cluster in clusters do
    centroid = mean of all instances in the cluster
    Append centroid to the list centroids
    threshold = ratio of the number of points in the cluster to the number of points
                in the sample, multiplied by the noise factor n
    Append threshold to the list thresholds
  end for
  return centroids, thresholds
```
**Algorithm 2** retrieve-parameters

```
Input: data set df, number of front-runners n_fr
Output: clusters
  centroids, thresholds = sampling(df, n_fr)
  Initialize a list of lists clusters, one empty inner list per centroid, each of size n_fr.
  for centroid c in list centroids do
    frs = list of front-runners for c; initialize the first front-runner as c itself
    while true do
      if size of the cluster corresponding to c is greater than the threshold corresponding to c then
        break
      end if
      minInstance = closest point to the set of front-runners frs; add it to the current cluster in clusters
      Replace the front-runner fr (in frs) closest to minInstance with minInstance itself
    end while
  end for
```
**Algorithm 3** Bacteria-Farm

#### III-C1 Additional Explanation

In retrieve-parameters, we subtract the noise percentage from 100 and multiply that factor with the proportion of points in a cluster obtained from the standard clustering algorithm. This is the threshold for each cluster used in the second phase of the Bacteria-Farm algorithm. In the core Bacteria-Farm algorithm, we calculate the distance from the front-runners frs to every point in the data set df and pick the one with the smallest Euclidean distance as minInstance. The algorithmic complexity of this step is \(O(\log(n))\) by using spatial indexing. Once the closest point minInstance is chosen, the front-runner fr in frs which is closest to minInstance is replaced with minInstance as the new front-runner, and this instance is included in the current cluster.
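To make the pseudo-code above concrete, the following is a minimal Python sketch of the two-phase model. It assumes k-means as the first-phase "standard" algorithm and uses a plain brute-force Euclidean search in place of a spatial index; the identifiers (bacteria_farm, n_fr, noise, etc.) and default values are our own illustrative choices, not from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def retrieve_parameters(sample, n_clusters, noise):
    """Phase 1: cluster a sample, return centroids and per-cluster point budgets."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(sample)
    centroids, thresholds = [], []
    for c in range(n_clusters):
        members = sample[labels == c]
        centroids.append(members.mean(axis=0))
        # proportion of sample points in this cluster, minus the requested noise share
        thresholds.append(len(members) / len(sample) * (1.0 - noise))
    return np.array(centroids), thresholds

def bacteria_farm(data, n_clusters, n_fr=5, noise=0.05, sample_frac=0.2):
    """Phase 2: grow each cluster outward from its centroid with n_fr front-runners."""
    rng = np.random.default_rng(0)
    idx = rng.choice(len(data), size=max(n_clusters, int(sample_frac * len(data))), replace=False)
    centroids, thresholds = retrieve_parameters(data[idx], n_clusters, noise)

    labels = -np.ones(len(data), dtype=int)              # -1 = not yet clustered (noise)
    for c in range(n_clusters):
        budget = int(thresholds[c] * len(data))           # exit condition for this cluster
        front_runners = [centroids[c]]                    # first front-runner: the centroid itself
        size = 0
        while size < budget:
            free = np.flatnonzero(labels == -1)
            if free.size == 0:
                break
            fr = np.asarray(front_runners)
            # nearest free point to the set of front-runners (brute force here;
            # a spatial index such as a KD-tree would make this an O(log n) step)
            dists = np.linalg.norm(data[free][:, None, :] - fr[None, :, :], axis=2).min(axis=1)
            pick = free[np.argmin(dists)]
            labels[pick] = c
            size += 1
            if len(front_runners) < n_fr:                 # still filling the front-runner set
                front_runners.append(data[pick])
            else:                                         # closest front-runner goes dormant
                closest = np.argmin(np.linalg.norm(fr - data[pick], axis=1))
                front_runners[closest] = data[pick]
    return labels                                         # points left at -1 are the noise
```

With this structure, any other "standard" algorithm could be substituted into the first phase, which is the modularity the model relies on.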
## IV Performance Evaluation

We evaluate Bacteria-Farm according to the major requirements of clustering algorithms - efficiency, input parameters, and the ability to cluster data of arbitrary shape. We choose the _Silhouette Coefficient_ and the _Calinski-Harabasz Index_ as the performance metrics for evaluation. We compare Bacteria-Farm with established algorithms such as DBSCAN and \(k\)-Means in terms of efficiency and the aforementioned performance metrics.

### _Choice of comparison algorithms_

We have selected DBSCAN and \(k\)-Means for comparison as they are the most popular density-based and partitioning clustering algorithms, respectively. We chose DBSCAN as it is an established algorithm for clustering data of arbitrary shape and size. Over time, many versions of DBSCAN have been proposed, but the core algorithm remains the same. Hence, we decided to compare the vanilla version of DBSCAN with the vanilla version of Bacteria-Farm. We have chosen \(k\)-Means as its time complexity is \(O(n)\), which helps in estimating the real performance of the Bacteria-Farm algorithm.

### _Choice of performance metrics_

We have selected the _Silhouette Coefficient_ and the _Calinski-Harabasz Index_ as the two performance metrics. The definitions of these metrics, along with the reasons for their selection, are given below.

#### IV-B1 Silhouette Coefficient

Let \(a(i)\) be the average distance between a datum \(i\) and all other points in its cluster. \(a(i)\) is a measure of the intra-cluster distance. The lower the value of \(a(i)\), the denser the cluster and the better the assignment of \(i\) to the cluster. Let \(b(i)\) be the average distance between the datum \(i\) and all other points in any other cluster in the data set. \(b(i)\) is a measure of the inter-cluster distance. The higher the value of \(b(i)\), the better the separation of clusters. The _Silhouette Coefficient_ \(s(i)\) can be defined as:

\[s(i)=\begin{cases}1-a(i)/b(i)&\text{if }a(i)<b(i)\\ 0&\text{if }a(i)=b(i)\\ b(i)/a(i)-1&\text{if }a(i)>b(i)\end{cases}\]

For \(s(i)\) close to 1, it implies \(a(i)\ll b(i)\). A small \(a(i)\) means that a datum \(i\) is closely matched with other data in the same cluster, and a large \(b(i)\) indicates that the datum \(i\) is poorly matched with data present in other clusters. Therefore, a high value of \(s(i)\) indicates that the data has been clustered well. We chose the _Silhouette Coefficient_ as it has been a good indicator of clustering performance in the past.

#### IV-B2 Calinski-Harabasz Index

Let \(SS_{B}\) be the overall inter-cluster variance, \(SS_{W}\) be the overall intra-cluster variance, \(k\) be the number of clusters, and \(N\) be the number of points in the data set. The _Calinski-Harabasz Index_ \(CH_{k}\) for \(k\) clusters (with standard notations) can be defined as:

\[CH_{k}=\frac{SS_{B}}{SS_{W}}\times\frac{N-k}{k-1}\]

A high value of the first fraction in the above equation indicates that \(SS_{B}\gg SS_{W}\). The inference is that data in different clusters are very different from each other, while data within the same cluster are very similar. This is another indication of good clustering in the data. We chose the _Calinski-Harabasz Index_ because the ratio of variances in the equation is a good indicator of the compactness of a cluster, similar to the previous metric. Overall, we have taken two performance indicators (along with the time taken for the algorithm to execute) to measure the overall performance of Bacteria-Farm for convex as well as non-convex data.
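Both metrics are available off the shelf; the snippet below is only an illustration of how the scores in this section could be computed, with the toy data being our own choice rather than one of the evaluation data sets.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score

# toy data standing in for one of the evaluation data sets
X, _ = make_blobs(n_samples=500, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print("Silhouette Coefficient :", round(silhouette_score(X, labels), 3))
print("Calinski-Harabasz Index:", round(calinski_harabasz_score(X, labels), 3))
```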
#### IV-B3 Input parameters

It is hard to optimize the parameters for given data in many clustering algorithms. For example, DBSCAN has two parameters - the _Epsilon_ distance and _minPts_. To explain these parameters, we define a _core_ point. A _core_ point is a point which has a minimum number of points within a certain distance from itself. The _Epsilon_ distance specifies how close points should be to a core point to be considered part of the cluster, and _minPts_ specifies how many points should lie within the _Epsilon_ distance of a point for it to become a core point. Both of these parameters can take a wide range of values and are hard to optimize. Users usually resort to running the algorithm multiple times to arrive at optimal values for these parameters, or use optimization techniques to obtain them. This process is time-consuming, and hence there is a need for "better" parameters. On the other hand, _k_-Means requires the number of clusters _a priori_, and this is hard to obtain from visual inspection in higher-dimensional data. After considering these problems, we have devised a different approach that obtains the parameters inherently from our model. As discussed earlier, we have two parts in our model - the first phase, which runs a standard algorithm to obtain the clusters, and the second phase, which runs the core Bacteria-Farm algorithm. Due to the modular design of the model, we can use a parameter-less algorithm in the first phase to obtain clusters. Once the clusters are obtained, the _centroids_ of those clusters are sent to the second phase. Effectively, we have two parameters for Bacteria-Farm: the percent of noise to be specified and the desired number of _front-runners_. Also, the robustness of the model allows for some error in choosing the number of _front-runners_. _Figure 4_ illustrates the robustness of the algorithm with a varying number of _front-runners_. Many clustering models, including DBSCAN, fail to account for this error and thus are highly sensitive to small changes in their parameters.

Fig. 4: Comparison of results with varying number of _front-runners_ (3, 5, 7) for the Iris data set to demonstrate robustness.

#### IV-B4 Ability to cluster data of arbitrary shape

Spatial databases may contain convex, non-convex, and other data of arbitrary shape, and good clustering algorithms can cluster any such data sufficiently well. We evaluate DBSCAN and Bacteria-Farm with respect to their ability to cluster data of arbitrary shape. We consider a small real data set of small dimensions to illustrate the clustering in the Bacteria-Farm algorithm. _Figure 5_ shows the clustering comparison between DBSCAN and Bacteria-Farm for the Alcohol data set [21] with 411 instances. With a noise specification of 1.24 percent for the Bacteria-Farm algorithm, it can be verified visually that the data points are assigned to their correct clusters. It can also be observed that the data points "seen" as noise are not included in any of the clusters by the Bacteria-Farm algorithm. Although some points are mis-clustered, they are a small minority.

Fig. 5: Visual comparison of performance between DBSCAN and Bacteria-Farm for a data set of arbitrary shape.

#### IV-B5 Efficiency

DBSCAN and Bacteria-Farm are comparable, with a time complexity of \(O(n\log(n))\) when using spatial indexing, whereas \(k\)-Means has a time complexity of \(O(n)\). All measurements are done on a single machine to maintain consistency.
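The \(O(n\log(n))\) figure above presumes that the point closest to the current front-runners is retrieved through a spatial index rather than a linear scan. A minimal illustration of that single step with SciPy's KD-tree (our own sketch with made-up data, not the paper's implementation) is:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
unclustered = rng.random((10_000, 2))      # points not yet assigned to any cluster
front_runners = rng.random((5, 2))         # current front-runners of one cluster

tree = cKDTree(unclustered)                # in the full algorithm, absorbed points would be removed or masked
dist, idx = tree.query(front_runners, k=1) # nearest unclustered point to each front-runner
pick = idx[np.argmin(dist)]                # the overall closest point joins the cluster
print("next point to absorb:", pick, "at distance", dist.min())
```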
#### IV-B6 Performance

We have used 92 different data sets, with 200 to approximately 1000 instances each, to compare the run times and the other performance metrics. _Table I_ tabulates the comparison of performance metric averages between \(k\)-Means, DBSCAN, and Bacteria-Farm over these data sets. We have chosen data with a small number of instances so that we can verify the inferences and the clustering progression in all three algorithms. It can be derived from the algorithm in the earlier section that the time complexity of Bacteria-Farm is \(O(n\log(n))\) (by using spatial indexing to retrieve distances between points), and we expect the real run times to be in the same neighborhood. Since the tasks were not too CPU intensive, the performance comparisons were done on a local computer (Apple MacBook Pro, Early 2013) with an Intel HD Graphics 4000 GPU. The _Silhouette Coefficients_ of Bacteria-Farm and DBSCAN are comparable, with Bacteria-Farm performing slightly better. This indicates that the algorithm is able to form clusters with high inter-cluster distance and low intra-cluster distance. Since most linear real data sets, such as the one in _Figure 5_, have a low _Silhouette Coefficient_, we have used an average value to compare the overall performance of the algorithms. On the other hand, the _Calinski-Harabasz_ indices of the algorithms are not comparable. This is due to the linear increase of the index with the size of the data. Though it offers some degree of comparison, it is not as effective as the _Silhouette Coefficient_. Intuitively, the metrics should have comparable values for data of the same shape but of different sizes. Due to this dependency on the size of the data, the _Calinski-Harabasz Index_ is used only as a secondary metric.

## V Conclusion and Future Work

With the modular design, the versatility of the algorithm in clustering the target distribution of data improves significantly. With different "underlying" algorithms suited to different distributions and types of data, suitable parameters of the data are transferred to the "core" Bacteria-Farm algorithm, which uses a novel approach to effectively cluster the data. In this paper, we introduce a novel clustering algorithm / model, Bacteria-Farm, which is designed to handle noise, introduce parameters which are easy to optimize, and display superior performance. Our notion of a cluster depends on the limit of points a cluster can accommodate. The core algorithm is designed to work well with convex and non-convex data. Furthermore, the robustness of the algorithm can be demonstrated by varying the number of _front-runners_, as this does not significantly alter the performance of the algorithm. Also, Bacteria-Farm has a provision to specify the amount of noise to exclude from the clusters. As the clusters extend outward, it is guaranteed that the labelled noise mirrors the actual noise in the data. This unique property of noise specification enables applications to generate suitable clusters. Experiments on real data demonstrate that Bacteria-Farm performs _better_* than algorithms such as DBSCAN and \(k\)-Means in the chosen evaluation metrics and that it performs well on real data. Future research can include further optimizing the time complexity of retrieving the closest point to a cluster below \(O(\log(n))\), thus drastically improving the performance of Bacteria-Farm. Also, _projected clustering_ can be used to improve Bacteria-Farm for sparse, high-dimensional data.
The use of a modular design to improve efficiency in other clustering algorithms, and by extension other paradigms, can be explored. Furthermore, we will consider the application of Bacteria-Farm to non-spatial data and explore suitable designs to improve the performance of clustering on such data.

* The algorithm performs better than \(k\)-Means and is on par with, if not better than, DBSCAN with respect to the Silhouette Coefficient.

## Acknowledgement

We thank the Department of Computer Science and Engineering, PES University, for providing the necessary resources to experiment with our models.
2309.10721
Short cycles of random permutations with cycle weights: point processes approach
We study the asymptotic behavior of short cycles of random permutations with cycle weights. More specifically, on a specially constructed metric space whose elements encode all possible cycles, we consider a point process containing all information on cycles of a given random permutation on $\{1,\ldots,n\}$. The main result of the paper is the distributional convergence with respect to the vague topology of the above processes towards a Poisson point process as $n\to\infty$ for a wide range of cycle weights. As an application, we give several limit theorems for various statistics of cycles.
Oleksii Galganov, Andrii Ilienko
2023-09-19T16:04:07Z
http://arxiv.org/abs/2309.10721v1
# Short cycles of random permutations with cycle weights: point processes approach

###### Abstract

We study the asymptotic behavior of short cycles of random permutations with cycle weights. More specifically, on a specially constructed metric space whose elements encode all possible cycles, we consider a point process containing all information on cycles of a given random permutation on \(\{1,\ldots,n\}\). The main result of the paper is the distributional convergence with respect to the vague topology of the above processes towards a Poisson point process as \(n\to\infty\) for a wide range of cycle weights. As an application, we give several limit theorems for various statistics of cycles.

keywords: random permutation, cycle structure, point process, Poisson convergence

Msc: [2020] 60C05, 60G55

## 1 Introduction

Random permutations are a classical object of combinatorial probability. Uniform permutations (that is, uniformly distributed random elements of the symmetric group \(\mathcal{S}_{n}\)) have been studied since de Montmort's matching problem. In recent years, there has been an extensive literature on non-uniform permutations \(\sigma_{n}\) with cycle weights, which are defined by the probability distribution \[\mathbb{P}\left\{\sigma_{n}=\pi\right\}=\frac{1}{h_{n}n!}\prod_{k=1}^{\infty} \theta_{k}^{C_{k}(\pi)},\qquad\pi\in\mathcal{S}_{n}. \tag{1}\] Here \(\theta_{k}\) are non-negative parameters, \(C_{k}(\pi)\) stands for the number of \(k\)-cycles in \(\pi\), and \(h_{n}\) is a normalization ensuring that \(\sum_{\pi\in\mathcal{S}_{n}}\mathbb{P}\left\{\sigma_{n}=\pi\right\}=1\). These permutations were introduced in Betz and Ueltschi (2011), motivated by the theory of Bose-Einstein condensate. For other applications and connections, see Ercolani and Ueltschi (2014) and references therein. Note that special cases of (1) are the uniform random permutation (with \(\theta_{k}=1\) for all \(k\) and \(h_{n}=1\)) and Ewens random permutations (with \(\theta_{k}=\theta\) for all \(k\), \(h_{n}=\theta^{(n)}/n!\), and \(\theta^{(n)}\) standing for the rising factorial). The latter are based on the Ewens sampling formula which was introduced in population genetics and subsequently found numerous applications, see Crane (2016). A rich theory of Ewens permutations was developed in Arratia et al. (2003). An important subject of study of random permutations is their cycle structure and, in particular, the asymptotic statistics of short cycles (that is, cycles of bounded length) as \(n\to\infty\). It is well known that, for Ewens permutations \(\sigma_{n}\) on \(\mathcal{S}_{n}\), \(n\geq 1\), \[(C_{k}(\sigma_{n}),k\geq 1)\xrightarrow{d}(Z_{k},k\geq 1) \tag{2}\] in \(\mathbb{Z}_{+}^{\infty}\), where \(Z_{k}\) are independent and Poisson distributed with means \(\theta/k\), see Theorem 5.1 in Arratia et al. (2003). In the case of more general permutations with cycle weights, Corollary 2.2 in Ercolani and Ueltschi (2014) shows that (2) remains true with independent Poisson distributed \(Z_{k}\) with means \(\theta_{k}/k\) provided the stability condition \[\lim_{n\to\infty}\frac{h_{n-1}}{h_{n}}=1 \tag{3}\] holds. In the same paper, it is shown that (3) is satisfied for a wide range of asymptotics of \(\theta_{k}\), from sub-exponential decay to sub-exponential growth. What can be said about the limiting composition of short cycles themselves?
Say, for fixed points (that is, 1-cycles), the invariance of (1) under relabeling suggests that, conditionally on \(C_{1}(\sigma_{n})=c_{1}\), the set of fixed points of \(\sigma_{n}\) is distributed as a random equiprobable \(c_{1}\)-sample from \([n]:=\left\{1,\ldots,n\right\}\) without replacement. Similar reasoning can be given for cycles of any fixed length. However, a rigorous description of the limiting behavior is possible only within the framework of convergence of random point measures. Using this approach, in Section 2 we state and prove a multivariate point processes version of (2). Its main advantage is that it allows us to easily prove further limit theorems for different statistics of short cycles, bypassing involved combinatorial calculations and complicated asymptotic analysis. Various examples of such results are given in Section 3. ## 2 Preliminaries and main result We first introduce a metric space appropriate for describing the limiting composition of cycles. For \(k\geq 1\), let \[\mathbb{X}_{k}=\left\{\mathbf{x}=(x_{1},\ldots,x_{k})\in[0,1]^{k}\colon\min\{x_{1}, \ldots,x_{k}\}=x_{1}\right\} \tag{4}\] and denote by \(\rho_{k}\) the Euclidean metric on \(\mathbb{X}_{k}\). The last equality in (4) is due to the fact that any element of a cycle can be regarded as its "beginning". Consider now a multi-level space \(\mathbb{X}=\bigcup_{k=1}^{\infty}\mathbb{X}_{k}\) with metric given by \[\rho(\mathbf{x}_{1},\mathbf{x}_{2})=\begin{cases}\rho_{k}(\mathbf{x}_{1},\mathbf{x}_{2}),&\bm {x}_{1},\mathbf{x}_{2}\in\mathbb{X}_{k},\\ \sqrt{\max\{k_{1},k_{2}\}},&\mathbf{x}_{1}\in\mathbb{X}_{k_{1}},\,\mathbf{x}_{2}\in \mathbb{X}_{k_{2}},\,k_{1}\neq k_{2}.\end{cases}\] The triangle inequality holds since \(\sup_{\mathbf{x}_{1},\mathbf{x}_{2}\in\mathbb{X}_{k}}\rho_{k}(\mathbf{x}_{1},\mathbf{x}_{2})= \sqrt{k}\). The metric \(\rho\) makes \(\mathbb{X}\) a Polish space. Moreover, \(\mathbb{X}\) equipped with the Borel \(\sigma\)-algebra \(\mathcal{B}(\mathbb{X})\) turns into a measurable space, and \(B\in\mathcal{B}(\mathbb{X})\) if and only if \(B\cap\mathbb{X}_{k}\in\mathcal{B}(\mathbb{X}_{k})\) for all \(k\). This makes it possible to define a measure on \((\mathbb{X},\mathcal{B}(\mathbb{X}))\) by \[\lambda(B)=\sum_{k=1}^{\infty}\theta_{k}\,\lambda_{k}(B\cap\mathbb{X}_{k}), \qquad B\in\mathcal{B}(\mathbb{X}), \tag{5}\] where \(\theta_{k}\) are defined in (1) and \(\lambda_{k}\) stands for the \(k\)-dimensional Lebesgue measure. Let \(\delta_{\mathbf{x}}=\mathds{1}\{\mathbf{x}\in\cdot\}\) be the Dirac measure at \(\mathbf{x}\). We will focus on the limiting behavior of random point measures on \((\mathbb{X},\mathcal{B}(\mathbb{X}))\) given by \[\Psi_{n}=\sum_{k=1}^{\infty}\sideset{}{{}^{\prime}}{\sum}_{i_{1},\ldots,i_{k} \in[n]}\delta_{\left(\frac{i_{1}}{n},\ldots,\frac{i_{k}}{n}\right)}\mathds{1} \left\{\sigma_{n}(i_{1})=i_{2},\ldots,\sigma_{n}(i_{k})=i_{1}\right\}, \tag{6}\] where \(\sum^{\neq}\) means that the sum is taken over \(k\)-tuples with distinct entries. Moreover, since \(\Psi_{n}\) is considered as a measure on \((\mathbb{X},\mathcal{B}(\mathbb{X}))\), the latter sum includes only those tuples in which the minimum element comes first. \(\Psi_{n}\) carries all the information about the cycle structure of \(\sigma_{n}\). 
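Before turning to the main result, it may help to see the objects in (2) and (6) numerically. The following is a small illustrative Python sketch (not part of the paper's argument): for uniform permutations, i.e. \(\theta_{k}=1\) for all \(k\), it extracts the cycles of a sampled permutation, forms the rescaled tuples charged by \(\Psi_{n}\), and compares the empirical distribution of the number of fixed points \(C_{1}(\sigma_{n})\) with the Poisson(1) limit predicted by (2). All names and sample sizes are our own illustrative choices.

```python
import math
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def cycles(perm):
    """Cycle decomposition of a 0-indexed permutation array,
    each cycle listed with its minimum element first."""
    seen = np.zeros(len(perm), dtype=bool)
    out = []
    for start in range(len(perm)):
        if seen[start]:
            continue
        cyc, j = [], start
        while not seen[j]:
            seen[j] = True
            cyc.append(j)
            j = perm[j]
        out.append(tuple(cyc))  # 'start' is the smallest element of its cycle
    return out

def point_measure(perm):
    """Support points of the measure Psi_n in (6): cycles rescaled by n."""
    n = len(perm)
    return [tuple((i + 1) / n for i in cyc) for cyc in cycles(perm)]

# empirical law of C_1(sigma_n) for uniform sigma_n vs. the Poisson(1) limit in (2)
n, trials = 200, 20_000
counts = Counter()
for _ in range(trials):
    counts[sum(len(c) == 1 for c in cycles(rng.permutation(n)))] += 1

for j in range(5):
    print(f"C_1 = {j}: empirical {counts[j] / trials:.3f}, "
          f"Poisson(1) {math.exp(-1) / math.factorial(j):.3f}")
```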
Recall that the vague topology on the space of locally finite measures is generated by the integration maps \(\nu\mapsto\int_{\mathbb{X}}f\,\mathrm{d}\nu\) for all continuous functions \(f\) with bounded support; see, e.g., Resnick (1987, Section 3.4) or Kallenberg (2017, Chapter 4) for a general exposition. Denote by \(\xrightarrow{vd}\) the distributional convergence of random point measures with underlying vague topology. Let \(\Psi\) denote the Poisson random measure on \((\mathbb{X},\mathcal{B}(\mathbb{X}))\) with intensity measure \(\lambda\) given by (5). The following theorem can be regarded as a point processes extension of (2). **Theorem 1**.: _Let the stability condition (3) hold. Then \(\Psi_{n}\xrightarrow{vd}\Psi\) as \(n\to\infty\)._ **Remark 1**.: Let \(\Psi_{n}^{(k)}\) and \(\Psi^{(k)}\) be the restrictions to \(\mathbb{X}_{k}\) of \(\Psi_{n}\) and \(\Psi\), respectively. By the restriction property of Poisson processes (see, e.g., Theorem 5.2 in Last and Penrose (2018)), \(\Psi^{(k)}\) are independent homogeneous Poisson processes with intensities \(\theta_{k}\). Due to the vague continuity of the restriction mapping \(\mu\mapsto\mu\mathord{\restriction}_{\mathbb{X}_{k}}\), we also have \(\Psi_{n}^{(k)}\xrightarrow{vd}\Psi^{(k)}\). In particular, as \(C_{k}(\sigma_{n})=\Psi_{n}^{(k)}(\mathbb{X}_{k})\), (2) directly follows from Theorem 1. For the proof, we will first need an asymptotics for probabilities of cycles. **Lemma 2**.: _Let \(r\geq 1\) and \(\boldsymbol{i}^{(j)}=\big{(}i_{1}^{(j)},\ldots,i_{k_{j}}^{(j)}\big{)}\), \(j\in[r]\), be disjoint integer tuples with distinct entries such that \(\boldsymbol{i}^{(j)}/n\in\mathbb{X}_{k_{j}}\). Then, under (3), we have_ \[\mathbb{P}\left\{\sigma_{n}\text{ contains the cycles }\boldsymbol{i}^{(1)}, \ldots,\boldsymbol{i}^{(r)}\right\}\sim\frac{\prod_{j=1}^{r}\theta_{k_{j}}}{ n^{k_{1}+\ldots+k_{r}}},\qquad n\to\infty. \tag{7}\] Proof.: Let \(\mathcal{I}\) be the set of all entries of \(\boldsymbol{i}^{(j)}\), \(j\in[r]\), and \(s=\#\mathcal{I}=k_{1}+\ldots+k_{r}\), where \(\#\) stands for the cardinality of a set. By (1), the probability in (7) equals \[\begin{split}\sum_{\tilde{\pi}\in\mathcal{S}_{[n]\setminus \mathcal{I}}}\!\!\!\mathbb{P}\left\{\sigma_{n}=\boldsymbol{i}^{(1)}\circ\ldots \circ\boldsymbol{i}^{(r)}\circ\tilde{\pi}\right\}&=\sum_{\tilde{ \pi}\in\mathcal{S}_{[n]\setminus\mathcal{I}}}\frac{1}{h_{n}n!}\prod_{k=1}^{ \infty}\theta_{k}^{\#\{j\,:\,k_{j}=k\}+C_{k}(\tilde{\pi})}\\ &\quad=\frac{1}{h_{n}n!}\prod_{j=1}^{r}\theta_{k_{j}}\cdot\sum_{ \tilde{\pi}\in\mathcal{S}_{[n]\setminus\mathcal{I}}}\prod_{k=1}^{\infty} \theta_{k}^{C_{k}(\tilde{\pi})}&=\frac{h_{n-s}(n-s)!}{h_{n}n!} \prod_{j=1}^{r}\theta_{k_{j}},\end{split} \tag{8}\] where the last equality follows again from (1). Hence, the claim follows from (3). Proof of Theorem 1.: Let \(\langle\) mean either ( or [, and the same applies to \(\rangle\). By Theorem 4.18 in Kallenberg (2017), it suffices to prove that 1. \(\lim_{n\to\infty}\mathbb{E}\Psi_{n}(B)=\mathbb{E}\Psi(B)\) for any box \(B=\bigtimes_{i=1}^{k}\langle a_{j},b_{j}\rangle\in\mathbb{X}_{k}\), \(k\geq 1\), 2. \(\lim_{n\to\infty}\mathbb{P}\left\{\Psi_{n}(U)=0\right\}=\mathbb{P}\left\{\Psi(U )=0\right\}\) for any finite union \(U\) of boxes from possibly different levels \(\mathbb{X}_{k}\). Let \(B\) be a box with fixed \(k\geq 1\) and \(B_{\neq}\) the set of points in \(B\) with distinct coordinates. By (6), we have \[\mathbb{E}\Psi_{n}(B)=\!\!\!\!\sum_{(i_{1},\ldots,i_{k})/n\in B}^{\neq}\!\!\! 
\mathbb{P}\left\{\sigma_{n}(i_{1})=i_{2},\ldots,\sigma_{n}(i_{k})=i_{1}\right\}.\] It follows from (8) and (7) with \(r=1\) that all summands on the right-hand side are equal and asymptotically equivalent to \(\theta_{k}/n^{k}\). Thus, as \(n\to\infty\), \[\mathbb{E}\Psi_{n}(B)\sim\frac{\theta_{k}}{n^{k}}\cdot\#\left(B_{\neq}\cap(\mathbb{Z}^{k}/n)\right)\to\theta_{k}\lambda_{k}(B)=\lambda(B)=\mathbb{E}\Psi(B),\] which proves (i). We now proceed to (ii). Let us fix a finite union \(U\) of boxes and denote \(U_{m}=U\cap\mathbb{X}_{m}\). So, \(U_{m}=\varnothing\) for \(m\) greater than some \(k\geq 1\) (the maximum dimension of boxes in the union), and \(U=\bigcup_{m=1}^{k}U_{m}\). Let \(\boldsymbol{i}_{m}\) and \(\boldsymbol{i}_{m}^{(\cdot)}\) stand for integer \(m\)-tuples with distinct entries and \(\bigcirc_{j_{m}=1}^{r_{m}}\boldsymbol{i}_{m}^{(j_{m})}\) be the composition \(\boldsymbol{i}_{m}^{(1)}\circ\ldots\circ\boldsymbol{i}_{m}^{(r_{m})}\) of \(r_{m}\) cycles defined by such tuples. For any \(R\geq 1\), by Bonferroni's inequality, \[\mathbb{P}\left\{\Psi_{n}(U)=0\right\}=1-\mathbb{P}\left\{\bigcup_{m=1}^{k}\bigcup_{\boldsymbol{i}_{m}/n\in U_{m}}\{\sigma_{n}\text{ contains the cycle }\boldsymbol{i}_{m}\}\right\}\] \[\leq\sum_{r_{1},\ldots,r_{k}=0}^{2R}\frac{(-1)^{r_{1}+\ldots+r_{k}}}{\prod_{m=1}^{k}r_{m}!}\sum\nolimits^{*}\mathbb{P}\left\{\sigma_{n}\text{ contains the cycles }\boldsymbol{i}_{1}^{(1)},\ldots,\boldsymbol{i}_{k}^{(r_{k})}\right\} \tag{9}\] \[=\sum_{r_{1},\ldots,r_{k}=0}^{2R}\frac{(-1)^{r_{1}+\ldots+r_{k}}}{\prod_{m=1}^{k}r_{m}!}\,\frac{h_{n-s}(n-s)!}{h_{n}n!}\,\prod_{m=1}^{k}\theta_{m}^{r_{m}}\,S_{n}(r_{1},\ldots,r_{k}), \tag{10}\] where \(\sum^{*}\) denotes summation over all ordered collections of disjoint tuples \(\boldsymbol{i}_{m}^{(j_{m})}\) with \(\boldsymbol{i}_{m}^{(j_{m})}/n\in U_{m}\), \(j_{m}\in[r_{m}]\), \(m\in[k]\), the equality in (10) follows from (8) with \(s=r_{1}+\ldots+kr_{k}\), \[S_{n}(r_{1},\ldots,r_{k})=\sum_{\begin{subarray}{c}\boldsymbol{i}_{1}^{(1)}/n,\ldots,\boldsymbol{i}_{1}^{(r_{1})}/n\in\mathbb{X}_{1}\\ \cdots\\ \boldsymbol{i}_{k}^{(1)}/n,\ldots,\boldsymbol{i}_{k}^{(r_{k})}/n\in\mathbb{X}_{k}\end{subarray}}\mathds{1}\{\text{all tuples are disjoint}\}\prod_{m=1}^{k}\mathds{1}\left\{\boldsymbol{i}_{m}^{(1)}/n,\ldots,\boldsymbol{i}_{m}^{(r_{m})}/n\in U_{m}\right\},\] and division by \(\prod_{m=1}^{k}r_{m}!\) is due to the fact that the sum in the definition of \(S_{n}\) is taken over ordered sets of tuples.
Note that \(\frac{S_{n}(r_{1},\ldots,r_{k})}{n^{r_{1}+\ldots+kr_{k}}}\) can be viewed as an integral sum for \[\int_{U_{1}^{r_{1}}\times\ldots\times U_{k}^{r_{k}}}\mathds{1}\left\{\text{ all components of all $\mathbf{x}_{m}^{(j_{m})}$ are distinct}\right\}\,\prod_{m=1}^{k}\prod_{j_{m}=1}^{r_{m}} \mathrm{d}\mathbf{x}_{m}^{(j_{m})}=\prod_{m=1}^{k}\left(\lambda_{m}(U_{m})\right) ^{r_{m}}.\] Hence, by (3), each summand in (10) converges as \(n\to\infty\) to \[(-1)^{r_{1}+\ldots+r_{k}}\prod_{m=1}^{k}\frac{\left(\theta_{m}\lambda_{m}(U_{ m})\right)^{r_{m}}}{r_{m}!}.\] It now follows from (9) and a similar lower bound that \[\sum_{r_{1},\ldots,r_{k}=0}^{2R-1}(-1)^{r_{1}+\ldots+r_{k}}\prod _{m=1}^{k}\frac{\left(\theta_{m}\lambda_{m}(U_{m})\right)^{r_{m}}}{r_{m}!} \leq\lim_{n\to\infty}\mathbb{P}\left\{\Psi_{n}(U)=0\right\}\] \[\leq\sum_{r_{1},\ldots,r_{k}=0}^{2R}(-1)^{r_{1}+\ldots+r_{k}} \prod_{m=1}^{k}\frac{\left(\theta_{m}\lambda_{m}(U_{m})\right)^{r_{m}}}{r_{m}!}.\] Letting \(R\to\infty\) finally yields \[\lim_{n\to\infty}\mathbb{P}\left\{\Psi_{n}(U)=0\right\} =\sum_{r_{1},\ldots,r_{k}=0}^{\infty}(-1)^{r_{1}+\ldots+r_{k}} \prod_{m=1}^{k}\frac{\left(\theta_{m}\lambda_{m}(U_{m})\right)^{r_{m}}}{r_{m}!}\] \[=\exp\left\{-\sum_{m=1}^{k}\theta_{m}\lambda_{m}(U_{m})\right\}= \exp\left\{-\lambda(U)\right\}=\mathbb{P}\left\{\Psi(U)=0\right\}.\] This concludes the proof of (ii) and, hence, that of Theorem 1. ## 3 Limit theorems for statistics of short cycles Theorem 1 allows us to derive limiting distributions for various statistics of short cycles. In what follows, we will always assume that the stability condition (3) is satisfied. We first give a general result which covers the case of additive statistics. **Proposition 3**.: _Let \(k\geq 1\) and \(f_{m}\colon\mathbb{X}_{m}\to[0,\infty)\), \(m\in[k]\), be a family of continuous functions. Denote by \(\mathcal{C}_{m}(\sigma_{n})\) the set of all \(m\)-cycles in \(\sigma_{n}\). Then_ \[\sum_{m=1}^{k}\sum_{c\in\mathcal{C}_{m}(\sigma_{n})}f_{m}\left(\frac{c}{n} \right)\xrightarrow{d}S,\qquad n\to\infty, \tag{11}\] _where \(c\in\mathcal{C}_{m}(\sigma_{n})\) is understood as an integer tuple whose minimum element comes first, and the limiting random variable \(S\) is defined by its Laplace transform_ \[\mathbb{E}\exp\left\{-tS\right\}=\exp\Big{\{}-\sum_{m=1}^{k}\theta_{m}\int_{ \mathbb{X}_{m}}\left(1-\mathrm{e}^{-tf_{m}(\mathbf{x})}\right)\,\mathrm{d}\mathbf{x} \Big{\}},\qquad t\geq 0. \tag{12}\] Proof.: Define \(f\colon\mathbb{X}\to[0,\infty)\) by \(f(\mathbf{x})=\sum_{m=1}^{k}f_{m}(\mathbf{x})\mathds{1}\{\mathbf{x}\in\mathbb{X}_{m}\}\). The left-hand side of (11) can be written as \(\int_{\mathbb{X}}f(\mathbf{x})\,\Psi_{n}(\mathrm{d}\mathbf{x})\), and \(f\) is continuous with bounded support. By Theorem 1 and Lemma 4.12 in Kallenberg (2017), (11) holds with \(S=\int_{\mathbb{X}}f(\mathbf{x})\,\Psi(\mathrm{d}\mathbf{x})\). The Laplace transform of \(S\) is thus of the form \[\mathbb{E}\exp\left\{-tS\right\}=\mathbb{E}\exp\Big{\{}-\int_{\mathbb{X}}tf( \mathbf{x})\,\Psi(\mathrm{d}\mathbf{x})\Big{\}},\] which coincides with the right-hand side of (12) due to the form of the Laplace functional of a Poisson random measure, see, e.g., Proposition 3.6 in Resnick (1987). As an example, we give a limit theorem for the sum \(S_{n}^{(k)}\) of elements in all \(k\)-cycles of \(\sigma_{n}\). **Proposition 4**.: 1. 
\(\frac{S_{n}^{(k)}}{n}\xrightarrow{d}S^{(k)}\) _as_ \(n\to\infty\)_, where_ \(S^{(k)}\) _is defined by its Laplace transform_ \[\mathbb{E}\exp\left\{-tS^{(k)}\right\}=\exp\bigg{\{}\frac{\theta_{k}}{k}\bigg{(} \bigg{(}\frac{1-\mathrm{e}^{-t}}{t}\bigg{)}^{k}-1\bigg{)}\bigg{\}},\qquad t>0.\] (13) 2. _If_ \(k=1\)_, that is, for_ \(S_{n}^{(1)}\) _being the sum of fixed points,_ \[\mathbb{P}\left\{S^{(1)}\leq x\right\}=\mathrm{e}^{-\theta_{1}}\sum_{j=0}^{ \lfloor x\rfloor}\frac{(-1)^{j}}{j!}(\theta_{1}(x-j))^{\frac{j}{2}}I_{j}\Big{(} 2\sqrt{\theta_{1}(x-j)}\Big{)}\,,\qquad x\geq 0,\] (14) _where_ \(I_{j}\) _is the modified Bessel function of the first kind, see, e.g.,_ _SS_10.25_(ii) in Olver et al. (_2010_)__._ Proof.: For \(\mathbf{x}\in\mathbb{X}_{k}\), let \(f_{k}(\mathbf{x})\) denote the sum of all its components. By Proposition 3, (i) holds with \[\mathbb{E}\exp\big{\{}-tS^{(k)}\big{\}}=\exp\Big{\{}-\theta_{k}\int_{\mathbb{X}_ {k}}\big{(}1-\mathrm{e}^{-tf_{k}(\mathbf{x})}\big{)}\,\,\mathrm{d}\mathbf{x}\Big{\}},\] where the integral on the right-hand side, due to symmetricity of \(f_{k}\), equals \[\frac{1}{k}\int_{[0,1]^{k}}\big{(}1-\mathrm{e}^{-tf_{k}(\mathbf{x})}\big{)}\,\, \mathrm{d}\mathbf{x}=\frac{1}{k}\Big{(}1-\Big{(}\int_{0}^{1}\mathrm{e}^{-tx}\, \mathrm{d}x\Big{)}^{k}\,\Big{)}.\] This yields (13). To prove (14), we first note that \[\mathbb{E}\exp\big{\{}-tS^{(1)}\big{\}}=\mathbb{E}\int_{0}^{\infty}t\mathrm{e }^{-tx}\,\mathds{1}\{S^{(1)}\leq x\}\,\mathrm{d}x=t\int_{0}^{\infty}\mathrm{e }^{-tx}\,\mathbb{P}\{S^{(1)}\leq x\}\,\mathrm{d}x,\] cf. Liu (2020). Hence, \(\mathbb{P}\left\{S^{(1)}\leq x\right\}\) is the inverse Laplace transform of the right-hand side in (13) for \(k=1\) multiplied by \(t^{-1}\), that is, in expanded form, of the function \[G(t)=\sum_{j=0}^{\infty}\frac{(-\theta_{1})^{j}}{j!}\mathrm{e}^{-\theta_{1}} \,t^{-j-1}\mathrm{e}^{\frac{\theta_{1}}{t}}\mathrm{e}^{-jt},\qquad t>0. \tag{15}\] By the time shifting property and Erdelyi et al. (1954, eq. (5.5.35)), the inverse Laplace transform of the \(j\)-th summand \(G_{j}(t)\) in (15) equals \[F_{j}(x)=\frac{(-1)^{j}}{j!}e^{-\theta_{1}}\,(\theta_{1}(x-j))^{\frac{j}{2}}I _{j}\Big{(}2\sqrt{\theta_{1}(x-j)}\Big{)}\,\mathds{1}\{x\geq j\}.\] This means that, for all \(t>0\) and \(R\in\mathbb{N}\), \[\int_{0}^{\infty}\!\mathrm{e}^{-tx}\sum_{j=0}^{R}F_{j}(x)\,\mathrm{d}x=\sum_{ j=0}^{R}G_{j}(t). \tag{16}\] Since \(I_{j}(x)\) decreases in \(j\) and increases in \(x\) (see, e.g., SS10.37 in Olver et al. (2010)), and \(I_{0}(x)\sim\frac{e^{x}}{\sqrt{2\pi x}}\) as \(x\to\infty\) (ibid., SS10.30(ii)), we have \[\Big{|}\sum_{j=0}^{R}F_{j}(x)\Big{|}\leq e^{-\theta_{1}}I_{0}\Big{(}2\sqrt{ \theta_{1}x}\Big{)}\sum_{j=0}^{\infty}\frac{1}{j!}(\theta_{1}x)^{\frac{j}{2}} \sim e^{-\theta_{1}}\frac{\mathrm{e}^{3\sqrt{\theta_{1}x}}}{2\pi^{\frac{1}{2} }(\theta_{1}x)^{\frac{1}{4}}},\qquad x\to\infty,\] which makes it possible to apply dominated convergence to (16). Hence, the inverse Laplace transform of \(G\) is \(\sum_{j=0}^{\infty}F_{j}\), which yields (14). We now turn to some examples of non-additive statistics. For \(k\geq 2\), let us call the range of a cycle the difference between its maximum and minimum elements and denote by \(r_{n}^{(k)}\) (resp., \(R_{n}^{(k)}\)) the minimum (maximum) range among all \(k\)-cycles in \(\sigma_{n}\). If there are no \(k\)-cycles, we set \(r_{n}^{(k)}=n\) and \(R_{n}^{(k)}=0\). 
**Proposition 5**.: \(\frac{r_{n}^{(k)}}{n}\xrightarrow{d}r^{(k)}\) _and \(\frac{R_{n}^{(k)}}{n}\xrightarrow{d}R^{(k)}\) as \(n\to\infty\), where \(r^{(k)}\)\((\)resp., \(R^{(k)})\), \(k\geq 2\), have CDF's of the form_ \[\mathbb{P}\left\{r^{(k)}\leq x\right\}=1-\exp\Bigl{\{}-\frac{ \theta_{k}}{k}\left(kx^{k-1}-(k-1)x^{k}\right)\Bigr{\}}, \tag{17}\] \[\mathbb{P}\left\{R^{(k)}\leq x\right\}=\exp\Bigl{\{}\frac{\theta _{k}}{k}\left(kx^{k-1}-(k-1)x^{k}-1\right)\Bigr{\}}, \tag{18}\] _as \(x\in[0,1)\) and \(0\)\((\)resp., \(1)\) to the left \((\)right\()\)._ Proof.: For \(\boldsymbol{x}\in\mathbb{X}_{k}\), let \(f_{k}(\boldsymbol{x})\) denote the difference between its maximum and minimum components. It follows from the interpretation of vague convergence in Proposition 3.13 of Resnick (1987) that the function which maps a finite point measure \(\mu\) on \(\mathbb{X}_{k}\) into \(\min_{\mu\{\boldsymbol{x}\}\geq 1}f_{k}(\boldsymbol{x})\) is vaguely continuous. Hence, by Remark 1 and continuous mapping theorem, \(\frac{r_{n}^{(k)}}{n}\xrightarrow{d}r^{(k)}\) holds with \(r^{(k)}=\min_{\Psi^{(k)}\{\boldsymbol{x}\}\geq 1}f_{k}(\boldsymbol{x})\), and the CDF of \(r^{(k)}\) is of the form \[\begin{split}\mathbb{P}\left\{r^{(k)}\leq x\right\}& =1-\mathbb{P}\left\{\Psi^{(k)}\{\boldsymbol{x}\in\mathbb{X}_{k} \colon f_{k}(\boldsymbol{x})\leq x\}=0\right\}\\ &=1-\exp\bigl{\{}-\theta_{k}\lambda_{k}\{\boldsymbol{x}\in \mathbb{X}_{k}\colon f_{k}(\boldsymbol{x})\leq x\}\bigr{\}}.\end{split} \tag{19}\] The value of the Lebesgue measure on the right-hand side is \[\int_{0}^{1}\biggl{(}\int_{x_{1}}^{\min\{x_{1}+x,1\}}\mathrm{d}x_{2}\ldots \mathrm{d}x_{k}\biggr{)}\mathrm{d}x_{1}=x^{k-1}(1-x)+\frac{x^{k}}{k},\] which together with (19) yields (17). The proof of (18) is similar. As a final example, we consider some statistics of fixed points. Let \(m_{n}\) be the minimum fixed point (\(n+1\) in case there is none), \(M_{n}\) the maximum one (\(0\) in that case), \(\delta_{n}\) the minimum spacing between fixed points (the two extreme spacings of lengths \(m_{n}\) and \(n+1-M_{n}\) are also taken into account), and \(\Delta_{n}\) the maximum one. **Proposition 6**.: \(\left(\frac{m_{n}}{n},\frac{M_{n}}{n},\frac{\delta_{n}}{n},\frac{\Delta_{n}}{n} \right)\xrightarrow[]{d}(m,M,\delta,\Delta)\) _as \(n\to\infty\) with_ \[\mathbb{P}\left\{m\leq x\right\}=1-\exp\left\{-\theta_{1}x\right\},\qquad\mathbb{P}\left\{M\leq x\right\}=\exp\left\{\theta_{1}(x-1)\right\}, \qquad x\in[0,1),\] \[\delta\stackrel{{ d}}{{=}}\frac{X_{\nu+1}}{(\nu+1) \sum_{i=1}^{\nu+1}X_{i}},\qquad\Delta\stackrel{{ d}}{{=}}\frac{ \sum_{i=1}^{\nu+1}\frac{X_{i}}{i}}{\sum_{i=1}^{\nu+1}X_{i}},\] _where \(\nu\) is Poisson distributed with parameter \(\theta_{1}\), \(X_{i}\) are exponentially distributed with unit mean, and all these are independent._ Proof.: As in the proof of Proposition 5, the convergence takes place with \(m\) being the leftmost point of \(\Psi^{(1)}\), \(M\) the rightmost one, \(\delta\) the minimum spacing, and \(\Delta\) the maximum one. All that remains is to derive the corresponding CDF's and distributional equalities. For \(m\) and \(M\), this follows from \[\mathbb{P}\left\{m\leq x\right\}=1-\mathbb{P}\left\{\Psi^{(1)}[0,x]=0\right\}=1-\exp\left\{-\theta_{1}x\right\},\] \[\mathbb{P}\left\{M\leq x\right\}=\mathbb{P}\left\{\Psi^{(1)}(x,1 ]=0\right\}=\exp\left\{-\theta_{1}(1-x)\right\}.\] We now turn to the equalities for \(\delta\) and \(\Delta\). 
Since \(\Psi^{(1)}\) is a homogeneous Poisson process with intensity \(\theta_{1}\), the conditional distribution of \(\delta\) (resp., \(\Delta\)) given \(\Psi^{(1)}(\mathbb{X}_{1})=r\) coincides with that of the minimum (maximum) spacing \(d_{r}\) (\(D_{r}\)) for \(r\) independent random variables, uniformly distributed on \([0,1]\), see, e.g., Proposition 3.8 in Last and Penrose (2018). It follows from the theory of uniform spacings (see, e.g., Holst (1980), p. 625) that \[d_{r}\stackrel{{ d}}{{=}}\frac{\min Y_{i}}{\sum_{i=1}^{r+1}Y_{i} },\qquad D_{r}\stackrel{{ d}}{{=}}\frac{\max Y_{i}}{\sum_{i=1}^{r +1}Y_{i}}, \tag{20}\] where \(Y_{1},\ldots,Y_{r+1}\) are independent \(\mathsf{Exp}(1)\)-distributed random variables. Let \(Y_{(i)}\), \(i\in[r+1]\), be their order statistics, \(Y_{(0)}=0\), and \(\tau_{i}=Y_{(i)}-Y_{(i-1)}\). By Sukhatme-Renyi decomposition (see, e.g., Theorem 4.6.1 in Arnold et al. (2008)), \(\tau_{i}\) are independent and \(\mathsf{Exp}(r-i+2)\)-distributed with \[\min Y_{i}=\tau_{1},\qquad\max Y_{i}=\sum_{i=1}^{r+1}\tau_{i},\qquad\sum_{i=1} ^{r+1}Y_{i}=\sum_{i=1}^{r+1}(r-i+2)\tau_{i}.\] Denoting \(X_{r-i+2}=(r-i+2)\tau_{i}\sim\mathsf{Exp}(1)\), we can rewrite (20) as \[d_{r}\stackrel{{ d}}{{=}}\frac{X_{r+1}}{(r+1)\sum_{i=1}^{r+1}X_{ i}},\qquad D_{r}\stackrel{{ d}}{{=}}\frac{\sum_{i=1}^{r+1}\frac{X_{i}}{i}}{ \sum_{i=1}^{r+1}X_{i}}.\] Since this holds for any \(r\geq 0\), the claim follows by deconditioning.
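As a purely illustrative numerical cross-check (not part of the paper), one can compare the empirical law of \(r_{n}^{(2)}/n\) for uniform permutations, i.e. \(\theta_{k}=1\), with the limiting CDF (17); the permutation size and the number of trials below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def min_range_2cycles(perm):
    """r_n^(2): minimum range over 2-cycles of a 0-indexed permutation; n if none exist."""
    best = len(perm)
    for i, j in enumerate(perm):
        if j > i and perm[j] == i:       # (i j) with i < j is a 2-cycle
            best = min(best, j - i)      # range = maximum element - minimum element
    return best

n, trials = 500, 5_000
samples = np.array([min_range_2cycles(rng.permutation(n)) for _ in range(trials)]) / n

for x in (0.1, 0.3, 0.5, 0.8):
    empirical = (samples <= x).mean()
    limit = 1 - np.exp(-0.5 * (2 * x - x**2))   # CDF (17) with k = 2, theta_2 = 1
    print(f"x = {x}: empirical {empirical:.3f}, limit {limit:.3f}")
```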
2302.14635
H-AES: Towards Automated Essay Scoring for Hindi
The use of Natural Language Processing (NLP) for Automated Essay Scoring (AES) has been well explored in the English language, with benchmark models exhibiting performance comparable to human scorers. However, AES in Hindi and other low-resource languages remains unexplored. In this study, we reproduce and compare state-of-the-art methods for AES in the Hindi domain. We employ classical feature-based Machine Learning (ML) and advanced end-to-end models, including LSTM Networks and Fine-Tuned Transformer Architecture, in our approach and derive results comparable to those in the English language domain. Hindi being a low-resource language, lacks a dedicated essay-scoring corpus. We train and evaluate our models using translated English essays and empirically measure their performance on our own small-scale, real-world Hindi corpus. We follow this up with an in-depth analysis discussing prompt-specific behavior of different language models implemented.
Shubhankar Singh, Anirudh Pupneja, Shivaansh Mital, Cheril Shah, Manish Bawkar, Lakshman Prasad Gupta, Ajit Kumar, Yaman Kumar, Rushali Gupta, Rajiv Ratn Shah
2023-02-28T15:14:15Z
http://arxiv.org/abs/2302.14635v1
# H-AES: Towards Automated Essay Scoring for Hindi ###### Abstract The use of Natural Language Processing (NLP) for Automated Essay Scoring (AES) has been well explored in the English language, with benchmark models exhibiting performance comparable to human scorers. However, AES in Hindi and other low-resource languages remains unexplored. In this study, we reproduce and compare state-of-the-art methods for AES in the Hindi domain. We employ classical feature-based Machine Learning (ML) and advanced end-to-end models, including LSTM Networks and Fine-Tuned Transformer Architecture, in our approach and derive results comparable to those in the English language domain. Hindi being a low-resource language, lacks a dedicated essay-scoring corpus. We train and evaluate our models using translated English essays and empirically measure their performance on our own small-scale, real-world Hindi corpus. We follow this up with an in-depth analysis discussing prompt-specific behavior of different language models implemented. ## 1 Introduction Academic assessments have long used short-text response and essay writing tasks, which have resulted in numerous scoring approaches, prompt design, and assessment methods. These tasks help judge many aspects of a student's language learning abilities and are commonly integrated with curricula and standardized tests worldwide in multiple languages. Automated essay scoring (AES) emulates human judgment when evaluating the quality of these written essays. Traditional essay scoring methods entail a vast corpus of written data that human scorers manually evaluate. Manual scoring is a laborious task which is challenging to scale much beyond the limited classroom settings Kumar et al. (2019); Zhang (2013). AES utilizes Natural Language Processing (NLP) and Machine Learning (ML) techniques to evaluate these essays in a more efficient and scalable manner. Commonly used in standardized tests like the GRE and the TOEFL Attali and Burstein (2006), many organizations and education councils have turned to use AES to reduce workload Singla et al. (2022). Commonly, AES is facilitated by a vast set of training data which are scored using expert-designed evaluation rubrics Weinberger et al. (2011). The scores assigned can be broadly categorized into two types: _Holistic scores_ - a single discrete score value from a given whole number range and _Trait-based scores_ - multiple score values assigned based on multi-dimensional criteria such as relevance to prompt, argument quality, and coherence Ke and Ng (2019); Bamdev et al. (2022). A majority of work done on the task has entailed holistic scoring Ke and Ng (2019), leveraging the Automated Student Assessment Prize (ASAP) corpus1, released on Kaggle in 2012. Eight essay prompts are included in the ASAP corpus covering a broad range of topics. Over time, it has become a widely used corpus for holistic scoring. Most approaches to the AES task are specific to each prompt. Footnote 1: [https://www.kaggle.com/c/asap-aes](https://www.kaggle.com/c/asap-aes) Although short-text response writing is a standard assessment task for students worldwide in different languages, most research has focused on the English language domain, with relatively few focusing on other languages and multilingual approaches. Research in AES for Indic languages is negligible. Hindi is the most spoken language in India, with around 528 million native speakers, according to the 2011 language census INDIA (2011). 
With millions in the country writing their school-level, graduation, and public-sector examinations in Hindi, there is a dire need to automate the scoring processes. In addition, with the rapidly expanding middle class in India, the number of Hindi telecallers is also increasing at an equally rapid pace. New-age unicorn startups like Apna which hire blue and white collar telecallers for the South Asian market, have instituted automatic scoring as the first filtering step. On the other hand, Hindi NLP methods for automatic scoring are still in the early development stages. The Hindi NLP space's predicament is a veritable lack of data resources. An approach to address this issue is to use English data translated to Hindi, which has been commonly used to train large language models Kakwani et al. (2020); Conneau et al. (2020); Khanuja et al. (2021). The need for feature engineering is removed by neural approaches which discover both simple and complex features on their own. Since a dedicated corpus for AES in Hindi currently does not exist, we use a machine-translated version of the ASAP corpus to train, validate and test our approaches. Our study implements recent approaches used for English AES to the Hindi language domain. Further, to validate that despite translation, models trained on the translated data perform adequately in the natural Hindi language settings, we collect a small-scale natural Hindi corpus and test all our models on this corpus. We employ both classical feature-based models and advanced end-to-end language models based on LSTMs and transformer architectures. The following three sections explain the related material, the corpus used, and the methodologies employed to perform the task on different models. In Section-5, we explain our experimental procedures, provide empirical results and comparisons with relevant English AES benchmark results, and follow it up with a detailed discussion. Section-6 concludes this study and discusses the limitations and future directions that can be taken to build more accurate and robust AES systems in Hindi and other languages. The dataset, rubric and the source code are made publicly available2. Footnote 2: [https://github.com/midas-research/hindi-aes](https://github.com/midas-research/hindi-aes) ## 2 Related Work Research for AES in English has spanned decades [1, 2, 3]. Many studies have treated AES as a regression and text classification problem, while a few apply ranking-based approaches [3, 2]. Classical machine learning techniques like linear regression [13] support vector regression [22, 23, 24], and sequential minimal optimization (SMO) [15] are used typically for regression-based AES. SMO [15], logistic regression [25] and Bayesian network classification [16] utilize classification approaches. For ranking, SVM ranking [26] and LambdaMART [1] have been used. These approaches use custom linguistic features such as errors in grammar [14], readability features [17], length and syntactic features etc. These techniques are widely popular and have been used in the production of evaluation systems, like E-Rater [18] used in high-stake examinations like GRE and TOEFL. Progress in Deep Neural Networks, like Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short Term Memory networks (LSTMs), allow for better performance in AES systems [19, 2, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. RNNs and LSTMs are typically used for language processing due to their ability to process sequential data. 
LSTMs have demonstrated better performance than RNNs for longer sequences due to their ability to retain long-term dependencies. These methods do not require any handcrafted components, leading to their widespread adoption. In the case of end-to-end Deep Neural Networks, complex features are automatically discovered, saving the time spent on designing them manually. Out of these models, SKIPFLOW [14] is able to compete with the state-of-the-art by capturing coherence, flow, and semantic relatedness over time, which the authors call neural coherence features. Lately, pre-trained language models built using the transformer architecture, such as GPT, BERT [10], and XLNet [23], have pushed forward the limits of language understanding and generation, owing to their excellent generalization and representation abilities. These models have achieved better results in text classification and regression tasks. Although initial studies using transformers for AES [16, 22, 23] fail to demonstrate a significant advantage over other deep learning approaches, work from [17, 24, 25] leverages such pre-trained language models, particularly BERT, and has given state-of-the-art results outperforming various traditional and deep-learning architectures designed for AES. Cao et al. (2020) propose two self-supervised tasks and a domain adversarial training technique for training optimization. This is the first work that uses pre-trained language models to significantly outperform LSTM-based methods. R\({}^{2}\)BERT [23] combines regression and ranking to fine-tune BERT and obtain the new state-of-the-art. Wang et al. (2022) developed a novel multi-scale essay representation approach based on BERT, employing multiple losses and transfer learning. Attempts have been made in the past to develop AES for languages other than English. These include research in AES for Chinese [15, 24, 25], Arabic [26, 27], Japanese [28], Swedish [29, 20], German [26], Portuguese [25, 27], and more. The lack of dedicated essay corpora as comprehensive as the ones in existence for English is a common feature among most of these studies. While a few of the studies mentioned have explicitly created their own organic corpora, many have devised alternate solutions such as scraping the internet for essays, distilling essays from articles, and combining datasets used for other tasks. The progress made in language processing for Indic languages is gradual but on course. A few notable developments include Bhattacharyya (2010), Arora (2020), Kakwani et al. (2020), Ramesh et al. (2021), and more. Desai and Dabhi (2021) present a comprehensive report on the advancements in Hindi NLP, while Harish and Rangan (2020) offer an in-depth survey on regional Indic language processing. Developments in large pre-trained multilingual models like mBERT [10], XLM-RoBERTa [28], DistilmBERT [29], IndicBERT [2], etc. have included Hindi and a variety of other regional Indic languages. However, as mentioned in the previous section, the lack of annotated data is a challenge that many in the Hindi NLP space are trying to overcome.
Due to the ASAP corpus' ubiquity in AES research and its depth in terms of volume and its mix of narrative, expository, and source-dependent response prompts, we decided to focus our analysis on a Hindi-translated version of the ASAP dataset itself (ASAP-Hindi). Translation is well established in NLP, especially in the development of multilingual corpora and models. The use of translated data, however, challenges the validity and quality of the corpus itself, since it is difficult to determine the gain or loss in specific language-related attributes after translation is applied. We verified random subsets from the translated corpus with the help of both skilled bilingual speakers and expert academics. A common observation was that modern neural machine-translation engines have a proclivity to correct spelling mistakes, but the knowledge distilled and a majority of syntactic and semantic features are retained. Prompts 7 and 8 in the ASAP-Hindi dataset have a scoring scheme that is distinctly atypical compared to the other prompts and to real-world scoring rubrics. The first six prompts help generalize results to a range of AES contexts. We also built our own Hindi language corpus consisting of 126 real-world essays written by students between 18 and 20 years old. The submissions were collected through an online essay-writing competition consisting of a single prompt with "The essence of Travel in one's life" as the central theme. The average essay length is 224 words per essay. The writers of these essays are bilingual, with proficiency in both English and Hindi, Hindi being their first language. Although much smaller in scale, this corpus helps further corroborate our methods and gives a better understanding of how our methods work on real-world data. The scoring of the responses was in accordance with a comprehensive rubric that we prepared, which took into account both subjectivity scores and attribute-based aspects. The subjectivity score for each essay captures a general idea and takes into account the evaluator's general perception of the essay. Using an attribute-based approach, scores are provided for factors such as length sufficiency, coherence, relevance to prompt, argument quality, and vocabulary, which are more objective in nature. We combine these scores to obtain a final holistic score within a [0-12] range. This process is in accordance with a variety of previous scoring techniques in both English AES and AES in other languages. An expert panel of three Hindi academics performed the scoring following the rubric provided. Rater pairs 1-2, 1-3, and 2-3 have high inter-rater reliability [1] of 0.831, 0.798, and 0.867, respectively. These scores are comparatively higher than the ones calculated for the ASAP dataset. To further reduce cognitive bias, we average the three scores to obtain the final score. Our final dataset contains responses from the eight prompts of the ASAP-Hindi corpus and our organic prompt. Figure 1 displays the prompt used for the organic corpus. Its translation in English is as follows: _"Travel takes us out of our comfort zone and inspires us to see, taste and try new things. It challenges us to constantly adapt and explore new environments, connect with different people, embrace adventure, and share new and meaningful experiences with friends and loved ones. Travel teaches us about humanity and gives us an appreciation, understanding, and respect for different perspectives and ways of life.
It also brings positive changes in our lives and keeps us alive and active. Keeping these things in mind, write an essay on how travel brings new experiences and positively impacts your life."_

\begin{table} \begin{tabular}{|c|c|c|c|} \hline & **Prompt** & **Essay Count** & **Score Range** \\ \hline \multirow{6}{*}{**Hindi Translated**} & P1 & 1783 & 2-12 \\ \cline{2-4} & P2 & 1800 & 1-6 \\ \cline{2-4} & P3 & 1726 & 0-3 \\ \cline{2-4} & P4 & 1772 & 0-3 \\ \cline{2-4} & P5 & 1805 & 0-4 \\ \cline{2-4} & P6 & 1800 & 0-4 \\ \cline{2-4} & P7 & 1569 & 0-30 \\ \cline{2-4} & P8 & 723 & 0-60 \\ \hline **Organic Corpus** & Travel & 126 & 0-12 \\ \hline \end{tabular} \end{table} Table 1: Description of the prompts used for Hindi-AES

Figure 1: Prompt for the organic corpus

## 4 Methodology

In our study, we employ various methods, ranging from classical machine learning approaches employing feature extraction to advanced fine-tuned transformer-based pre-trained models. We implement four models for classical regression and classification approaches: Linear Regression, Support Vector Regression (SVR), Random Forest, and XGBoost. These leverage features like essay length, average sentence length, average word length, readability scores, semantic overlap, and vocabulary size. For neural network-based architectures, we implement a Bidirectional Long Short-Term Memory (BiLSTM) network, a Convolutional Neural Network (CNN) model, a CNN + LSTM model coupled with an attention mechanism, and the popular SKIPFLOW model. Lastly, we fine-tune multiple pre-trained multilingual language models. This wide variety of approaches is used to effectively benchmark the corresponding approaches from English AES research in the Hindi language domain.

### _Feature Extraction_

Our features are largely inspired by the different dimensions of essay quality mentioned in Ke and Ng (2019), the readability metrics presented in Sinha et al. (2012), and the attributes presented in the ASAP++ dataset (Mathias and Bhattacharyya, 2018). Due to the tendency of neural translation engines to correct spelling, we do not explicitly employ a spelling-based feature in our feature selection. The extracted features are leveraged by our Linear Regression, SVR, Random Forest, and XGBoost models. We extract and use the following six features:

* **Essay Length:** The essay's length is determined by the total number of words it contains.
* **Average Sentence Length:** The average sentence length is calculated by adding up the lengths of all sentences and dividing the total by the number of sentences.
* **Average Word Length:** The average word length is calculated by adding up the lengths of all words and dividing the total by the number of words.
* **Readability Scores:** Sinha et al. (2012) present this as a readability metric for Hindi and Bangla. Structural features such as Average Sentence Length (ASL), Average Word Length (AWL), Number of PolySyllable Words (PSW), Number of Jukta-Akshars (JUK), and more are examined, and Spearman's rank correlation coefficient is used to analyze these structural features. Our readability scores are calculated using the following formula, based on the results of their regression analysis: \[-2.34+2.14*AWL+0.01*PSW\]
* **Vocabulary and OOV words:** An essay's score depends strongly on its vocabulary size. In our experiments, we simply take the count of unique and out-of-vocabulary words (OOVs) in the essay as the vocabulary size. Machine translation phonetically renders such OOVs into the target language (Hindi).
Although if the OOV word is in frequent occurrence, we discard its use. * **Semantic Overlap and Coherence:** An essay's score depends on how well it is connected and coherent. To determine this we have used mBERT and the mechanism used in SKIPFLOW for generating neural coherence features (Tay et al., 2018). We calculated the semantic similarity of two sentences (using mBERT) at a distance of four sentences. As a result, we can capture the coherence and semantic overlap of the essay. Finally, after capturing the scores of the sentence pairs we averaged them out. ### Neural Approaches Deep neural networks like Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory networks (LSTMs) have facilitated the development of end-to-end machine learning workflows which do not rely on handcrafted features. We implement four popular approaches for AES using neural methods: BiLSTM, CNN, CNN + LSTM + Attention Mechanism, and SKIPFLOW. The training pipeline consists of an initial padding process for every sentence to equalize lengths, followed by a tokenization process using the IndicNLP Tokenizer. Word embeddings for the tokenized sentences were obtained using fastText(Wiki) pre-trained word vectors for Hindi (Bojanowski et al., 2017). These word embeddings are passed to the following model architectures: * **BiLSTM:** A Bidirectional LSTM, or BiLSTM, is a sequence processing model that consists of two LSTMs processing the sequence in both forward and backward directions. We use a BiLSTM over a normal LSTM as essays are dynamically and artistically written literary devices including references to both previous and forthcoming statements. Word embeddings are passed to a model with three BiLSTM layers and two dense layers to obtain the final score. We use Rectified Linear Unit (ReLu) activation functions between hidden layers and apply Batch Normalization after each BiLSTM layer. The first dense layer is followed by a Dropout layer. * **CNN:** When looked at from a different perspective, short-text responses and essays have many literary intricacies and relations that are more concentrated in short windows. Thus, using a CNN seems intuitive in capturing these short dependencies over a fixed window size. We use a CNN with three 1-Dimensional convolutional hidden layers separated by pooling layers, Swish (Ramachandran, Zoph, and Le, 2017) non-linearities, and Batch Normalisation layers. * **CNN + LSTM + Attention:** We use the model proposed by Dong, Zhang, and Yang (2017) to score the essays based on sentence representations using a model with CNN, LSTM, and Attention layers. Initially, a 1-Dimensional convolutional layer is used to extract features from a text in a way that is intuitively similar to window-based feature extraction. LSTM layers are used after that to process sequences. Finally, an attention layer is used to pool the outputs. * **SKIPFLOW:** We implement the SKIPFLOW model introduced by Tay et al. (2018). SKIPFLOW proposes a novel method to calculate the textual coherence in the essays by modeling the relationships between snapshots of the hidden representations of a long short-term memory (LSTM) network as it reads. Subsequently, the semantic relationships between multiple snapshots are used as auxiliary features for prediction. ### Fine-Tuned Transformers A transformer is a deep neural architecture that uses an attention mechanism to handle sequences of ordered data. 
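Before describing the architecture and fine-tuning procedure in detail, the overall scoring pipeline we fine-tune can be summarized in a short, hypothetical sketch; it uses the Hugging Face `transformers` library with a single linear regression head (our actual head has two hidden layers, as described below) and illustrative inputs rather than the exact training configuration reported in Section 5.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative backbone; mBERT, DistilmBERT, MuRIL, or IndicBERT can be swapped in.
model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=1 with problem_type="regression" gives a single score output and MSE loss.
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=1, problem_type="regression"
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def training_step(prompts, essays, scores):
    # Encoding (prompt, essay) pairs inserts the separator token between them,
    # mirroring the [CLS] prompt [SEP] essay [SEP] layout described below.
    batch = tokenizer(prompts, essays, padding=True, truncation=True,
                      max_length=512, return_tensors="pt")
    batch["labels"] = torch.tensor(scores, dtype=torch.float).unsqueeze(1)
    loss = model(**batch).loss       # mean-squared error against normalized scores
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

training_step(["Write an essay on how travel impacts your life."], ["<essay text>"], [0.75])
```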
Transformers include several layers each containing a multi-head self-attention network and a position-wise feed-forward network [21]. Unlike LSTMs, Transformers use non-sequential processing, they do not process words in series but rather as a whole, allowing Transformers to take lesser time steps to process input in comparison to LSTMs and RNNs. Transformers also do not suffer from long-term dependency problems credit to their self-attention mechanism. Multilingual language models use BERT-based architecture and its variants and replace the corpus being used for the task of pre-training in English with the one containing text from multiple languages. Along with this, a few multilingual language models also introduce an additional pre-training task, for instance, Translation Language Modeling (TLM) in the case of XLM-R and MuRIL. IndicBERT and MuRIL are the two prominent Multilingual Language Models which specifically focus on regional Indic languages. We fine-tune prominent pre-trained multilingual transformer language models: Multilingual BERT (mBERT), DistilmBERT, XLM-Roberta, MuRIL, and IndicBERT. The tokenized prompt and essay are fed to the transformer model as input. The input is tokenized with a [CLS] token at the beginning. [SEP] tokens are added at the end of both the prompt and the essay to differentiate them. The prompt and the essay sequences are fed to the transformer encoder and become hidden layer sequences. These sequences are passed to a simple feed-forward network, consisting of two hidden layers, attached to the base transformer architecture and a final numerical score is obtained. ## 5 Experiments This section describes the experimental procedure, evaluation metric, empirical results, and comparison with published results on prominent English AES models. ### Experimental Setup We conduct prompt-specific experimentation in accordance with Taghipour and Ng [16]. While it might seem ideal to train prompts together, it's important to note that each prompt might include genres that are in sharp contrast to one another, such as narrative or argumentative essays. Prompts can also be scored differently according to the level of the students and the scoring rubrics. This makes training prompts together extremely challenging. For feature-based methods, the text was pre-processed to filter out stopwords, named-entities and mentions (denoted by '@' symbols in the ASAP Dataset). The custom feature scores were normalized to improve the stability of the model. The neural LSTM and CNN models were all trained for 100 epochs with a learning rate of 1e-4. For fine-tuning the pre-trained multilingual transformer models we use the AdamW optimizer [14], which is a stochastic optimization method that modifies the typical implementation of weight decay in Adam [15], by decoupling weight decay from the gradient update. We set our learning rate to 5e-5 which decreases linearly to 0. We train all models on a 6-8 epoch range, depending on the base transformer model's ability to learn. For our experiments on all prompts (including our organic prompt corpus) we use a 60/20/20 split for train, validation and test sets. Our normalization process keeps all score ranges within [0,1]. To calculate the Quadratic Weighted Kappa (QWK) scores, the scores are re-scaled to the original prompt-specific scale for prediction. ### Evaluation Metric We use the Quadratic Weighted Kappa (QWK), which is a common measure for evaluating and comparing AES methods [1]. 
A key reason for its prevalence is its particular sensitivity to differences in scores and its ability to take chance agreements into account. The QWK score generally ranges from 0 to 1; when the agreement is lower than expected by chance, the score becomes negative. The QWK score is calculated as follows. Initially, a weight matrix W is created in accordance with Equation 1: \[\mathbf{W}_{i,j}=\frac{(i-j)^{2}}{(N-1)^{2}} \tag{1}\] where N is the total number of possible ratings, and i and j represent the reference rating (given by a human annotator) and the hypothesis rating (awarded by an AES system), respectively. An observed count matrix O records how often each pair of reference and hypothesis ratings occurs, while the outer product of the reference and hypothesis rating histogram vectors results in an expected count matrix E. This matrix is normalized so that the elements in E and O have the same sum. Finally, given the matrices O and E, the QWK score is calculated according to Equation 2: \[\kappa=1-\frac{\sum_{i,j}w_{i,j}O_{i,j}}{\sum_{i,j}w_{i,j}E_{i,j}} \tag{2}\] ### Results and Comparison In this section, we present the results of our experimentation. Table 2 (rows 1-13) reports the empirical results (QWK scores) for all our approaches on the ASAP-Hindi dataset and our organic prompt. We have also provided an average of the results on the ASAP-Hindi set for a better comparison of the models across prompts. As a baseline for AES in Hindi does not exist, Table 3 (rows 1-4) presents published results on prominent models for AES in English. These results provide a benchmark upper limit against which to compare our scores across all types of models. According to Tables 2 and 3, all 13 models implemented were able to learn the task and perform competitively when compared to results on the English models. Results from the feature-based approaches (Table 2, rows 1-4) were also competitive, with Linear Regression outperforming all other models on prompt 1 and XGBoost doing so on prompts 2 and 8. Averages for the feature-extraction models were not far off from the EASE (SVR) average benchmark in Table 3, with the lowest and highest averages falling 0.078 and 0.011 points short of the benchmark, respectively. Although the neural models (Table 2, rows 5-8) did not outperform other models on any prompt, their averages were higher than the feature-based models and lower than the fine-tuned transformers, providing decent transitional results from classical to advanced end-to-end methods. SKIPFLOW (Table 2, row 8) gave the highest average score amongst these, followed closely by the CNN + LSTM + Attention average. The SKIPFLOW average fell 0.051 points short of its original implementation in English. On average, the fine-tuned highly multilingual transformers (Table 2, rows 9-11) gave higher results across all prompts. The fine-tuned mBERT model gives the maximum score for prompt 3. The fine-tuned XLM-R model outperforms all other models on four of the eight ASAP-Hindi prompts (prompts 4, 5, 6, and 7), giving the maximum average QWK as well, thereby establishing a state of the art for AES in Hindi. Compared to R\({}^{2}\)BERT's average, the fine-tuned XLM-R is 0.050 points short. Results on the mBERT model closely follow the results on the XLM-R model. The fine-tuned Indic transformers (Table 2, rows 12 and 13) did not perform comparably well. While IndicBERT did try to compete and learn, MuRIL's behavior during the training process was highly inconsistent, resulting in unpredictable results. This behavior could be attributed to a variety of factors which will be discussed in the next sub-section.
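For reference, the QWK computation of Equations 1 and 2 from Section 5.2 can be written out in a few lines; this is a minimal NumPy sketch rather than the exact evaluation script used for the scores above, and `sklearn.metrics.cohen_kappa_score` with `weights="quadratic"` yields the same value.

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, min_rating, max_rating):
    n = max_rating - min_rating + 1
    # Weight matrix from Equation 1
    w = np.array([[(i - j) ** 2 / (n - 1) ** 2 for j in range(n)] for i in range(n)])
    # Observed count matrix O of (reference, hypothesis) rating pairs
    O = np.zeros((n, n))
    for t, p in zip(y_true, y_pred):
        O[t - min_rating, p - min_rating] += 1
    # Expected matrix E: outer product of the two rating histograms,
    # rescaled so that E and O have the same sum
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    E = E * O.sum() / E.sum()
    # Equation 2
    return 1.0 - (w * O).sum() / (w * E).sum()

print(quadratic_weighted_kappa([2, 4, 4, 3], [2, 4, 3, 3], min_rating=0, max_rating=4))
```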
Results on the organic prompt were favourable (in comparison to results on the ASAP-Hindi set) with the fine-tuned mBERT model giving the highest QWK score. It is important to note that in contrast to the other prompts, the QWK scores obtained for organic prompt showed a slightly higher variance during training. It is likely that this is a result of the organic set's smaller magnitude compared to the ASAP-Hindi Dataset. ### Analysis and Discussion A general observation that is consistent with both previously established English AES results and our study, is that results on prompts 4,5, and 6 are higher than the other prompts for the ASAP dataset. Prompts requiring source-dependent responses perform better during training as compared to narrative, persuasive or expository prompts possibly due to a general consistency in syntax, coherence, and availability of source material. Such prompts are coupled with more balanced real-world rubrics allowing for consistency all throughout the writing and the scoring process, making such prompts ideal for generalization. In contrast generalization of ideas is slightly more difficult on prompts that allow for persuasive and expository discussions due to variance in human thought and cognition. Prompts with rubrics that are not consistent with real-world rubrics such as prompt-8, give results much worse than the aforementioned source-dependent prompts. The more discrete nature of feature-based approaches allows for consistent performance, but their failure to under \begin{table} \begin{tabular}{|c|c|c c c c c c c c|c|} \hline **ID** & **Hindi AES Models** & **P1** & **P2** & **P3** & **P4** & **P5** & **P6** & **P7** & **P8** & **Average** & **Organic** \\ \hline 1 & SVR & 0.799 & 0.612 & 0.605 & 0.657 & 0.797 & 0.630 & 0.400 & 0.380 & 0.610 & 0.579 \\ 2 & Linear Regression & **0.800** & 0.614 & 0.588 & 0.624 & 0.768 & 0.605 & 0.680 & 0.635 & 0.664 & 0.681 \\ 3 & Random Forest & 0.705 & 0.608 & 0.621 & 0.685 & 0.791 & 0.665 & 0.652 & 0.560 & 0.661 & 0.762 \\ 4 & XGBoost & 0.794 & **0.667** & 0.573 & 0.676 & 0.792 & 0.653 & 0.713 & **0.641** & 0.688 & 0.827 \\ \hline 5 & CNN & 0.571 & 0.513 & 0.529 & 0.614 & 0.657 & 0.703 & 0.521 & 0.426 & 0.566 & 0.762 \\ 6 & BiLSTM & 0.631 & 0.517 & 0.612 & 0.703 & 0.643 & 0.713 & 0.607 & 0.443 & 0.608 & 0.842 \\ 7 & CNN + LSTM + Attention & 0.723 & 0.597 & 0.677 & 0.711 & 0.781 & 0.791 & 0.701 & 0.593 & 0.696 & 0.827 \\ 8 & SKIPFLOW LSTM (Tensor) & 0.742 & 0.621 & 0.695 & 0.731 & 0.804 & 0.777 & 0.717 & 0.619 & 0.713 & 0.812 \\ \hline 9 & mBERT & 0.683 & 0.652 & **0.711** & 0.775 & 0.828 & 0.785 & 0.781 & 0.548 & 0.720 & **0.852** \\ 10 & DistilmBERT & 0.661 & 0.592 & 0.698 & 0.766 & 0.825 & 0.793 & 0.785 & 0.596 & 0.714 & 0.784 \\ 11 & XLM-RoBERTa & 0.758 & 0.585 & 0.692 & **0.809** & **0.834** & **0.822** & **0.794** & 0.639 & 0.741\({}^{*}\) & 0.831 \\ \hline 12 & MuRIL & 0.620 & 0.412 & 0.528 & 0.756 & 0.812 & 0.713 & 0.547 & 0.327 & 0.589 & 0.528 \\ 13 & IndicBERT & 0.651 & 0.489 & 0.659 & 0.751 & 0.799 & 0.784 & 0.708 & 0.412 & 0.656 & 0.796 \\ \hline \end{tabular} \end{table} Table 2: Experiment results of all models in terms of QWK on ASAP-Hindi corpus and the organic prompt. The bold number is the best performance for each prompt. The best average QWK is annotated with \({}^{*}\). 
\begin{table} \begin{tabular}{|c|c|c c c c c c c|c|} \hline **ID** & **English AES Models** & **P1** & **P2** & **P3** & **P4** & **P5** & **P6** & **P7** & **P8** & **Average** \\ \hline 1 & EASE (SVR) & 0.781 & 0.621 & 0.630 & 0.749 & 0.782 & 0.771 & 0.727 & 0.534 & 0.699 \\ 2 & CNN + LSTM + Attention & 0.822 & 0.682 & 0.672 & 0.814 & 0.803 & 0.811 & 0.801 & 0.705 & 0.764 \\ 3 & SKIPFLOW LSTM(Tensor) & **0.832** & 0.684 & 0.695 & 0.788 & 0.815 & 0.810 & 0.800 & 0.697 & 0.764 \\ 4 & R\({}^{2}\)BERT & 0.817 & **0.719** & **0.698** & **0.845** & **0.841** & **0.847** & **0.839** & **0.726** & **0.791** \\ \hline \end{tabular} \end{table} Table 3: Published results on prominent models for AES in English. All results in terms of QWK score on the original ASAP Corpus. stand nuance and interpret content limits their potential for improvement when compared to large pre-trained language models. Given the considerably larger scale (parameters and pre-training data) of XLM-R and mBERT they significantly outperform the other models. Although DistilmBERT is approximately 40% lighter than mBERT, it performs almost equally well and was the fastest to train among all large language models. A result observed was the relatively poor performance of the Indic language models, especially MuRIL. Several plausible explanations might explain this finding. IndicBERT's performance can be attributed to its base architecture being an ALBERT model which is almost 90% lighter than the BERT-base. IndicBERT may have been disadvantaged by the lack of trainable parameters because of this, resulting in it not competing with mBERT and XLM-R. MuRIL, however, does have the parameter size to compete. Pail et al. (2021) provides compelling evidence as to why MuRIL might fail to perform on syntactically-complex tasks such as AES, it includes the following points: Both IndicBERT and MuRIL perform masked word-level language modeling and do not have a sentence level pre-training task. An important aspect of AES is the morpho-syntactical relationship between ideas and sentences. mBERT, XLM-R, and DistilmBERT are highly multilingual language models pre-trained on more than 100 languages. It may provide them with linguistic and typological generalizations needed to model morpho-syntax more effectively than Indic models, which are trained on a small number of Indic languages and English. Another factor to add to this is that monolingual and multilingual large language models show syntactic localization across their layers which makes them perform better at complex tasks with long-range syntactical dependencies. In comparison, Indic language models IndicBERT and MuRIL show little localization, with MuRIL showing the least localization across all layers amongst all models. The syntactic directness of source-dependent responses is possibly why MuRIL still remains competitive on prompts 4, 5, and 6. ## 6 Conclusion and Future Work In this study, we implement and analyze various methods for AES in Hindi ranging from classical feature-based Machine Learning (ML) to advanced end-to-end models to set benchmarks and the state-of-the-art for AES in Hindi. We also introduce a single-prompt corpus of student-written essays in Hindi to further substantiate our findings from the Hindi-Translated ASAP corpus. The results of our experiments shed new light on AES research. We obtain competitive results when compared to the benchmark and the state-of-the-art methods in English AES. 
We also try to explain and analyze the results obtained using our models, attributing them to the idiosyncrasies of the models as well as the nature of the prompts on which they are tested. In view of the fact that AES in Hindi is particularly unexplored, multiple future research directions are possible. We plan to extend our work by scaling the organic corpus (both in terms of essays per prompt and the type of prompts) and proposing architectures that learn the syntactical features and nuances of Hindi by leveraging the trait-based scores that are included with the corpus. It is reasonable to use a Hindi-translated version of the ASAP dataset as precedent, but a comprehensive Hindi corpus is essential for AES in Hindi. A larger corpus facilitates more nuanced learning, which enables models to generalize from a wider spectrum of results. We obtain the most favorable results on our fine-tuned pre-trained multilingual transformer language models, and to push these results further we plan to try different training optimization methods, including domain-adversarial training, multi-scale essay representation approaches and more. Using such training optimization methods might improve performance on Hindi, which is more morpho-syntactically complex than English. For the same reason, introducing linguistic knowledge to segment at a more reasonable scale may bring further improvement. We also hope to push the results on the Indic language models IndicBERT and MuRIL through these optimization strategies. Due to the prominence of regional languages in Indian communities, a multilingual essay evaluation approach will allow for more diversity in essay writing and large-scale examinations. ## Acknowledgements Rajiv Ratn Shah was partly supported by the Infosys Center for Artificial Intelligence and the Center of Design and New Media at IIIT Delhi, India.
2310.00451
On the Role of Neural Collapse in Meta Learning Models for Few-shot Learning
Meta-learning frameworks for few-shot learning aim to learn models that can acquire new skills or adapt to new environments rapidly with a few training examples. This has led to the generalizability of the developed model towards new classes with just a few labelled samples. However, these networks are seen as black-box models, and understanding the representations learnt under different learning scenarios is crucial. Neural collapse ($\mathcal{NC}$) is a recently discovered phenomenon which showcases unique properties as the network proceeds towards zero loss: the input features collapse to their respective class means, the class means form a simplex equiangular tight frame (ETF) in which they are maximally distant and linearly separable, and the classifier acts as a simple nearest-neighbor classifier. While these phenomena have been observed in simple classification networks, this study is the first to explore and understand the properties of neural collapse in meta-learning frameworks for few-shot learning. We perform studies on the Omniglot dataset in the few-shot setting and study the neural collapse phenomenon. We observe that the learnt features indeed show the trend of neural collapse, especially as model size grows, but do not necessarily exhibit the complete collapse as measured by the $\mathcal{NC}$ properties.
Saaketh Medepalli, Naren Doraiswamy
2023-09-30T18:02:51Z
http://arxiv.org/abs/2310.00451v2
# On the Role of Neural Collapse in Meta Learning Models for Few-shot Learning ###### Abstract Meta-learning frameworks for few-shot learning aims to learn models that can learn new skills or adapt to new environments rapidly with a few training examples. This has led to the generalizability of the developed model towards new classes with just a few labelled samples. However these networks are seen as black-box models and understanding the representations learnt under different learning scenarios is crucial. Neural collapse (\(\mathcal{NC}\)) is a recently discovered phenomenon which showcases unique properties at the network proceeds towards zero loss. The input features collapse to their respective class means, the class means form a Simplex equiangular tight frame (ETF) where the class means are maximally distant and linearly separable, and the classifier acts as a simple nearest neighbor classifier. While these phenomena have been observed in simple classification networks, this study is the first to explore and understand the properties of neural collapse in meta learning frameworks for few-shot learning. We perform studies on the Omniglot dataset in the few-shot setting and study the neural collapse phenomenon. We observe that the learnt features indeed have the trend of neural collapse, especially as model size grows, but to do not necessarily showcase the complete collapse as measured by the \(\mathcal{NC}\) properties. ## 1 Introduction Human vision has the innate capability of recognizing new categories when a person is shown just a few samples of that category. For instance, when a person is shown a couple of images of an unseen person or an unseen category, he can recognize the new face quickly by implicitly drawing connections from the acquired prior knowledge. Although deep neural networks trained on millions of images have in some cases exceeded human performance in large-scale image recognition [3], under an open-world setting with emerging new categories it remains a challenging problem how to continuously expand the capability of an intelligent agent from limited new samples, also known as few-shot learning [15]. Moreover, in many machine learning applications, training data and labels are limited and require collection/annotation of new data which can be prohibitive [4; 15]. However, modern machine learning models including deep neural networks often require large amounts of training data to learn good representations needed for downstream tasks. As a result, models that can learn how to solve tasks with little training data are desirable. Meta-learning essentially aims to solve this issue without having to re-train a base model on the data from the new classes [5]. Few-shot classification is an instantiation of meta-learning in the field of supervised learning. After splitting into training and testing sets (with different classes), each dataset \(D\) is split into two parts, a support set \(S\) for learning and a query/prediction set B for training or testing, \(D=\langle S,B\rangle\). Often we consider a \(K\)-shot \(N\)-way classification task: the support set contains \(K\) labelled examples for each of \(N\) classes. Intuitively, the goal of the model is to learn from the small subset of data, i.e: the support set \(S\) should classify the data points in the query set \(B\) effectively [5]. ### Meta-Learning There are three major ways of addressing the learning from limited labeled data in general. 
They are configured into metric-based, model-based and optimization-based learning methods. The model-based meta learning algorithms design a model specifically for fast learning. They tend to use external/internal memory capabilities of a system to adapt for fast learning [16]. Memory augmented networks [11] and Meta-networks [8] are prime examples for model-based methods. The optimization-based methods aims to adjust the optimization algorithm so that the model can be good at learning with just a few labeled examples. It deals primarily with the modification of the gradient descent optimization algorithm for faster learning [16]. In computer vision, the metric based learning algorithms are the most commonly used algorithms where a distance metric is learned between the support set and the query set features to perform the required task at hand [16]. Primarily, we will be using the prototypical networks proposed by Snell et al. (2017) [13], described in further detail in Section 3. Meta-learning models train over several epochs, each of which consists of several _episodes_. An episode consists of the support set \(S\), which is the 'training set' for the episode, while the query set \(B\) is the 'testing set' for the episode. As mentioned earlier, the way classification is performed over the query set \(B\) varies based on the distance metric. Here lies the key essence behind meta-learning. Rather than training over a mini-batch of training data examples, in meta-learning, an epoch comprises a mini-batch of episodes, allowing the model to 'learn how to learn'. ### Neural Collapse Recent work by Papyan et al. (2020) [9] examined the representational properties of deep neural networks and learned that the class features of the final hidden layer associated with training data tend to collapse to the respective class feature means (\(\mathcal{NC}_{1}\)). This in turn simplifies the behaviour of the last layer classifier to that of a nearest-class center decision rule (\(\mathcal{NC}_{4}\)). The class means further tend to form a simplex equiangular tight frame (\(\mathcal{NC}_{2}\)) and the the linear layer classifier weights and the features of the training data can be interchangeable leading to self-duality (\(\mathcal{NC}_{3}\)). These properties were termed to be the \(\mathcal{NC}\) properties ([9]). We intend to check whether these specific properties hold for the meta-learning frameworks. Will these properties generalize to the meta-learning based, few-shot learning scenario? We intend to examine this question in this report. ## 2 Related Work Empirical and heuristic analysis has been the major reason for the exploration of the potential of deep learning networks. Hence understanding the theoretical underpinnings of such models would help us understand the reason behind the good generalization performance observed in such deep networks. Papyan et al. ([9]) studies the representational power of the overparameterized networks at the end phase of training and observed interesting properties in the network behavior. These properties are implicitly understood by the general ML community and have been individually explored. For instance, \(\mathcal{NC}_{1}\) and \(\mathcal{NC}_{4}\) are individually observed in [10] and [1] respectively. Constraining the network weights to be tight frames [7] and reducing intra-class variance has also been studied [12]. 
However, these observations have been studied individually; all four of the \(\mathcal{NC}\) properties are naturally attained by deep networks without any explicit constraints during training, and this makes them a unique feature to analyze and understand. While neural collapse has been evaluated in the transfer learning setting for few-shot learning [14], the widely used meta-learning frameworks have not been analyzed for the neural collapse phenomenon. ## 3 Experimental Setup To investigate this phenomenon, we perform several experiments using prototypical networks (ProtoNet) as first proposed by Snell et al. (2017) [13]. For our datasets, we build on the ProtoNet paper and use the Omniglot dataset [6], specifically developed for few-shot learning. Our code (based on the original ProtoNet paper) can be found at [https://github.com/saakethmm/NC-prototypical-networks](https://github.com/saakethmm/NC-prototypical-networks). ### Prototypical Networks As mentioned earlier, we will be using a form of a metric-based meta-learning model known as the Prototypical Network, first proposed by Snell et al. (2017) [13]. Within each episode, the mean support set feature vector is calculated for each class: \[\mathbf{v}_{c}=\frac{1}{|S_{c}|}\sum_{(\mathbf{x}_{i},y_{i})\in S_{c}}f_{\theta}\left(\mathbf{x}_{i}\right) \tag{1}\] where \(f_{\theta}\) is the embedding model, \(S_{c}\) is the set of labeled support data points for class \(c\), and \(|S_{c}|\) is the number of support set data points for the respective class (equal to \(K\), i.e., the number of shots). The query set vectors \(\mathbf{u}\in B\) are embedded by the model and the \(\ell_{2}\) distance is taken between \(\mathbf{u}\) and \(\mathbf{v}_{c}\) (from Equation 1) for \(c\in\mathcal{C}\) where \(\mathcal{C}=\{1,2,...,N\}\). Formally, the model loss is calculated as in Equation 2: \[\mathcal{L}(\theta)=-\log P_{\theta}(y=c|\mathbf{u})=-\log\frac{\exp\left(-\|f_{\theta}(\mathbf{u})-\mathbf{v}_{c}\|_{2}\right)}{\sum_{c^{\prime}\in\mathcal{C}}\exp\left(-\|f_{\theta}(\mathbf{u})-\mathbf{v}_{c^{\prime}}\|_{2}\right)} \tag{2}\] #### 3.1.1 Backbone Architectures Using the loss function in Equation 2, two different model architectures were used. The first is a simple convolutional backbone (referred to as ConvNet hereafter) from the ProtoNet paper consisting of four convolutional blocks (\(Conv\to BatchNorm\to ReLU\to MaxPool\)). The other model tested is ResNet-18 for CIFAR-10/100 [2]. The number of parameters is shown in Table 1. The implementations of these models are inspired by He et al. (2015) [2] but are borrowed from [https://github.com/kuangliu/pytorch-cifar](https://github.com/kuangliu/pytorch-cifar). #### 3.1.2 Neural Collapse Metrics We evaluate neural collapse using two different metrics. The intra-class variability and the formation of a simplex equiangular tight frame are estimated by the two formulae given below. It is worth keeping in mind throughout that \(K\) is the number of samples or shots while \(N\) is the number of classes, perhaps contrary to usual convention.
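Before defining the collapse metrics, the episodic computation of Equations 1 and 2 can be summarized in a short sketch; this is hypothetical PyTorch code rather than the training loop from our repository, and it assumes the episode labels have been remapped to \(0,\ldots,N-1\).

```python
import torch
import torch.nn.functional as F

def episode_loss(f_theta, support_x, support_y, query_x, query_y, n_way):
    # One prototypical-network episode; labels are assumed remapped to 0..n_way-1.
    z_support = f_theta(support_x)            # [n_way * k_shot,  d]
    z_query = f_theta(query_x)                # [n_way * k_query, d]
    # Class prototypes v_c: mean embedded support vector per class (Equation 1)
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in range(n_way)]
    )                                         # [n_way, d]
    # Negative l2 distances to the prototypes act as logits, so the cross-entropy
    # below is exactly the distance-based softmax loss of Equation 2
    logits = -torch.cdist(z_query, prototypes)
    return F.cross_entropy(logits, query_y)
```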
First, we define the global mean \(\overline{\mathbf{h}}_{G}\) and class mean \(\overline{\mathbf{h}}_{n}\) of the last layer features \(h_{n,k}\) as \[\overline{\mathbf{h}}_{G}=\frac{1}{KN}\sum_{n=1}^{N}\sum_{k=1}^{K}\mathbf{h}_{n,k} \quad\overline{\mathbf{h}}_{n}=\frac{1}{K}\sum_{k=1}^{K}\mathbf{h}_{n,k}(1\leq n\leq N)\] Next, we calculate the between class and in-class covariance matrix as \begin{table} \begin{tabular}{l l} \hline \hline Name & \# of Parameters \\ \hline ConvNet & 111.2k \\ ResNet-18 & 11.17M \\ \hline \hline \end{tabular} \end{table} Table 1: Number of Parameters in Backbones \[\mathbf{\Sigma}_{W}:=\frac{1}{KN}\sum_{n=1}^{N}\sum_{k=1}^{K}\left(\mathbf{h}_{n,k}-\mathbf{ \overline{h}}_{n}\right)\left(\mathbf{h}_{n,k}-\mathbf{\overline{h}}_{n}\right)^{\top}, \quad\mathbf{\Sigma}_{B}:=\frac{1}{N}\sum_{n=1}^{N}\left(\mathbf{\overline{h}}_{n}-\mathbf{ \overline{h}}_{G}\right)\left(\mathbf{\overline{h}}_{n}-\mathbf{\overline{h}}_{G} \right)^{\top}\] The variability collapse metric is then measured as the magnitude between-class covariance matrix \(\mathbf{\Sigma}_{B}\) and in-class covariance matrix \(\mathbf{\Sigma}_{W}\). The within class variability collapse can be calculated as ([17]): \[\mathcal{NC}_{1}:=\frac{1}{N}\operatorname{trace}\left(\mathbf{\Sigma}_{W}\mathbf{ \Sigma}_{B}^{\dagger}\right)\] The between-class variability collapse is calculated as (important to note that ProtoNets have no last-layer classifier, so the last layer-features \(\mathbf{H}=[\mathbf{\overline{h}}_{1},\mathbf{\overline{h}}_{2},...,\mathbf{\overline{h}}_{N}]\) were used) ([17]): \[\mathcal{NC}_{2}:=\left\|\frac{\mathbf{H}\mathbf{H}^{\top}}{\left\|\mathbf{H}\mathbf{H}^{\top }\right\|_{F}}-\frac{1}{\sqrt{N-1}}\left(\mathbf{I}_{N}-\frac{1}{N}\mathbf{1}_{N}\mathbf{ \mathbf{1}}_{N}^{\top}\right)\right\|_{F}\] ### Datasets As mentioned before, the main dataset under consideration is the Omniglot dataset [6]. The dataset consists of 1623 distinct, black and white handwritten characters from 50 alphabets. Given the number of input channels in ResNet is 3, the Omniglot images were repeated across the channel dimension (not done for the ConvNet). ## 4 Experimental Results The experiments were conducted using ProtoNets on the Omniglot dataset, though using different backbones. We study the training accuracy/loss along with the \(\mathcal{NC}_{1}\)/\(\mathcal{NC}_{2}\) metrics on the Omniglot dataset [6]. For all the experiments, the hyperparameters were left unchanged from the original ProtoNet. During training time, \(N_{support}=N_{query}=60\) and \(K_{support}=K_{query}=5\) per episode. During validation/test-time, \(N_{support}=N_{query}=5\), \(K_{support}=5\), \(K_{query}=15\). Batch size (number of episodes per epoch) is 100. The Adam optimizer was used, with learning rate \(\alpha=0.001\) and decay every 20 epochs. No weight decay regularization was employed. ### Few-shot Classification with ConvNet Prototypical networks with the ConvNet backbone were trained on the Omniglot dataset, as described in 3. The results on the training and validation sets can be seen in 1. It is interesting to note that the validation curves remain below the training curves, suggesting the model is underfitting the training data. When testing neural collapse on the metrics defined in Section 3.1.2, we note that neither \(\mathcal{NC}_{1}\) nor \(\mathcal{NC}_{2}\) takes place. We make two interesting observations, however. 
Firstly, both the within-class (\(\mathcal{NC}_{1}\)) and between-class variability (\(\mathcal{NC}_{2}\)) drop as training loss decreases to zero, though both stabilize at a trivially non-zero number. It is worth mentioning again that these metrics are measured for _each episode_ and averaged per epoch, suggesting the model is tending to learn how to form a model feature vector for each class per episode. Secondly, the query set and support set collapse metrics are quite similar, suggesting that the model learns the representation space of the class feature vectors quickly (allowing it to produce a similar structure across both sets of vectors. ### Few-shot Classification with ResNet-18 Given that the ConvNet appears to be underfitting the data, we trained a ProtoNet using ResNet-18, which has \(\tilde{10}\) times more parameters (Table 1). As seen in Figure 3, the training error and loss appears to match those of the validation set. Interestingly, when using a larger model such as ResNet-18, the extent of neural collapse increases (Figure 4). Though we still do not observe complete within-class or between-class variability collapse in the embedded features of the support and query sets, the values certainly reach closer to zero. \(\mathcal{NC}_{1}\) also initially begins at a much smaller value compared to Figure 2 and reaches \(\tilde{0}.4\) before stabilizing, though \(\mathcal{NC}_{2}\) shows only slight decrease in comparison. ## 5 Conclusions Neural collapse has been primarily observed in models with the last-layer classifier. Intuitively, such models tune weights to learn strong representations (during training) to achieve similar output Figure 1: The training and validation error (left) for the ConvNet-based ProtoNet, where error is based on the average number of misclassified query set examples. The training and validation loss (right) show a similar trend. The terminal phase of training is reached around the \(100^{th}\) epoch, though zero training error is never reached. Figure 2: The \(\mathcal{NC}_{1}\) score (left) on the support and query sets shows how within-class variability evolves during training for the ConvNet model. The \(\mathcal{NC}_{2}\) score (right) is also shown. Note the noise due to random selection of classes in each episode. scores to the one-hot encoded ground-truth vector. Since metric-based meta-learning methods such as prototypical networks do not use a last-layer classifier and instead use a distance-metric for classification, the phenomenon may not generalize. However, from the experiments conducted so far, some version of within-class variability decrease is seen, though this may not qualify as \(\mathcal{NC}_{1}\) as defined by Papyan et al. (2020) [9]. Between-class variability decrease is even more difficult to observe. Overall, however, an interesting trend is observed where number of parameters correlates with collapse in prototypical networks. Despite the discrepancy in the method of training, the results observed appear to support the observations made in prior work ([9]): overparameterized models lend themselves to stronger neural collapse. Another interesting observation is that the classification decision in prototypical networks is identical to \(\mathcal{NC}_{4}\), raising the question of whether these models 'force' the model to learn structures with distinct decision boundaries across all combinations of classes. 
If so, it is interesting to note that the structure learned does not necessarily resemble a Simplex-ETF based on the results, despite the model achieving training loss close to zero. ## Acknowledgements We would like to thank Prof. Wei Hu for the opportunity to pursue this work. For guidance on neural collapse and its metrics, we would also like to thank Prof. Qing Qu and Xiao Li. We also acknowledge the UMich Great Lakes and the UC Berkeley clusters for GPU resources provided for running experiments. Figure 4: The \(\mathcal{NC}_{1}\) score (left) on the support and query sets shows how within-class variability evolves during training for the ResNet-18 model. The \(\mathcal{NC}_{2}\) score (right) is also shown. Also note the lack of noisiness in the metrics. Figure 3: The training and validation error (left) for the ResNet-18-based ProtoNet, where error is based on the average number of misclassified query set examples. The training and validation loss (right) show a similar trend.
2309.15659
Federated Deep Equilibrium Learning: A Compact Shared Representation for Edge Communication Efficiency
Federated Learning (FL) is a prominent distributed learning paradigm facilitating collaboration among nodes within an edge network to co-train a global model without centralizing data. By shifting computation to the network edge, FL offers robust and responsive edge-AI solutions and enhance privacy-preservation. However, deploying deep FL models within edge environments is often hindered by communication bottlenecks, data heterogeneity, and memory limitations. To address these challenges jointly, we introduce FeDEQ, a pioneering FL framework that effectively employs deep equilibrium learning and consensus optimization to exploit a compact shared data representation across edge nodes, allowing the derivation of personalized models specific to each node. We delve into a unique model structure composed of an equilibrium layer followed by traditional neural network layers. Here, the equilibrium layer functions as a global feature representation that edge nodes can adapt to personalize their local layers. Capitalizing on FeDEQ's compactness and representation power, we present a novel distributed algorithm rooted in the alternating direction method of multipliers (ADMM) consensus optimization and theoretically establish its convergence for smooth objectives. Experiments across various benchmarks demonstrate that FeDEQ achieves performance comparable to state-of-the-art personalized methods while employing models of up to 4 times smaller in communication size and 1.5 times lower memory footprint during training.
Long Tan Le, Tuan Dung Nguyen, Tung-Anh Nguyen, Choong Seon Hong, Nguyen H. Tran
2023-09-27T13:48:12Z
http://arxiv.org/abs/2309.15659v1
# Federated Deep Equilibrium Learning: A Compact Shared Representation for Edge Communication Efficiency ###### Abstract Federated Learning (FL) is a prominent distributed learning paradigm facilitating collaboration among nodes within an edge network to co-train a global model without centralizing data. By shifting computation to the network edge, FL offers robust and responsive edge-AI solutions and enhances privacy preservation. However, deploying deep FL models within edge environments is often hindered by communication bottlenecks, data heterogeneity, and memory limitations. To address these challenges jointly, we introduce FeDEQ, a pioneering FL framework that effectively employs deep equilibrium learning and consensus optimization to exploit a compact shared data representation across edge nodes, allowing the derivation of personalized models specific to each node. We delve into a unique model structure composed of an equilibrium layer followed by traditional neural network layers. Here, the equilibrium layer functions as a global feature representation that edge nodes can adapt to personalize their local layers. Capitalizing on FeDEQ's compactness and representation power, we present a novel distributed algorithm rooted in the alternating direction method of multipliers (ADMM) consensus optimization and theoretically establish its convergence for smooth objectives. Experiments across various benchmarks demonstrate that FeDEQ achieves performance comparable to state-of-the-art personalized methods while employing models up to 4 times smaller in communication size and with a 1.5 times lower memory footprint during training. Federated Learning, Distributed Optimization, Equilibrium Models, Edge Networks. ## I Introduction With the rapid proliferation of Internet-connected devices and the consequent vast data they generate, the shortcomings of centralized computing are becoming increasingly evident. Edge networks have emerged as a compelling countermeasure, processing data closer to its source. Such networks capably manage the escalating data volumes, inherently reducing latency, optimizing bandwidth usage and improving real-time responsiveness. In tandem with this shift towards decentralized processing, Federated Learning (FL) has been recognized as an apt machine learning paradigm for the network edge. Contrasting with traditional methods that necessitate transferring all data to a central server, FL enables multiple edge nodes to collaborate on training a unified machine learning model. Here, each node refines the model using its local data and transmits only the model's updates to a central server, where they are aggregated to improve the global model [1]. This not only aligns with privacy requirements, since data never leaves the network edge, but also provides scalable solutions tapping into the distributed processing power of edge devices. Despite offering a scalable, privacy-preserving solution that taps into the distributed processing power of edge devices, fully realizing FL's potential within complex edge environments is contingent upon overcoming three major challenges. First, there are _communication bottlenecks_, resulting from the need to transmit large model parameters between parties. These constraints can lead to significant bandwidth limitations, causing inefficiencies and delays in training. Second, _data heterogeneity_ arises from the variety and diversity of the data held by edge nodes, leading to client drift effects where individual model performances diverge.
Finally, _the limitations in memory and storage_ constrain the complexity and size of the models that can be deployed, requiring innovative solutions to maximize the potential of FL in edge network scenarios. Recent literature has presented diverse approaches to address the foregoing challenges, as outlined in Fig. 1. The communication and memory challenges have been tackled using model compression techniques [2, 3] to reduce model size for facilitating transmission and adaptive gradients [4] for efficient parameter updates. On the other hand, personalization has emerged as a key strategy to address the issue of data heterogeneity and client drift. Full model personalization methods, such as meta-learning [5, 6] and multi-task learning [7, 8, 9], have shown efficiency in capturing unique local patterns. However, these techniques can be resource-intensive and may necessitate considerable communication overhead. Conversely, partial model personalization approaches [10, 11] offer a balanced solution by employing a shared representation complemented by personalized local models, enabling both customization and conservation of resources. While these methods typically tackle individual challenges, a comprehensive approach that addresses both communication and memory constraints, as well as data heterogeneity, within FL in edge networks has yet to be thoroughly explored in research. In this paper, we address a pivotal question: "_How do we leverage FL to exploit a compact shared representation under data heterogeneity in edge networks that not only substantially reduces communication bandwidth and memory footprint but also effectively enhances learning performance?_". To this end, we propose a novel framework, namely _FEderated Deep EQuilibrium Learning_ (FeDEQ), that leverages consensus optimization to learn a shared representation via deep equilibrium models (DEQs), combined with personalized explicit layers for each edge node. The compact shared layer can capture diverse patterns across edge nodes, while the explicit layers are fine-tuned for personalization using local data. This combination enables us to effectively handle the aforementioned challenges: FeDEQ performs on par with state-of-the-art (SOTA) approaches while reducing communication by 2-4 times and memory footprint by 1.5 times. The main contributions of this work are summarized as follows. * We introduce FeDEQ, a novel framework tailored for resource-efficient and personalized FL under data heterogeneity in edge networks. Central to FeDEQ's design is its ability to learn a _compact shared representation_ with a constant memory footprint via equilibrium models, and to adapt this shared base to swiftly fine-tune _personalized layers_ at individual edge nodes to achieve local personalization. * We develop a distributed algorithm based on ADMM consensus optimization that alternates between the primal problem - seeking optimal personalized and shared parameters - and the dual problem - controlling discrepancies between the local and shared representation to mitigate the drifting effect in local training. * We theoretically provide the convergence guarantees for the proposed algorithm for smooth objectives. Under mild assumptions, we show that FeDEQ converges to a stationary point, which achieves a shared representation across all edge nodes and optimal personalized parameters for each node. * Through extensive experimentation on several FL benchmarks, we substantiate that FeDEQ achieves personalization with communication and memory efficiency. Further, the shared layer is highly adaptable to new edge nodes in generalization experiments.
To the best of our knowledge, this is the first work that employs deep equilibrium learning and ADMM consensus optimization for simultaneously tackling communication bottlenecks, memory constraints and data heterogeneity in FL for edge networks. ## II Related Work ### _Federated Learning at the Network Edge_ The emergence of FL comes in response to three significant challenges in large-scale machine learning, particularly pertinent to edge networks: processing massive amounts of edge data, managing the heavy communication load within networks, and preserving privacy without resorting to centralized storage. The standard federated algorithm - FedAvg [1], which utilizes local stochastic gradient descent (SGD) and averaging, has established the foundational framework for much of the FL research in edge networks. The primary challenges of FL in the context of edge networks can be broadly classified into communication and resource bottlenecks and data heterogeneity. _Communication and memory bottlenecks_ pertain to the large number of model parameters that need to be exchanged and the large models that need to be trained on edge nodes. These challenges have prompted prior works to create more communication- and computation-efficient strategies including: quantization [12, 13, 14], where local computations reduce the precision of model updates to reduce the size of data transferred; model compression [2], which involves reducing the size of the model to make it more manageable to transmit; and adaptive communication [4], where the amount of data exchanged is adjusted dynamically based on various factors like network conditions. The second category, _data heterogeneity_, is concerned with the non-identically distributed (non-i.i.d.) data that are often present across edge nodes. This aspect of the data can affect the convergence and performance of the global model [15], leading to a variety of approaches to mitigate its impact. Studies have delved into methods for managing the diversity of data across edge nodes, incorporating techniques like data sharing [16], data augmentation [17] or variance reduction [18] to recognize and adapt to the variations in data across different nodes in the network, ensuring that the globally aggregated model accurately reflects the underlying patterns of the entire distributed dataset. ### _Personalized Federated Learning_ Personalized Federated Learning (PFL) has recently gained significant attention as a way to address data heterogeneity [19]. Regarding full model personalization, a simple yet effective personalization approach involves fine-tuning the global model's parameters for each client [20, 21]. Various works adopted multi-task learning (MTL), enabling clients to learn their distinct data patterns while benefiting from knowledge shared by others [7, 8, 9]. Meta-learning frameworks are also employed to develop an effective initial model that can be rapidly adapted to new heterogeneous tasks [5, 6], while model interpolation techniques are used to personalize models using a mixture of global and local models [22, 23]. Fig. 1: FeDEQ aims to simultaneously address data heterogeneity and communication and resource constraints, while other methods focus on solving individual challenges.
Recent years, partial model personalization approaches [10, 11, 24, 25, 26, 27, 28, 29] have brought up the idea of exploiting the common representation in heterogeneous settings to learn a personalized model tailored for each device, enabling simple formulation and layer-wise flexibility; however, their applicability is limited to specific settings such as linear representations [10] or pre-trained representations [28]. ### _Deep Equilibrium Learning_ Equilibrium models has emerged as a fascinating topic in the realm of implicit deep learning in recent times. In contrast to the conventional layer stacking models, deep equilibrium models (DEQ) implicitly define layers through an equilibrium point (a.k.a. fixed-point) of an infinite sequence of computation [30]. A general DEQ consists of a single implicit layer \(f_{\theta}\) modeled as a fixed-point system \(z^{\star}=f_{\theta}(z^{\star};x)\). Here, \(\theta\) is the model parameter, x is the input data, and the equilibrium \(z^{\star}\) is approximated by fixed-point iterations or root-finding. Several works [31, 32] delved into the theoretical underpinnings of DEQ, providing valuable insights into the stability and approximation properties of implicit layers. Recent works have successfully applied DEQ on large-scale tasks such as sequential modeling, and semantic segmentation [33, 30]. Despite potential drawbacks such as slow training time and computational complexity, DEQ possesses the capability to mitigate communication overhead and memory-intensive deep models, thereby presenting a compelling solution for FL. ### _ADMM-based learning_ ADMM has made significant advancements in both theory and application over recent decades, particularly in distributed learning [34, 35, 36]. In FL, several ADMM-based decentralized methods have been developed to enhance communication efficiency [37, 38]. However, employing this framework for tackling data heterogeneity is still uncharted. In this work, we also inherit the ideas of partial personalization that efficiently learn non-linear representations with a reduced model size using only edge local data. ## III Problem Formulation ### _Federated Deep Equilibrium Learning_ Consider a federated learning setting within an edge network consisting of a central server and \(n\) edge nodes. Each node \(i\) owns a dataset of \(n_{i}\) independent and identically distributed (i.i.d.) observations, denoted as \(\{(x_{ij},y_{ij})\in\mathcal{X}_{i}\times\mathcal{Y}_{i}:j=1,\ldots,n_{i}\}\). This local dataset represents a crucial piece of the learning puzzle, capturing unique patterns and characteristics inherent to each specific node's region or domain. To distill meaningful insights from this data, we introduce a predictive function \(f_{i}:\mathcal{X}_{i}\rightarrow\mathcal{Y}_{i}\), tailored to the unique statistical properties of each node's local data, and a loss function \(\ell(f_{i}(x_{ij}),y_{ij})\) measuring the discrepancy between \(f_{i}(x_{ij})\) and \(y_{ij}\). The local empirical risk on node \(i\), defined as \(\mathcal{L}_{i}=\frac{1}{n_{i}}\sum_{j=1}^{n_{i}}\ell(f_{i}(x_{ij}),y_{ij})\), reflects how well the model is performing within the individual local context. The primary objective is to minimize the global risk, equal to the sum of all local risks. Conventional FL approaches, such as FedAvg [1], learn a global model by averaging the model parameters obtained from edge nodes after local SGD rounds. 
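As a point of reference for the discussion that follows, one communication round of such parameter averaging can be sketched as below; this is a simplified, hypothetical illustration (unweighted averaging, ignoring per-node dataset-size weights and buffer handling) rather than a full FedAvg implementation.

```python
import copy
import torch
import torch.nn.functional as F

def fedavg_round(global_model, node_loaders, local_steps=5, lr=0.01):
    # One communication round: every edge node runs a few local SGD steps from
    # the current global weights; the server then averages the resulting weights.
    local_states = []
    for loader in node_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _, (x, y) in zip(range(local_steps), loader):
            opt.zero_grad()
            F.cross_entropy(local(x), y).backward()
            opt.step()
        local_states.append(local.state_dict())
    # Server-side aggregation: plain parameter average (FedAvg additionally
    # weights each node by its local dataset size n_i)
    averaged = {k: torch.stack([s[k].float() for s in local_states]).mean(dim=0)
                for k in local_states[0]}
    global_model.load_state_dict(averaged)
    return global_model
```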
While this method promotes collaboration among nodes, it ignores the intricate of heterogeneous data which leads to suboptimal performance. A prominent way to rectify this drawback is through the implementation of partial personalization, where a model decouples into a _representation module_ and a _personalized module_. The former serves as a collaborative learning ground for edge nodes to acquire a common representation (either linear or non-linear) that captures the shared patterns and regularities across all nodes' data. This common learning phase helps maintain a unified and consistent model view. On the other hand, the _personalized module_ adapts and refines itself to the unique characteristics of each node's local data, thereby enabling better local generalization [10, 24, 28]. While this personalization approach offers a more nuanced handling of the data and can lead to improved model performance, it is not without challenges. An effective representation is typically deep and complex, often consisting of numerous parameters and layers - as illustrated by the representation \(r\) in Fig. 1(a), which includes \(M\) layers. This complexity translates into substantial communication costs and \(O(M)\) memory complexity during training. These costs can prove to be significant barriers, particularly in resource-constrained edge networks where efficiency is paramount. In this work, we concurrently tackle these challenges of data heterogeneity, communication overhead, and memory constraints by leveraging partial personalization and model compactness through the utilization of deep equilibrium, as shown in Fig. 1. Specifically, for every node \(i\) we consider a special predictive function \(f_{i}\) of the form \[\begin{split} f_{\theta_{i},w_{i}}(x_{ij})&:=h_{w_{ i}}(g_{\theta_{i}}(z_{ij}^{\star};x_{ij}))\\ \text{s.t.}& z_{ij}^{\star}=g_{\theta_{i}}(z_{ij}^{ \star};x_{ij}),\quad\forall i,j.\end{split} \tag{1}\] By this definition, \(f_{i}\) is a composite of two functions: \(g\) parametrized by \(\theta_{i}\) and \(h\) by \(w_{i}\). A schematic illustration is presented in Fig. 1(b). While \(h_{w_{i}}\) can be any conventional layer (or stack of layers), such as a fully-connected network, \(g_{\theta_{i}}\) is defined implicitly with the constraint \(z_{ij}^{\star}=g_{\theta_{i}}(z_{ij}^{\star};x_{ij})\). Such a representation for \(g_{\theta_{i}}\) is called a _deep equilibrium model_[30], and the key advantage lies in its compactness and expressive power: a single layer can represent an infinitely deep weight-tied network while maintaining a constant memory footprint [30]. This allows the model to learn a sophisticated non-linear representation at a low memory capacity. We describe this in detail in Sec. III-B below. We then formulate a global consensus optimization problem for FeDEQ as follows: \[\begin{split}\min_{\{\theta_{i}\},\{w_{i}\},\theta}& \sum_{i=1}^{n}\left[\mathcal{L}_{i}\left(\theta_{i},w_{i}\right):=\frac{1}{n_{i }}\sum_{j=1}^{n_{i}}\ell\left(f_{\theta_{i},w_{i}}\left(x_{ij}\right),y_{ij} \right)\right]\\ &\text{s.t.}\quad\theta_{i}=\theta,\quad\forall i.\end{split} \tag{2}\] This mathematical representation embodies a critical balance between global coherence and local customization. The consensus constraints \(\theta_{i}=\theta,\forall i\) are to enforce that the local \(\theta_{i}\) are learned through sharing among edge nodes. This encourages the representation layer to benefit from the structure of all nodes' data--the objective of FL. 
On the other hand, all \(w_{i}\) are fine-tuned locally, adapting from the shared representation to achieve personalization. In Sec. IV below, we present a novel algorithm to solve this optimization problem based on ADMM consensus optimization. ### _Equilibrium Layers: Forward Pass and Backward Pass_ A single equilibrium layer in the constraint of (1) embodies an "infinitely deep" network through a fixed-point system. Here we examine a common non-linear mapping of the form: \[z=g_{\theta}(z;x):=\phi(Bz+Cx+b), \tag{3}\] where \(x\in\mathbb{R}^{d}\) is the input, \(z\in\mathbb{R}^{d_{1}}\) is the output, \(\theta=(B\in\mathbb{R}^{d_{1}\times d_{1}},C\in\mathbb{R}^{d_{1}\times d},b\in \mathbb{R}^{d_{1}})\) is the parameters and \(\phi:\mathbb{R}^{d_{1}}\to\mathbb{R}^{d_{1}}\) is some nonlinear activation functions. Equation (3) implies that \(z^{\star}\) is a fixed point of the function \(z\mapsto\phi(Bz+Cx+b)\) for a given data point (or a mini-batch) \(x\): an equilibrium layer of the form (3) is equivalent to feeding the input through an infinite sequence of weight-tied layers of a feedforward network. It has been observed that such a single layer, while much smaller in size, can perform on par with deep networks containing many explicit layers. This makes implicit representations particularly appealing for FL since the communication of model parameters is often a bottleneck. #### Iii-B1 Forward Pass - Fixed-Point Representation In the general problem (1) or the special case (3), we aim to find an equilibrium state \(z^{\star}\) such that \(z^{\star}=g_{\theta}(z^{\star};x)\), where \(g_{\theta}\) is a neural network parameterized by \(\theta\) and \(x\) is the input to the model. The fixed-point formulation allows a single equilibrium layer to capture long-range dependencies as all parts of the input contribute to the equilibrium state \(z^{\star}\). Moreover, in the process of finding \(z^{\star}\), the model implicitly emphasizes features that are stable under \(g_{\theta}\), promoting robust and consistent representations. Additionally, if the input distribution changes, the DEQ can adapt by finding a new equilibrium, enabling dynamic adaptation to data changes. However, there remain two subproblems that affect the forward pass of DEQ optimization including fixed-point convergence and fixed-point solver. **Fixed-point convergence.** From a theoretical perspective, a unique fixed point \(z^{\star}\) offers an equilibrium representation under \(g_{\theta}\). The convergence of fixed-point iteration is strongly tied to the properties of the function \(g_{\theta}\), where a crucial condition for the guaranteed convergence is that \(g_{\theta}\) needs to be a contraction mapping. This means that there exists a Lipschitz constant \(L<1\) such that \(\|g_{\theta}(x)-g_{\theta}(y)\|\leq L\|x-y\|\) for all \(x,y\) in the domain of \(g_{\theta}\). If \(g_{\theta}\) is a contraction mapping, the Banach fixed-point theorem guarantees the convergence to a unique fixed point [39]. However, ensuring that \(g_{\theta}\) is a contraction mapping can be challenging in practice, given that \(g_{\theta}\) is typically a deep neural network, capable of representing highly complex and non-linear functions. Therefore, we make two assumptions following the theory in [31]. 
First, \(\phi\) is a component-wise non-expansive (CONE) mapping, i.e., (i) the \(k\)th component of its output only depends on the \(k\)th component of its input and (ii) when operating on scalar input, \(\phi\) is \(1\)-Lipschitz continuous: \(\forall u,v\in\mathbb{R},|\phi(u)-\phi(v)|\leq|u-v|\). Many activation functions in deep learning, such as tanh, sigmoid, and (leaky) ReLU, satisfy this property. Second, the infinity norm of \(B\), denoted by \(\|B\|_{\infty}\), is strictly bounded above by \(1\). Together, these assumptions imply that \(g_{\theta}\) in (3) is a contraction mapping, and a fixed point \(z^{\star}\) exists for all input \(x\). The use of \(l_{\infty}\) projection can promote fixed-point convergence in certain iterative methods by limiting the size of the updates. Specifically, after each update, the resulting point is "projected" onto a set defined by an \(\infty\)-norm ball. For a matrix \(B\in\mathbb{R}^{d_{1}\times d_{1}}\) in (3) and radius \(\kappa>0\), we can employ \(l_{\infty}\) projection by solving the following problem. \[\min_{A}\frac{1}{2}\|A-B\|_{F}^{2}\ \ \ \text{s.t.}\ \ \|A\|_{\infty}\leq\kappa. \tag{4}\]

Fig. 2: Using explicit and implicit layers as a shared representation module for edge nodes in FL. (a) Each node has \(M\) explicit layers in its model. (b) Instead of having multiple explicit layers, each node uses only one equilibrium layer parametrized by \(\theta_{i}\).

This problem can be solved by decomposing across rows and applying existing methods such as bisection [31]. We demonstrate these findings in an FL context that involves 100 edge nodes, each possessing a portion of the FEMNIST dataset for a 62-class classification task. To benchmark, we employ the widely used FedAvg algorithm and compare two local model configurations: (i) a DEQ-MLP network with a single equilibrium layer of size 512 (DEQ-MLP), wherein the parameters associated with \(z\) are projected onto the \(l_{\infty}\)-norm ball, followed by a linear layer of size 128, and ultimately, a final layer mapping to 62 classes; and (ii) a similar network architecture but without the projection step. As illustrated in Fig. 3, the DEQ-MLP network with the projection mechanism achieves convergence, whereas the network without projection appears to diverge. **Fixed-point Solver.** Recent works leverage Broyden's method [30, 40] and Anderson Acceleration (AA) [33, 41, 42] for solving fixed-point systems in DEQs. In this work, we employ a variant of AA to expedite the convergence of fixed-point iterations. The concept behind AA is to form the next iterate as a linear combination of past iterates, rather than merely relying on the last iterate. Specifically, AA minimizes the differences between the iterates and their mapped values over a history of \(m\) previous iterations. In terms of representation learning in FeDEQ, we suggest a warm-start strategy for AA in which the fixed point from the previous iteration is reused as the initial point for the next. Since the changes in \(\theta\) and \(x\) are small from one iteration to the next (i.e., \(\theta^{(i)}\approx\theta^{(i-1)}\) and \(x^{(i)}\approx x^{(i-1)}\)), the fixed points \(z^{(i)}\) and \(z^{(i-1)}\) are likely to be close to each other as well. This could bring several benefits: * _Faster Convergence_: As the input data changes slightly from one iteration to the next, reusing the fixed point could lead to faster convergence of the fixed-point iteration, which could in turn speed up the overall training process. * _Continuity of Representations_: The learned representations could exhibit a form of continuity or smoothness from one iteration to the next. This could potentially make the learned representations more robust and stable, especially in the face of small changes in the input data or the underlying function. * _Improved Robustness_: If the model is trained on a stream of data that changes over time, reusing the fixed point could make the model more robust to these changes, as it provides a form of context from the previous data.
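To make the forward pass concrete, the snippet below is a minimal, illustrative JAX sketch (JAX being the framework our implementation relies on) of the equilibrium layer in (3). It is a sketch under our own simplifications, not the actual implementation: plain fixed-point iteration is used in place of Anderson acceleration, the \(l_{\infty}\) constraint of (4) is enforced by a simple row-wise rescaling rather than the exact Frobenius-nearest projection, and all function and variable names (`project_linf`, `equilibrium_layer`, `z0`) are our own.

```python
import jax.numpy as jnp

def project_linf(B, kappa=0.97):
    # Rescale each row so its l1 norm is at most kappa, which enforces the
    # induced infinity norm ||B||_inf <= kappa < 1 (a simple surrogate for (4)).
    row_norms = jnp.sum(jnp.abs(B), axis=1, keepdims=True)
    scale = jnp.minimum(1.0, kappa / (row_norms + 1e-12))
    return B * scale

def equilibrium_layer(theta, x, z0=None, num_iters=50, tol=1e-4):
    # theta = (B, C, b) parametrizes g_theta(z; x) = tanh(B z + C x + b), as in (3).
    B, C, b = theta
    B = project_linf(B)                                # keep g_theta a contraction
    z = jnp.zeros(B.shape[0]) if z0 is None else z0    # warm start when available
    for _ in range(num_iters):                         # plain (un-jitted) fixed-point iteration
        z_next = jnp.tanh(B @ z + C @ x + b)
        if float(jnp.max(jnp.abs(z_next - z))) < tol:
            return z_next
        z = z_next
    return z                                           # approximate z* with z* = g_theta(z*; x)
```

In the warm-start strategy discussed above, `z0` would be set to the fixed point computed for the same mini-batch in the previous local iteration.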
#### Iii-B2 Backward Pass - Implicit Differentiation Once the equilibrium state \(z^{\star}\) has been found in the forward pass, the model computes the loss \(\mathcal{L}\) based on this state and the target output. The goal in the backward pass is to update the parameters \(\theta\) to minimize this loss. The gradient of the loss \(\mathcal{L}\) w.r.t. \(\theta\) is calculated by employing the chain rule as follows: \[\frac{\partial\mathcal{L}}{\partial\theta}=\frac{\partial\mathcal{L}}{\partial z^{\star}}\frac{\partial z^{\star}(\theta)}{\partial\theta} \tag{5}\] In this context, the gradient of \(\mathcal{L}\) w.r.t. \(z^{\star}\) can be derived through backpropagation. The notation \(z^{\star}(\theta)\) is used when \(z^{\star}\) is viewed as an implicit function of the shared parameter \(\theta\) that we differentiate with respect to, and we write \(z^{\star}\) on its own when simply referring to the equilibrium value. Because of the self-consistent nature of the equilibrium state \(z^{\star}\), the gradient \(\partial z^{\star}(\theta)/\partial\theta\) cannot be computed directly. Instead, implicit differentiation is used. By differentiating the fixed-point equation \(z^{\star}=g_{\theta}(z^{\star};x)\) w.r.t. \(\theta\), we get: \[\frac{\partial z^{\star}(\theta)}{\partial\theta}=\frac{\mathrm{d}g_{\theta}(z^{\star};x)}{\mathrm{d}\theta}=\frac{\partial g_{\theta}(z^{\star};x)}{\partial z^{\star}}\frac{\partial z^{\star}(\theta)}{\partial\theta}+\frac{\partial g_{\theta}(z^{\star};x)}{\partial\theta} \tag{6}\] Upon reorganizing the terms, we attain an explicit formulation for the Jacobian as follows. \[\frac{\partial z^{\star}(\theta)}{\partial\theta}=\left(I-\frac{\partial g_{\theta}(z^{\star};x)}{\partial z^{\star}}\right)^{-1}\frac{\partial g_{\theta}(z^{\star};x)}{\partial\theta} \tag{7}\] where \(I\) is the identity matrix, and \(\frac{\partial g_{\theta}}{\partial z}\) and \(\frac{\partial g_{\theta}}{\partial\theta}\) are the Jacobians of \(g_{\theta}\) w.r.t. \(z^{\star}\) and \(\theta\), respectively. Although the right-hand side terms can be explicitly computed using conventional automatic differentiation, solving the inverse of the Jacobian is computationally intensive, especially when \(z^{\star}\) is high-dimensional. To solve this equation more efficiently, we can employ the following vector-Jacobian product (VJP) [43]. \[y^{T}\left(\frac{\partial z^{\star}(\theta)}{\partial\theta}\right)=y^{T}\left(I-\frac{\partial g(z^{\star};x)}{\partial z^{\star}}\right)^{-1}\left(\frac{\partial g(z^{\star};x)}{\partial\theta}\right). \tag{8}\] which is equivalent to the following linear system: \[u^{T}=u^{T}\left(\frac{\partial g_{\theta}(z^{\star};x)}{\partial z^{\star}}\right)+y^{T}.
\tag{9}\] where \(u^{T}=y^{T}\left(I-\frac{\partial g_{\theta}(z^{\star};x)}{\partial z^{\star}}\right)^{-1}\) is the solution of the linear system, which can be solved efficiently by using linear solvers such as conjugate gradient [44]. To this end, by substituting (7) into (5), we obtain the following VJP form. \[\frac{\partial\mathcal{L}}{\partial\theta}=\frac{\partial\mathcal{L}}{\partial z^{\star}}\left(I-\frac{\partial g(z^{\star};x)}{\partial z^{\star}}\right)^{-1}\frac{\partial g(z^{\star};x)}{\partial\theta} \tag{10}\] This can be solved using the above implicit differentiation techniques. The gradient \(\partial\mathcal{L}/\partial\theta\) is then used in gradient-based optimization methods to update the model parameters \(\theta\) and minimize the loss. One of the significant advantages of DEQs' backward pass is their memory efficiency.

Fig. 3: FedAvg using DEQ-MLP with and without \(l_{\infty}\) projection

In traditional deep learning models, the activations of all layers need to be stored for backpropagation. In contrast, DEQs only need to store the final equilibrium state, as all layers are the same and share the same parameters. This results in substantial memory savings, especially for very deep networks. Furthermore, DEQs allow for end-to-end training with gradient-based optimization methods [30]. Despite the complexity introduced by the fixed-point iteration and implicit differentiation, the backward pass is still differentiable, and gradients can be computed and used for parameter updates. **Jacobian-Free Backpropagation.** The main challenge in the backward pass of DEQs is the extensive computational time, even though existing methods can avoid computing the inversion of a Jacobian matrix directly. Surprisingly, recent research has demonstrated that implicit differentiation can be performed by using the zeroth-order approximation of the Neumann series for the inverse of the Jacobian matrix [45], instead of explicitly computing this expensive term. This approach still provides satisfactory performance while significantly reducing computation time. Hence, in this work, we employ this Jacobian-free backpropagation approach to estimate the implicit gradients. ### _Federated Learning with DEQs vs. Explicit Models_ Previous work has shown that equilibrium layers can achieve similar performance to explicit layers while maintaining a compact model size [30]. We extend this finding to an FL context with 100 edge nodes, each holding a portion of the FEMNIST dataset for a 62-class classification task. We employ the de-facto FedAvg to compare two local model settings: (i) an MLP network with 4 explicit layers of size 512, followed by a linear layer of size 128, and a final layer mapping to 62 classes; and (ii) the same network with the four explicit layers replaced by a single equilibrium layer of size 512 (DEQ-MLP) as in (3). Fig. 4 shows that the same test accuracy is achieved in both settings. However, since setting (ii) requires a model that is \(41\%\) smaller in size, its communication cost per round is reduced by nearly half. In Sec. V, we provide experiments with much more sophisticated models like DEQ-ResNet and DEQ-Transformer that also highlight this benefit. ## IV Consensus Optimization for Federated Learning with Equilibrium Layers With the objective (2), we introduce an ADMM consensus optimization variant to collaboratively learn the shared parameters \(\theta_{i}\) and the personalized parameters \(w_{i}\).
We detail the optimization procedure in Algorithm 1 and provide some remarks on its convergence at the end of this section. ### _Learning shared representation via ADMM Consensus Optimization_ In FL for edge networks, the data heterogeneity across diverse edge nodes often culminates in a pronounced challenge known as _client drift_[18]. Such drift emerges when edge nodes, optimizing their model based on uniquely distributed data, converge toward a local objective minimum. This local convergence may differ considerably from the global optimization objective desired for an aggregated model. Consequently, when the server integrates models from myriad nodes, it often encounters a compounded effect of these varied local minima. The resulting global model, unfortunately, can become an aggregate that fails to optimally represent any individual edge node's data, leading to overall suboptimal performance. **Motivation for using ADMM in Combatting Client Drift Effects.** To mitigate the challenges posed by data heterogeneity and client drift, we have formulated the consensus optimization problem (2) to find a shared representation across all edge nodes and propose to employ ADMM [34] to solve it. This algorithm is based on alternating updates of primal and dual variables to achieve two goals: \(\theta_{i}\) across edge nodes become closer and the objective function decreases. Applying this to our problem, we first introduce a dual variable \(\lambda_{i}\) for each \(\theta_{i}\) and construct the following augmented Lagrangian for each client \(i\): \[\tilde{\mathcal{L}}_{i}(\theta,\theta_{i},w_{i},\lambda_{i}):=\mathcal{L}_{i}( \theta_{i},w_{i})+\langle\lambda_{i},\theta_{i}-\theta\rangle+\frac{\rho}{2} \|\theta_{i}-\theta\|^{2} \tag{11}\] where \(\rho>0\) is the penalty parameter. Upon solving the local augmented Lagrangian, clients will then collaboratively minimize the global objective \(\tilde{\mathcal{L}}\), which is defined as follows. \[\tilde{\mathcal{L}}(\theta,\{\theta_{i}\},\{w_{i}\},\{\lambda_{i}\}):=\sum_{ i=1}^{n}\left[\tilde{\mathcal{L}}_{i}(\theta,\theta_{i},w_{i},\lambda_{i}) \right], \tag{12}\] Delving deeper into the augmented Lagrangian (11), two essential terms come to the forefront. The inner product term, \(\langle\lambda_{i},\theta_{i}-\theta\rangle\), reflects the alignment between the deviation of a local model from the global model and its associated dual variable, \(\lambda_{i}\). Essentially, it acts as a controller of discrepancies between local and global models. The quadratic term, \(\frac{\rho}{2}\|\theta_{i}-\theta\|^{2}\), directly penalizes deviations of the local model from the global standard. The parameter \(\rho\) adjusts the stringency of this penalty, ensuring a balance between local adaptability and global consensus. These terms result in the new correction term in the gradient \(\nabla_{\theta_{i}}\tilde{\mathcal{L}}_{i}=\nabla_{\theta_{i}}\mathcal{L}_{i }+\lambda_{i}+\rho(\theta_{i}-\theta)\) aligning client updates with server directions to overcome client drifts. As described in detail in Algorithm 1, the ADMM consensus optimization proceeds by iteratively updating the local parameters \(\theta_{i}\) and \(w_{i}\), the dual variables \(\lambda_{i}\), and the shared global parameters \(\theta\). This procedure consists of three key steps per communication round. 
First, the server performs averaging to update the shared parameters \(\theta\) based on the set of \(\theta_{i}\) obtained from the previous round (line 5) and then sends it to a new subset of edge nodes (line 7).

Fig. 4: FedAvg with 5-layer MLP model vs. DEQ-MLP

Second, each selected node \(i\) minimizes the augmented Lagrangian \(\tilde{\mathcal{L}}_{i}\) w.r.t. the primal variables (lines 8-9). This local minimization makes use of the Rep_Update procedure (lines 13-20), which involves solving Problems (1) and (10) to obtain the gradient \(\nabla_{\theta_{i}}\tilde{\mathcal{L}}_{i}\) via implicit differentiation. The dual variable \(\lambda_{i}\) is then updated by an ascent step to enforce the consensus constraint (lines 10-11). This process is repeated within a number of iterations until convergence, at which point the local representation \(\theta_{i}\) and the consensus variable \(\theta\) agree, resulting in a shared representation learned from the distributed data. ### _Personalization through Explicit Layers_ With the shared representation \(\theta\) learned via ADMM consensus optimization, we proceed to the second aim of FeDEQ, which focuses on achieving a personalized model for each node to better address data heterogeneity. Particularly, in every communication round, node \(i\in S\) fine-tunes its personalized parameters \(w_{i}\) on top of the shared representation \(\theta\) with its local data, i.e., minimizing the original loss \(\mathcal{L}_{i}(\theta,w_{i})\) w.r.t. \(w_{i}\) (Line 8 of Algorithm 1). This problem can be efficiently solved by using traditional gradient-based optimization methods such as SGD, where the personalized parameter \(w_{i}^{t}\) obtained from the previous round \(t\) is used as the initial value for updating the next \(w_{i}^{t+1}\), providing edge nodes the flexibility to adapt to their unique data distributions under different personalization settings. By decomposing the global problem into local subproblems and a consensus step, the ADMM consensus procedure seeks a balance between local objectives and the global objective. The augmented Lagrangian (11) introduces a penalty for deviations from the consensus. Thus, even if a node's local data pushes its model towards a distinct local minimum, the consensus penalty nudges it closer to the global consensus. Furthermore, with the penalty parameter \(\rho\), ADMM can be adjusted in terms of how strictly it enforces consensus. This provides flexibility to tune the degree of consensus based on the extent of client drift observed. Furthermore, combining equilibrium models with the ADMM consensus optimization framework enhances communication efficiency in key ways. Equilibrium models, with their implicitly "infinite-depth" single layer, simplify model complexity and reduce communication overhead by requiring fewer parameters. Moreover, they maintain a constant, often lower, memory footprint than traditional deep models, thus enabling resource-constrained edge devices to manage complex models without straining hardware resources. Additionally, the ADMM framework permits asynchronous updates of local parameters, thereby optimizing network resource utilization and convergence speed.
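Before the formal listing in Algorithm 1 below, the following minimal sketch illustrates one per-node update and the server aggregation under our own simplifications: parameters are treated as flat arrays rather than pytrees, a fixed number of plain gradient steps stands in for the local solvers, and the gradient of the local risk with respect to the representation parameters is assumed to come from the implicit differentiation (or its Jacobian-free variant) of Sec. III-B. The names `local_round`, `server_average`, and `loss_fn` are illustrative, not part of any library.

```python
import jax
import jax.numpy as jnp

def local_round(theta_global, theta_i, w_i, lam_i, batch, loss_fn,
                rho=0.01, lr=0.05, steps=5):
    # loss_fn(theta, w, batch) -> scalar local risk L_i; its gradient w.r.t. theta
    # is assumed to be obtained via implicit differentiation in practice.

    # (i) Personalized update (cf. line 8): gradient steps on L_i w.r.t. w_i.
    for _ in range(steps):
        g_w = jax.grad(loss_fn, argnums=1)(theta_global, w_i, batch)
        w_i = w_i - lr * g_w

    # (ii) Shared-representation update (cf. line 9): gradient steps on the
    #      augmented Lagrangian (11): grad L_i + lambda_i + rho * (theta_i - theta).
    for _ in range(steps):
        g_t = jax.grad(loss_fn, argnums=0)(theta_i, w_i, batch)
        theta_i = theta_i - lr * (g_t + lam_i + rho * (theta_i - theta_global))

    # (iii) Dual ascent (cf. line 10), enforcing the consensus constraint theta_i = theta.
    lam_i = lam_i + rho * (theta_i - theta_global)
    return theta_i, w_i, lam_i

def server_average(thetas):
    # Server aggregation over the sampled nodes (cf. line 4 of Algorithm 1).
    return jnp.mean(jnp.stack(thetas), axis=0)
```

The correction term \(\lambda_{i}+\rho(\theta_{i}-\theta)\) in step (ii) is exactly the gradient correction discussed above that aligns client updates with the server direction.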
``` 1:Parameters: Communication rounds \(T\); penalty \(\rho\) 2: Initialize \(S_{0}=\{1,\ldots,n\}\), and \(\theta^{0}=\theta_{i}^{0}\), \(w_{i}^{0}\) and \(\lambda_{i}^{0}\) randomly for \(i=1,2,\ldots,n\) 3: for \(t=0,1,\ldots,T-1\) do \(\triangleright\) Global iteration 4: Server aggregates the shared parameters: \(\theta^{t+1}=\frac{1}{|S_{t}|}\sum_{i\in S_{t}}\theta_{i}^{t}\) 5: Server randomly samples a subset \(S_{t+1}\) of edge nodes 6: for each edge node \(i\in S_{t+1}\) in parallel do 7: Receive \(\theta^{t+1}\) from server \(\triangleright\) Communication 8: Update primal variables: 9:\(\left\{\begin{array}{l}w_{i}^{t+1}\leftarrow\text{arg\,min}_{w_{i}}\mathcal{L}_{i}(\theta^{t+1},w_{i})\\ \theta_{i}^{t+1}\leftarrow\text{Rep\_Update}(\tilde{\mathcal{L}}_{i}(\theta^{t+1},\theta_{i}^{t},w_{i}^{t+1},\lambda_{i}^{t}))\end{array}\right.\) 10: Update dual variable: \(\lambda_{i}^{t+1}=\lambda_{i}^{t}+\rho\left(\theta_{i}^{t+1}-\theta^{t+1}\right)\) 11: Send \(\theta_{i}^{t+1}\) to server \(\triangleright\) Communication ``` **Algorithm 1** FeDEQ ### _FeDEQ: Convergence Analysis_ We present a theoretical analysis for FeDEQ's convergence in a smooth, non-convex setting. In comparison with the original consensus optimization problem in [34, §7.1], our problem is different in four key aspects. First, on edge node \(i\) the loss function \(\mathcal{L}_{i}\) is non-convex w.r.t. \((\theta_{i},w_{i})\). Therefore, Algorithm 1 is based on a more recent treatment of ADMM for non-convex objectives in [46, Algorithm 3]. In particular, we assume that \(\mathcal{L}_{i}\) is differentiable and \(L\)-Lipschitz smooth w.r.t. \((\theta_{i},w_{i})\). Based on this, the update of these variables is done using gradient descent steps, which allows us to bound the decrease in \(\tilde{\mathcal{L}}_{i}\) after lines 8-9. Further, the gradient w.r.t. \(w_{i}\) can be found via the traditional backward propagation procedure, as \(w_{i}\) is the parameter of an explicit module. On the other hand, the gradient w.r.t. \(\theta_{i}\), the parameters of an implicit module, is found using the implicit function theorem, which we have previously detailed in Sec. III-B. Second, as is typical of FL algorithms, we must allow for node sampling, i.e., only a subset of edge nodes is required to participate in training in each global iteration \(t\). This is done via choosing the subset \(S_{t+1}\) (line 5) and only aggregating the parameters \(\theta_{i}\) from the sampled nodes (line 4). To assist the convergence analysis, we make an assumption that, regardless of the sampling method, there exists a period of \(T>0\) global rounds after which _all edge nodes_ will have participated at least once. In [46], this is called the period-\(T\) essentially cyclic update rule. Third, we employ _variable splitting_ for our ADMM variant. In other words, the primal variable is split into \(\theta_{i}\) and \(w_{i}\) on each node, and only \(\theta_{i}\) is subject to the consensus constraint in (2). Moreover, unlike the original ADMM version which minimizes \(\widetilde{\mathcal{L}}_{i}\) jointly w.r.t. \(\theta_{i}\) and \(w_{i}\), we split the update of these variables in Algorithm 1. For \(w_{i}\) (line 8), since it is not constrained, minimizing the augmented Lagrangian is equivalent to minimizing the loss function \(\mathcal{L}_{i}\). Then, \(\theta_{i}\) is updated by minimizing the augmented Lagrangian \(\tilde{\mathcal{L}}_{i}\).
Fourth and finally, as is common in large-scale machine learning, parameters are learned using optimization with _mini-batches_. This gives us stochastic gradients w.r.t. the parameters \(\theta_{i}\) and \(w_{i}\), and as a result, the augmented Lagrangian is not guaranteed to decrease after every SGD round (it is only guaranteed in expectation). We make the following assumption for our convergence analysis. **Assumption IV.1**.: For every node \(i\), the local loss function \(\mathcal{L}_{i}\) is bounded below. Further, \(\mathcal{L}_{i}\) is differentiable everywhere w.r.t. \((\theta_{i},w_{i})\) and has Lipschitz gradients with parameter \(L>0\). _Remark IV.2_.: This assumption is common in deep learning's non-convex optimization analysis, ensuring gradient stability and the existence of an optimal solution. To establish a convergence guarantee in a non-convex setting of ADMM, we aim to show the following. Start with the variables \((\theta^{t},\{\theta^{t}_{i}\},\{w^{t}_{i}\},\{\lambda^{t}_{i}\})\). After \(T\) rounds (i.e., after all edge nodes have participated at least once since iteration \(t\)), the new variables \((\theta^{t+T},\{\theta^{t+T}_{i}\},\{w^{t+T}_{i}\},\{\lambda^{t+T}_{i}\})\) lead to a decrease in the global augmented Lagrangian \(\widetilde{\mathcal{L}}\). Then we show these variables will converge to a local stationary point of \(\widetilde{\mathcal{L}}\). The following result establishes the decrease of the augmented Lagrangian. **Lemma IV.3** (Decrease in augmented Lagrangian after every \(T\) iterations).: _Suppose \(\rho\) is chosen large enough that \(\rho>6L\). After \(T\) global iterations since \(t\), when all edge nodes have participated in training at least once, we have_ \[\widetilde{\mathcal{L}}(\theta^{t+T},\{\theta^{t+T}_{i}\},\{w^{t+T}_{i}\},\{\lambda^{t+T}_{i}\})-\widetilde{\mathcal{L}}(\theta^{t},\{\theta^{t}_{i}\},\{w^{t}_{i}\},\{\lambda^{t}_{i}\})\] \[\leq\frac{1}{T}\sum_{i=1}^{n}\left[-\left(\frac{\rho}{2}-\frac{3L^{2}}{\rho}\right)\|\theta^{t+T}-\theta^{t}\|^{2}\right.\] \[\left.\qquad\qquad\qquad-\left(\frac{L+\rho}{2}-\frac{12L^{2}}{\rho}\right)\|\theta^{t+T}_{i}-\theta^{t}_{i}\|^{2}\right] \tag{13}\] Proof.: The detailed proof can be found in Appendix A. _Remark IV.4_.: Lemma IV.3 shows that if we choose a large enough \(\rho\), the augmented Lagrangian decreases by a sufficient amount after all edge nodes have participated at least once. However, the smoothness parameter is difficult to estimate in practice. We therefore fine-tune \(\rho\) from a predefined range of values. **Lemma IV.5** (Lower bound for augmented Lagrangian).: _Suppose that the global loss function \(\mathcal{L}(\theta,\{w_{i}\})\) is lower bounded, i.e.,_ \[\underline{\mathcal{L}}:=\min_{\theta,\{w_{i}\}}\mathcal{L}(\theta,\{w_{i}\})>-\infty\] _If \(\rho\) is chosen large enough that \(\rho>6L\), then the augmented Lagrangian is lower bounded by \(\underline{\mathcal{L}}\)._ Proof.: The detailed proof can be found in Appendix B. Now we combine Lemmas IV.3 and IV.5 to establish the following theorems for the convergence of (12).
**Theorem IV.6** (Convergence of the augmented Lagrangian).: _Suppose the following are true:_ * _After_ \(T\) _iterations, all clients have participated in training at least once._ * _The global loss function_ \(\mathcal{L}(\theta,\{w_{i}\})\) _is bounded below by a finite quantity_ \(\underline{\mathcal{L}}\)_._ * _The hyperparameter_ \(\rho\) _for the augmented Lagrangian is chosen such that_ \(\rho>6L\)_._ _Then the augmented Lagrangian \(\widetilde{\mathcal{L}}\) will monotonically decrease and is convergent to a quantity of at least \(\underline{\mathcal{L}}\). Further, for all \(i=1,\ldots,n\) we have \(\lim_{t\to\infty}\|\theta^{t+T}-\theta^{t+T}_{i}\|=0\)._ Proof.: Lemma IV.3 implies that the augmented Lagrangian decreases by a non-negative amount after every \(T\) global iterations. Given Lemma IV.5 and the fact that every client will be updated at least once in the interval \([t,t+T]\), we conclude that the limit \(\lim_{t\to\infty}\widetilde{\mathcal{L}}(\theta^{t+T},\{\theta^{t+T}_{i}\}, \{w^{t+T}_{i}\},\{\lambda^{t+T}_{i}\})\) exists and is at least \(\underline{\mathcal{L}}\). Now we prove the second statement. From (27) and the fact that \(\widetilde{\mathcal{L}}\) converges, we conclude that as \(t\to\infty\), \[\|\theta^{t+T}-\theta^{t}\|\to 0,\quad\|\theta^{t+T}_{i}-\theta^{t}_{i}\|\to 0, \quad\|w^{t+T}_{i}-w^{t}_{i}\|\to 0.\] Combining this with (25), we have \(\|\lambda^{t+1}_{i}-\lambda^{t}_{i}\|\to 0\). Based on the definition of \(\lambda^{t+1}_{i}\), this implies that \(\|\theta^{t+1}_{i}-\theta^{t+1}\|\to 0\). _Remark IV.7_.: Theorem IV.6 establishes that the sequence of primal and dual variables updated after each \(T\) global iterations of Algorithm 1 converges. Furthermore, we have \(\|\theta^{t+1}_{i}-\theta^{t+1}\|\to 0\), i.e., the consensus constraint is satisfied. In the below theorem we present another guarantee that the limit of the sequence \((\theta^{t},\{\theta^{t}_{i}\},\{w^{t}_{i}\},\{\lambda^{t}_{i}\})\) is the stationary solution for Problem (2). **Theorem IV.8** (Convergence to stationary point).: _Suppose the assumptions in Theorem IV.6 hold. Then the limit point \((\theta^{*},\{\theta^{*}_{i}\},\{w^{*}_{i}\},\{\lambda^{*}_{i}\})\) of the sequence \((\theta^{t},\{\theta^{t}_{i}\},\{w^{t}_{i}\},\{\lambda^{t}_{i}\})\) is a stationary solution to Problem (2). That is, for all \(i\),_ \[\nabla_{\theta_{i}}\mathcal{L}_{i}(\theta^{*},w^{*}_{i})+\lambda^{* }_{i} =0,\] \[\nabla_{w_{i}}\mathcal{L}_{i}(\theta^{*},w^{*}_{i}) =0,\] \[\theta^{*} =\theta^{*}_{i}.\] Proof.: See the proof to [46, Theorem 2.4]. Here we make some remarks. First, because Algorithm 1 allows for node sampling on line 5, the proof relies on an assumption that all edge nodes will participate after some fixed number of rounds (in particular, \(T\) rounds). Second, since local objectives are non-convex and smooth, the primal variables \(\theta_{i}\) and \(w_{i}\) are updated using first-order methods like SGD. Further, the gradient w.r.t. \(\theta_{i}\) is found through IFT (lines 12-13), while that w.r.t. \(w_{i}\) only involves a simple backward pass. Finally, the proof aims to show that after every \(T\) rounds, we obtain a sufficient decrease in the augmented Lagrangian; coupled with the fact that the augmented Lagrangian is bounded below, the sequence of variables converges to a stationary point of \(\widetilde{\mathcal{L}}\). 
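In practice, the quantities appearing in Theorems IV.6 and IV.8 also give simple training diagnostics: the consensus residual \(\|\theta_{i}-\theta\|\) should shrink over rounds, and the stationarity conditions can be monitored approximately. The sketch below is illustrative only (it is not part of Algorithm 1), with hypothetical names, and assumes the per-node gradients and parameters are available as arrays of matching shape.

```python
import jax.numpy as jnp

def consensus_diagnostics(theta_global, thetas, lams, grads_theta):
    # thetas, lams, grads_theta: per-node lists of arrays with the same shape.
    # Consensus residual max_i ||theta_i - theta|| should vanish (Theorem IV.6);
    # ||grad L_i + lambda_i|| approximates the stationarity condition of Theorem IV.8.
    consensus = max(float(jnp.linalg.norm(t - theta_global)) for t in thetas)
    stationarity = max(float(jnp.linalg.norm(g + l)) for g, l in zip(grads_theta, lams))
    return {"consensus_residual": consensus, "stationarity_residual": stationarity}
```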
## V Experimental Results In this section, we present the experimental results and insights into the performance and effectiveness of FeDEQ, demonstrating its advantages compared to existing approaches under various scenarios. ### _Experimental Setup_ #### V-A1 Datasets and Non-i.i.d. Partition Methods. We evaluate FeDEQ on four diverse federated datasets: FEMNIST [47], CIFAR-10 [48], CIFAR-100 [48], and SHAKESPEARE [49]. These datasets cater to various application domains, such as image classification and natural language processing, and are suitable for evaluating FL algorithms in different settings for edge networks. Table I presents the details for the datasets. Here we list some key characteristics of each dataset. **FEMNIST**: A variant of the handwritten EMNIST dataset [50] designed specifically for FL. In this work, we use the version obtained from FedJax's database [51], which simulates real-world non-i.i.d. data distributions by writers. The original dataset contains 3,400 writers, providing a total of 780,000 images, with each writer contributing at least one image of each of 62 labels (10 digits and 52 letters). Due to a lack of computing resources, we randomly sample 100 and 200 writers from the original dataset to assign to edge nodes for our experiments. **CIFAR-10**: An object recognition dataset [48] including \(50,000\) training and \(10,000\) test colored images belonging to \(10\) classes. The dataset is equally partitioned into 100 nodes in a non-i.i.d. manner, with each node either having data from 2 or 5 distinct classes, depending on the desired scenario. This partitioning simulates real-world FL scenarios, facilitating the evaluation of algorithms under heterogeneous data distributions. **CIFAR-100**: An object recognition dataset [48] including \(50,000\) training and \(10,000\) test colored images belonging to \(100\) classes. The dataset is divided equally among 100 edge nodes in a non-i.i.d. manner, with each node having data from either 5 or 20 distinct classes, based on the chosen scenario. This partitioning strategy provides a realistic FL environment, allowing for comprehensive algorithm evaluation with diverse data distributions. **SHAKESPEARE**: A federated dataset obtained from FedJax's database [49] comprising 715 users, representing characters from Shakespeare's plays, with each example being a series of consecutive lines spoken by a character in a specific play. In this paper, we randomly select 200 edge nodes from the original dataset, each of which has more than 100 training samples with a sequence length of 16. #### V-A2 Baselines. We benchmark FeDEQ against several comparison methods, including locally trained models (Local), the de-facto FedAvg [1], the fine-tuning approach (FedAvg+FT) [20, 21], and four SOTA personalized algorithms including: (1) Ditto [9], an effective full model personalization approach via MTL; (2) FedPer [24] and (3) FedRep [10], emerging partial personalization approaches enabling nodes to collaboratively learn an explicit shared representation and subsequently personalize the local model; and (4) kNN-Per [28], a semi-parametric learning approach that employs kNN as a personalized model on top of pre-trained models. Notably, the latter three are closely related to ours. We apply settings consistent with FeDEQ for all methods to ensure a fair comparison, and fine-tune their respective hyperparameters to reach their best performance in these settings. #### V-A3 Models and Representations.
For vision tasks, we construct DEQ-ResNet models inspired by ResNet [52], comprising an equilibrium residual block followed by a 3-layer fully connected (FC) network with the last layer serving for personalization, and using Softplus activations [53], a smooth version of ReLU, to satisfy the smoothness assumption in the theoretical analysis. We offer two versions of our DEQ-ResNet: "S" (small) and "M" (medium), varying by the size of equilibrium layers. DEQ-ResNet-S is used for FEMNIST and DEQ-ResNet-M for CIFAR-10/CIFAR-100, with FeDEQ. As for other baselines, we employ the architectures of ResNet, using their residual blocks with the same FC network as DEQ-ResNet. Specifically, ResNet-20 and ResNet-34 are employed for experiments on FEMNIST and CIFAR-10/CIFAR-100, respectively. For sequence tasks, we design a DEQ-Transformer based on Universal Transformer (UT) [54], consisting of an equilibrium UT block succeeded by a personalized module with one last UT block and one linear layer; while other methods use a UT with 8 layers, 4-head attention (Transformer-8). Specific details regarding the shared parameters and personalized parameters, including the number of model parameters and the model sizes, are presented in Table II and Appendix C. For training DEQs, we use Anderson Acceleration [41] with a warm-start strategy for solving the fixed-point system in the forward pass, and Jacobian-free backpropagation [45] for solving the implicit differentiation in the backward pass. It is worth noting that when using non-smooth activations like ReLU, FeDEQ still converges and reaches similar levels of performance to models using smooth activations. #### V-A4 Training Details. Training proceeds for \(T=150\) communication rounds in the experiments involving FEMNIST, CIFAR-10 and CIFAR-100, whereas for Shakespeare, it is conducted for \(T=400\) rounds. For all methods and experiments, we uniformly sample 10% of nodes without replacement for each round and employ the SGD optimizer with a constant learning rate. Unless otherwise specified, we use \(5\) local epochs for training the shared representation and \(3\) for personalization for all methods. The final accuracy is determined by averaging local accuracies from all nodes over the last \(10\) rounds. More details about the training processes are presented in Appendix D. All experiments are conducted in a coding environment powered by an AMD Ryzen 3970X Processor [55] with 64 cores and 256GB of RAM. Additionally, four NVIDIA GeForce RTX 3090 GPUs [56] are employed to accelerate the training process. For software, we rely on Python3 as our primary language, supplemented by advanced machine learning and optimization frameworks: Jax [57], Haiku [58], Optax [59], Jaxopt [60], and FedJax [51]. Furthermore, our implementation is also inspired by some personalized FL benchmarking works such as Motley [61]. ### _Results and Evaluation_ #### Iv-B1 Effects of \(\rho\) on the performance of FeDEQ The penalty parameter \(\rho\) in FeDEQ is a tuning parameter that controls the trade-off between consensus constraint satisfaction (i.e., how closely the local representation matches the shared representation) and the minimization of the objective function. Therefore, the selection of \(\rho\) is crucial for the performance of FeDEQ, and the best value can vary significantly based on the specific optimization problem.
To investigate the impact of \(\rho\) on FeDEQ's performance and convergence, we vary \(\rho\) values within \(\{0.001,0.01,0.1,1.0,5.0,10.0\}\) on the FEMNIST dataset. The results in Fig. 5 show that FeDEQ achieves convergence with any \(\rho\) within the range; however, if \(\rho\) is too small, the constraint \(\theta_{i}=\theta\) is weakly enforced, meaning that the local variables have more freedom to minimize their local objective function, which might cause the consensus variable to deviate from the optimum. This might increase the number of iterations needed for convergence (e.g., \(\rho=0.001\)) or may even prevent convergence in certain cases. On the other hand, it can provide better local performance, as each subproblem can be optimized more closely to its local data. Conversely, if \(\rho\) is too large, the algorithm will strongly enforce the consensus constraint, which can lead to a swift alignment of the local variables to the consensus variable (e.g., \(\rho=5.0\) or \(\rho=10.0\)). However, this might cause suboptimal solutions because the strong enforcement of the consensus might prevent local variables from reaching their local optima. In other words, it imposes high "pressure" to force local solutions to agree, potentially at the expense of solution quality. Through empirical evaluation, we found that \(\rho=0.01\) provides the best balance for our datasets. In practice, selecting a suitable \(\rho\) often involves trial and error or a validation process. It is also possible to use an adaptive strategy, where \(\rho\) is updated during the optimization process. #### Iv-B2 The convergence of FeDEQ To empirically assess the convergence of FeDEQ across all the datasets, we provide both the test accuracy and the training loss (using the function \(\mathcal{L}\)) over global communication rounds in Figs. 6 and 7. These figures reveal a consistent trend: across all datasets, FeDEQ's loss decreased and stabilized within a dataset-specific range of communication rounds, signaling a stable and reliable convergence trajectory.
\begin{table} \begin{tabular}{|c|c||c|c||c|c|c|} \hline \multirow{2}{*}{**Datasets**} & \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Shared Representation**} & \multirow{2}{*}{\begin{tabular}{c} **Params (M)/** \\ **Size (MB)** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Personalized** \\ **Layer** \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{c} **Params (M)/** \\ **Size (MB)** \\ \end{tabular} } \\ \hline \multirow{2}{*}{FEMNIST} & Baselines & ResNet-20 (19 Conv + 2 FC) & 2.97 / 11.87 & 1 FC & 0.008 / 0.03 \\ & FeDEQ & DEQ-ResNet-S (3 Conv + 2 FC) & **0.98 / 3.94** & 1 FC & 0.008 / 0.03 \\ \hline \multirow{2}{*}{CIFAR-10} & Baselines & ResNet-34 (33 Conv + 2 FC) & 7.78 / 31.11 & 1 FC & 0.001/0.01 \\ & FeDEQ & DEQ-ResNet-M (3 Conv + 2 FC) & **2.73 / 10.90** & 1 FC & 0.001/0.01 \\ \hline \multirow{2}{*}{CIFAR-100} & Baselines & ResNet-34 (33 Conv + 2 FC) & 10.88 / 43.51 & 1 FC & 0.01/0.05 \\ & FeDEQ & DEQ-ResNet-M (3 Conv + 2 FC) & **2.73 / 10.90** & 1 FC & 0.01/0.05 \\ \hline \multirow{2}{*}{Shakespeare} & Baselines & Transformer-8 (8 UT, 4 H) & 0.35 / 1.43 & 1 UT + 1 FC & 0.06/0.22 \\ & FeDEQ & DEQ-Transformer (4 UT, 4 H) & **0.21 / 0.83** & 1 UT + 1 FC & 0.06/0.22 \\ \hline \end{tabular} _Acronym: Conv – Convolutional layer; FC – Fully Connected layer; UT – Universal Transformer layer; H – Head-attention; Params – The number of model parameters; M – Millions; Size – The size of model; MB – Megabytes_ \end{table} TABLE II: Summary on Shared Representation and Personalization Layers

Fig. 5: Effects of \(\rho\) on the convergence of the shared representation \(\theta\)

\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{\begin{tabular}{c} **Partition** \\ **Method** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Original** \\ **Nodes** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Sample** \\ **Nodes** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Labels/** \\ **Node** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Total** \\ **Train** \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{c} **Total** \\ **Test** \\ \end{tabular} } & \multirow{2}{*}{**Mean**} & \multirow{2}{*}{**Std**} \\ \hline FEMNIST & Natural non-i.i.d. & 3400 & 100 & Various & 19,357 & 2,237 & 194 & 72.9 \\ & & 3400 & 200 & Various & 39,139 & 4,521 & 195 & 75.7 \\ \hline CIFAR-10 & By-label non-i.i.d. & 100 & 100 & 2 \& 5 & 50,000 & 10,000 & 500 & – \\ \hline CIFAR-100 & By-label non-i.i.d. & 100 & 100 & 5 \& 20 & 50,000 & 10,000 & 500 & – \\ \hline SHAKESPEARE & Natural non-i.i.d. & 715 & 200 & Various & 92,834 & 11,805 & 465 & 506.2 \\ \hline \end{tabular} \end{table} TABLE I: Details on Datasets and Non-i.i.d. Partition Methods

Specifically, for the FEMNIST dataset, convergence was observed by the 50th communication round, achieving a test accuracy of approximately 87%. In the case of CIFAR-10, despite its inherent challenges, FeDEQ exhibited convergence by the 80th round, with test accuracies reaching up to 90%. The CIFAR-100 dataset, with its augmented label complexity, presented a more intricate challenge. Nevertheless, convergence was achieved around the same 80th round, with test accuracies ranging between 58% and 80%, contingent on the label distribution. The Shakespeare dataset, distinct in its data distribution, witnessed convergence by the 120th round and realized an accuracy of approximately 48%.
These empirical findings not only attest to the adaptability and resilience of FeDEQ but also position it as a promising solution for a myriad of federated learning scenarios. #### V-B3 Model Size and Communication Efficiency. In edge networks, the implementation of FL often confronts formidable communication challenges. Such challenges arise from the recurring necessity to update and transmit large and deep model parameters during the training process. Here, we highlight the prowess of FeDEQ in achieving efficient communication, even with a compact model size, without compromising on performance excellence. Specifically, we examine the efficacy of FeDEQ with DEQs, in comparison to FedRep with explicit models. For our experiments with FeDEQ, we utilize DEQ-ResNet-S for FEMNIST, DEQ-ResNet-M for both CIFAR-10 and CIFAR-100, and DEQ-Transformer for the Shakespeare dataset. In contrast, for FedRep, we delve into a range of models from more compact architectures (e.g., ResNet-14, Transformer-4) to their intricate counterparts (e.g., ResNet-34, Transformer-12), across various datasets. The results shown in Fig. 8 reveal that FeDEQ consistently maintains a smaller model size for all datasets while achieving competitive accuracy levels. In FEMNIST, FeDEQ with DEQ-ResNet-S outperforms FedRep using the ResNet-14 model, which is 1.5 times larger in size. Remarkably, even with a model size that is \(2-4\) times smaller, FeDEQ achieves performance metrics comparable to deeper architectures, such as ResNet-20 and ResNet-34. This trend of maintaining compactness without compromising accuracy is consistently observed in CIFAR-10 and CIFAR-100, where FeDEQ either matches or slightly exceeds the performance of other benchmarked models. Furthermore, in the Shakespeare dataset, FeDEQ's implementation with DEQ-Transformer consistently presents a model that is up to \(3\) times more compact than standard Transformers, yet the accuracy remains competitive. These findings underscore FeDEQ's potential as a robust solution for edge environments that prioritize both model efficiency and performance. A smaller model size implies more efficient update exchanges in FL scenarios, an essential aspect in edge environments characterized by limited bandwidth and computational resources. The compact nature of FeDEQ ensures that edge devices engage in communication processes that are both resource-efficient and rapid, thereby facilitating expedited decision-making.

Fig. 6: Test accuracy of FeDEQ across the datasets.

Fig. 7: Average loss of FeDEQ across the datasets.

#### V-B4 Performance of Personalized Models. To investigate how FeDEQ addresses the data heterogeneity issue through personalization, we evaluate the performance of FeDEQ and various baselines on the local test datasets of edge nodes (unseen during training) across a range of tasks, as detailed in Table III. As anticipated, FeDEQ consistently outperforms non-personalized methods (Local, FedAvg, and FedAvg+FT), which generally exhibit lower performance compared to personalized approaches in the context of data heterogeneity. Taking the FEMNIST dataset as an example, while the best non-personalized algorithm, FedAvg+FT, achieves accuracies of 84.98% for 100 nodes and 85.40% for 200 nodes, FeDEQ impressively scores 87.13% and 87.46% respectively, demonstrating a clear advantage in the realm of approximately 18%. When pitted against personalized FL baselines, FeDEQ showcases competitive performance with Ditto and slightly surpasses FedPer, FedRep, and kNN-Per.
On the FEMNIST dataset, FeDEQ achieves accuracies slightly lower than Ditto but outshines the other methods. In the CIFAR-10 dataset with various degrees of heterogeneity, FeDEQ achieves top accuracies of 90.54% and 82.84% respectively. These results marginally surpass the performance of other personalized algorithms like Ditto, which scores 90.27% and 82.18% under the same conditions. Similarly, for the CIFAR-100 dataset, FeDEQ's performance peaks at 80.42% and 57.49% for configurations (100, 5) and (100, 20), respectively, again nudging past the closest competitor, Ditto, which posts figures of 80.37% and 57.18%. Furthermore, in the next-character prediction task, while kNN-Per achieves the highest accuracy, FeDEQ closely follows with a small gap, exhibiting competitiveness. Consequently, FeDEQ consistently delivers comparable or superior performance to SOTA algorithms across all datasets, while offering substantial communication efficiency with a representation up to \(4\) times smaller in size. #### Vi-B5 Generalization to new edge nodes. We demonstrate the benefits of the shared representation in generalizing to new, unseen edge nodes. To achieve this, we split the edge nodes into two groups: 90% training nodes, and 10% new, unseen nodes. Initially, we train the shared representation for FeDEQ, FedPer, FedRep, and kNN-Per using the designated training group, then use it to refine the personalization layers for new nodes using their training data, which has not been incorporated during the representation training process. For FedAvg and Ditto, a full model is trained and adapted to new participants. Table IV presents the performance of these new edge nodes. Notably, kNN-Per underperforms FedPer and FedRep in vision tasks as its shared representation is trained from scratch rather than using pre-trained models as in the original proposal, but outperforms all of them in the sequence task. Although Ditto showcases effective personalization on training nodes, it falls short when dealing with new nodes. Meanwhile, FeDEQ demonstrates remarkable competitiveness, consistently surpassing the other algorithms on FEMNIST, CIFAR-10, and CIFAR-100, while keeping acceptable performance on Shakespeare. This implies that the shared representation facilitated by FeDEQ possesses the capacity for effective adaptation to newly joined nodes, even with a significantly reduced size. #### Vi-B6 Effects of Shared Representation on Personalization Our investigation turns towards assessing the robustness and adaptability of the shared representation learned by FeDEQ, FedPer, and FedRep, particularly on the FEMNIST and CIFAR-10 datasets. For a comprehensive assessment, two tailored scenarios have been meticulously designed. In the first scenario, we vary the local epochs for updating personalized parameters while keeping a constant number of epochs for training the shared representation. As depicted in Fig. 9(a) and 9(c), in FEMNIST, all methods only require a few personalized epochs to reach peak performance due to the simplicity of the dataset; however, overfitting becomes a concern shortly thereafter.
Meanwhile, in CIFAR-10, FeDEQ achieves desirable results with fewer personalized updates than the others. In the second scenario, the emphasis is shifted towards the epochs for training the shared representation while the epochs for personalized parameter updates are kept constant. We observe that a satisfactory shared representation could be achieved within a few epochs across all methods on both datasets, but FeDEQ potentially needs fewer to attain a good one, as shown in Fig. 9(b) and 9(d). These results imply that FeDEQ can learn representations more effectively than its explicit counterparts. Furthermore, FeDEQ's performance consistently surpasses the others in both scenarios, highlighting its ability to maintain stable performance on personalization.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Datasets (\(c\), \(p\))\({}^{\dagger}\)** & \multicolumn{2}{c}{FEMNIST} & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} & Shakespeare \\ \hline **Algorithms** & (100, -) & (200, -) & (100, 2) & (100, 5) & (100, 5) & (100, 20) & (200, -) \\ \hline Local & 68.80 & 68.77 & 83.85 & 65.78 & 66.67 & 34.93 & 28.70 \\ FedAvg & 83.50 & 82.12 & 38.91 & 63.78 & 33.05 & 33.06 & 48.07 \\ FedAvg + FT & 84.98 & 85.40 & 88.75 & 79.88 & 77.14 & 55.71 & 46.60 \\ \hline Ditto & 87.93 & 88.43 & 90.27 & 82.18 & 80.37 & 57.18 & 47.56 \\ FedPer & 85.10 & 85.42 & 87.89 & 80.38 & 78.88 & 55.07 & 44.66 \\ FedRep & 85.29 & 86.18 & 87.81 & 80.25 & 79.53 & 56.30 & 48.01 \\ kNN-Per & 84.94 & 85.36 & 87.43 & 80.18 & 76.19 & 56.27 & 48.77 \\ \hline \hline \multicolumn{1}{l}{FeDEQ (Ours)} & **87.13** & **87.46** & **90.54** & **82.84** & **80.42** & **57.49** & **47.62** \\ \hline \hline \end{tabular} \({}^{\dagger}\) (\(c\), \(p\)) corresponds to \(c\) nodes and \(p\) classes per node. For FEMNIST and Shakespeare, \(p\) may vary, denoted by “-”. \end{table} TABLE III: Test accuracy of different algorithms across edge nodes

Fig. 8: Comparison of the model size and accuracy of FeDEQ and FedRep across the datasets.

#### V-B7 Memory and Time Complexity. As FL is deployed at the network edge, efficient memory management becomes a non-negotiable criterion, especially in scenarios with resource constraints. DEQs, which are recognized for their consistent memory footprint during training, emerge as a promising solution in this context [30]. To demonstrate that, we measure the average GPU memory consumption while training FedAvg with DEQ-ResNets on edge nodes using the CIFAR-100 dataset, in comparison to FedAvg with explicit ResNet models. Fig. 10 reveals that DEQ-ResNet-S has the lowest memory footprint due to its smaller size. DEQ-ResNet-M's memory usage is roughly on par with ResNet-14 and ResNet-20, but nearly \(1.5\) times lower than ResNet-34 for training each mini-batch. This can be attributed to DEQs' reliance on implicit differentiation, which eliminates the need to store intermediate values of all layers for backpropagation. **Limitations.** While the memory benefits of FeDEQ are significant, the biggest drawback of FeDEQ lies in the training time, which is approximately \(2-3\) times slower than explicit models (Fig. 10) due to the iterative nature of solving DEQ's forward and backward pass. To mitigate this, we can explore acceleration methods for solving fixed-point iterations or implicit differentiation. ## VI Conclusion The challenges of data heterogeneity, communication bottlenecks, and memory constraints are well-known in FL at the network edge.
While existing works have extensively addressed each problem--such as using model personalization for data heterogeneity or model compression to reduce communication cost--a comprehensive solution to address these challenges is underexplored. In this paper, we introduce FeDEQ, a novel approach for FL that aims to tackle these challenges concurrently. The design of FeDEQ benefits from the compactness and representation power of deep equilibrium models and the personalization capabilities of local fine-tuning. We formulate an ADMM consensus optimization scheme that allows edge nodes to learn a shared representation and then adapt their personalized parameters. Extensive experiments on various benchmarks showcase FeDEQ's ability to achieve performance comparable to state-of-the-art methods with explicit models up to four times larger in size, highlighting FeDEQ's lower memory footprint during training and hence a direct benefit to communication in FL for edge environments.

Fig. 10: Memory usage vs. training time of DEQ-ResNets and ResNets.

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Datasets (\(c\), \(u\), \(p\))\({}^{\dagger}\)** & FEMNIST & CIFAR-10 & CIFAR-100 & Shakespeare \\ \cline{2-5} **Algorithms** & (180, 20, -) & (90, 10, 5) & (90, 10, 20) & (180, 20, -) \\ \hline FedAvg & 80.04 (82.30) & 50.20 (57.76) & 28.40 (28.90) & 46.92 (47.68) \\ Ditto & 81.65 (88.70) & 50.90 (82.34) & 29.60 (55.86) & 47.20 (47.24) \\ FedPer & 83.47 (85.20) & 78.20 (79.63) & 59.20 (52.82) & 45.47 (44.53) \\ FedRep & 84.68 (86.12) & 78.90 (79.11) & 57.40 (52.97) & 47.89 (48.00) \\ kNN-Per & 84.48 (85.40) & 75.70 (80.72) & 50.70 (53.73) & 48.26 (48.59) \\ \hline FeDEQ (Ours) & **85.69** (87.00) & **79.20** (83.18) & **60.10** (54.22) & **47.25** (47.46) \\ \hline \hline \end{tabular} \({}^{\dagger}\) (\(c\), \(u\), \(p\)) corresponds to \(c\) training nodes, \(u\) unseen nodes and \(p\) classes per node. For FEMNIST and Shakespeare, \(p\) may vary, denoted by “-”. \end{table} TABLE IV: Test accuracy across unseen edge nodes (test accuracy of training nodes between parentheses)

Fig. 9: Effects of Shared Representation on Personalization in FEMNIST (a, b) and CIFAR-10 (c, d)
2309.04754
Ansatz-Agnostic Exponential Resource Saving in Variational Quantum Algorithms Using Shallow Shadows
Variational Quantum Algorithms (VQA) have been identified as a promising candidate for the demonstration of near-term quantum advantage in solving optimization tasks in chemical simulation, quantum information, and machine learning. The standard model of training requires a significant amount of quantum resources, which led us to use classical shadows to devise an alternative that consumes exponentially fewer quantum resources. However, the approach only works when the observables are local and the ansatz is the shallow Alternating Layered Ansatz (ALA), thus severely limiting its potential in solving problems such as quantum state preparation, where the ideal state might not be approximable with an ALA. In this work, we present a protocol based on shallow shadows that achieves similar levels of savings for almost any shallow ansatz studied in the literature, when combined with observables of low Frobenius norm. We show that two important applications in quantum information for which VQAs can be a powerful option, namely variational quantum state preparation and variational quantum circuit synthesis, are compatible with our protocol. We also experimentally demonstrate orders of magnitude improvement in comparison to the standard VQA model.
Afrad Basheer, Yuan Feng, Christopher Ferrie, Sanjiang Li
2023-09-09T11:00:39Z
http://arxiv.org/abs/2309.04754v1
# Ansatz-Agnostic Exponential Resource Saving in Variational Quantum Algorithms Using Shallow Shadows ###### Abstract Variational Quantum Algorithms (VQA) have been identified as a promising candidate for the demonstration of near-term quantum advantage in solving optimization tasks in chemical simulation, quantum information, and machine learning. The standard model of training requires a significant amount of quantum resources, which led us to use classical shadows to devise an alternative that consumes exponentially fewer quantum resources. However, the approach only works when the observables are local and the ansatz is the shallow Alternating Layered Ansatz (ALA), thus severely limiting its potential in solving problems such as quantum state preparation, where the ideal state might not be approximable with an ALA. In this work, we present a protocol based on shallow shadows that achieves similar levels of savings for almost any shallow ansatz studied in the literature, when combined with observables of low Frobenius norm. We show that two important applications in quantum information for which VQAs can be a powerful option, namely variational quantum state preparation and variational quantum circuit synthesis, are compatible with our protocol. We also experimentally demonstrate orders of magnitude improvement in comparison to the standard VQA model. ## Introduction The fields of quantum computing and quantum algorithms have made huge strides in the past decade. Although we are currently in the era of small erroneous quantum devices called Noisy Intermediate Scale Quantum (NISQ) [4] devices, different research groups were still able to pave the way for demonstrating quantum advantage over classical computers in synthetic but well-defined sampling problems [1, 13, 14]. The next major breakthrough in this area will be to replicate similar advantages for practically valuable problems. Many proposals have been put forward and one class of algorithms that stands out is Variational Quantum Algorithms (VQA) [15]. These algorithms are specifically designed to solve optimization problems involving quantum information, which are stored as quantum states using quantum bits a.k.a. qubits and operated using quantum circuits. The core idea is built upon the fact that many important functions involving these objects are notoriously hard or intractable to evaluate on classical computers because this will require classical computational resources exponential in the number of qubits involved. By using parameterized quantum circuits, such functions can be estimated with polynomially many quantum resources on quantum devices, thereby enabling optimization using iterative optimization algorithms. Potentially useful applications include Variational Quantum Eigensolver [1], Quantum Support Vector Machines [10], Quantum Approximate Optimization Algorithm [1], etc. Unlike classical computing, the lack of quantum memory devices coupled with the no-cloning theorem implies that each use of a quantum state requires preparing it from scratch. When discussing VQAs, we use the term _sample complexity_ to denote the total number of executions of the quantum device required (equivalently the total number of copies of quantum states consumed). In the standard VQA model, this scales linearly with the total number of function evaluations required throughout the optimization. Bring hyperparameter tuning, choice of models and ansatzes, etc. into the picture and suddenly this number is very large. 
Moreover, in the near term, only very few capable quantum computers would be available, and hence implementing such VQAs with a reduced sample complexity is crucial. Interesting parallels can be drawn between VQA training and quantum tomography when viewed in the Heisenberg picture. Since classical shadow tomography [12] provides an exponentially better method to estimate linear functionals involving quantum states, it was adopted in the VQA training protocols to achieve an exponential reduction in quantum resources in our previous work [1]. But the method, titled _Alternating Layered Shadow Optimization_ (ALSO), uses a version of shadow tomography that requires the target observables to be local, and this restricts the ansatzes to simple entanglement structures such as the Alternating Layered Ansatz (ALA) given in Figure 1(a). This limitation is profound when the optimal circuit or state is not approximable with ALAs. The recently proposed shallow shadow technique [1] describes a similar tomography procedure that can be easily implemented on NISQ devices and does not rely directly on the locality of the observables. By leveraging this, in this work, we introduce _Ansatz Independent Shadow Optimization_ (AISO), a method that provides an exponential reduction in quantum resources for VQA training and works with almost all of the popular shallow (depth logarithmic in the number of qubits) quantum circuit structures in the literature, when used in combination with observables of low Frobenius norm. We demonstrate these savings for two important problems in quantum information for which VQAs can be used, namely, Variational Quantum State Preparation (VQSP) and Variational Quantum Circuit Synthesis (VQCS). Both problems concern identifying the right circuit parameters of an ansatz that best approximates unknown quantum states or circuits. The benefits of AISO can be summarized as follows: 1. _Exponential saving on input state copies:_ To achieve arbitrarily precise estimates of all function evaluations that one encounters during an iterative optimization of the said VQA cost function, AISO consumes exponentially fewer copies of the input state compared to standard VQA, allowing one to do more iterations, achieve better approximations, and carry out extensive hyperparameter tuning. 2. _Ansatz agnostic implementation on quantum hardware:_ Our method guarantees savings of input state copies for almost all the shallow ansatzes used and studied in the literature. Moreover, the operations required on the quantum device are independent of the choice of ansatz. 3. _Optimization using different ansatzes:_ The combination of the two advantages given above means that, for a given unknown input state or circuit, optimization can be carried out with various types of ansatzes. One can then choose the best one that fits, with significant savings in the total usage of quantum devices. 4. _Compatibility with VQCS:_ Solving VQCS involves the usage of maximally entangled states. Since ansatzes with limited entanglement are necessary for ALSO, it cannot be used for efficiently implementing VQCS. This is not the case for AISO, since it is ansatz independent.
The advantage is experimentally demonstrated in both use cases of interest, where we show that AISO outperforms standard VQA significantly given the same number of copies in four different ansatzes used in the literature: Alternating Layered Ansatz (ALA) [3, 10], Multi-scale Entanglement Renormalization Ansatz (MERA) [1, 13, 14, 15], Hardware Efficient Ansatz (HEA) [16, 17, 18] and Tree Tensor Networks (TTN) [19, 12, 13, 15] (cf. Figure 1). We also prove that the sample complexity of AISO and, by extension, shallow shadows, can be improved when the input state is sampled from a \(2\)-design instead of a \(1\)-design. Finally, we argue how AISO is compatible with most of the heuristic methods used to address the trainability issues called barren plateaus that one might encounter during optimization. This paper is organized as follows: in Sections Related Works and Background, we briefly review some related works as well as quantum computing, shallow shadows, and VQA; in Section Ansatz Independent Shadow Optimization, we explain the technical details of AISO; in Section Applications, we discuss VQSP and VQCS; in Section Simulation Results, we present the experimental results comparing AISO with standard VQA; in Section Improved Bounds Using 2-Design Assumption, we show how one can improve the sample complexity bounds of AISO as well as shallow shadows by assuming that the input is drawn from a state \(2\)-design rather than a state \(1\)-design; in Section Dealing With Barren Plateaus, we explain why AISO is compatible with many of the heuristic barren plateau alleviating techniques in the literature; and in Section Technical Appendix, we give the proofs of Theorems 3, 4, 5, 6. ## Related Works Classical shadows have been used to improve the sample complexity of VQAs in our previous work [1]. But that work uses a type of shadow tomography that relies on the target observables being local. This forces the ansatz to have a weak entanglement structure such as an ALA. So, for applications such as VQSP, results can be poor if the optimal state is not approximable by ALAs. Moreover, that method cannot be used for VQCS, since the latter requires working with the maximally entangled state. Since our method uses shallow shadows, it can be used with almost any shallow ansatz studied in the literature, to solve VQSP and VQCS. In [18], classical shadows have been used to reduce the number of times one has to call a quantum computer in quantum machine learning applications. The idea there is to use the quantum computer to generate classical shadows of an already learned VQA model so that predictions of the learned model can be made using a classical computer. In that approach, however, the learning procedure is still carried out on a quantum computer, while in AISO, the learning procedure is carried out completely on a classical computer. In [14], the ability of classical shadows to estimate an exponentially large number of properties and their classical computational tractability is leveraged to classically approximate VQA cost functions. This is done by simultaneously computing large numbers of covariance functions and using them to solve polynomially growing numbers of root-finding problems. But what is proposed in [14] is a completely new optimization algorithm, while AISO is a technique that improves the existing VQA optimization algorithms.
So the type of function evaluations that one encounters in AISO will be the same as in standard VQA, but we can estimate all of them simultaneously without consuming a lot of copies. ## Background In this section, we review quantum computing, shallow shadows, and VQAs. ### Quantum Computing Throughout this work, we use the 'ket' and 'bra' notations to denote column vectors \(\ket{\psi}\) and their conjugate transposes \(\bra{\psi}\) respectively. \(\ket{i}\in\mathbb{C}^{d}\) is the \(i^{\text{th}}\) standard basis vector. We use \(\mathcal{L}(\mathbb{C}^{d})\) to denote the set of all linear operators that act on \(\mathbb{C}^{d}\). A quantum _state_ is defined as any positive semidefinite operator \(\rho\in\mathcal{L}(\mathbb{C}^{d})\) with \(\text{tr}(\rho)=1\). In quantum computing, a _qubit_ is the analog of a bit in classical computing and can admit any quantum state in \(\mathcal{L}(\mathbb{C}^{2})\) as its value. The state of an \(n\)-qubit system can be described using states that act on the tensor product of the \(n\) \(2\)-dimensional vector spaces, denoted as \(\mathbb{C}^{2}\otimes\cdots\otimes\mathbb{C}^{2}\cong\mathbb{C}^{2^{n}}\). A unitary operator \(U\in\mathcal{L}(\mathbb{C}^{2^{n}})\) is called a _quantum gate_ acting on \(n\) qubits. Such gates can transform the state of an \(n\)-qubit system from \(\rho\) to \(U\rho U^{\dagger}\). The _Pauli gates_ are defined as \[X=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\;Y=\begin{bmatrix}0&-i\\ i&0\end{bmatrix},\;Z=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}. \tag{1}\] Let \(P=\{\mathds{1},X,Y,Z\}\) and let \(\mathcal{P}_{n}^{(\gamma)}\) contain all \(4^{n}\) possible distinct \(n\)-fold tensor products of the elements in \(P\), with a scalar \(\gamma\in\mathbb{C}\) multiplied to them. Then, the set \(\mathcal{P}_{n}=\mathcal{P}_{n}^{(1)}\cup\mathcal{P}_{n}^{(-1)}\cup\mathcal{P}_{n}^{(i)}\cup\mathcal{P}_{n}^{(-i)}\) forms a group under matrix multiplication. The normalizer of this group in the group of unitary matrices acting on \(\mathbb{C}^{2^{n}}\) is called the _Clifford group_ over \(n\) qubits. To observe information from a quantum system in a state \(\rho\), we measure the system using an _observable_, which is defined as any Hermitian operator \(O\). Let the spectral decomposition of \(O\) be \(O=\sum_{i}\lambda_{i}\ket{u_{i}}\bra{u_{i}}\). Then the measurement results in an output \(\lambda_{i}\), with probability \(\bra{u_{i}}\rho\ket{u_{i}}\). The post-measurement state will be \(\ket{u_{i}}\). In addition, the expected value of this random procedure is \(\text{tr}(\rho O)\), concisely written as \(\langle O\rangle_{\rho}\). Measurements using diagonal observables are called _standard basis measurements_. Rank one states are called _pure states_. In this case, gate operations and measurements can be fully described by any normalized eigenvector in its support. ### Shallow Shadows For an arbitrary state \(\rho\) and known observables \(O_{1},O_{2},\ldots,O_{M}\), estimating \(\langle O_{i}\rangle_{\rho}\) for each \(i\) using conventional quantum tomography techniques requires \(\mathcal{O}(2^{n}\cdot M)\) copies of \(\rho\). Classical shadow tomography [10] can be used to estimate all these expectations by consuming only \(\mathcal{O}(\log M)\) copies. Moreover, for certain classes of observables, the dependence on \(n\) is \(\mathcal{O}(\text{poly}(n))\). The first step to generate a shadow is to apply a circuit \(U\) sampled from an ensemble of \(n\)-qubit circuits \(\mathcal{U}\).
Then we measure the resultant state in the standard basis to obtain an \(n\)-bit string \(u\). A classical shadow is then computed classically as \[\hat{\rho}_{U,u}=\Delta_{\mathcal{U}}^{-1}(U^{\dagger}\ket{u}\bra{u}U), \tag{2}\] where \[\Delta_{\mathcal{U}}(\rho)=\mathbb{E}_{U\sim\mathcal{U}}\sum_{u\in\{0,1\}^{n}}\bra{u}U\rho U^{\dagger}\ket{u}U^{\dagger}\ket{u}\bra{u}U. \tag{3}\] Furthermore, \(\hat{\rho}_{U,u}\) is an unbiased estimator of \(\rho\) and hence \(\langle O_{i}\rangle_{\hat{\rho}_{U,u}}\) is an unbiased estimator of \(\langle O_{i}\rangle_{\rho}\) for all \(i\). The number of such shadows required for precise estimations is dominated by the _state-dependent shadow norm_ of the traceless part of the observables, defined as \[\|\widetilde{O}\|_{\rho,\mathcal{U}}^{2}=\mathbb{E}_{U\sim\mathcal{U}}\sum_{u\in\{0,1\}^{n}}\left\langle u\right|U\rho U^{\dagger}\left|u\right\rangle\left\langle\widetilde{O}\right\rangle_{\hat{\rho}_{U,u}}^{2}, \tag{4}\] where \(\widetilde{O}=O-\frac{\text{tr}(O)}{2^{n}}\mathds{1}\). Using this, the sample complexity of the protocol is given by the following theorem. **Theorem 1**.: _(Huang, Kueng, and Preskill 2020) Let \(\mathcal{U}\) be an ensemble of gates such that \(\Delta_{\mathcal{U}}^{-1}\) exists, and \(O_{1},O_{2},\ldots,O_{M}\) be \(n\)-qubit observables. For any \(\delta,\epsilon\in(0,1)\), let \(T_{1}=2\log(2M/\delta)\) and \(T_{2}=(34/\epsilon^{2})\max_{i}\|\widehat{O_{i}}\|_{\rho,\mathcal{U}}^{2}\). Let \(\rho\) be a state with classical shadows (generated using \(\mathcal{U}\)) \(\hat{\rho}_{U_{1},u_{1}},\hat{\rho}_{U_{2},u_{2}},\ldots,\hat{\rho}_{U_{T_{1}T_{2}},u_{T_{1}T_{2}}}\). Define \(\langle\widehat{O_{i}}\rangle_{\rho}=\mu_{T_{1},T_{2}}(\{\langle O_{i}\rangle_{\hat{\rho}_{U_{j},u_{j}}},\ 1\leq j\leq T_{1}T_{2}\})\), where \(\mu_{T_{1},T_{2}}\) is the median-of-means estimator (median of \(T_{1}\) means of \(T_{2}\) values each). Then, with probability at least \(1-\delta\), we have \(|\langle\widehat{O_{i}}\rangle_{\rho}-\langle O_{i}\rangle_{\rho}|\leq\epsilon\) for all \(i\)._ One way to remove the dependency on \(\rho\) and get worst-case performance guarantees is to replace \(\|\widehat{O_{i}}\|_{\rho,\mathcal{U}}\) with \(\max_{\sigma\text{-state}}\|\widehat{O_{i}}\|_{\sigma,\mathcal{U}}\), defined as the _shadow norm_. In (Huang, Kueng, and Preskill 2020), it was shown that when the ensemble is the Clifford group over \(n\) qubits, the shadow norm of the observables, and hence the sample complexity, are proportional to the Frobenius norm. But the implementation requires very deep circuits, ruling itself out for NISQ devices. Hence, in (Bertoni et al. 2023), the authors propose an ensemble of shallow-depth circuits \(\mathcal{U}_{d}\) (with depth \(d\)), given in Figure 2, that achieves similar performance guarantees. Each two-qubit subcircuit here is a uniformly randomly sampled two-qubit Clifford gate. The shadow can be classically computed and stored in the matrix product state form, with cost \(\mathcal{O}(2^{d})\). Figure 1: Ansatzes used in our simulations. In (a), (b), (d), each connected pair of black boxes represents a two-qubit subcircuit. In (c), each black box is a single-qubit subcircuit while the two-qubit gate is the CNOT gate. Figure 2: The structure of the unitary ensemble used to generate shallow shadows. Each block here is a uniformly randomly sampled \(2\)-qubit Clifford circuit. \(d\) is the number of vertical layers of these blocks in the circuit.
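Before stating the formal guarantee for the shallow ensemble, a minimal sketch of the median-of-means estimator \(\mu_{T_{1},T_{2}}\) used in Theorem 1 may be helpful. It assumes the single-shadow estimates \(\langle O_{i}\rangle_{\hat{\rho}_{U_{j},u_{j}}}\) have already been computed and collected into an array; the function name and the toy numbers below are ours, not taken from the released code.

```python
import numpy as np

def median_of_means(values, t1, t2):
    """mu_{T1,T2}: split the T1*T2 single-shadow estimates into T1 groups of
    T2 values, average each group, and return the median of the group means."""
    values = np.asarray(values, dtype=float)
    assert values.size == t1 * t2, "expected T1*T2 single-shadow estimates"
    return float(np.median(values.reshape(t1, t2).mean(axis=1)))

# Toy usage: noisy single-shadow estimates of a true expectation value 0.3,
# with a few heavy outliers that would bias a plain sample mean.
rng = np.random.default_rng(0)
t1, t2 = 10, 50
estimates = 0.3 + rng.normal(0.0, 0.5, size=t1 * t2)
estimates[rng.integers(0, t1 * t2, size=5)] += 10.0
print(median_of_means(estimates, t1, t2))  # close to 0.3
print(estimates.mean())                    # pulled upwards by the outliers
```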
Formally, we have **Theorem 2**.: _(Bertoni et al. 2023) If \(d=\Theta(\log n)\), then for any observable \(O\) with \(\text{tr}(O)=0\), we have \(\|O\|_{1/2^{n},\mathcal{U}_{d}}^{2}\leq 4\|O\|_{F}^{2}\)._ The term \(\|O\|_{1/2^{n},\mathcal{U}_{d}}^{2}\) is called the _locally scrambled shadow norm_. Notice that for any ensemble of states \(\mathcal{D}_{1}\) for which \(\mathbb{E}_{\rho\sim\mathcal{D}_{1}}(\rho)=\mathds{1}/2^{n}\) (also called _state \(1\)-designs_ when all states are pure), \(\mathbb{E}_{\rho\sim\mathcal{D}_{1}}\|O\|_{\rho,\mathcal{U}}^{2}=\|O\|_{1/2^{n},\mathcal{U}}^{2}\) for any gate ensemble \(\mathcal{U}\). So, we can view \(\|O\|_{1/2^{n},\mathcal{U}}\) as a quantity that intuitively characterizes the sample complexity of a shadow protocol for a "typical" state, or the performance of the protocol on average, similar to how the shadow norm describes the worst-case performance. This is more apparent when all states in \(\mathcal{D}_{1}\) are pure, since then sampling from \(\mathcal{D}_{1}\) is equivalent to sampling uniformly (according to the spherical measure) from the set of all pure states up to one statistical moment. Moreover, if the observables can be represented using tensor networks with certain properties, then each \(\langle O_{i}\rangle_{\hat{\rho}_{U,u}}\) can be computed classically efficiently. ### Variational Quantum Algorithms Parameterized quantum circuits can be used to encode various optimization problems that one encounters in quantum information. The structure of the circuit used is called an _ansatz_. We use \(U(\mathbf{\theta})\) to denote a parameterized circuit, where \(\mathbf{\theta}\) is a vector of parameters. In standard VQA, we use \(U(\mathbf{\theta})\) to estimate the value of a target function and then optimize the parameters by feeding the output to a classical iterative optimizer. For any ansatz \(U\), we define \(\rho(\mathbf{\theta})\coloneqq U(\mathbf{\theta})\rho U(\mathbf{\theta})^{\dagger}\). Our focus in this paper is on the function defined (over \(\mathbf{\theta}\)) as \[\langle O\rangle_{\rho(\mathbf{\theta})}=\text{tr}(U(\mathbf{\theta})\rho U(\mathbf{\theta})^{\dagger}O), \tag{5}\] where \(\rho\) is the input quantum state and \(O\) is an output observable, and we aim to find the parameters that maximize it. One can estimate \(\langle O\rangle_{\rho(\mathbf{\theta})}\) for any \(\mathbf{\theta}\) by repeated measurements, after the application of \(U(\mathbf{\theta})\) on \(\rho\). Given this ability, the gradient of \(\langle O\rangle_{\rho(\mathbf{\theta})}\) can also be estimated using standard methods such as finite differencing or quantum-specific approaches such as the parameter shift rule (Mitarai et al. 2018). Problems in quantum information that can be reduced to an instance of optimization of Eq (5) include the variational quantum eigensolver (Peruzzo et al. 2014), the quantum autoencoder (Romero, Olson, and Aspuru-Guzik 2017), as well as VQSP and VQCS. ## Ansatz Independent Shadow Optimization In this section, we explain the main idea and theoretical results behind AISO. ### Method For any quantum circuit \(V\) and any qubit \(i\), we define the number of times a gate touches or crosses the qubit wire \(i\) as \(R_{V,i}\). Let \(R_{V}=\max_{i}R_{V,i}\). We require our ansatz \(U\) to have \(R_{U}\in\mathcal{O}(\log n)\). Note that most shallow ansatzes used in the literature will satisfy this.
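As an illustration of the quantity \(R_{U}\) just defined, the sketch below counts, for a circuit given as a list of gates (each gate specified by the qubit indices it acts on), how many gates touch or cross each wire. Reading "crosses" as spanning the contiguous block of wires between a gate's lowest and highest qubit is our assumption for the sketch, and the circuit representation is also ours.

```python
def wire_load(circuit, n_qubits):
    """R_{V,i}: for each qubit wire i, count the gates whose qubit span
    [min(qubits), max(qubits)] touches or crosses wire i."""
    counts = [0] * n_qubits
    for qubits in circuit:                  # each gate = tuple of qubit indices
        lo, hi = min(qubits), max(qubits)
        for i in range(lo, hi + 1):
            counts[i] += 1
    return counts

def r_value(circuit, n_qubits):
    """R_V = max_i R_{V,i}."""
    return max(wire_load(circuit, n_qubits))

# A 4-qubit brickwork (ALA-like) pattern: R grows with the number of layers,
# not with the number of qubits, so log-depth circuits satisfy R in O(log n).
ala_like = [(0, 1), (2, 3), (1, 2), (0, 1), (2, 3)]
print(wire_load(ala_like, 4))   # [2, 3, 3, 2]
print(r_value(ala_like, 4))     # 3
```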
Let \(\langle O\rangle_{\rho(\mathbf{\theta}^{(1)})},\langle O\rangle_{\rho(\mathbf{\theta}^{ (2)})},\ldots,\langle O\rangle_{\rho(\mathbf{\theta}^{(C)})}\) be function evaluations that one encountered while optimizing Eq (5) using an iterative optimization algorithm. Define \(W_{O}(\mathbf{\theta})=U(\mathbf{\theta})^{\dagger}OU(\mathbf{\theta})\). Each function evaluation can be seen as estimating the expectation of \(\rho\) with these parameterized observables because \[\langle O\rangle_{\rho(\mathbf{\theta})}=\text{tr}(U(\mathbf{\theta})\rho U(\mathbf{\theta}) ^{\dagger}O)=\langle W_{O}(\mathbf{\theta})\rangle_{\rho}. \tag{6}\] Moreover, the Frobenius norm remains invariant since \(\|O\|_{F}^{2}=\|VOV^{\dagger}\|_{F}^{2}\) for any unitary \(V\). Now, using Theorems 1 and 2, we can estimate all \(C\) function evaluations using shallow shadows, and the AISO protocol goes as follows. 1. Load \(T_{1}T_{2}\) shallow shadows of \(\rho\), where \(T_{1}=\mathcal{O}(\log C)\) and \(T_{2}=\mathcal{O}(\|O\|_{F}^{2})\). Let them be \(\hat{\rho}_{U_{1},u_{1}},\hat{\rho}_{U_{2},u_{2}},\ldots,\hat{\rho}_{U_{T_{1}T_{ 2}},u_{T_{1}T_{2}}}\) 2. Use the iterative optimization algorithm to optimize the target function \[\langle\widehat{W_{O}}(\mathbf{\theta})\rangle_{\rho}\coloneqq\mu_{T_{1},T_{2}}(\{ \langle W_{O}(\mathbf{\theta})\rangle_{\hat{\rho}_{U_{j},u_{j}}}\ 1\leq j\leq T_{1}T_{2}\}).\] (7) The cost of classical computation is dominated by the cost of computing \(\langle\widehat{W_{O}}(\mathbf{\theta})\rangle_{\rho}\) classically. In this case, we have **Theorem 3**.: _In AISO, for any quantum ansatz \(U\) with \(R_{U}\in\mathcal{O}(\log n)\), \(\langle W_{O}(\mathbf{\theta})\rangle_{\rho}\) can be classically evaluated with cost \(\mathcal{O}(\text{poly}(n)\cdot\log C\cdot\|O\|_{F}^{2})\) for VQSP and VQCS._ ### Sample Complexity In this section, we prove the bounds on the sample complexity when the input state is sampled from a state \(1\)-design. The goal is to show that on average, AISO requires only a number of copies that is logarithmically dependent on \(M\) and linearly dependent on \(\|O\|_{F}^{2}\). **Theorem 4**.: _Let \(d=\Theta(\log n)\) and \(\rho\) be an \(n\)-qubit pure state sampled from a state 1-design \(\mathcal{D}_{1}\). For any \(\delta,\epsilon\in(0,1)\), \(m>1/\delta\), and any \(C>0\), let_ \[T_{1}\geq 2\log\left(\frac{2(m-1)C}{m\delta-1}\right),\;T_{2}\geq\frac{136}{ \epsilon^{2}}m\|O\|_{F}^{2}. \tag{8}\] _Then for any parameter vectors \(\mathbf{\theta}^{(1)},\mathbf{\theta}^{(2)},\ldots,\mathbf{\theta}^{(C)}\), all values \(\langle W_{O}(\mathbf{\theta}^{(c)})\rangle_{\rho}\), \(1\leq c\leq C\), defined as in Eq (6) can be estimated using \(\langle\widehat{W_{O}}(\mathbf{\theta}^{(c)})\rangle_{\rho}\) defined as in Eq (7) so that with probability at least \(1-\delta\), we have \(|\langle W_{O}(\mathbf{\theta}^{(c)})\rangle_{\rho}-\langle\widehat{W_{O}}(\mathbf{ \theta}^{(c)})\rangle_{\rho}|\leq\epsilon\) for all \(c\)._ Since estimating \(C\) evaluations in standard VQA requires preparing \(U(\mathbf{\theta}^{(c)})\) for all \(c\) and measuring each of them multiple times, the total number of copies required would be \(\mathcal{O}(C)\), which is exponentially higher than AISO. One key reason why this is the case is that in standard VQA, we cannot reuse the measurement results, since each of them was conducted specifically to estimate \(\langle W_{O}(\mathbf{\theta}^{(c)})\rangle_{\rho}\) for some \(c\). 
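To make the contrast concrete, the following toy sketch evaluates the surrogate cost of Eq (7) at several parameter vectors while reusing one fixed collection of shadows, so no further device calls are needed per evaluation. It is only an illustration: the shadows are stand-in dense matrices rather than the MPS-form shallow shadows of the protocol, the single-parameter ansatz is a placeholder, and none of the names come from the released code.

```python
import numpy as np

def median_of_means(vals, t1, t2):
    return float(np.median(np.asarray(vals, dtype=float).reshape(t1, t2).mean(axis=1)))

def aiso_cost(theta, ansatz, observable, shadows, t1, t2):
    """Eq (7): evaluate the surrogate cost at `theta` by reusing the same fixed
    shadows; no new quantum-device calls are needed per function evaluation."""
    u = ansatz(theta)
    w = u.conj().T @ observable @ u          # W_O(theta) = U(theta)^dag O U(theta)
    per_shadow = [np.real(np.trace(rho_hat @ w)) for rho_hat in shadows]
    return median_of_means(per_shadow, t1, t2)

# Toy usage on one qubit with O = |0><0| and an RY(theta) ansatz; `shadows`
# stands in for the T1*T2 classical shadows of the (unknown) input state.
def ansatz(theta):
    c, s = np.cos(theta[0] / 2), np.sin(theta[0] / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

observable = np.diag([1.0, 0.0]).astype(complex)
t1, t2 = 5, 20
rng = np.random.default_rng(1)
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
shadows = [rho + rng.normal(0.0, 0.05, (2, 2)) for _ in range(t1 * t2)]
for theta0 in (0.0, np.pi / 2, np.pi):
    print(theta0, aiso_cost(np.array([theta0]), ansatz, observable, shadows, t1, t2))
```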
Meanwhile, in AISO, all quantum measurements made are _independent_ of \(\mathbf{\theta}^{(c)}\), and all these measurements are used while estimating all the expectations. Although the constants appear large, since we use union bounds as well as a few loose constants in Theorem 2, in practice significantly fewer copies than suggested by Theorem 4 suffice. We explore this in detail in our experimental results. The space complexity of the protocol is dominated by the storage of shallow shadows. Each shadow is an MPS with maximum bond dimension at most \(2^{d-1}\). This means that each shadow can be stored using at most \(n2^{d}\) complex numbers and hence the total space complexity is \(nT_{1}T_{2}2^{d}\). So, when \(d=\mathcal{O}(\log n)\), the space complexity is \(\mathcal{O}(\text{poly}(n)\cdot T_{1}T_{2})\). ## Applications In this section, we discuss how AISO can be used to tackle two important problems in quantum information. ### Variational Quantum State Preparation In VQSP, our goal is to find a circuit that is capable of preparing (approximately) a pure state \(\rho=\ket{\psi}\bra{\psi}\), given access to multiple copies of it. That is, we would like to find a parameter vector \(\mathbf{\theta}\) that minimizes the _infidelity_ between \(U(\mathbf{\theta})^{\dagger}\ket{0}\) and \(\ket{\psi}\), defined as \(1-|\bra{\psi}U(\mathbf{\theta})^{\dagger}\ket{0}|^{2}\), where \(U\) is a heuristically chosen ansatz. Infidelity assumes values in \([0,1]\) and is widely used in quantum information to measure how far apart two states are, \(1\) implying orthogonality and \(0\) implying equality. Note that the minimization of infidelity is the same as the maximization of \(\langle\ket{0}\bra{0}\rangle_{\rho(\mathbf{\theta})}\). Since \(\ket{0}\bra{0}\) has unit Frobenius norm, this objective function is compatible with AISO. Also, using AISO, one can attempt to find the best parameters for a wide variety of circuit ansatzes using multiple optimization procedures with very few copies consumed. Moreover, for any shallow shadow \(\hat{\rho}\), \(\langle W_{O}(\mathbf{\theta})\rangle_{\hat{\rho}}\) can be computed classically efficiently by contracting the tensor network given in Figure 5(a). Even though the example given here is the ALA, using Theorem 3, one can easily replace it with any ansatz with \(R_{U}\in\mathcal{O}(\log n)\). ### Variational Quantum Circuit Synthesis VQCS is a natural extension of VQSP to quantum circuits. Here, our goal is to learn the parameters of an \(n\)-qubit ansatz \(U(\mathbf{\theta})\) that best approximates a given unknown quantum gate \(V\). Similar to how we use infidelity for quantum states, we can use the Hilbert-Schmidt cost function defined for unitaries in [10]. For any \(\mathbf{\theta}\), this is computed as \(H(\mathbf{\theta})=1-1/4^{n}|\text{tr}(U(\mathbf{\theta})^{\dagger}V)|^{2}\), and minimizing \(H\) gives us the set of parameters that prepares (approximates) \(V\). To see why, first note that any quantum gate \(W\) can be uniquely identified using a representation given as \(W\otimes\overline{W}\). This can be derived from its action on the vectorized version of elements in \(\mathcal{L}(\mathbb{C}^{2^{n}})\). Then we see that \(H(\mathbf{\theta})\) is proportional to \(\|U(\mathbf{\theta})\otimes\overline{U(\mathbf{\theta})}-V\otimes\overline{V}\|_{F}^{2}\).
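As a small classical reference for these two objectives, the sketch below evaluates the VQSP infidelity and the Hilbert-Schmidt cost directly from matrices on a single qubit; on hardware they are, of course, estimated through shadows as described in the text, and the example gates are our choices.

```python
import numpy as np

def vqsp_infidelity(u_theta, psi):
    """VQSP objective: 1 - |<psi| U(theta)^dag |0>|^2."""
    zero = np.zeros(len(psi), dtype=complex)
    zero[0] = 1.0
    amp = np.vdot(psi, u_theta.conj().T @ zero)   # <psi| U(theta)^dag |0>
    return 1.0 - abs(amp) ** 2

def vqcs_hs_cost(u_theta, v):
    """VQCS objective: H(theta) = 1 - |tr(U(theta)^dag V)|^2 / 4^n."""
    dim = v.shape[0]                              # dim = 2^n, so dim**2 = 4^n
    return 1.0 - abs(np.trace(u_theta.conj().T @ v)) ** 2 / dim ** 2

# One-qubit sanity checks: both costs vanish when the ansatz matches the target.
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = hadamard @ np.array([1, 0], dtype=complex)
identity = np.eye(2, dtype=complex)
print(vqsp_infidelity(hadamard, plus))    # ~0.0 (H^dag|0> = |+>)
print(vqsp_infidelity(identity, plus))    # 0.5
print(vqcs_hs_cost(hadamard, hadamard))   # 0.0
print(vqcs_hs_cost(identity, hadamard))   # 1.0 (tr(H) = 0)
```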
To evaluate \(H(\mathbf{\theta})\) for any \(\mathbf{\theta}\), we start with the maximally entangled state on two \(n\)-qubit systems, defined as \(|\Phi\rangle=1/\sqrt{2^{n}}\sum_{i=0}^{2^{n}-1}\ket{i}\ket{i}\). Then, we apply \(V\) on the second register to obtain \(|V\rangle=1/\sqrt{2^{n}}\sum_{i=0}^{2^{n}-1}\ket{i}\ket{v_{\bullet i}}\), where \(\ket{v_{\bullet i}}\) is the \(i^{\text{th}}\) column of \(V\). Then, one can see that \(H(\mathbf{\theta})=1-\langle|U(\mathbf{\theta})\rangle\langle U(\mathbf{\theta})|\rangle_{|V\rangle\langle V|}\). Therefore, we can use shallow shadows of \(|V\rangle\) to estimate \(H(\mathbf{\theta})\). Since \(\|\ket{U(\mathbf{\theta})}\bra{U(\mathbf{\theta})}\|_{F}=1\) for all \(\mathbf{\theta}\), the number of shadows, or equivalently, the number of applications of \(V\), is independent of \(n\). In terms of classical computational complexity, \(\langle|U(\mathbf{\theta})\rangle\langle U(\mathbf{\theta})|\rangle_{\hat{\rho}}\) for any shallow shadow \(\hat{\rho}\) can be computed by contracting the tensor network given in Figure 5(b), the cost of which is polynomial in \(n\). The explanation regarding the usage of the ALA in this figure is the same as the one for VQSP. From now on, when discussing the sample complexity of VQCS, the "number of copies" will mean the number of copies of \(|V\rangle\) consumed (equivalently, the number of applications of \(V\)). ## Simulation Results Here we elaborate on the experimental results by comparing the sample complexity of AISO and the standard VQA in the two use cases discussed above. Python code to replicate our experiments can be found in [1]. The depth \(d\) of the shallow shadow ensemble (cf. Figure 2) is set to \(3\) throughout the experiments. The viability of AISO in solving both problems is tested across four different ansatzes whose structures are given in Figure 1(a,b,c,d). Except in HEA, all two-qubit gates can be arbitrary two-qubit subcircuits. The specific ones used in our simulation are given in Figure 1(e). Also, for VQCS, each two-qubit subcircuit is a combination of two of these. In HEA, the two-qubit gate used is the CNOT gate. For VQSP, we have used the Simultaneous Perturbation Stochastic Approximation (Spall 1992) (SPSA), where the gain and perturbation sequences are set to \(c_{r}=a_{r}=r^{-0.4}\) and the total number of iterations is \(5000\). On the other hand, the results of VQCS have used Powell's method (Powell 1964) with a maximum of \(10^{3}\) function evaluations allowed. We denote by AISO/VQA (\(T\)) the AISO/VQA algorithm that uses \(T\) copies in total. This means that VQA \((T)\) will consume \(T/10^{4}\) copies per function evaluation in SPSA and \(T/10^{3}\) copies in Powell's method. This is because SPSA requires two function evaluations per iteration to produce estimates of the gradient. The unknown target states considered in VQSP are 8-qubit states, which are also compatible with the corresponding ansatzes being used. In each setting, the experiment is carried out across five different states and the results are shown in Figure 3(a-d). Here, we have plotted the mean of the infidelity values achieved at different iterations across the five different experiments that were carried out. The shaded region comprises the mean plus and minus \(0.3\) times the standard deviation of the five different infidelities. In Figure 3(a-d), VQA \((5\times 10^{5})\), which utilizes \(5\times 10^{5}\) copies in total, consumes \(50\) state copies per function evaluation.
Similarly, the other VQA algorithms consume \(100\) and \(250\) state copies per evaluation. One can see that AISO closely matches or outperforms the results of VQA by consuming only \(10^{4}\) copies in total. Moving on to VQCS, similar experiments are carried out for \(4\)-qubit quantum gates (meaning \(8\) qubits used in total). The results are summarized in Figure 3(e-h). Here, the minimum \(H(\mathbf{\theta})\) in each interval of \(10^{2}\) function evaluations out of the total allowed \(10^{3}\) is plotted. The three VQA algorithms used here consume \(10^{2},10^{3}\) and \(10^{4}\) copies per function evaluation respectively. It is clear from the plots that AISO can match the performance of standard VQA using considerably fewer copies, similar to what we saw in the case of VQSP. In Figure 4, we present the superiority of AISO over VQA in a different light. On the x-axis, we plot different infidelity or Hilbert-Schmidt cost values, and on the y-axis, we plot the number of copies required to achieve them, which are exponentially better for AISO. Figure 3: Simulation results comparing the learning curves of AISO with the standard VQA. Each shaded region corresponds to \(5\) instances of a problem. VQA/AISO \((T)\) consumes \(T\) copies in total throughout the optimization. Plots (a,e), (b,f), (c,g), (d,h) correspond to ALA, MERA, HEA, and TTN being used as the ansatz, respectively. In plots (a-d), we compare the learning rates of AISO with standard VQA in VQSP. The classical optimizer used is the SPSA algorithm, with \(5\times 10^{3}\) iterations. The red curve represents AISO \((10^{4})\) while the orange, green, and blue curves represent VQA \((5\times 10^{5})\), VQA \((10^{6})\), and VQA \((2.5\times 10^{6})\) consuming \(50\), \(100\), and \(250\) copies per function evaluation respectively. Similarly, in plots (e-h), we compare the learning rates of AISO with standard VQA in VQCS. The classical optimizer used is Powell’s method, with a total of \(10^{3}\) function evaluations allowed. In these plots, the minimum Hilbert-Schmidt Cost in each interval of \(100\) function evaluations is plotted. Like previously, the red curve represents AISO \((10^{4})\) while the orange, green, and blue curves represent VQA \((10^{5})\), VQA \((10^{6})\), and VQA \((10^{7})\) consuming \(10,10^{2}\), and \(10^{3}\) copies per function evaluation respectively. We can see that AISO can closely match or outperform standard VQA by consuming orders of magnitude fewer copies in total. ## Improved Bounds Using 2-Design Assumption In this section, we analyze the assumption on the input state in more detail. The assumption that the state is sampled from a \(1\)-design merely says that the input state is, on average, the maximally mixed state. So, to further understand the notion of a "typical input state" and to get closer to the notion of the input state being an average state or a randomly generated state, we make a stronger assumption on the distribution. More precisely, we assume that the input state is sampled from a _state \(2\)-design_ \(\mathcal{D}_{2}\). These are ensembles such that sampling from them is equivalent to sampling a pure state uniformly up to two statistical moments. \(2\)-designs are extensively used in quantum information to generate pseudorandomness and to analyze average case complexities [1, 13, 14]. In this regime, we derive two results, starting with an upper bound on the variance of the state-dependent shadow norm when the state is sampled from a state \(2\)-design.
**Theorem 5**.: _Let \(\mathcal{D}_{2}\) be a state \(2\)-design and \(d=\Theta(\log n)\). Then, for any observable \(O\), we have_ \[\text{Var}_{\sigma\sim\mathcal{D}_{2}}\left(\|O\|^{2}_{\sigma,\mathcal{U}_{2} }\right)\leq 64\|O\|^{2}_{F}. \tag{9}\] Using this result, we can derive a result similar to Theorem 4, with better constants. **Theorem 6**.: _Let \(d=\Theta(\log n)\) and \(\rho\) be an \(n\)-qubit pure state sampled from a state 2-design \(\mathcal{D}_{2}\). For any \(\delta,\epsilon\in(0,1)\), \(m>1/\sqrt{\delta}\), and any \(C>0\), let_ \[T_{1}\geq 2\log\left(\frac{2(m^{2}-1)C}{m^{2}\delta-1}\right),\;T_{2}\geq \frac{136}{\epsilon^{2}}(2m+1)\|O\|^{2}_{F}. \tag{10}\] _Then for any parameter vectors \(\mathbf{\theta}^{(1)},\mathbf{\theta}^{(2)},\dots,\mathbf{\theta}^{(C)}\), all values \(\langle W_{O}(\mathbf{\theta}^{(c)})\rangle_{\rho}\), \(1\leq c\leq C\), defined as in Eq (6) can be estimated using \(\langle\widehat{W_{O}}(\mathbf{\theta}^{(c)})\rangle_{\rho}\) defined as in Eq (7) so that with probability at least \(1-\delta\), we have \(|\langle W_{O}(\mathbf{\theta}^{(c)})\rangle_{\rho}-\langle\widehat{W_{O}}(\mathbf{ \theta}^{(c)})\rangle_{\rho}|\leq\epsilon\) for all \(c\)._ Hence, we see that the lower bound on \(T_{1}\) in Eq (10) is a constant time better than the lower bound on \(T_{1}\) in Eq (8). By replacing the function evaluations in Theorem 6 with expectations with arbitrary observables, one can see that similar advantages can be gained for regular shallow shadow estimation also when the input is sampled from a \(2\)-design. Figure 4: Resource needs for different infidelity objectives. All points plotted correspond to the mean of \(5\) instances of the problem, with x-axis representing average lowest infidelity/Hilbert-Schmidt Cost achieved and y-axis representing the total number of copies consumed to achieve it. The classical optimizers used are the same as Figure 3. Plots (a,e), (b,f), (c,g), (d,h) correspond to ALA, MERA, HEA and TTN being used as the ansatz respectively. The order of magnitude savings in the number of copies when using AISO is evident. Figure 5: Tensor networks to compute \(\langle W_{O}(\mathbf{\theta})\rangle_{\hat{\rho}}\). The examples used here uses the ALA. (a) corresponds to VQSP while (b) corresponds to VQCS. To contract (a) efficiently, we can start from the top qubit wire and contract wire by wire. One can see that, at every step, the total number of free indices the tensor will have is \(\mathcal{O}(\log n)\), thus the cost of contraction is \(\mathcal{O}(\text{poly}(n))\). Note that this is true for any ansatz with \(R_{U}\in\mathcal{O}(\log n)\). A similar argument can be made for (b) when we start contracting ring by ring from the top. ## Dealing With Barren Plateaus In some cases, the usage of global observables has been shown to introduce barren plateaus into the training landscape [11, 12]. These are regions with gradients exponentially small in the number of qubits, which makes evaluating them using quantum devices extremely difficult. Several heuristic approaches have been proposed, which have been experimentally shown to be effective in certain cases. Even though AISO uses global observables, we note that our method is compatible with almost all barren plateau mitigating heuristic methods that have been proposed in the literature. 
For example, [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24] are methods that ultimately use the quantum device only to estimate \(\langle W_{O}(\mathbf{\theta})\rangle_{\rho}\) at certain carefully chosen inputs \(\mathbf{\theta}\). So, it is clear that if we use shadows to estimate them, then exponential advantages similar to the ones discussed in this paper can be achieved. ## Conclusion and Future Direction In this work, we proposed AISO -- a training algorithm that leverages shallow shadows to achieve an exponential reduction in quantum resources required to train VQA cost functions involving almost any shallow ansatz and observables with low Frobenius norm. This allows one to do more iterations of the classical optimizer, more hyperparameter tuning, and experiment with ansatzes and optimizers with very few executions of the quantum device. We demonstrate this advantage in two important use cases of interest in quantum information: Variational Quantum State Preparation and Variational Quantum Circuit Synthesis. In terms of future directions, we are trying to design similar resource-efficient ansatz agnostic protocols for local observables, by leveraging classical machine learning with classical shadows similar to [12]. We are also investigating the potential of generative machine learning-based tomography models such as [13, 14, 15, 16] to improve the sample complexity of VQAs. ## Acknowledgement We thank Afham for pointing out important references and related works. This work is partially supported by the Australian Research Council (Grant No: DP220102059). AB was partially supported by the Sydney Quantum Academy PhD scholarship.
2309.14547
Distributed Resource Allocation for D2D Multicast in Underlay Cellular Networks
We address the problem of distributed resource allocation for multicast communication in device-to-device (D2D) enabled underlay cellular networks. The optimal resource allocation is crucial for maximizing the performance of such networks, which are limited by the severe co-channel interference between cellular users (CU) and D2D multicast groups. However, finding such optimal allocation for networks with a large number of CUs and D2D users is challenging. Therefore, we propose a pragmatic scheme that allocates resources distributively, reducing signaling overhead and improving network scalability. Numerical simulations establish the efficacy of the proposed solution in improving the overall system throughput, compared to various existing schemes.
Mohd Saif Ali Khan, Ajay Bhardwaj, Samar Agnihotri
2023-09-25T21:43:06Z
http://arxiv.org/abs/2309.14547v1
# Distributed Resource Allocation for D2D Multicast in Underlay Cellular Networks ###### Abstract We address the problem of distributed resource allocation for multicast communication in device-to-device (D2D) enabled underlay cellular networks. The optimal resource allocation is crucial for maximizing the performance of such networks, which are limited by the severe co-channel interference between cellular users (CU) and D2D multicast groups. However, finding such optimal allocation for networks with a large number of CUs and D2D users is challenging. Therefore, we propose a pragmatic scheme that allocates resources distributively, reducing signaling overhead and improving network scalability. Numerical simulations establish the efficacy of the proposed solution in improving the overall system throughput, compared to various existing schemes. Multicast communication, Device-to-Device communication, Distributed channel allocation, Distributed power allocation, Underlay networks. ## I Introduction The rapid growth of mobile devices and data-intensive applications running on them has resulted in a dramatic increase in data requirements [1]. Trying to address this challenge has become a primary concern for both the telecommunication services sector and the academic community. In recent years, underlay D2D communication has emerged as a viable solution to address this challenge. It enables proximate mobile users to communicate with one another using cellular resources while bypassing the base station (BS) [2, 3]. Due to its short-range exchange of information, D2D communication provides several benefits in cellular networks, such as increased throughput, lower latency, and enhanced spectral and energy efficiencies [4, 5, 6]. By capitalizing on these advantages, D2D communication has the potential to revolutionize wireless communication and support mobile users' ever-increasing data needs. D2D-multicast communication has several advantages over D2D unicast or BS-based multicast communication, especially for applications like weather forecasting and location-based advertisements that involve distributing the same data to nearby users [7]. This may allow more efficient network resource utilization and may mitigate severe bottlenecks in data-centric cellular networks. Furthermore, D2D-multicast facilitates and improves direct, backhaul-free communication among devices with similar content requirements, which can substantially reduce network congestion while enhancing data transfer rates. In conclusion, D2D multicast communication seems a promising option for improving wireless communication and meeting the ever-increasing requirements of data-intensive applications. Though underlay D2D multicast communication holds a great deal of promise, deploying it in cellular networks comes with several inherent challenges. The interference that underlay D2D multicast communication may cause to primary cellular users poses one of the most significant challenges. Furthermore, D2D multicast nodes' battery power may quickly deplete when dealing with interference and sending information, leading to decreased network capacity and reliability. These issues in underlay D2D multicast communication must be addressed for it to be successfully deployed and widely adopted. To mitigate the co-channel interference between CUs and D2D multicast groups (MGs), numerous resource allocation techniques have been proposed in the literature [8, 9, 10, 11, 6].
However, most of them are centralized in nature, thus involving significant signaling overhead and delays as the BS computes a solution to the resource allocation problem, making them unsuitable for large-scale networks. To address this issue, distributed schemes for resource allocation have been proposed recently [12, 13]. In [12], the authors have proposed a semi-distributed solution using a coalition game framework and fractional programming. In [13], the authors have used a fractional programming approach to tackle the joint power and channel allocation. However, it is important to note that this approach has a significant signaling overhead, due to message passing between the BS and MGs. The objective of our study is to solve the resource allocation problem in a distributed manner. We propose a novel two-step strategy to achieve this. In the first step, by utilizing the notion of distributed graph coloring, we present a distributed channel allocation scheme that allows cellular users to allocate channels distributively without incurring significant signaling overhead or requiring centralized computation. In the second step, power allocation among the D2D multicast groups, over a particular channel, is carried out using a distributed power allocation scheme to minimize the interference at the CUs while ensuring efficient resource utilization. With this, we achieve an efficient and scalable alternative to centralized resource allocation without compromising on the performance achievable by the centralized scheme, as confirmed by our numerical evaluation. _Organization:_ The work is organized as follows: Section II explains the system model used in this paper. Section III proposes a distributed channel allocation approach, while Section IV presents a distributed power allocation scheme that complements the proposed channel allocation approach. Section V evaluates the performance of our proposed approach. Finally, Section VI concludes the paper. ## II System Model In our proposed system model, we consider the coexistence of CUs and D2D multicast group transmitters (MGTx) in a two-dimensional space \(\mathbb{R}^{2}\). We model these two types of users as two independent, stationary, homogeneous Poisson point processes (PPP) with densities \(\lambda_{\text{c,u}}\) and \(\lambda_{\text{g,t}}\), respectively. Using the properties of a homogeneous PPP, the number of users \(\text{N}(S)\) in a region of area \(S\) is calculated as follows: \[Pr[\text{N}(\text{S})=k]=\frac{(\lambda S)^{k}}{k!}e^{-\lambda S} \tag{1}\] As the process is homogeneous, \(\lambda\) is constant and denotes the average node density per unit area. To make the system more pragmatic, we assume that the D2D multicast networks follow a Matern cluster process [14], where several clusters co-exist in \(\mathbb{R}^{2}\), and each cluster center (MGTx) is encircled by a homogeneous and independent child PPP of radius \(d_{r}\) and density \(\lambda_{\text{g,r}}\), modeled as the D2D receivers. The radius \(d_{r}\) of MGs in D2D multicast is determined by the maximum power that an MGTx can transmit and the signal detection threshold, which ensures that all MG receivers (MGRx) within this radius can receive the transmitted data with sufficient quality for reliable communication. This configuration results in each MG having a different number of MGRx. The set \(\mathcal{U}_{\text{g}}\) represents the set of MGRx of the \(g^{\text{th}}\) MG. The sets \(\mathcal{C}\) and \(\mathcal{G}\) represent the set of all CUs and MGs respectively.
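The following is a minimal sketch of this spatial model, assuming a square observation window; the window size, densities, and cluster radius are placeholder values and not those used in our simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ppp(density, side):
    """Homogeneous PPP on a side x side window: N(S) ~ Poisson(density * |S|)
    as in Eq (1), with points placed uniformly over the window."""
    n = rng.poisson(density * side * side)
    return rng.uniform(0.0, side, size=(n, 2))

def sample_mg_receivers(centers, rx_density, radius):
    """Matern-style clusters: around each MGTx (cluster centre), drop an
    independent child PPP of MGRx uniformly inside a disc of radius d_r."""
    groups = []
    for c in centers:
        k = rng.poisson(rx_density * np.pi * radius ** 2)
        r = radius * np.sqrt(rng.uniform(size=k))          # uniform over the disc
        ang = rng.uniform(0.0, 2.0 * np.pi, size=k)
        groups.append(c + np.column_stack((r * np.cos(ang), r * np.sin(ang))))
    return groups

side = 500.0                            # window side in metres (placeholder)
cu_pos = sample_ppp(4e-5, side)         # lambda_{c,u} (placeholder)
mgtx_pos = sample_ppp(2e-5, side)       # lambda_{g,t} (placeholder)
mgrx_pos = sample_mg_receivers(mgtx_pos, 1e-2, 20.0)   # lambda_{g,r}, d_r (placeholders)
print(len(cu_pos), len(mgtx_pos), [len(g) for g in mgrx_pos])
```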
To maximize the spectral efficiency, we assume that a single CU uplink channel is shared by multiple MGs; thus, the signal-to-interference-plus-noise ratios (SINRs) achieved by a CU and a D2D MG are given, respectively, by: \[\Gamma_{c}^{k} =\frac{p_{\text{c,k}}h_{c,b}^{k}}{\sum\limits_{g=1}^{\mathcal{G}}a_{g,k}h_{g,b}^{k}p_{\text{g,k}}+N_{0}}, \tag{2}\] \[\Gamma_{r,g}^{k} =\frac{p_{\text{g,k}}h_{g,r}^{k}}{\sum\limits_{j=1,j\neq g}^{\mathcal{G}}a_{j,k}h_{j,r}^{k}p_{\text{j,k}}+h_{c,r}^{k}p_{\text{c,k}}+N_{0}}, \tag{3}\] where \(a_{g,k}\) is the binary variable that indicates whether the \(g^{\text{th}}\) MG shares the resources with the \(k^{\text{th}}\) CU or not; \(h_{g,r}^{k}\) and \(h_{c,b}^{k}\) are the channel gains between the \(g^{\text{th}}\) MGTx and the \(r^{th}\) MGRx, and the \(k^{\text{th}}\) CU and the BS, respectively, incorporating both small-scale and large-scale fading; \(p_{\text{c,k}}\) and \(p_{\text{g,k}}\) denote the transmit powers of the \(k^{\text{th}}\) CU and the \(g^{\text{th}}\) MG, respectively; \(N_{0}\) denotes the variance of the AWGN noise, and \(B\) represents the bandwidth of each channel. Let \(R_{c}^{k}\) be the data rate of the CU that transmits on the \(k^{th}\) channel and \(R_{g}^{k}\) be the rate that can be attained by the \(g^{th}\) MG by considering the SINR of the worst MGRx. These rates are given by the following equations, respectively. \[R_{c}^{k} =B\log_{2}(1+\Gamma_{c}^{k}), \tag{4}\] \[R_{g}^{k} =|\mathcal{U}_{\text{g}}|\,B\log_{2}\bigg(1+\min_{r\in\mathcal{U}_{\text{g}}}\Gamma_{r,g}^{k}\bigg), \tag{5}\] The maximum transmit powers that can be allocated to the CUs and MGTxs are \(P_{c}^{max}\) and \(P_{g}^{max}\), respectively. Using (4) and (5), we formulate an optimization problem with the total sum-throughput as the objective function: \[\mathbf{P1}: \max_{a_{g,k},p_{g,k}}\left(\sum_{c\in\mathcal{C}}R_{c}^{k}+\sum_{g\in\mathbb{G}_{k}}R_{g}^{k}\right) \tag{6}\] s.t. \[\text{C}_{1}:0\leq p_{g,k}\leq P_{g}^{max},\forall k\in\mathcal{C},\] \[\text{C}_{2}:\Gamma_{r,g}^{k}\geq\Gamma_{g,k}^{\text{th}},\Gamma_{c}^{k}\geq\Gamma_{c,k}^{\text{th}},\] \[\text{C}_{3}:\sum_{k\in\mathcal{C}}a_{g,k}=1,g\in\mathcal{G},k\in\mathcal{C},\] where \(\text{C}_{1}\) limits the maximum power allocated to D2D MGTxs, \(\text{C}_{2}\) guarantees a minimum achievable rate for every CU and for the receivers of every MG, and \(\text{C}_{3}\) guarantees that each MG can be assigned only one channel. Problem \(\mathbf{P1}\) is an instance of a Mixed-Integer Nonlinear Programming (MINLP) problem; such problems are NP-hard in general. To reduce the computational costs associated with solving this problem, a pragmatic approach is used that splits the original problem into two sub-problems that are solved sequentially, namely the channel allocation problem and the power allocation problem. The coupling between these sub-problems is that power allocation is dependent on the channels assigned in the channel allocation step. The iterative optimization method ensures that the two sub-problems converge, resulting in a set of sub-optimal solutions that satisfy all constraints of the original problem \(\mathbf{P1}\) jointly. While the solutions obtained in each sub-problem may not be globally optimal due to the inherent NP-hardness of MINLP problems, their combination results in a solution for the original problem \(\mathbf{P1}\) that meets the objectives and constraints. This work proposes distributed schemes to address these sub-problems.
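For reference, the sketch below evaluates Eqs (2)-(5) and the objective of \(\mathbf{P1}\) for a given allocation. To keep it short, it tracks a single representative (worst-case) receiver per MG, whereas \(\mathbf{P1}\) takes the minimum SINR over all receivers in \(\mathcal{U}_{\text{g}}\); the array layout, gain values, and names are ours.

```python
import numpy as np

def cu_sinr(k, a, p_c, p_g, h_cb, h_gb, n0):
    """Eq (2): SINR of CU k at the BS; h_gb[g, k] is the gain from MGTx g to the BS."""
    interference = np.sum(a[:, k] * p_g * h_gb[:, k])
    return p_c[k] * h_cb[k] / (interference + n0)

def mg_sinr(g, k, a, p_c, p_g, h_mg, h_cr, n0):
    """Eq (3): SINR at the representative receiver of MG g on channel k.
    h_mg[j, g] is the gain from MGTx j to that receiver (desired link: j = g),
    and h_cr[k, g] is the gain from CU k to that receiver."""
    interference = sum(a[j, k] * p_g[j] * h_mg[j, g]
                       for j in range(a.shape[0]) if j != g)
    return p_g[g] * h_mg[g, g] / (interference + p_c[k] * h_cr[k, g] + n0)

def sum_throughput(a, p_c, p_g, h_cb, h_gb, h_mg, h_cr, n0, bw, group_sizes):
    """Objective of P1 (Eq 6): CU rates (Eq 4) plus MG rates (Eq 5)."""
    n_mg, n_cu = a.shape
    total = sum(bw * np.log2(1.0 + cu_sinr(k, a, p_c, p_g, h_cb, h_gb, n0))
                for k in range(n_cu))
    for g in range(n_mg):
        k = int(np.argmax(a[g]))            # the single channel assigned to MG g (C3)
        total += group_sizes[g] * bw * np.log2(
            1.0 + mg_sinr(g, k, a, p_c, p_g, h_mg, h_cr, n0))
    return total

# Toy usage: 2 CUs/channels, 3 MGs, random gains and a fixed allocation a[g, k].
rng = np.random.default_rng(0)
n_mg, n_cu = 3, 2
a = np.array([[1, 0], [0, 1], [1, 0]])
p_c, p_g = np.full(n_cu, 0.2), np.full(n_mg, 0.1)
h_cb, h_gb = rng.rayleigh(1.0, n_cu), rng.rayleigh(0.05, (n_mg, n_cu))
h_mg, h_cr = rng.rayleigh(1.0, (n_mg, n_mg)), rng.rayleigh(0.05, (n_cu, n_mg))
print(sum_throughput(a, p_c, p_g, h_cb, h_gb, h_mg, h_cr, 1e-9, 1.0, [4, 6, 5]))
```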
## III Distributed Channel Allocation The problem of distributed channel allocation is formulated as distributed graph coloring problem. The channels are assigned to various MGs using distributed graph coloring [15]. If number of MGs is greater than the number of CUs, then the algorithm utilizes all channels. Let \(N_{p}\) and \(N_{c}\) be number of MGTx and CUs, respectively. The achievable rate of the \(k^{\text{th}}\) CU when no MG is sharing the channel is given by: \[R^{k}=B\log_{2}\left(1+\frac{p_{\text{c,k}}h_{c,b}^{k}}{N_{0}}\right) \tag{7}\] Using (4), (5) and (7), the overall change in sum-throughput of the system due to the \(g^{th}\) MG, when \(a_{g,k}=1\)\(\forall g\in\mathcal{G}\), is given as follows: \[\Delta R_{g}^{k}=R_{g}^{k}+R_{c}^{k}-R^{k} \tag{8}\] Let \(\mathcal{C}_{g}\subset\mathcal{C}\) be the set of available channels for the \(g^{th}\) MGTx. If \(\Delta R_{g}^{k}>0\) then \(g^{\text{th}}\) MG can share the \(k^{\text{th}}\) CUs channel and then \(\mathcal{C}_{g}\) will be updated as \(\mathcal{C}_{g}=\mathcal{C}_{g}\cup\{k\}\). Let the MGs represent the vertices of a graph \(G\) where an edge is drawn between a pair of vertices if the corresponding MGs have co-channel interference greater than a certain threshold (\(\gamma_{th}\)). Let CU channel represent the color. Accordingly, the corresponding interference matrix \(A_{d}\) is defined as follows: \[A_{d}(g,j)=\begin{cases}1,&\text{if }\mathcal{C}_{g}\cap\mathcal{C}_{j}=\emptyset,\\ 1,&\text{if }\mathcal{C}_{g}\cap\mathcal{C}_{j}\neq\emptyset\text{ and }|h_{g,r_{j}}-h_{j,r_{g}}|<\gamma_{th},\\ 0,&\text{otherwise},\end{cases} \tag{9}\] where \(g\) and \(j\) represent any two MGs from the set \(\mathcal{G}\). The vertices of the interference graph \(G\) are subjected to a coloring process for channel allocation, detailed in Algorithm 1. The proposed algorithm iteratively updates the parameter \(\gamma_{th}\) whenever the subroutine in Algorithm 2 triggers a \(flag\) indicating its failure. As these iterations progress, the value of \(\gamma_{th}\) steadily decreases. Consequently, there exists a possibility that \(\gamma_{th}\) could decrease to an extent where each individual node in the graph \(G\) becomes isolated, devoid of any neighboring nodes. When this scenario is reached, each node in the graph effectively stands alone and independent, without any interference from neighboring nodes. This implies that the graph \(G\) can be colored using a single color, and thus establishing the convergence of Algorithm 1. ``` 1:Input:\(G\), \(\gamma_{\text{th}}\), \(flag=0\), \(\delta\), where \(\delta\) denotes the small percentage change in \(\gamma_{\text{th}}\) 2:Return: Channel allocation of each MG 3:Color the graph \(G\) according to Algorithm 2 and update the \(flag\). 4:while\(flag\) or \(colors\_unique<N_{c}\)do 5:if flag then 6: Update \(\gamma_{\text{th}}\leftarrow\gamma_{\text{th}}-\delta\). 7:else 8: Update \(\gamma_{\text{th}}\leftarrow\gamma_{\text{th}}+\delta\). 9:endif 10: Update \(A_{d}\) according to Equation (9) and form a new graph \(G\) again. 11: Set the \(flag=0\) and color the new graph \(G\) according to Algorithm 2 and update the \(flag\). 12:endwhile ``` **Algorithm 1** Distributed Channel Allocation Scheme The proposed distributed channel allocation scheme effectively assigns channels to all D2D MGs which contribute positively to sum-throughput and ensures minimum co-channel interference among the MGs and CUs. 
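The sketch below is a simplified, synchronous rendition of the interference matrix of Eq (9) and of the coloring loop in Algorithm 2, written for illustration only: the message-passing order and tie-breaking may differ from the actual algorithms, and the outer \(\gamma_{th}\) adaptation of Algorithm 1 is not shown. A driver corresponding to Algorithm 1 would rebuild \(A_{d}\) with an adjusted \(\gamma_{th}\) depending on the returned failure flag and the number of distinct colors used.

```python
import numpy as np

def interference_matrix(allowed, h_cross, gamma_th):
    """Eq (9): A_d[g, j] = 1 if MGs g and j share no usable channel, or if they do
    and their cross gains are too close, |h_{g,r_j} - h_{j,r_g}| < gamma_th."""
    n = len(allowed)
    a_d = np.zeros((n, n), dtype=int)
    for g in range(n):
        for j in range(n):
            if g == j:
                continue
            common = allowed[g] & allowed[j]            # allowed[g] is a set of channels
            if not common or abs(h_cross[g][j] - h_cross[j][g]) < gamma_th:
                a_d[g, j] = 1
    return a_d

def distributed_coloring(a_d, allowed, seed=0):
    """Algorithm 2 (simplified): nodes repeatedly pick a channel from their remaining
    set; on a conflict with a neighbour, the node with the larger remaining set yields;
    uncolored nodes then drop their neighbours' colors from their remaining sets."""
    rng = np.random.default_rng(seed)
    n = len(allowed)
    remaining = [set(s) for s in allowed]
    color = [None] * n
    while any(color[g] is None and remaining[g] for g in range(n)):
        for g in range(n):                              # proposal step
            if color[g] is None and remaining[g]:
                color[g] = int(rng.choice(sorted(remaining[g])))
        for g in range(n):                              # conflict resolution
            for j in range(g + 1, n):
                if a_d[g, j] and color[g] is not None and color[g] == color[j]:
                    loser = g if len(remaining[g]) >= len(remaining[j]) else j
                    color[loser] = None
        for g in range(n):                              # prune remaining sets
            if color[g] is None:
                taken = {color[j] for j in range(n) if a_d[g, j] and color[j] is not None}
                remaining[g] -= taken
    flag = any(c is None for c in color)                # True if some MG stays uncolored
    return color, flag
```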
The computation complexity of the channel allocation in Algorithm 1 is dependent on Algorithm 2 and equation (9). The computation complexities of the graph coloring in Algorithm 2 and of equation (9) are \(O(I_{d}N_{p}^{2})\) and \(O(N_{p}^{2})\) respectively, where \(I_{d}\) is the maximum number of iterations for Algorithm 2 to converge. The \(\gamma_{\text{th}}\) update step has complexity \(O(1)\). Therefore, the overall computation complexity of the channel allocation is \(O(I_{c}I_{d}N_{p}^{2})\), where \(I_{c}\) is the maximum number of iterations required for Algorithm 1 to converge. ``` 1:Input:\(G\), \(\mathcal{C}_{g}\), \(flag\), \(N\_g\): the set of neighbors of node \(g\), \(\forall g\in\mathcal{G}\) 2:Return: The color assignment for each node, a \(flag\) indicating whether the algorithm failed to assign a color to each node, and the total number of colors (\(colors\_unique\)). 3:while there exists an uncolored node \(g\) with a non-empty set \(\mathcal{C}_{g}\) do 4:for all unassigned nodes do 5: Assign a color from \(\mathcal{C}_{g}\) to \(g\). 6:endfor 7: Each node informs its neighbors about its color. 8:if node \(g\) and \(j\in N\_g\) have the same color then 9: Unassign the color of the node with larger \(|\mathcal{C}_{g}|\). 10:endif 11: Each node again informs its neighbors about its assigned color. 12: Each unassigned node \(g\) will update its \(\mathcal{C}_{g}\) by removing the colors of its neighbors. 13:endwhile 14: If there is no unassigned node then \(flag=0\), else \(flag=1\). ``` **Algorithm 2** Distributed Graph Coloring for a given \(\gamma_{\text{th}}\) ## IV Distributed Power Allocation We propose a distributed power allocation scheme to address the uplink interference at the BS caused by MGs sharing the channel with CUs. The primary aim is to formulate a scheme that encourages MGs to adjust their power levels to minimize interference for the co-channel CUs. In this approach, each unassigned MG chooses a power within the permissible range, which is determined by the minimum power (\(p_{min}\)) and maximum power (\(p_{max}\)). To ensure that an MG causing more interference and operating with a higher power reconsiders its power selection, we adopt a conflict resolution approach that allows MGs that cause greater interference to the co-channel CUs to be assigned lower power levels than the MGs that cause less interference. The MGs iteratively change their power during conflict resolution while communicating this information to co-channel MGs. Upon the resolution of conflicts, if the overall interference caused by all co-channel MGs exceeds the predefined interference threshold (\(\gamma_{CU}^{k}\)), the next step is to examine the interference offered by each MG at the BS separately. For this, we employ a threshold value \(\gamma_{MG}^{k}\), calculated by dividing the CU's interference threshold \(\gamma_{CU}^{k}\) by the total number of MGs sharing the channel. If the interference generated by a specific MG exceeds \(\gamma_{MG}^{k}\), its power allocation is revoked. Furthermore, the minimum power (\(p_{min}\)) and maximum power (\(p_{max}\)) values are adjusted slightly downward, and the whole process repeats. The detailed steps are shown in Algorithm 3. This iterative process continues until all MGs have power that permits them to transmit without violating the BS's interference threshold. This ensures convergence of Algorithm 3.
Furthermore, while addressing the MGs with severe interference at the BS, variations in power may eventually pull down the power levels of these interfering MGs to zero. As a result, these MGs will satisfy the interference threshold at the BS, proving the convergence of Algorithm 3. ``` 1:Input: Set of unassigned MGs, \(\gamma_{CU}^{k}\), \(p_{min}=P_{g}^{max}-\beta\) and \(p_{max}=P_{g}^{max}\), where \(\beta\) denotes a small power decrement. 2:Return: Power allocation for each MG. 3: Assign random power levels within \([p_{min},p_{max}]\) to each unassigned MG. 4: Resolve conflicts: Adjust power levels distributively to ensure MGs causing more interference allocate lower power levels than those causing less interference. 5:if interference caused by all MGs sharing channel \(k\) exceeds \(\gamma_{CU}^{k}\)then 6: Calculate \(\gamma_{MG}^{k}=\frac{\gamma_{CU}^{k}}{\zeta_{k}}\), where \(\zeta_{k}\) is the number of MGs sharing channel \(k\). 7:for each MG do 8:if MG's interference is greater than \(\gamma_{MG}^{k}\)then 9: Unassign power of that MG. 10:endif 11:endfor 12: Update \(p_{min}\gets p_{min}-\beta\) and \(p_{max}\gets p_{max}-\beta\). 13: Repeat the power allocation process. 14:endif ``` **Algorithm 3** Distributed Power Allocation Scheme The computation complexity of Algorithm 3 depends mainly on the number of iterations (\(I_{p}\)) needed for the algorithm to converge and on the conflict resolution step (line 4). The conflict resolution step has a worst-case complexity of \(O(N_{p}^{2})\). Therefore, the overall complexity of the proposed power allocation algorithm is \(O(I_{p}N_{p}^{2})\). ## V Performance Evaluation To show the efficacy of the proposed scheme, numerical simulations are performed to compute the total system sum-throughput, while varying different parameters. In the context of channel allocation, we compare our proposed channel allocation scheme with Interference-Aware Channel Allocation (IACA) [10] and Random Channel Allocation (RCA) schemes, but not with the distributed scheme in [13], as it has already been outperformed by IACA in terms of sum-throughput [13]. For power allocation, we compare our proposed power allocation scheme with water-filling power allocation (WFPA) [16], STIM power allocation (STIMPA) [10] and equal power allocation (EPA). To ensure a fair and relevant comparison while comparing different channel allocation schemes, we pair all channel allocation schemes with our proposed power allocation approach. Similarly, while comparing power allocation schemes, we pair all power allocation schemes with our proposed channel allocation scheme. Simulation parameters are listed in Table I. Each data point in the following plots is an average of \(500\) network instances. Fig. 1: Sum-throughput versus MGTx density for different channel allocation schemes. Fig. 1 shows the total sum-throughput versus the MG density (\(\lambda_{\text{g,t}}\)) for different channel allocation algorithms. As \(\lambda_{\text{g,t}}\) increases, the total sum-throughput also increases. As \(\lambda_{g,t}\) doubles, the rate of increase in the sum-throughput decreases from 76.5% to 38%, and further decreases to 13% for the proposed scheme. This decline in the rate of increase is attributed to the co-channel interference that arises due to the growing number of MGs. Notably, our proposed scheme outperforms both the IACA and the RCA. With each doubling of \(\lambda_{g,t}\), the increase in sum-throughput for our proposed scheme consistently surpasses the IACA by approximately 0.1-6% and the RCA by approximately 8-22%.
This observation indicates that our proposed distributed channel scheme effectively mitigates co-channel interference, outperforming IACA and RCA.
Fig. 2: Sum-throughput versus MGTx density for different power allocation schemes.
Fig. 2 shows the total sum-throughput versus the MG density (\(\lambda_{\text{g,t}}\)) for different power allocation algorithms. Initially, the sum-throughput of the proposed scheme is 1%, 27% and 30% greater than that of WFPA, STIMPA and EPA, respectively. Furthermore, as \(\lambda_{g,t}\) doubles, the sum-throughput of the proposed scheme consistently outperforms the aforementioned power allocation schemes by increasingly significant margins. These results indicate that the proposed distributed power allocation scheme accommodates co-channel interference better than the existing schemes. Although Fig. 2 shows that MPA performs better than the proposed scheme, it does not respect the CU threshold and is thus impractical.
Fig. 3: Sum-throughput versus \(P_{g}^{max}\) for different channel allocation schemes.
Fig. 3 shows the variation of the sum-throughput with \(P_{g}^{max}\), while keeping \(d_{r}\) fixed, for different channel allocation schemes. As \(P_{g}^{max}\) increases, the sum-throughput also increases. Initially, when \(P_{g}^{max}\) is increased from 1 dBm to 5 dBm and then to 10 dBm, the sum-throughput for the proposed scheme experiences significant increases of 21% and 23.5%, respectively. However, as \(P_{g}^{max}\) is further increased to 30 dBm, the subsequent increase in sum-throughput diminishes to 3.5%. This diminishing trend in the rate of increase of the sum-throughput can be attributed to the concurrent rise in co-channel interference as the \(P_{g}^{max}\) value increases. As \(P_{g}^{max}\) reaches higher levels, the interference caused by neighboring MGs becomes stronger. The results clearly demonstrate the superiority of the proposed channel allocation scheme over both the IACA and RCA schemes, as depicted in Fig. 3. While the performances of the IACA and proposed schemes are comparable in most cases, it is noteworthy that the proposed scheme outperforms the IACA scheme specifically for high values of \(P_{g}^{max}\). At these higher power levels, the impact of co-channel interference becomes more pronounced, and it is observed that the IACA is more adversely affected than the proposed scheme. Consequently, the sum-throughput of the proposed scheme surpasses that of the IACA by approximately 0.5% to 1.5%. In Fig. 4, the variation of the sum-throughput with changes in \(P_{g}^{max}\) is illustrated, while keeping the distance \(d_{r}\) fixed, for different power allocation schemes. The results highlight the performance differences among these schemes in the presence of co-channel interference. For lower power levels, where severe co-channel interference is not a significant factor, the WFPA exhibits a slightly higher sum-throughput, approximately 1-2%, compared to the proposed scheme. However, as \(P_{g}^{max}\) increases beyond 10 dBm, co-channel interference starts to play a substantial role in system performance. For every 5 dBm increase in \(P_{g}^{max}\), the proposed scheme consistently outperforms the WFPA in terms of sum-throughput. For the initial power level of 1 dBm, the sum-throughput of the STIMPA is marginally higher, by approximately 0.5%, compared to the proposed scheme.
However, as \(P_{g}^{max}\) increases, the performance gap between the proposed scheme and the STIMPA widens significantly. These findings clearly indicate that at higher power levels, when co-channel interference becomes more severe, the proposed scheme demonstrates superior performance by effectively handling and mitigating the impact of co-channel interference.
Fig. 4: Sum-throughput versus \(P_{g}^{max}\) for different power allocation schemes.
_Discussion:_ The proposed scheme outperforms existing ones in terms of the system sum-throughput. It also provides a range of additional advantages. Firstly, the distributed approach has greater scalability, as each node only needs to communicate with its immediate neighbors, allowing it to handle larger networks without incurring excessive communication and computational overhead. Secondly, the distributed approach can provide lower latency due to its decentralized nature, where each node makes its own channel and power allocation decisions based on information from neighboring nodes. This enables faster and more efficient decision-making, which may improve network performance. Finally, the distributed approach is more fault-tolerant than a centralized scheme, since each node operates independently and the failure of a single node does not disrupt the entire network. ## VI Conclusion and Future Work The results in this work suggest that distributed channel and power allocation can enhance the performance of underlay D2D multicast networks while having little impact on cellular users' performance. Apart from better sum-throughput, distributed schemes have several advantages due to their decentralized nature, including better scalability, fault tolerance, and lower latency. Our results emphasize the potential of distributed schemes for resource allocation in underlay D2D multicast networks, as well as the need for additional research in this area, specifically in scenarios where multicast groups may have different priorities and the resource allocation among the multicast groups is subject to some fairness criteria.
2310.00445
A Note on Minimax Robustness of Designs Against Correlated or Heteroscedastic Responses
We present a result according to which certain functions of covariance matrices are maximized at scalar multiples of the identity matrix. This is used to show that experimental designs that are optimal under an assumption of independent, homoscedastic responses can be minimax robust, in broad classes of alternate covariance structures. In particular it can justify the common practice of disregarding possible dependence, or heteroscedasticity, at the design stage of an experiment.
Douglas P. Wiens
2023-09-30T17:44:43Z
http://arxiv.org/abs/2310.00445v5
# A Note on Minimax Robustness of Designs Against Dependence ###### Abstract We present a result under which certain functions of covariance matrices are maximized at multiples of the identity matrix. This is used to show that experimental designs that are optimal under an assumption of independent observations can be minimax, in broad classes of correlation structures. keywords: correlation, covariance, induced matrix norm, Loewner ordering, minimax, robust. Msc: Primary 62K05, Secondary 62G35 ## 1 Introduction and summary Experimental designs are typically derived, or chosen, assuming that the observations will be independent and homoscedastic. As well as being simple, this is almost necessary, unless an alternative covariance structure is known. This is frequently a complicating feature of design theory - until a design is constructed and implemented there are no data which can be used to estimate an alternate structure. There is some consolation however if a design \(\xi_{0}\), optimal under an assumption of independence and/or heteroscedasticity, is _minimax_ against a broad class of alternatives. In this note we establish - Lemma 1 of SS2 - that functions of covariance matrices, commonly occurring as loss functions in design problems, are _maximized_ within certain classes of such matrices at a multiple of the identity matrix. This can be paraphrased by saying that the 'least favourable' covariance structure is that of independence and homoscedasticity - against which one is protected by \(\xi_{0}\). In SS3, Lemma 1 is applied to show that, in particular, designs optimal under the classical 'alphabetic' optimality criteria are minimax. As examples we consider the special cases of MA(1) and AR(1) errors. As well, we consider the 'I-robust' and 'D-robust' designs of Wiens (2018) which are minimax against misspecified response models; we show that these have the additional property of being minimax against alternate covariance models. ## 2 Main result Suppose that \(\left\|\cdot\right\|_{M}\) is an induced matrix norm (also called an operator norm), induced by the vector norm \(\left\|\cdot\right\|_{V}\), i.e. \[\left\|\mathbf{C}\right\|_{M}=\sup_{\left\|x\right\|_{V}=1}\left\|\mathbf{C}\mathbf{x} \right\|_{V}.\] We use the subscript '\(M\)' when referring to an arbitrary matrix norm, but adopt special notation in the following cases: **(i)**: \(\left\|\mathbf{x}\right\|_{V}=\sqrt{\mathbf{x}^{\prime}\mathbf{x}}\) (the Euclidean norm) \(\Rightarrow\left\|\mathbf{C}\right\|_{M}=\left\|\mathbf{C}\right\|_{E}\) = the spectral radius \(\sqrt{ch_{\max}\mathbf{C}^{\prime}\mathbf{C}}\) (\(=ch_{\max}\mathbf{C}\) if \(\mathbf{C}\) is a covariance matrix - symmetric and positive semidefinite). **(ii)**: \(\left\|\mathbf{x}\right\|_{V}=\max_{i}\left|x_{i}\right|\) (the sup norm) \(\Rightarrow\left\|\mathbf{C}\right\|_{M}=\left\|\mathbf{C}\right\|_{\infty}=\max_{i} \sum\left|c_{ij}\right|\), the maximum absolute row sum. **(iii)**: \(\left\|\mathbf{x}\right\|_{V}=\sum\left|x_{i}\right|\) (the 1-norm) \(\Rightarrow\left\|\mathbf{C}\right\|_{M}=\left\|\mathbf{C}\right\|_{1}=\max_{j}\sum \left|c_{ij}\right|\), the maximum absolute column sum. Of course \(\left\|\mathbf{C}\right\|_{1}=\left\|\mathbf{C}\right\|_{\infty}\) if \(\mathbf{C}\) is symmetric. Some well-known properties of induced norms are: * \(\left\|\mathbf{I}\right\|_{M}=1\). * For square matrices \(\mathbf{C}\), \(\left\|\mathbf{C}\right\|_{M}\geq\left\|\mathbf{C}\right\|_{E}\). 
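As a quick numerical illustration of these norms and of the two properties just listed (a minimal numpy sketch, not part of the note itself, restricted to a symmetric positive semi-definite \(\mathbf{C}\)):

```python
import numpy as np

def spectral_norm(C):      # ||C||_E : square root of the largest eigenvalue of C'C
    return np.sqrt(np.max(np.linalg.eigvalsh(C.T @ C)))

def sup_norm(C):           # ||C||_inf : maximum absolute row sum
    return np.max(np.sum(np.abs(C), axis=1))

def one_norm(C):           # ||C||_1 : maximum absolute column sum
    return np.max(np.sum(np.abs(C), axis=0))

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
C = A @ A.T                # a symmetric positive semi-definite "covariance" matrix

# The identity has unit norm under every induced norm.
assert np.isclose(spectral_norm(np.eye(5)), 1.0) and np.isclose(sup_norm(np.eye(5)), 1.0)

# For the symmetric C above the other induced norms dominate the spectral norm,
# and ||C||_1 = ||C||_inf, as noted in the text.
assert sup_norm(C) >= spectral_norm(C) - 1e-10
assert np.isclose(one_norm(C), sup_norm(C))
```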
Now suppose that the loss function in a statistical problem is given by \(\mathcal{L}\left(\mathbf{C}\right)\), where \(\mathbf{C}\) is an \(N\times N\) covariance matrix and \(\mathcal{L}\left(\cdot\right)\) is non-decreasing in the Loewner ordering: \[\mathbf{A}\preceq\mathbf{B}\Rightarrow\mathcal{L}\left(\mathbf{A}\right)\leq\mathcal{L} \left(\mathbf{B}\right).\] Here \(\mathbf{A}\preceq\mathbf{B}\) means that \(\mathbf{B}-\mathbf{A}\succeq\mathbf{0}\), i.e. is positive semi-definite. **Lemma 1**: With notation as above, and for any positive constant \(\eta^{2}\), covariance matrix \(\mathbf{C}\) and norm \(\left\|\mathbf{C}\right\|_{M}\), define classes \[\mathcal{C}_{M}=\left\{\mathbf{C}\left|\mathbf{C}\succeq\mathbf{0}\text{ and }\left\|\mathbf{C} \right\|_{M}\leq\eta^{2}\right.\right\}.\] For the norm \(\left\|\cdot\right\|_{E}\) an equivalent definition is \[\mathcal{C}_{E}=\left\{\mathbf{C}\left|\mathbf{0}\preceq\mathbf{C}\preceq\ \eta^{2}\mathbf{I}_{N} \right.\right\}. \tag{1}\] Then: **(i)**: In all such classes, \(\max_{C_{M}}\mathcal{L}\left(\mathbf{C}\right)=\mathcal{L}\left(\eta^{2}\mathbf{I}_{N}\right)\). **(ii)**: If \(\mathcal{C}^{\prime}\subseteq\mathcal{C}_{M}\) and \(\eta^{2}\mathbf{I}_{N}\in\mathcal{C}^{\prime}\), then \(\max_{\mathcal{C}}\mathcal{L}\left(\mathbf{C}\right)=\mathcal{L}\left(\eta^{2}\mathbf{ I}_{N}\right)\). **Proof**: (i) By (1) and the monotonicity of \(\mathcal{L}\left(\cdot\right)\), \(\max_{C_{E}}\mathcal{L}\left(\mathbf{C}\right)=\mathcal{L}\left(\eta^{2}\mathbf{ I}_{N}\right)\). By property P1) followed by P2), \(\eta^{2}\mathbf{I}_{N}\in\mathcal{C}_{M}\subseteq\mathcal{C}_{E}\), and so the maximizer in the larger class is a member of the smaller class, hence _a fortiori_ the maximizer there. The proof of (ii) uses the same idea - the maximizer in the larger class \(\mathcal{C}_{M}\) is a member of the smaller class \(\mathcal{C}^{\prime}\). \(\square\) **Remark 1**: An interpretation of Lemma 1 is as follows. Suppose that one has derived a technique under an assumption of uncorrelated, homoscedastic errors, i.e. a covariance matrix \(\sigma^{2}\mathbf{I}_{N}\), which is optimal in the sense of minimizing \(\mathcal{L}\), for any \(\sigma^{2}>0\). Now suppose one is concerned that the covariance matrix might instead be a member \(\mathbf{C}\) of \(\mathcal{C}_{M}\), and that \(\mathcal{L}\) is monotonic in the sense described above. Then the technique minimizes \(\max_{\mathbf{C}\in\mathcal{C}_{M}}\mathcal{L}\left(\mathbf{C}\right)=\mathcal{L} \left(\eta^{2}\mathbf{I}_{N}\right)\), i.e. is minimax in \(\mathcal{C}_{M}\). This argument clearly extends to heteroscedastic errors as well. **Remark 2**: In Remark 1 we implicitly assume that \(\eta^{2}\geq\sigma^{2}\), else \(\mathcal{C}_{M}\) does not contain \(\sigma^{2}\mathbf{I}_{N}\). An argument for taking \(\eta^{2}>\sigma^{2}\) arises if one assumes homoscedasticity and writes \(\mathbf{C}=\sigma^{2}\mathbf{P}\), where \(\mathbf{P}\) is a correlation matrix. Then in \(\mathcal{C}_{1}\), \(\eta^{2}\geq\|\mathbf{C}\|_{1}=\sigma^{2}\|\mathbf{P}\|_{1}\geq\sigma^{2}\), with the final inequality being an equality iff \(\mathbf{P}=\mathbf{I}_{N}\). Thus take \(\eta^{2}>\sigma^{2}\). Then an intuitive explanation of the lemma is that in determining a least favourable covariance structure, one can alter the correlations in some manner which increases \(\|\mathbf{C}\|_{M}\), or one can merely increase the variances. The answer is that one should always just increase the variances. 
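The following small simulation illustrates part (i) of the lemma for a Loewner-monotone loss of the form \(\mathcal{L}\left(\mathbf{C}\right)=trace\left(\mathbf{L}\mathbf{C}\right)\) with \(\mathbf{L}\succeq\mathbf{0}\), one of the criteria considered in the next section (a sketch with arbitrarily chosen matrices; the sampling scheme is an assumption made only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, eta2 = 6, 2.0

def random_C_in_class(eta2):
    """Draw a covariance matrix C with ch_max(C) <= eta^2, i.e. a member of C_E in (1)."""
    A = rng.standard_normal((N, N))
    C = A @ A.T
    return C * (eta2 * rng.uniform() / np.max(np.linalg.eigvalsh(C)))

B = rng.standard_normal((N, N))
L_weight = B @ B.T                       # L >= 0, so trace(L C) is Loewner-monotone in C

def loss(C):
    return np.trace(L_weight @ C)

samples = [loss(random_C_in_class(eta2)) for _ in range(2000)]
worst = loss(eta2 * np.eye(N))           # value at the claimed maximiser eta^2 * I_N
print(f"max over sampled C: {max(samples):.3f}   value at eta^2*I: {worst:.3f}")
assert max(samples) <= worst + 1e-9      # no sampled member of the class exceeds L(eta^2 I)
```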
**Remark 3**: A version of Lemma 1 was used by Wiens and Zhou (2008) in a maximization problem related to the planning of field experiments. It was rediscovered by Welsh and Wiens (2013) in a study of model-based sampling procedures. This note seems to be the first systematic study of the design implications of the lemma. ## 3 Examples ### Experimental design in the linear model Consider the linear model \(\mathbf{y}=\mathbf{X}\mathbf{\theta}+\mathbf{\varepsilon}\). Suppose that the random errors have covariance matrix \(\mathbf{C}\). If \(\mathbf{C}\) is _known_ then the 'Best Linear Unbiased Estimate' is \(\mathbf{\hat{\theta}}_{\text{\tiny BLUE}}=\left(\mathbf{X}^{\prime}\mathbf{C}^{-1}\mathbf{X} \right)^{-1}\mathbf{X}^{\prime}\mathbf{C}^{-1}\mathbf{y}\) and there is an extensive design literature - see Dette, Pepelyshev and Zhigljavsky (2015) for a review. In the more common case that the covariances are only vaguely known, or perhaps only suspected, it is more usual to use the Ordinary Least Squares estimate \(\hat{\mathbf{\theta}}_{\text{ots}}\), design as though the errors are uncorrelated, and hope for the best. An implication of the results of this section is that, in a minimax sense, that approach can be quite sensible. In the classical 'alphabetic' design problems, one seeks to minimize a function \(\Phi\) of the covariance matrix of the regression estimates. Let \(\xi_{0}\) be the minimizing design, under the assumption of uncorrelated errors \(\varepsilon\). Assume that under \(\xi_{0}\) the moment matrix \(X^{\prime}X\) is non-singular. Then the covariance matrix of \(\hat{\mathbf{\theta}}_{\text{ots}}\) is \[\text{cov}\left[\hat{\mathbf{\theta}}|\mathbf{C}\right]=\left(X^{\prime}X\right)^{-1} \mathbf{X}^{\prime}\mathbf{C}\mathbf{X}\left(X^{\prime}\mathbf{X}\right)^{-1}.\] Suppose that \(\mathbf{0}\preceq\mathbf{C}_{1}\preceq\mathbf{C}_{2}\), so that \(\mathbf{C}_{2}-\mathbf{C}_{1}=\mathbf{A}^{\prime}\mathbf{A}\) for some \(\mathbf{A}\). Then \(\text{cov}\!\left[\hat{\mathbf{\theta}}|\mathbf{C}_{2}\right]-\text{cov}\!\left[\hat{ \mathbf{\theta}}|\mathbf{C}_{1}\right]=\mathbf{B}^{\prime}\mathbf{B}\succeq\mathbf{0}\) for \(\mathbf{B}=\mathbf{A}\mathbf{X}\left(X^{\prime}\mathbf{X}\right)^{-1}\), hence \[\text{cov}\left[\hat{\mathbf{\theta}}|\mathbf{C}_{1}\right]\preceq\text{cov}\left[ \hat{\mathbf{\theta}}|\mathbf{C}_{2}\right].\] It follows that if \(\Phi\) is non-decreasing in the Loewner ordering, then \[\mathcal{L}\left(\mathbf{C}\right)=\Phi\left(\text{cov}\left[\hat{\mathbf{\theta}}| \mathbf{C}\right]\right)\] is also non-decreasing and the conclusions of the lemma hold. Then as at Remark 1, \(\xi_{0}\) is a minimax design - it minimizes the maximum loss as the covariance structure varies over \(\mathcal{C}_{M}\). It is well known that if \(\mathbf{0}\preceq\mathbf{\Sigma}_{1}\preceq\mathbf{\Sigma}_{2}\), then the \(i^{th}\) largest eigenvalue \(\lambda_{i}\) of \(\mathbf{\Sigma}_{2}\) exceeds that of \(\mathbf{\Sigma}_{1}\), for all \(i\). 
It follows that \(\Phi\) is non-decreasing in the Loewner ordering if **(i)**: \(\Phi\left(\mathbf{\Sigma}\right)=trace\left(\mathbf{\Sigma}\right)=\sum\lambda_{i} \left(\mathbf{\Sigma}\right)\), corresponding to 'A-optimality'; **(ii)**: \(\Phi\left(\mathbf{\Sigma}\right)=det\left(\mathbf{\Sigma}\right)=\prod\lambda_{i} \left(\mathbf{\Sigma}\right)\), corresponding to 'D-optimality'; **(iii)**: \(\Phi\left(\mathbf{\Sigma}\right)=\max_{i}\lambda_{i}\left(\mathbf{\Sigma}\right)\), corresponding to 'E-optimality'; **(iv)**: \(\Phi\left(\mathbf{\Sigma}\right)=trace\left(\mathbf{L}\mathbf{\Sigma}\right)\) for \(\mathbf{L}\succeq\mathbf{0}\), corresponding to 'L-optimality' and including 'I-optimality' - minimizing the integrated variance of the predictions; thus the designs optimal under these criteria are minimax in the sense above. #### 3.1.1 MA(1) errors As a particular case, assume for simplicity that the random errors are homoscedastic, but that rather than being uncorrelated, they might be serially correlated, following an MA(1) model with \(\text{corr}\!\left[\varepsilon_{i},\varepsilon_{j}\right]=\rho I\left(|i-j|=1\right)\) and with \(|\rho|\leq\rho_{\max}\). Then under this alternative structure \(\mathbf{C}\) varies over the subclass \(\mathcal{C}^{\prime}\) of \(\mathcal{C}_{\infty}\) defined by \(c_{ij}=0\) if \(|i-j|>1\) and \(\|\mathbf{C}\|_{\infty}\leq\sigma^{2}\left(1+2\rho_{\max}\right)\overset{def}{=}\eta ^{2}\). Since \(\eta^{2}\mathbf{I}_{N}\in\mathcal{C}^{\prime}\), part (ii) of the lemma applies: \(\xi_{0}\) is a minimax design in \(\mathcal{C}^{\prime}\) and with respect to any of the alphabetic criteria above. #### 3.1.2 AR(1) errors It is known that the eigenvalues of an AR(1) autocorrelation matrix with autocorrelation parameter \(\rho\) are bounded, and that the maximum eigenvalue \(\lambda_{N}\left(\rho\right)\) has \(\lambda_{N}^{*}=\max_{\rho}\lambda_{N}\left(\rho\right)>\lambda_{N}\left(0 \right)=1\). See for instance Trench (1999, p. 182). Then, again under homoscedasticity, an AR(1) covariance matrix \(\mathbf{C}\) has \(\|\mathbf{C}\|_{E}\leq\sigma^{2}\lambda_{N}^{*}\overset{def}{=}\eta^{2}\), and a design optimal when \(\rho=0\) is minimax in the subclass of \(\mathcal{C}_{E}\) defined by the bound \(\eta^{2}\) and the autocorrelation structure. ### Designs robust against model misspecifications Working in finite design spaces \(\chi\) and with \(p\)-dimensional regressors \(\mathbf{f}\left(\mathbf{x}\right)\), Wiens (2018) derived minimax designs for possibly misspecified regression models \[Y\left(\mathbf{x}\right)=\mathbf{f}^{\prime}\left(\mathbf{x}\right)\mathbf{\theta}+\psi\left( \mathbf{x}\right)+\varepsilon, \tag{2}\] with the unknown contaminant \(\psi\) ranging over a class \(\Psi\) and satisfying - for identifiability of \(\mathbf{\theta}\) - the orthogonality condition \[\sum_{\mathbf{x}\in\chi}\mathbf{f}\left(\mathbf{x}\right)\psi\left(\mathbf{x}\right)=0_{p\times 1}. 
\tag{3}\] For designs \(\mathbf{\xi}\) placing mass \(\xi_{i}\) on \(\mathbf{x}_{i}\in\chi\), he took \[\mathcal{I}\left(\psi,\mathbf{\xi}\right) = \sum_{\mathbf{x}\in\chi}E\left[\left(\mathbf{f}^{\prime}\left(\mathbf{x} \right)\hat{\mathbf{\theta}}-E\left[Y\left(\mathbf{x}\right)\right]\right)^{2}\right], \tag{4a}\] \[\mathcal{D}\left(\psi,\mathbf{\xi}\right) = \left(\det E\left[\left(\hat{\mathbf{\theta}}-\mathbf{\theta}\right) \left(\hat{\mathbf{\theta}}-\mathbf{\theta}\right)^{\prime}\right]\right)^{1/p}, \tag{4b}\] and found designs minimizing the maximum, over \(\psi\), of these loss functions. Here \(\hat{\mathbf{\theta}}=\hat{\mathbf{\theta}}_{\text{o.s.}}\). The random errors \(\varepsilon_{i}\) were assumed to be i.i.d.; now suppose that they instead have covariance matrix \(\mathbf{C}\in\mathcal{C}_{M}\). Consider first (4a). Using (3), and with \(\mathbf{d}_{\psi}=\left(E\left[\hat{\mathbf{\theta}}\right]-\mathbf{\theta}\right)\) - this does not depend on the covariance structure - this decomposes as \[\mathcal{I}\left(\psi,\mathbf{\xi},\mathbf{C}\right)=\sum_{\mathbf{x}\in\chi}\mathbf{f}^{ \prime}\left(\mathbf{x}\right)\text{cov}\left[\hat{\mathbf{\theta}}|\mathbf{C}\right]\mathbf{ f}\left(\mathbf{x}\right)+\sum_{\mathbf{x}\in\chi}\left\{\mathbf{f}^{\prime}\left(\mathbf{x} \right)\mathbf{d}_{\psi}\mathbf{d}_{\psi}^{\prime}\mathbf{f}\left(\mathbf{x}\right)+\psi^{2} \left(\mathbf{x}\right)\right\}. \tag{5}\] The first sum above does not depend on \(\psi\); the second depends on \(\psi\) but not on the covariance structure. Then an extended minimax problem is to find designs \(\mathbf{\xi}\) minimizing \[\max_{\psi,\mathbf{C}}\mathcal{I}\left(\psi,\mathbf{\xi},\mathbf{C}\right) = \max_{\mathbf{C}\in\mathcal{C}_{M}}\sum_{\mathbf{x}\in\chi}\mathbf{f}^{ \prime}\left(\mathbf{x}\right)\text{cov}\left[\hat{\mathbf{\theta}}|\mathbf{C}\right]\mathbf{ f}\left(\mathbf{x}\right)+\] \[\max_{\psi\in\Psi}\sum_{\mathbf{x}\in\chi}\left\{\mathbf{f}^{\prime}\left( \mathbf{x}\right)\mathbf{d}_{\psi}\mathbf{d}_{\psi}^{\prime}\mathbf{f}\left(\mathbf{x}\right)+\psi ^{2}\left(\mathbf{x}\right)\right\}.\] As in SS3.1 (and taking \(\mathbf{L}=\sum_{\mathbf{x}\in\chi}\mathbf{f}\left(\mathbf{x}\right)\mathbf{f}^{\prime}\left(\mathbf{x}\right)\) in (iv) of that section), for \(\mathbf{C}\in\mathcal{C}_{M}\) the first sum is maximized by a multiple \(\eta^{2}\) of the identity matrix, and then the remainder of the minimax problem is that which was solved in Wiens (2018). The minimax designs - termed 'I-robust' designs - obtained there do not depend on the value of \(\eta^{2}\), and so enjoy the extended property of minimizing \(\max_{\psi,\mathbf{C}}\mathcal{I}\left(\psi,\mathbf{\xi},\mathbf{C}\right)\) for \(\mathbf{C}\in\mathcal{C}_{M}\). Now consider (4b). The analogue of (5) is \[\mathcal{D}\left(\psi,\mathbf{\xi},\mathbf{C}\right)=\left(\det\left\{\mathrm{cov} \left[\hat{\mathbf{\theta}}|\mathbf{C}\right]+\mathbf{d}_{\psi}\mathbf{d}_{\psi}^{\prime}\right] \right)^{1/p}.\] Since \(\mathrm{cov}\Big{[}\hat{\mathbf{\theta}}|\mathbf{C}\Big{]}+\mathbf{d}_{\psi}\mathbf{d}_{\psi}^ {\prime}\), hence its determinant, is non-decreasing in the Loewner ordering, \(\mathcal{D}\left(\psi,\mathbf{\xi},\mathbf{C}\right)\) is maximized, for \(\mathbf{C}\in\mathcal{C}_{M}\), by a multiple of the identity matrix. 
The rest of the argument is identical to that in the preceding paragraph, and so the 'D-robust' designs obtained in Wiens (2018) also minimize \(\max_{\psi,\mathbf{C}}\mathcal{D}\left(\psi,\mathbf{\xi},\mathbf{C}\right)\) for \(\mathbf{C}\in\mathcal{C}_{M}\). **Remark 4**: Results in the same vein as those above have been obtained in cases which do not seem to be covered by Lemma 1. For instance Wiens and Zhou (1996) sought minimax designs for the misspecification model (2), under conditions on the spectral density of the error process. They state that "... _a design which is asymptotically (minimax) optimal for uncorrelated errors retains its optimality under autocorrelation if the design points are a random sample, or a random permutation, of points..._", with details in their Theorems 2.4 and 2.5. ## Acknowledgements This work was carried out with the support of the Natural Sciences and Engineering Council of Canada.
2310.10656
VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints
Deploying Machine Learning as a Service gives rise to model plagiarism, leading to copyright infringement. Ownership testing techniques are designed to identify model fingerprints for verifying plagiarism. However, previous works often rely on overfitting or robustness features as fingerprints, lacking theoretical guarantees and exhibiting under-performance on generalized models. In this paper, we propose a novel ownership testing method called VeriDIP, which verifies a DNN model's intellectual property. VeriDIP makes two major contributions. (1) It utilizes membership inference attacks to estimate the lower bound of privacy leakage, which reflects the fingerprint of a given model. The privacy leakage fingerprints highlight the unique patterns through which the models memorize sensitive training datasets. (2) We introduce a novel approach using less private samples to enhance the performance of ownership testing. Extensive experimental results confirm that VeriDIP is effective and efficient in validating the ownership of deep learning models trained on both image and tabular datasets. VeriDIP achieves comparable performance to state-of-the-art methods on image datasets while significantly reducing computation and communication costs. Enhanced VeriDIP demonstrates superior verification performance on generalized deep learning models, particularly on table-trained models. Additionally, VeriDIP exhibits similar effectiveness on utility-preserving differentially private models compared to non-differentially private baselines.
Aoting Hu, Zhigang Lu, Renjie Xie, Minhui Xue
2023-09-07T01:58:12Z
http://arxiv.org/abs/2310.10656v1
# VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints ###### Abstract Deploying Machine Learning as a Service gives rise to model plagiarism, leading to copyright infringement. Ownership testing techniques are designed to identify model fingerprints for verifying plagiarism. However, previous works often rely on overfitting or robustness features as fingerprints, lacking theoretical guarantees and exhibiting under-performance on generalized models. In this paper, we propose a novel ownership testing method called VeriDIP, which verifies a DNN model's intellectual property. VeriDIP makes two major contributions. (1) It utilizes membership inference attacks to estimate the lower bound of privacy leakage, which reflects the fingerprint of a given model. The privacy leakage fingerprints highlight the unique patterns through which the models memorize sensitive training datasets. (2) We introduce a novel approach using less private samples to enhance the performance of ownership testing. Extensive experimental results confirm that VeriDIP is effective and efficient in validating the ownership of deep learning models trained on both image and tabular datasets. VeriDIP achieves comparable performance to state-of-the-art methods on image datasets while significantly reducing computation and communication costs. Enhanced VeriDIP demonstrates superior verification performance on generalized deep learning models, particularly on table-trained models. Additionally, VeriDIP exhibits similar effectiveness on utility-preserving differentially private models compared to non-differentially private baselines. Fingerprinting, neural networks, ownership protection, membership inference, differential privacy. ## 1 Introduction Deep learning plays an important role in various tasks such as image recognition [1, 2, 3], natural language processing [4], and speech recognition [5] tasks. Building a sophisticated deep neural network (DNN) requires a significant amount of annotated training data, which often contains user privacy, demands powerful computing resources, and necessitates machine learning expertise. These unique DNN models represent valuable intellectual property (IP) and require copyright protection. However, deploying DNN models' APIs for user queries introduces the risk of model extraction attacks, leading to copyright infringement [6, 7, 8]. Model extraction attack efficiently transfers the functionality of a _victim model_ to a _stolen copy_ using limited query answers. Additionally, attackers, who may be insiders with full access to the victim models, employ techniques such as distillation [9], fine-tuning [10], or pruning [11, 12] in an attempt to reverse-engineer the tracking. Proof-of-ownership serves as an adequate protection mechanism against model stealing attacks, ensuring accountability for any theft of copyright-protected models. However, proving ownership of a neural network poses challenges due to the stochastic nature of the training and stealing process [13]. Many stealing mechanisms have minimal side effects on the model's functionality but disable the proof-of-ownership mechanism [9, 14, 15]. Methods for proving ownership of DNN models can be broadly classified into two categories: **watermark embedding (WE)**[16, 17, 18, 19, 20, 21, 22] and **ownership testing (OT)**[23, 24, 25, 15]. The WE methods embed customized watermarks into DNN models during the training stage then verify the ownership by confirming the presence of the respective Fig. 
1: Ownership testing framework for DNN models. watermarks from given suspect models. However, WE techniques have certain limitations, including tampering with the training process, potential side effects on model functionality, and vulnerability to watermark erasure attacks [9, 10, 26]. In contrast, the OT methods extract the intrinsic characteristics (fingerprints) of DNN models, making them non-invasive and more resilient to adaptive attacks [15, 23]. In this paper, our focus is on the OT technique to verify the copyright of DNN models. To the best of our knowledge, existing ownership testing solutions rely on two types of DNN fingerprints -- _model robustness_ and _model overfitting_, which capture the uniqueness of DNN models. Robustness-based solutions utilize adversarial examples to delineate the decision boundary of both the victim model and its stolen copies, and then compare the percentage of matched answers [15, 24, 25]. However, techniques that enhance a DNN model's robustness against adversarial attacks, such as adversarial training [27], undermine the performance of ownership testing. On the other side, overfitting-based OT solutions, such as dataset inference [23], leverage the observation that the stolen copies exhibit a higher level of overfitting to the training set of the victim models, thereby extracting the overfitting level as fingerprints. While these approaches are innovative and effective, they have certain limitations. The verification process is communicated and computationally expensive requiring thousands of queries to the stolen copy to obtain dozens of minimal adversarial noise as fingerprints [23] Continuous inquiries may raise suspicions of model theft and result in rejection of the inquiries [28]. Furthermore, the performance of overfitting-based solutions is negatively affected by the model's generalization ability. To address these problems, we propose a novel ownership testing approach to Verify a DNN model's Intelligence Property (VeriDIP). The key feature of VeriDIP is its utilization of **privacy leakage** fingerprints, instead of relying on overfitting [23] or robustness [15, 24, 25] metrics to indicate model uniqueness. Drawing on the concept of membership inference (MI) attacks from previous works [29, 30, 31], the privacy leakage of a model against MI attacks reflects the extent to which the model has memorized its private or secret training data. Hence, considering the secrecy of the training data, a stolen model would not exhibit the same level of privacy leakage on the victim's private training data as the victim model under the same MI attacks. In other words, the privacy leakage fingerprint of a model captures the distinctive and confidential patterns learned by the model, fulfilling the criteria of a reliable fingerprint: uniqueness and irremovability. As a result, any unauthorized DNN models that result in a certain degree of privacy leakage of a private training set can be identified as plagiarized. Using privacy leakage fingerprints, VeriDIP consists of four components for verifying a DNN model's intelligence property. First, motivated by the aforementioned properties of the privacy leakage of a given model, we utilize MI attacks to estimate the lower bound of privacy leakage, which serves as the extracted fingerprint of a given model. Then we employ hypothesis testing on the extracted fingerprint to determine the likelihood of a suspect model being a stolen copy of the victim model. 
However, we may encounter the issue of "fingerprint fading" when dealing with well-generalized models that exhibit minimal privacy leakage against MI attacks. To tackle this problem, we introduce an enhanced version of VeriDIP where MI attacks query the suspect models using less private samples to extract the worst-case privacy leakage fingerprints of the suspect models. These less private samples face higher privacy leaking risks against MI attacks, enabling the enhanced VeriDIP to extract stronger privacy leakage fingerprints. To identify the less private data in advance, we train numerous shadow models to investigate the impact of each training sample on the decision boundary of DNN models. The data that significantly influences the models will be considered as the less private data. We extensively evaluate VeriDIP on two image datasets (FMNIST and CIFAR-10) and two tabular datasets (Adult and Health) against three types of model stealing attacks: model extraction attack, model distillation attack, and fine-tune attack. The evaluation results for FMNIST and CIFAR demonstrate that VeriDIP can publicly authenticate all stolen copies while exposing less than 5 training samples, with a significantly reduced number of queries to the suspect models compared to [23]. Despite the models trained on tabular datasets having minimal overfitting, VeriDIP is still capable of publicly authenticating all stolen copies, at the cost of exposing dozens of training samples, whereas previous works [15, 23, 24, 25] are unable to do so. In this work, we also address an open question raised in [23] regarding the effectiveness of VeriDIP on differentially private DNN models. We demonstrate that VeriDIP's success rate is constrained by a stringent privacy budget, such as \(\varepsilon=0.1\). However, we find that VeriDIP remains effective even for utility-preserving differentially private models, such as those with a higher privacy budget, e.g., \(\varepsilon=0.5\). To summarize, our contributions are as follows: * We propose VeriDIP, a model ownership testing (OT) approach for DNN models. VeriDIP utilizes the membership inference (MI) attack to estimate the privacy leakage of DNN models, which serves as the fingerprint of a given (victim/target) model. * We further enhance VeriDIP by utilizing less private samples to estimate the worst-case privacy leakage, thereby strengthening the extracted fingerprints of DNN models. * We perform extensive evaluations on VeriDIP using various DNN models trained with tabular or image benchmarks, against three types of model stealing attacks. The results show that VeriDIP can publicly authenticate all stolen copies with minimal verification costs. * We theoretically and experimentally analyze the connection between the effectiveness of VeriDIP and differential privacy (DP) privacy protection. The results demonstrate that as long as a DP model is utility-preserving, VeriDIP can effectively protect its copyright. ## 2 Related Work In this section, we review model stealing attacks, well-known ownership testing techniques and membership inference attacks. We list the comparison of different copyright protection methods for DNN models in Table I. ### _Model stealing attacks_ **Black-box attacks.** Tramer et al. [6] proposed the first model extraction attack that trains a stolen copy using the predictions of victim models. It requires black-box access to the victim model and some unlabeled datasets from the same distribution. According to Shafieinejad et al. 
[33], existing watermark embedding techniques [32, 14] and some fingerprinting solutions [24, 10] cannot withstand model extraction attacks. Distillation [34] was first proposed to distill the knowledge of teacher models into student models and later extended as an attack against methods that protect model copyrights [9]. Distilled models are often able to evade copyright tracking, as demonstrated in works such as Cao et al. [24] and Lukas et al. [25]. **White-box attacks.** White-box attackers have full access to victim model's parameters, and their goal is modify these parameters in order to disable copyright protection mechanisms. For instance, fine-pruning [11] is a defensive method against DNN model backdooring. It prune backdoored neurons and then fine-tuning the models. Consequently, fine-pruning could potentially be an attack against backdoor-based model watermarking techniques, such as those proposed in works like Adi et al. [14, 32]. More recently, Chen et al. [10] proposed an advanced fine-tuning technique that aims to erase model watermarks. They initially increase the learning rate to make the victim model forget unnecessary details about watermarks and then gradually restore the utility of the model by reducing the learning rate step by step. While these attacks are effective in disabling watermark embedding techniques, it remains unclear whether they pose a threat to the copyright protection provided by ownership testing methods. ### _Ownership testing_ Ownership testing (OT) techniques, also referred to as DNN fingerprinting techniques, are an emerging area of research that focuses on extracting the intrinsic characteristics of DNN models to track stolen copies. Currently, the research on OT is limited, with the majority of studies relying on two fingerprint characteristics: robustness [24, 25, 15] and overfitting [23]. IPGuard [24] proposes using model robustness as fingerprints. The authors observe that stolen copies exhibit similar predictions to the victim model for most adversarial data points. While IPGuard can successfully identify white-box derivation attacks, such as fine-tuning, it is not effective against black-box extraction attacks, such as model extraction attack [33], where the attacker retrains the model from scratch, resulting in a larger disparity in the decision surface compared to the victim model. To address this limitation, Lukas et al. [25] propose the use of transferable adversarial samples to extract DNN fingerprints. This approach successfully defends against white-box derivation attacks and most black-box extraction attacks, but it is vulnerable to transfer learning and adversarial training. More recently, Chen et al. [15] propose a testing framework for verifying ownership. Instead of relying on single metrics, they utilize multiple dimensions and combine the results to determine ownership. Their black-box metrics also use robustness as fingerprints, similar to IPGuard [24], making them susceptible to black-box extraction attacks. Their white-box metrics utilize the robustness of inner neuron outputs, requiring the defender to have knowledge of all parameters of stolen copies. Dataset inference (DI) [23] exploits the overfitting of DNN models to their training data as a means to demonstrate that stolen copies exhibit similar overfitting fingerprints to the victim models. They employ minimal adversarial noise that leads to model misclassification [35] as fingerprints. DI is capable of identifying all white-box and black-box model variations [23]. 
However, this approach has some limitations. Firstly, it cannot be directly applied to DNN models trained on tabular data since some of the features are categorical, making it challenging to perform most adversarial example attacks [36]. Secondly, DI requires querying the suspect model thousands of times, which significantly increases the risk of detector attacks [28]. Thirdly, the effectiveness of DI on differentially private (DP) [37] DNN models remains unanswered. Hence, this paper aims to propose a novel ownership testing approach that addresses these limitations by achieving high verification efficiency and protecting the intellectual property of DP models. ### _Membership inference attacks_ Shokri et al. proposed the first membership inference (MI) attack in 2017 [29], which successfully guesses the membership of the training data with black-box access to the target DNN models. Since then, researchers have made efforts to enhance the attack performance and reduce the background information required by MI attackers. More recently, some researchers have utilized MI attacks as an \begin{table} \begin{tabular}{l|c c c c c c c c} \hline \hline \multirow{2}{*}{Approaches} & \multirow{2}{*}{Type} & \multirow{2}{*}{Method} & \multirow{2}{*}{\begin{tabular}{c} Non- \\ invasive \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{c} DP \\ connection \\ \end{tabular} } & \multicolumn{3}{c}{Model stealing attacks} & \multirow{2}{*}{Adaptive attacks} \\ \cline{5-6} & & & & & & & ME & KD & FT \\ \hline Adi et al. [32] & watermarking & backdoor & \(\times\) & N/A & \(\times\)[33] & \(\times\)[9] & \(\times\)[10] & N/A \\ Zhang et al. [14] & watermarking & backdoor & \(\times\) & N/A & \(\times\)[33] & \(\times\)[9] & \(\times\)[10] & \(\times\)[10] & N/A \\ Chen et al. [15] & fingerprinting & robustness & \(\surd\) & N/A & \(\times\)[15] & \(\times\)[15] & \(\surd\) & ADV training [27] \\ Cao et al. [24] & fingerprinting & robustness & \(\surd\) & N/A & \(\times\)[25] & \(\times\)[25] & \(\surd\) & ADV training [27] \\ Lukas et al. [25] & fingerprinting & robustness & \(\surd\) & N/A & \(\surd\) & \(\surd\) & \(\surd\) & ADV training [27] \\ Maini et al. [23] & fingerprinting & over-fitting & \(\surd\) & N/A & \(\surd\) & \(\surd\) & \(\surd\) & detector attacks [28] \\ VeriDIP (This work) & fingerprinting & privacy leakage & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & secure for now \\ \hline \hline \end{tabular} \end{table} TABLE I: Comparison of different DNN model copyright protection methods. ME: model extraction attack; KD: knowledge distillation attack; FT: fine-tuning attack; ADV: adversarial; DP: differential privacy. ME, KD, and FT are model stealing attacks. Adaptive attacks aim to weaken the effect of ownership test approaches. empirical measurement for estimating the privacy leakage of DNN models [30, 31, 38]. This approach has inspired us to leverage the MI advantage as a lower bound for estimating model privacy leakage and consider privacy leakage characteristics as the model fingerprint. Additionally, other studies have revealed the varying exposure risks of training data against MI attacks [31, 39], which have also motivated us to extract stronger fingerprints. ## 3 Ownership Testing Problem In this section, we first formulate the ownership testing (OT) problem, then discuss the capabilities of adversaries and defenders, followed by the backgrounds of differential privacy and membership inference. 
### _Notations_ Let \(\mathbf{z}=(\mathbf{x},y)\) be a data point, where \(\mathbf{x}\) is the feature vector and \(y\) is the corresponding label. \(\mathcal{D}\) represents the data distribution from which \(\mathbf{z}\) is drawn. We assume that the victim model is trained on the training set \(S(\sim\mathcal{D}^{n})\) consisting of \(n\) data points. The loss function \(\ell(f,\mathbf{z})\) measures the difference between the model predictions \(f(\mathbf{x})\) and the ground-truth label \(y\). We provide a summary of the notations used in this work in Table II. ### _Problem Formulation_ Figure 1 depicts a general framework of ownership testing (OT) for DNN models, where we have three components - machine learning as a service (MLaaS), model stealing attacks and defences. Particularly, MLaaS provides users with access to pre-built machine learning (DNN) models through APIs, allowing the users to integrate machine learning capabilities into their applications and perform complex tasks through simple queries. However, to fully utilize the potential of the pre-build models, attackers might attempt to steal the models by mimicking the behaviors of regular users (querying the models through the open APIs) to infer/extract the model details. To protect the copyright of (the victim) DNN models, an OT approach extracts and compares the fingerprints of a suspect model and the victim model to produce a test outcome, indicating whether the suspect model is a stolen copy of the victim model. In this paper, we aim to design a model OT algorithm \(\mathcal{V}\), defined as follows \[\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\rightarrow\{0,1\}, \tag{1}\] where \(\mathcal{P}_{S}\) is an auxiliary dataset, containing carefully chosen adversarial examples [24, 15, 25] or a subset of training examples [23] and \(\mathcal{B}\) represents the publicly available knowledge about the model [24, 15, 25] or about the private training data [23]. In the algorithm \(\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\), the verifier first extracts the inherent fingerprint of the suspect model \(f\) using \(\mathcal{P}_{S}\) and \(\mathcal{B}\), and then determines the ownership based on whether it aligns with the owner's expectations. The algorithm \(\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\) outputs \(1\) when the verifier believes the suspect model \(f\) is a stolen copy of the victim model \(f_{S}\), and vice versa. The algorithm \(\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\) should be highly accurate, computationally and communicationally efficient, and privacy-preserving (safe to audit in public). ### _Threat Model_ We specify the capabilities of the attacker and verifier (defender) shown in Figure 1. **Attacker.** We consider a wide variety of model stealing attacks, including both black-box access and white-box access capabilities. However, the adversary does not have access to the entire (private) training set of the victim model. * Black-box attacker. Attackers, who are external entities attempting to exploit the functionality of the victim model, employ various attacks such as model extraction attacks [6] and model distillation attacks [9]. * White-box attacker. Attackers, who are insiders with full access rights to the victim model, aim to evade tracking and detection. They employ various attacks such as model fine-tuning [10] and model fine-pruning [11, 12]. **Verifier.** As for defense, our focus is on black-box verifiers who have limited query access to the suspect model. 
There are two main reasons for this choice. First, when the verifier is a third-party agency, sharing excessive information such as training data or model parameters can pose risks to the model owner or data contributors. Second, allowing an unlimited number of verification queries can potentially trigger detector attacks [28]. In a detector attack, the unauthorized model API may refuse to respond or provide random responses upon detecting an attempt to verify copyright. For example, in the work by Maini et al. [23], the victim model is queried 1500 times for a single data point to collect minimal adversarial noise vectors for ownership determination, which significantly increases the likelihood of triggering a detector attack (refer to Table I). \begin{table} \begin{tabular}{l|l} \hline \hline Notations & Description \\ \hline \(\mathbf{x}\) & feature vector \\ \(y\) & the label corresponding to \(\mathbf{x}\) \\ \(\mathbf{z}\) & a data point \(\mathbf{z}=(\mathbf{x},y)\) \\ \(\alpha\) & significance level for hypothesis testing \\ \(\mathcal{D}\) & data distribution \\ \(S\) & private training dataset \\ \(f\) & DNN models \\ \(n_{S}\) & number of exposed samples during public copyright verification \\ \((\epsilon,\delta)\) & DP parameters (privacy budget, failure probability) \\ \((C,\mathbf{z})\) & DP hyper-parameters (clipping threshold, noise multiplier) \\ \(P\) & probability of not being a stolen model \\ \(Y\) & ownership testing outcome - Stolen or Not stolen \\ \hline \(\ell(f,\mathbf{z})\) & Loss function, outputs the prediction loss of model \(f\) on sample \(\mathbf{z}\) \\ \(\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\) & OT algorithm, outputs whether a suspect model \(f\) is trained on \(S\), where \(\mathcal{P}_{S}\) is an auxiliary dataset to \(S\) and \(\mathcal{B}\) is background knowledge \\ \(\mathcal{A}(\mathbf{z},f,\mathcal{D})\) & MI attack algorithm, outputs whether a sample \(\mathbf{z}\) is used to train model \(f\), \(\mathcal{D}\) is auxiliary information \\ \(\mathrm{Adv^{M}}(\mathcal{A},f,\mathcal{D})\) & Membership advantage algorithm, outputs the membership advantage of algorithm \(\mathcal{A}\) on model \(f\), \(\mathcal{D}\) is auxiliary information \\ \hline \hline \end{tabular} \end{table} TABLE II: Summary of Notations ### _Membership Advantage_ Yeom et al. [38] show that the privacy budget of a differentially private DNN model is a lower bound of the model's privacy leakage against MI attacks. Additionally, recent research by Hyland et al. [40] highlights that not only intentionally noisy DNN models provide privacy protection; ordinary DNN models also possess a certain level of privacy protection due to the inherent randomness introduced by stochastic gradient descent (SGD). Consequently, it becomes possible to assess the potential privacy leakage of a non-differentially private DNN model by estimating the corresponding privacy budget associated with the non-DP model.
#### 3.4.1 Differential Privacy Recall the definition of differential privacy [37]. A learning algorithm \(f:\mathcal{D}\mapsto\mathcal{R}\) satisfies (\(\epsilon,\delta\))-DP if, for all adjacent databases \(D\) and \(D^{\prime}\) that differ in one record, and all possible outputs \(\mathcal{O}\subseteq\mathcal{R}\), the following inequality holds. \[\Pr[f(D)\in\mathcal{O}]\leq\exp(\epsilon)\Pr[f(D^{\prime})\in\mathcal{O}]+\delta, \tag{2}\] where the probabilities are taken only over the randomness of the learning algorithm \(f\). A greater \(\epsilon\) indicates a lesser degree of privacy protection for the training data, meaning that the machine learning algorithm \(f\) may potentially compromise more privacy of the sensitive database \(D\). If the verifier is able to quantify the privacy risks associated with a particular learning algorithm on a specific private training set, this value can be used as a fingerprint for identifying plagiarism. This is because the target model and its pirated version are likely to exhibit higher privacy leakage of their training data compared to independently trained models. By analyzing and comparing the privacy risks of different models, the verifier can detect potential instances of plagiarism or unauthorized use of the training data. However, it is noteworthy that directly estimating the value of \(\epsilon\) for deployed non-DP DNN models on given datasets is intractable. This is because it would require traversing all possible adjacent datasets and evaluating all possible outputs to compute the maximum divergence. This process becomes computationally expensive and impractical, especially for large-scale datasets and complex models. #### 3.4.2 Membership Inference Membership inference (MI) attacks [29] aim to predict whether a particular example is part of a training dataset. Recently, some researchers [41, 42] have proposed utilizing MI attacks as a means to measure privacy leakage. Other works [30, 38] have theoretically established that the privacy leakage measured by MI attacks serves as a lower bound for \(\epsilon\). In this work, we leverage the concept of membership advantage [38] and utilize it as a fingerprint for our model. We provide a review of the related definition below. Before getting into membership advantage, we first define the MI attack following [29, 38]. **Definition 1** (Membership inference experiment \(\mathrm{Exp}^{\mathrm{M}}(\mathcal{A},f_{S},\mathcal{D})\)).: _Let \(\mathcal{A}\) be a membership inference attack algorithm and \(f_{S}\) be a machine learning model trained on \(S\sim\mathcal{D}^{n}\). The procedure of the membership inference experiment is as follows:_ 1. _Toss a coin at random,_ \(b\leftarrow\{0,1\}\)_;_ 2. _If_ \(b=0\)_, the sample_ \(\boldsymbol{z}\) _is drawn from_ \(S\)_, denoted as_ \(\boldsymbol{z}\sim S\)_; otherwise (_\(b=1\)_) it is drawn from the data distribution, denoted as_ \(\boldsymbol{z}\sim\mathcal{D}\)_._ 3. \(\{0,1\}\leftarrow\mathrm{Exp}^{\mathrm{M}}(\mathcal{A},f_{S},\mathcal{D})\)_. The experiment_ \(\mathrm{Exp}^{\mathrm{M}}(\mathcal{A},f,\mathcal{D})\) _returns_ \(1\) _if the attacker correctly guesses the value of_ \(b\)_, denoted as_ \(\mathcal{A}\left(\boldsymbol{z},f_{S},\mathcal{D}\right)=b\)_, and_ \(0\) _otherwise._ In Definition 1, the attack algorithm \(\mathcal{A}\left(\boldsymbol{z},f_{S},\mathcal{D}\right)\) takes as input an arbitrary sample \(\boldsymbol{z}\), the model \(f_{S}\), and the public data distribution \(\mathcal{D}\), and outputs a judgment about whether the sample \(\boldsymbol{z}\) was used to train model \(f_{S}\).
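To make the experiment concrete, a toy simulation of Definition 1 could be written as follows (a minimal sketch with illustrative names and a deliberately overfitted toy "model"; the convention that \(b=0\) corresponds to a training sample is kept):

```python
import random

def mi_experiment(attack, f_S, S, D_sampler):
    """One run of Exp^M(A, f_S, D) from Definition 1 (b = 0 means z comes from S)."""
    b = random.randint(0, 1)
    z = random.choice(S) if b == 0 else D_sampler()
    return int(attack(z, f_S) == b)          # 1 iff the attacker guesses b correctly

# Toy illustration: a "model" whose loss is zero exactly on its training points,
# and a loss-threshold attack that guesses "member" (b = 0) when the loss is small.
S = [1.0, 2.0, 3.0]
f_S = lambda z: 0.0 if z in S else 1.0       # per-sample loss of the toy model
attack = lambda z, f: 0 if f(z) < 0.5 else 1
D_sampler = lambda: random.uniform(0.0, 10.0)

runs = 10_000
success = sum(mi_experiment(attack, f_S, S, D_sampler) for _ in range(runs)) / runs
print("Pr[Exp^M = 1] ~", success, "   advantage over random guessing ~", 2 * success - 1)
```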
Membership advantage [38] represents the advantage of an MI attacker's ability to guess the decision boundary of training samples and other samples over random guess. **Definition 2** (Membership Advantage).: _The advantage of the MI attack algorithm \(\mathcal{A}\) is defined as_ \[\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f,\mathcal{D})=2\Pr\left[\mathrm{Exp}^{ \mathrm{M}}(\mathcal{A},f,\mathcal{D})=1\right]-1. \tag{3}\] Membership advantage ranges from \(0\) to \(1\), where \(0\) indicates no advantage (equivalent to random guessing), and \(1\) represents a full advantage. The right-hand side of Equation (3) can be empirically determined by computing the difference between the true positive rate (TPR) and the false positive rate (FPR) of the attack algorithm \(\mathcal{A}\). That is, \[\begin{array}{l}\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f,\mathcal{D})=\Pr[ \mathcal{A}=1\mid b=1]-\Pr[\mathcal{A}=1\mid b=0]\\ =\mathop{\mathbb{E}}_{\boldsymbol{z}\sim S}\left[\mathcal{A}\left(\boldsymbol{z },f,\mathcal{D}\right)\right]-\mathop{\mathbb{E}}_{\boldsymbol{z}\sim\mathcal{D }}\left[\mathcal{A}\left(\boldsymbol{z},f,\mathcal{D}\right)\right].\end{array} \tag{4}\] It can be observed from the above equation that the membership advantage is dependent on the specific implementation approach of the attack algorithm \(\mathcal{A}\left(\boldsymbol{z},f,\mathcal{D}\right)\), and various options have been proposed in the literature, including [29, 38, 43, 44]. ## 4 VeriDIP In this section, we present our ownership testing approach for DNN models called VeriDIP, which performs hypothesis testing for extracted privacy leakage fingerprints. To illustrate, we first introduce the framework for _basic VeriDIP_, followed by a detailed fingerprint extraction algorithm. Next, we propose _enhanced VeriDIP_ to improve the performance of the basic VeriDIP for more generalized DNN models. Finally, we discuss the relationship between VeriDIP and differential privacy techniques. ### _Ownership Testing Algorithm_ We present the construction of ownership testing algorithm \(\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\rightarrow\{0,1\}\) (see Equation (1)) that outputs whether the suspect model \(f\) is a stolen copy of the victim model. Let \(S\sim\mathcal{D}^{n}\) be a private training set, \(f_{S}\) be the IP-protected (victim) DNN model trained on \(S\), \(\mathcal{P}_{S}=\{\boldsymbol{z}\mid\boldsymbol{z}\in S\}_{n_{S}}\) be an auxiliary dataset associated with \(S\) that contains \(n_{S}\) random samples from the private training set \(S\), and \(\mathcal{B}=\{\mathcal{A},\mathcal{D}\}\) be the public background knowledge that contains an MI attack algorithm \(\mathcal{A}\) and the publicly available data distribution \(\mathcal{D}\). We show the proposed ownership testing algorithm in Algorithm 1. Algorithm 1 performs a one-tailed hypothesis test on the observed membership advantage fingerprints for stolen model \(f\) on a given private training set \(S\). We first give formal definitions of the membership advantage fingerprints of a DNN model \(f\) as follows: **Definition 3** (Membership advantage fingerprint).: _We define the fingerprint of a DNN model \(f\) as its privacy leakage against the private training set \(S\), which is empirically computed as \(\mathcal{F}(f\mid S)=\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f,\mathcal{D})\)._ Empirically, \(\mathcal{F}\) represents the membership advantage of the attacker over a random guesser. 
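As an illustration, the fingerprint can be estimated directly from per-sample losses when the simple loss-threshold attack of Yeom et al. [38] (Definition 4 below) is used as the extractor \(\mathcal{A}\); the following minimal sketch uses synthetic losses and hypothetical array names rather than a real model:

```python
import numpy as np

def fingerprint_estimate(losses_on_S, losses_on_D, B):
    """Membership-advantage fingerprint F(f | S) under the loss-threshold attack:
    the attack outputs 'member' with probability 1 - loss/B, so the advantage
    reduces to a difference of average normalised losses (cf. Equation (8))."""
    losses_on_S = np.clip(np.asarray(losses_on_S), 0.0, B)
    losses_on_D = np.clip(np.asarray(losses_on_D), 0.0, B)
    return losses_on_D.mean() / B - losses_on_S.mean() / B

# Hypothetical per-sample losses of a suspect model: it fits S noticeably better
# than fresh samples drawn from D, so the fingerprint comes out clearly positive.
rng = np.random.default_rng(0)
losses_members = rng.exponential(scale=0.2, size=200)
losses_nonmembers = rng.exponential(scale=1.0, size=200)
print("F*(f | S) =", fingerprint_estimate(losses_members, losses_nonmembers, B=5.0))
```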
If \(f\) is independent of \(f_{S}\), then \(\mathcal{F}(f\mid S)\) should be close to 0. Therefore, we set the null hypothesis as \(\mathcal{F}(f\mid S)=0\), which indicates that the suspect model \(f\) is not a stolen copy of the victim model \(f_{S}\). On the other hand, a larger value of \(\mathcal{F}(f\mid S)\) in the alternative hypothesis indicates that the suspect model \(f\) discloses more privacy of the private training set \(S\) of \(f_{S}\) and is more likely to be a stolen copy of \(f_{S}\). In the verification process, the verifier computes the likelihood of observed fingerprints. Firstly (step 1 in Algorithm 1), the verifier randomly selects \(n_{S}\) training samples from the private dataset \(S\) and randomly selects \(n_{S}\) samples from the public data distribution \(\mathcal{D}\). Then (step 2 in Algorithm 1), the empirical computation of fingerprint estimation is performed as follows: \[\mathcal{F}^{*}(f\mid S)=\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{0}}\left[\mathcal{ A}\left(\mathbf{z},f,\mathcal{D}\right)\right]-\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{1}} \left[\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\right]. \tag{5}\] Next (step 3 in Algorithm 1), it computes the p-value for observed fingerprints. The output p-value stands for the likelihood of a suspect model not being a stolen model. It computes \[P=1-\Pr[Z>\mathcal{F}^{*}(f\mid S)], \tag{6}\] where \(Z\sim\mathcal{N}(0,\sigma)\) and \(\sigma\) are estimated by the observed \(\mathcal{F}^{*}(f\mid S)\). Thus, for the stolen models, a lower p-value indicates better OT performance. Finally (step 4 in Algorithm 1), we give the judgment based on pre-defined significant level \(\alpha\). The use of hypothesis testing in VeriDIP serves the purpose of enabling public verifiability. Hypothesis testing allows for a reduction in the number of exposed training samples during ownership verification while maintaining a satisfactory level of verification confidence. If the verifier (as shown in Figure 1) is a third-party agency or if the verification process is required to be executed publicly, directly exposing the entire private training set \(S\) to the public would lead to severe privacy violations. We then theoretically analyze factors that influence the performance of our OT algorithm. **Theorem 1**.: _The p-value returned by Algorithm 1 is negatively correlated with the extracted model fingerprint estimation value and sample size \(n_{s}\)._ Proof.: In Algorithm 1, assume \(H_{0}\) is true then \(\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f,\mathcal{D})=0\). Let the observed the standard deviation of \(\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\) be \(\sigma_{0}\) and \(\sigma_{1}\), for \(\mathbf{z}\in S\) and \(\mathbf{z}\in\mathcal{D}\), respectively. According to the central limit theorem [45], \(\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{0}}\left[\mathcal{A}\left(\mathbf{z},f, \mathcal{D}\right)\right]\)\(-\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{1}}\left[\mathcal{A}\left(\mathbf{z},f, \mathcal{D}\right)\right]\) approximately follows Gaussian distribution \(\mathcal{N}(0,\sqrt{\frac{\sigma_{0}^{2}+\sigma_{1}^{2}}{n_{s}}})\), where \(D_{0}\) and \(D_{1}\) are randomly sampled \(n_{S}\)-sized datasets, from \(S\) and \(\mathcal{D}\), respectively. 
Thus, the p-value is computed as \[P =1-\Phi\left(\frac{\left(\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{0}}\left[\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\right]-\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{1}}\left[\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\right]\right)*\sqrt{n_{S}}}{\sqrt{\sigma_{0}^{2}+\sigma_{1}^{2}}}\right) \tag{7}\] \[=1-\Phi\left(\frac{\mathcal{F}^{*}(f\mid S)*\sqrt{n_{S}}}{\sqrt{\sigma_{0}^{2}+\sigma_{1}^{2}}}\right),\] where \(\Phi\) is the cumulative distribution function of the standard normal distribution and \(D_{0}\) and \(D_{1}\) are two randomly sampled \(n_{S}\)-sized datasets from \(S\) and \(\mathcal{D}\), respectively. Referring to Equation (7), it can be observed that \(\sigma_{0}\) and \(\sigma_{1}\) are constants specific to the neural networks used. Hence, generalized models (with less overfitting) may pose challenges in obtaining satisfactory ownership judgments when limited sensitive training samples are available (smaller \(n_{S}\)). Additionally, a more potent membership inference (MI) attack can enhance the likelihood of obtaining positive judgments for plagiarism. ### _Fingerprints Extraction_ In this section, we provide a comprehensive explanation of the implementation process for estimating the membership advantage fingerprint, as defined in Definition 3. The goal is to compute the membership advantage \(\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f,\mathcal{D})=\mathop{\mathbb{E}}_{\mathbf{z}\sim S}\left[\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\right]-\mathop{\mathbb{E}}_{\mathbf{z}\sim\mathcal{D}}\left[\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\right]\) (refer to Equation (4)). It is worth noting that any existing black-box membership inference (MI) attack algorithm can be utilized as a fingerprint extractor. In this paper, we discuss two specific instantiations. For illustrative purposes, we begin by considering a simple MI attack, the _Global threshold MI attack_ [38]. The definition is as follows. **Definition 4** (Global MI attack \(\mathcal{A}\)[38]).: _Assume the loss of a machine learning model \(f\) is bounded by a constant \(B\), denoted as \(\ell(f,\mathbf{z})\leq B\). Data \(\mathbf{z}=(\mathbf{x},y)\) are sampled from the training set \(S\) or the data distribution \(\mathcal{D}\). Given model \(f\), sample \(\mathbf{z}=(\mathbf{x},y)\), and public data distribution \(\mathcal{D}\), the MI attack algorithm \(\mathcal{A}_{\ell}\left(\mathbf{z},f,\mathcal{D}\right)\) outputs 1 with probability \(1-\ell(f,\mathbf{z})/B\)._ The membership advantage fingerprint is estimated as follows: \[\begin{split}&\mathcal{F}(f\mid S)\\ &=\operatorname{Adv}^{\text{M}}(\mathcal{A}_{\ell},f,\mathcal{D})\\ &=\mathbb{E}\left[1-\frac{\ell(f,\mathbf{z})}{B}\mid b=1\right]-\mathbb{E}\left[1-\frac{\ell(f,\mathbf{z})}{B}\mid b=0\right]\\ &=\operatorname*{\mathbb{E}}_{\mathbf{z}\sim\mathcal{D}}\left[\frac{\ell(f,\mathbf{z})}{B}\right]-\operatorname*{\mathbb{E}}_{\mathbf{z}\sim S}\left[\frac{\ell(f,\mathbf{z})}{B}\right].\end{split} \tag{8}\] We also consider the latest (to the best of our knowledge) membership inference (MI) attack, known as the _Per-sample threshold MI attack_ [31]. This attack takes a different approach by training multiple shadow models to learn the discrepancy in the model's loss distribution for each sample, distinguishing between samples that are part of the training set and those that are not.
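Before detailing the per-sample attack, the sketch below illustrates how the global-threshold fingerprint of Equation (8) and the one-sided p-value of Equation (7) fit together in practice. It is a minimal illustration under our own naming conventions, assuming the loss bound \(B\) is known; it is not the implementation evaluated in Section 5.

```python
import numpy as np
from scipy.stats import norm

def global_threshold_fingerprint(loss_members, loss_nonmembers, B):
    """Eq. (8): F = E_{z~D}[l/B] - E_{z~S}[l/B] for the global threshold attack."""
    return np.mean(loss_nonmembers) / B - np.mean(loss_members) / B

def ownership_p_value(loss_members, loss_nonmembers, B):
    """One-sided p-value of Eq. (7) under H0: Adv^M = 0."""
    n_s = len(loss_members)
    f_star = global_threshold_fingerprint(loss_members, loss_nonmembers, B)
    # The attack outputs 1 - l/B, so its standard deviation equals that of l/B.
    sigma0 = np.std(np.asarray(loss_members) / B)
    sigma1 = np.std(np.asarray(loss_nonmembers) / B)
    z_score = f_star * np.sqrt(n_s) / np.sqrt(sigma0**2 + sigma1**2)
    return 1.0 - norm.cdf(z_score)

# Hypothetical losses: members are fitted better (lower loss) than non-members.
rng = np.random.default_rng(1)
loss_in, loss_out = rng.exponential(0.2, size=50), rng.exponential(0.6, size=50)
print(ownership_p_value(loss_in, loss_out, B=5.0))  # small p-value -> likely a stolen copy
```

We now turn to the details of the per-sample attack.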
For each data point \(\mathbf{z}\), the attack fits two Gaussian distributions, \(\mathcal{N}\left(\mu_{\text{in}},\sigma_{\text{in}}^{2}\right)\) and \(\mathcal{N}\left(\mu_{\text{out}},\sigma_{\text{out}}^{2}\right)\), to the confidence distribution in the logit scale. Subsequently, a likelihood-ratio test is performed to compute \(L(\mathbf{z})=\frac{\Pr\left[\text{logit}(p_{\mathbf{z}})\mid\mathcal{N}(\mu_{\text{in}},\sigma_{\text{in}}^{2})\right]}{\Pr\left[\text{logit}(p_{\mathbf{z}})\mid\mathcal{N}(\mu_{\text{out}},\sigma_{\text{out}}^{2})\right]}\), where \(\text{logit}(p)=\ln(\frac{p}{1-p})\) and \(p_{\mathbf{z}}=\exp(-\ell(f,\mathbf{z}))\). A large value of \(L(\mathbf{z})\) indicates a higher likelihood of the data point \(\mathbf{z}\) being a member. In this attack, the membership advantage is computed as the difference between the true positive rate (TPR) and the false positive rate (FPR) of the MI attack algorithm. Note that while the per-sample threshold MI attack may be computationally inefficient due to the need to train multiple shadow models for each batch of MI queries, it is particularly suitable for model ownership verification tasks. This is because the ownership testing verifier has prior knowledge of the data used for conducting MI attacks, allowing the shadow models to be pre-trained in advance. ### _Enhanced VeriDIP_ Recall that we have previously suspected that more generalized models may yield unsatisfactory ownership judgments due to the negative correlation between input membership advantage fingerprints and output p-values, as shown in Equation (7). To address this issue, we propose an enhanced version of VeriDIP that mitigates the reliance on the effectiveness of VeriDIP's MI attack success rates. The key idea is to utilize _the worst-case_ privacy leakage instead of _the average-case_ privacy leakage as the model fingerprint for ownership verification. While average privacy risks are computed using a set of randomly sampled training samples, the worst-case privacy leakage focuses on measuring the privacy risks of a set of less private training samples. It serves as a tighter lower bound for the \(\epsilon\) defined in differential privacy. Therefore, we believe it constitutes an enhanced fingerprint for identifying stolen models. Recently, several studies have demonstrated that certain training samples exhibit lower levels of privacy than others when subjected to MI attacks [31, 46]. These samples with reduced privacy are well-suited for estimating worst-case privacy leakage. We define the less private data of a model \(f\) as follows: **Definition 5** (Less Private Data).: _Let \(S\) be the training set for the DNN model \(f_{S}\). We define a data point \(\mathbf{z}\in S\) as a less private data point if the model trained on the set \(S\setminus\mathbf{z}\) is significantly different from \(f_{S}\)._ **Search for the less private data.** Measuring the difference between two DNN models, as required by Definition 5, can be challenging. However, if we assume that the removal of a data point \(\mathbf{z}\) from the training set has the most significant impact on the model's prediction for that data point, the problem becomes more manageable. We can compare the loss on \(\mathbf{z}\) of the two models trained with and without the presence of \(\mathbf{z}\). This can be expressed as follows: \[\eta(\mathbf{z})=\frac{\ell(f_{S\setminus\mathbf{z}},\mathbf{z})}{\ell(f_{S},\mathbf{z})}. \tag{9}\] The data point with a larger \(\eta(\mathbf{z})\) value is less private.
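Both the per-sample likelihood-ratio score and the search criterion of Equation (9) can be computed from the same "IN"/"OUT" shadow-model statistics. The sketch below is our own simplification for a single data point: the shadow-model training is omitted, the per-shadow losses are assumed to be given, and the leave-one-out models of Equation (9) are approximated by averages over shadow models, as described later in Section 5.4.3.

```python
import numpy as np
from scipy.stats import norm

def lira_score(loss_z, losses_in, losses_out):
    """Per-sample likelihood-ratio score L(z) for one data point z.
    losses_in / losses_out: losses of shadow models trained with / without z."""
    def to_logit(losses):
        p = np.clip(np.exp(-np.asarray(losses)), 1e-6, 1 - 1e-6)  # p_z = exp(-l(f, z))
        return np.log(p / (1.0 - p))                              # logit(p) = ln(p / (1 - p))
    obs = to_logit([loss_z])[0]
    stats_in, stats_out = to_logit(losses_in), to_logit(losses_out)
    num = norm.pdf(obs, loc=stats_in.mean(), scale=stats_in.std() + 1e-12)
    den = norm.pdf(obs, loc=stats_out.mean(), scale=stats_out.std() + 1e-12)
    return num / den          # large value -> z is likely a member

def eta_score(losses_in, losses_out):
    """Shadow-model estimate of Eq. (9): average 'OUT' loss over average 'IN' loss.
    The training points with the largest eta are taken as the less private data."""
    return np.mean(losses_out) / (np.mean(losses_in) + 1e-12)

# Hypothetical shadow-model losses for one training point:
in_losses, out_losses = [0.05, 0.08, 0.06], [1.2, 0.9, 1.5]
print(lira_score(0.07, in_losses, out_losses), eta_score(in_losses, out_losses))
```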
To provide an example of the less private data, we conducted a search within the training set of the DNN models to identify the sample with the highest \(\eta(\mathbf{z})\) score. The behavior of a less private data point and a more private data point is demonstrated in Figure 2. The x-axis represents a transformation of the loss, \(S^{-1}(\exp(-\ell(f,\mathbf{z})))\), following [31], where \(S^{-1}\) denotes the inverse of the sigmoid function. This transformation ensures that the transformed loss distribution is approximately normal. The y-axis represents the frequency of the discrete loss values. From Figure 2, it is evident that the prediction capability of DNN models is particularly sensitive to the presence or absence of certain data points, as illustrated in Figure 2(b) compared to Figure 2(a). The absence of data point 2 significantly reduces the model's confidence in predicting the label of data point 2. Therefore, data point 2 corresponds to the less private data we are specifically interested in identifying. Through further analysis, we discovered that the less private data points are significantly rarer than other data points. To assess the prevalence of the less private data, we traversed all training data points for the four benchmarks and computed the corresponding \(\eta(\mathbf{z})\) value for each data point. The distributions of \(\eta(\mathbf{z})\) for each database are depicted in Figure 3. Notably, all distributions exhibit a _long tail_ pattern: the less private data points do exist in every dataset, but only in the tail of the distribution. Consequently, if we were to draw random samples to estimate privacy leakage, encountering the less private data points would be a rare occurrence. Therefore, identifying these less private data points is crucial for obtaining robust privacy leakage fingerprints. Fig. 2: Loss score distribution comparison for a data point “IN” the model and “OUT” of the model, Adult database. The response of DNN models is more sensitive to the absence of data 2 than data 1. In summary, for the enhanced VeriDIP, our approach involves initially identifying a set of several less private data points, similar to "Data 2" in Figure 2(b), for each victim model beforehand. During the verification phase, the verifier utilizes these data points to extract worst-case privacy leakage fingerprints, rather than relying on average-case privacy leakage, as evidence for claiming ownership. It is worth noting that training shadow models to identify the less private data incurs additional computational costs. However, it is important to highlight that, for a given victim model, only one dataset of less private data is required. This dataset can be used for an unlimited number of ownership verifications for the respective victim model. Consequently, the additional cost associated with training the shadow models does not pose a significant challenge for the enhanced VeriDIP approach. ### _Bounding Model's Ownership via Differential Privacy Budget_ Maini et al. [23] raised an open question regarding the effectiveness of ownership testing methods based on over-fitting metrics when applied to differentially private DNN models. In this paper, we aim to address this question by investigating the behavior of the p-value in Algorithm 1 for \(\epsilon\)-DP DNN models, where \(\epsilon\) represents the privacy budget. Differential privacy techniques [37], considered the de facto standard for privacy protection, provide an upper bound on the advantage of MI attacks [38] by definition.
Consequently, they also place a lower bound on the p-value obtained through the model ownership proof algorithm, such as Algorithm 1. These techniques introduce a privacy budget \(\epsilon\) to govern the level of privacy protection afforded to DNN models (see Section 3.4.1). A smaller value of \(\epsilon\) corresponds to stronger privacy protection. Let \(f_{\epsilon}\) be a DNN model that satisfies \(\epsilon\)-DP and \(\mathcal{A}\) be the global MI attack algorithm in Definition 4. According to [38], the membership advantadge satisfies \(\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f_{\epsilon},\mathcal{D})\leq\exp( \epsilon)-1\). Substituting the inequality into Equation (7), we have \[\begin{split} P&=1-\Phi\left(\frac{\left(\underset{ \boldsymbol{z}\sim\mathcal{D}_{0}}{\mathbb{E}}\left[\mathcal{A}\left( \boldsymbol{z},f_{\epsilon},\mathcal{D}\right)\right]-\underset{\boldsymbol{z }\sim\mathcal{D}_{1}}{\mathbb{E}}\left[\mathcal{A}\left(\boldsymbol{z},f_{ \epsilon},\mathcal{D}\right)\right]\right)}{\sqrt{\frac{\sigma_{0}^{2}+\sigma _{1}^{2}}{n_{S}}}}\right)\\ &\geq 1-\Phi\left(\frac{\left(\exp(\epsilon)-1\right)*\sqrt{n_{S}}}{ \sqrt{\sigma_{0}^{2}+\sigma_{1}^{2}}}\right).\end{split} \tag{10}\] Therefore, when the privacy budget \(\epsilon\) and sample size \(n_{S}\) are fixed, the minimum p-value is determined accordingly. We plot the minimum p-value as a function of the privacy budget \(\epsilon\) for specific values of \(n_{S}\). In our analysis, we consider three choices for \(n_{S}\), namely 10, 20, and 100. The corresponding results are illustrated in Figure 4. **Differential privacy budgets negatively impact the performance of VeriDIP.** In Figure 4(a), for the CIFAR-10 dataset, when \(\epsilon=0.1\) and \(n_{S}=10\), the corresponding p-value is \(P\geq 0.156\). This implies that if the DNN model is \(0.1\)-differentially private, the ownership testing algorithm, using only \(10\) samples at a significance level of \(\alpha=0.01\), cannot claim ownership of this model due to the presence of differential privacy protection. This holds true regardless of the effectiveness of the deployed MI attack. By increasing \(\epsilon\) to \(0.5\), the lower bound of the p-value decreases to \(P\geq 1.15\times 10^{-14}\). Fortunately, in practice, it is uncommon to train machine learning models with excessively restrictive privacy budgets such as \(\epsilon=0.1\), as doing so would significantly compromise the utility of the machine learning model. In the upcoming section, we will experiment with a reasonable privacy budget on a wide range of models and datasets to explore the trade-offs between privacy protection and model ownership protection. ## 5 Evaluations In this section, we begin by introducing the experimental settings. We then conduct a comprehensive evaluation of both the basic and enhanced VeriDIP methods, comparing their performance to the state-of-the-art Dataset Inference (DI) [23] approach. Finally, we explore the effectiveness of VeriDIP when applied to DP DNN models. Fig. 4: Lower bound of p-values against the privacy budget \(\epsilon\). Fig. 3: Score difference distribution for CIFAR-10, FMNIST, Health, Adult datasets. ### _Experimental Setup_ To begin with, we briefly show the details of datasets and the configurations of machine learning models used in the experiments. **Datasets.** We use four famous datasets in our experimental evaluation, CIFAR-10 1, FMNIST 2, Adult 3, and Health 4. 
Specifically, CIFAR-10 and FMNIST are two image datasets used by recent studies in evaluating WE and OT approaches [10, 23, 25, 32]; Adult and Health are two tabular datasets, by which we could train (almost) perfect MI attacks-resilient model as (almost) the worst-case scenario for VeriDIP (Algorithm 1). Footnote 1: [https://www.cs.toronto.edu/~kriz/cifar.html](https://www.cs.toronto.edu/~kriz/cifar.html) Footnote 2: [https://github.com/zalandoresearch/fashion-mnist](https://github.com/zalandoresearch/fashion-mnist) Footnote 3: [https://archive.ics.uci.edu/ml/datasets/adult](https://archive.ics.uci.edu/ml/datasets/adult) Footnote 4: [https://www.dshs.texas.gov/THCIC/Hospitals/Download.shtm](https://www.dshs.texas.gov/THCIC/Hospitals/Download.shtm) * **CIFAR-10: CIFAR-10 consists of \(32\times 32\) color images of \(10\) real world objects, with \(5,000\) instances of each object class.** * **FMNIST: Fashion MNIST consists of \(28\times 28\) grayscale images, associated a label from \(10\) classes, with \(7,000\) instances of each object class.** * **Adult: The US Adult Census dataset comprises \(48,842\) entries, with each entry containing \(13\) features. These features are utilized to infer whether an individual's income exceeds 50K/year or not.** * **Health: The Heritage Health dataset consists of \(139,785\) physician records and insurance claims, with each record containing \(250\) features. The objective is to predict ten-year mortality by binarizing the Charlson Index, using the median value as a cutoff.** **Neural networks.** Following existing works [10, 32], we train CIFAR-10 using ResNet-18 architecture and the SGD optimizer with a stepped learning rate. The initial learning rate is set to \(0.01\) and is divided by ten every 20 epochs. For the FMNIST dataset, we train a convolutional neural network (CNN) using the Adam optimizer. As for the Adult and Health datasets, which are tabular datasets, we utilize a 4-layer perceptron with the Adam optimizer. The learning rate for all Adam optimizers is set to \(10^{-4}\). The batch size is set to \(50\) for CIFAR-10 and FMNIST, and it is set to \(500\) for Adult and Health.** **Model stealing attacks.** We have discussed attackers in OT experiments in Section 3.3. In this section, we consider three types of model stealing attacks that are commonly used for evaluating the effectiveness of copyright protection approaches. Note that fine-prune attack [11] presented in Figure 1 is not specifically targeted at model copyright protection but rather falls under a category of defenses against model backdoor attacks. Therefore, to ensure fairness in the experiments, we did not include it in our evaluation. * **Model extraction (ME) attack [6, 33]. The ME attack retrains a model from scratch by minimizing the loss between the predictions of stolen copies and its teacher predictions.** * **Knowledge distillation (KD) [9]. The KD attack retrains a model from scratch by minimizing the distance between the teacher's and student's soft predictions plus the cross-entropy loss between the student's prediction and ground-truth label \(y\). The student model is the stolen copy.** * **Fine-tuning (FT) [10]. The FT attack keeps training the victim model for a while to modify the original decision boundary. It first uses a large learning rate to erase the original decision boundary, then gradually reduces the learning rate to restore the prediction accuracy of the model. 
According to their result, it is effective for removing all watermarks.** The ME and the KD are black-box attacks, while FT is a white-box attack. We use the open-source code and the same hyperparameters as the existing works of ME [33], KD [9] and FT [10]. We list their loss functions and hyperparameters in Table III. According to [10], carefully tuning the learning rate can remove all model watermarks. Our aim is to determine the effectiveness of these attacks in disturbing model fingerprints.** **MI attack algorithm.** The implementation of the global threshold MI attack follows the methods proposed by Yeom et al. [38]. As for the per-sample threshold MI attacks, there are two implementations: online and offline. We use the open-source code of the online implementation [31] since it demonstrates better attack performance.** **Reproduction of Dataset Inference (DI) [23].** DI proposed to use "prediction margins" as fingerprints to verify model ownership. The prediction margins are obtained by performing adversarial attacks on the suspect models. We use their black-box implementation (_Blind Walk_) since it is more consistent with our attacker's capability assumptions. Plus, the _Blind Walk_ has better verification performance and lower computational costs than their white-box implementation (_MinGD_) [23]. ### _Metrics_ We use two indicators to evaluate the performance of the model OT algorithm: * **p-value.** The p-value is the outputs of Algorithm 1, which is inherited from [23]. The p-value indicates the probability that a suspect model is not a stolen copy. The smaller this metric, the more copyright verification judgment is likely to be correct. * **Exposed sample size \(n_{S}\). \(n_{S}\) denotes the minimum number of training samples exposed in the verification phase to verify the copyright of stolen copies successfully. Thus, for a fixed \(\alpha\), a smaller value of \(n_{S}\) indicates better privacy protection. ### _Performance of Baseline Models: Victim and Stolen Models_ We begin by training machine learning models on the four datasets and present the training set size (TrainSize), test set size (TestSize), training set accuracy (TrainAcc), test set accuracy (TestAcc), and accuracy difference (AccDiff) in Table IV. It can be observed that all victim/target models achieve satisfactory accuracy. To improve the performance of CIFAR-10, we employ the data augmentation technique [2]. This involves randomly flipping and cropping the images to generate new samples, thereby increasing the diversity of the training set and enhancing the generalization capabilities of the trained machine learning models. As depicted in Table IV, the models trained on tabular datasets (i.e., Adult and Health) exhibit better generalization (with smaller TrainAcc and TestAcc differences) compared to the models trained on image datasets (i.e., CIFAR-10 and FMNIST). We also present the performance of stolen models obtained using the ME attack, KD attack, and FT attack in Table V. We assume that attackers possess a randomly sampled subset of the private trainset \(S\), comprising \(40\%\) of the data. It is important to note that the ME attacker does not have access to ground-truth labels, as per its definition. The FT attack, as described in [10], initially perturbs the original decision boundary of the model using a large learning rate and subsequently reduces the learning rate to restore the model's usability. 
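For reference, the following is a minimal PyTorch-style sketch of the retraining objectives behind the ME and KD attacks described above; it is our paraphrase of the loss functions summarized in Table III rather than the open-source implementations we actually ran, and the weighting \(\alpha\) and temperature \(T\) are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def me_loss(student_logits, teacher_probs):
    """Model extraction: match the teacher's soft predictions only
    (the ME attacker has no access to ground-truth labels)."""
    log_p_student = F.log_softmax(student_logits, dim=1)
    return F.kl_div(log_p_student, teacher_probs, reduction="batchmean")

def kd_loss(student_logits, teacher_probs, labels, alpha=0.5, T=1.0):
    """Knowledge distillation: soft-prediction matching plus cross-entropy
    with the ground-truth labels."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    soft = F.kl_div(log_p_student, teacher_probs, reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy batch of 4 samples with 10 classes:
student_logits = torch.randn(4, 10)
teacher_probs = torch.softmax(torch.randn(4, 10), dim=1)
labels = torch.randint(0, 10, (4,))
print(me_loss(student_logits, teacher_probs).item(),
      kd_loss(student_logits, teacher_probs, labels).item())
```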
In general, the performance of FT models tends to be superior to that of the victim model, whereas the usability of ME and KD models is slightly inferior to that of the victim model. ### _VeriDIP Performance_ #### 5.4.1 Fingerprints Distribution We first examine whether the MI advantage serves as a valid fingerprint. If it does, its value should be clearly positive for the victim model and approach \(0\) for independent models. Here, an independent model refers to a model that is trained separately and is not derived from the victim model. To represent independent models, we consider two scenarios: (1) models trained on disjoint but identically distributed data, specifically using validation data, and (2) models trained on data from a different distribution, involving other datasets. For our experiment, we train a total of \(50\) victim models and \(50\) independent models for each database. Subsequently, we plot the distribution of extracted model fingerprints for both victim models (positives) and independent models (negatives). The resulting distributions are presented in Figure 5. **The experimental results confirm that the MI advantage is a valid fingerprint**. Overall, we observe that the MI advantage of all target models can be clearly distinguished from that of the independent models. Specifically, the MI advantage of all independent models approaches \(0\), aligning with our expectations. Notably, Figure 5(b) shows that the MI advantage remains a valid fingerprint for the Health models even though the AUROC of the global threshold MI attack is only \(0.5032\) (performance similar to random guessing). Regardless of whether the training set of the independent models is sampled from the same data distribution or from other data distributions, the use of MI advantages as fingerprint estimations enables their identification as negative models. Figure 5(a), Figure 5(b), and Figure 5(c) depict independent models trained on validation data from the same distribution, while Figure 5(d) shows independent models trained on MNIST datasets (representing a different distribution). In all these benchmarks, the extracted fingerprints of the independent models are consistently close to \(0\). #### 5.4.2 Basic VeriDIP In this section, we evaluate the performance of VeriDIP, as proposed in Algorithm 1, on the four datasets. We first focus on the basic VeriDIP, which utilizes "random samples" to estimate the average-case privacy leakage. The basic VeriDIP, coupled with the global threshold MI attack, is denoted as \(\mathcal{V}_{G}\), while the basic VeriDIP employing the per-sample threshold MI attack is denoted as \(\mathcal{V}_{P}\). Stolen copies obtained through model extraction attacks (ME), knowledge distillation (KD), and fine-tuning (FT) are considered positive instances in our evaluation. We report the p-values returned by Algorithm 1 in Table VI. A lower p-value is considered better for positive instances (victim, stolen models), while a higher p-value is preferred for negative instances (independent models). To obtain each p-value presented in Table VI, we trained a minimum of \(10\) models with varying seeds. We then performed hypothesis tests over \(20\) iterations for each model, resulting in an average of at least \(200\) trials for the final result. Since different numbers of exposed samples (\(n_{S}\)) lead to different p-values, we also plot the p-value curves against \(n_{S}\) for the four datasets. The results are shown in Figure 6.
The black dashed line represents the significance level set at \(\alpha=0.01\). When a point on the curve lies below the threshold line, it indicates that exposing those \(n_{S}\) training samples is sufficient to establish ownership under the condition of \(\alpha=0.01\). According to the results shown in Table VI and Figure 6, we summarized the following results. **(1) The basic VeriDIP demonstrates satisfactory performance in verifying the ownership of victim models and their stolen copies on CIFAR-10 and FMNIST datasets.** Overall, VeriDIP equipped with both the global and the per-sample MI attacks successfully establishes ownership of all positive models with a confidence level exceeding \(99\%\), requiring the exposure of fewer than \(200\) private training samples. The p-values of all independent models (negative models) are in the range of \(10^{-1}\), ensuring they are \begin{table} \begin{tabular}{c|c c c c c} \hline \hline Datasets & TrainSize & TestSize & TrainAcc & TestAcc & AccDiff \\ \hline CIFAR-10 & \(17500\) & \(10000\) & \(98.41\%\) & \(86.76\%\) & \(11.79\%\) \\ FMNIST & \(29700\) & \(10000\) & \(99.77\%\) & \(90.50\%\) & \(9.51\%\) \\ Health & \(20000\) & \(10000\) & \(88.31\%\) & \(86.87\%\) & \(1.43\%\) \\ Adult & \(15000\) & \(5222\) & \(85.61\%\) & \(84.81\%\) & \(0.80\%\) \\ \hline \hline \end{tabular} \end{table} TABLE IV: Machine learning efficacy for victim models, AccDiff=TrainAcc -TestAcc. \begin{table} \begin{tabular}{c c|c c c c} \hline \hline Database & TrainSize & ME & KD & FT & Base \\ \hline CIFAR-10 & 7000 & \(80.60\%\) & \(81.79\%\) & \(89.61\%\) & \(86.76\%\) \\ FMNIST & 11880 & \(88.23\%\) & \(88.23\%\) & \(91.04\%\) & \(90.50\%\) \\ Health & 8000 & \(86.74\%\) & \(86.61\%\) & \(86.77\%\) & \(86.87\%\) \\ Adult & 6000 & \(84.73\%\) & \(84.70\%\) & \(84.82\%\) & \(84.81\%\) \\ \hline \hline \end{tabular} \end{table} TABLE V: Machine learning performance of stolen copies. not misclassified as positives. This effective discrimination between positive and negative models is achieved through the proposed fingerprint extraction scheme in this paper. **(2) The ownership verification performance of VeriDIP is negatively correlated with the model's generalization ability.** VeriDIP equipped with the per-sample MI attack remains effective for DNN models trained on the Adult and Health datasets but exposes a larger number of private training samples, up to about \(2,000\) to \(3,000\). However, VeriDIP equipped with a global MI attack fails to achieve successful verification on these two datasets. This outcome is not surprising, as we have previously expressed concerns in Section 4.3. When a model's output probability distributions for membership and non-membership are nearly identical, extracting sufficient fingerprints to determine ownership requires more exposed samples and stronger MI attacks. Nevertheless, increasing the number of exposed private training samples violates the principle of personal privacy protection during public ownership verification. Therefore, the adoption of stronger fingerprint extraction methods, such as the enhanced VeriDIP proposed in Section 4.3, may prove beneficial. 
**(3) Fine-tuning, although the most effective attack against watermark embedding, is the easiest attack for VeriDIP to defend.** Unlike watermark embedding techniques that artificially embed unique classification patterns into the decision boundary of IP-protected models, VeriDIP extracts inherent privacy leakage characteristics as fingerprints for ownership verification. As reported in [10], their proposed fine-tuning attack can effectively remove all watermarks. However, the results shown in Figure 6 indicate that the fine-tuned model (red line) is even more susceptible to fingerprint extraction compared to the original model (blue line). The reason behind this observation might be that fine-tuning reinforces the model's memory of a subset of training samples, which VeriDIP can exploit as a fingerprint for ownership judgment. **(4) The effect of VeriDIP is positively correlated with the MI attack effectiveness.** While VeriDIP can be equipped with various black-box MI attacks to extract model ownership fingerprints, this paper focuses on evaluating two representative attacks: the basic global MI attack and the advanced per-sample MI attack, due to space limitations. Comparing Figure 6(a) and Figure 6(b) for CIFAR-10, as well as Figure 6(c) and Figure 6(d) for FMNIST, we observe that \(\mathcal{V}P\) requires exposing only half the number of training samples compared to \(\mathcal{V}G\). Additionally, for the Adult and Health databases, \(\mathcal{V}_{G}\) fails to verify ownership altogether (refer to Figure 6(e) and Figure 6(h)). The reason for this is that a stronger MI attack can provide a tighter lower bound estimation of privacy leakage, resulting in more accurate model fingerprints. In summary, the basic VeriDIP equipped with the per-sample MI attacks \(\mathcal{V}_{P}\) successfully identifies all victim models and their stolen copies as positives, while correctly classifying all independent models as negatives. However, for models that are only slightly overfitted, even with the utilization of the most advanced MI attack to estimate privacy leakage fingerprints, a significant number of private training samples are still required to establish ownership. Hence, it is imperative to devise solutions that reduce VeriDIP's reliance on model overfitting. #### 5.4.3 Enhanced VeriDIP In this section, we evaluate the enhanced VeriDIP on four datasets and compare the results with those of the basic VeriDIP. Table VII reports the minimum number of exposed training samples required to verify ownership at a significance level of \(\alpha=0.01\) (with \(99\%\) confidence). Note that the p-values of all independent models remain at \(10^{-1}\), and therefore, we have omitted the corresponding \(n_{S}\) values for them. To identify the less private data in advance, we train \(N\) shadow models (\(N=100\)), where each model is trained by sampling half of the database. Consequently, for each data point, we have approximately \(N/2\) models that include the data and \(N/2\) models that exclude the data. 
We compute the loss difference \(\eta(z)\) for each data point using Equation (9) \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \multirow{2}{*}{Datasets} & \multirow{2}{*}{\(n_{S}\)} & \multicolumn{6}{c}{p-value} \\ \cline{3-8} & & & TAR & ME & KD & FT & IND \\ \hline \multirow{4}{*}{\(\mathcal{V}_{G}\)} & CIFAR-10 & 200 & \(10^{-5}\) & \(10^{-3}\) & \(10^{-3}\) & \(10^{-8}\) & \(10^{-1}\) \\ & FMNIST & 200 & \(10^{-6}\) & \(10^{-3}\) & \(10^{-3}\) & \(10^{-8}\) & \(10^{-1}\) \\ & Adult & 2000 & \(10^{-2}\) & \(10^{-2}\) & \(10^{-2}\) & \(10^{-2}\) & \(10^{-2}\) \\ & Health & 3000 & \(10^{-2}\) & \(10^{-1}\) & \(10^{-2}\) & \(10^{-3}\) & \(10^{-1}\) \\ \hline \hline \multirow{4}{*}{\(\mathcal{V}_{P}\)} & CIFAR-10 & 200 & \(10^{-10}\) & \(10^{-4}\) & \(10^{-5}\) & \(10^{-11}\) & \(10^{-1}\) \\ & FMNIST & 200 & \(10^{-10}\) & \(10^{-4}\) & \(10^{-4}\) & \(10^{-11}\) & \(10^{-1}\) \\ \cline{1-1} & Adult & 2000 & \(10^{-5}\) & \(10^{-4}\) & \(10^{-3}\) & \(10^{-5}\) & \(10^{-1}\) \\ \cline{1-1} & Health & 3000 & \(10^{-12}\) & \(10^{-3}\) & \(10^{-3}\) & \(10^{-10}\) & \(10^{-1}\) \\ \hline \hline \end{tabular} \end{table} TABLE VI: p-values for OT. Tar: target models; ET: model extraction attack; DT: Distillation Attack; FT: Fine-tune Attack; Ind: independent models. \(\mathcal{V}_{\Psi}\): The basic VeriDIP equipped with the global MI attack; \(\mathcal{V}_{\Psi}\): The basic VeriDIP equipped with the per-sample MI attack. Fig. 5: Fingerprints distribution for target models and the independent models, using 50 models for each distribution. and select the \(k\) samples with the highest \(\eta(z)\) values as the less private data. **The enhanced VeriDIP offers superior performance compared to the basic VeriDIP.** For CIFAR-10 and FMNIST datasets shown in Table VII, the enhanced VeriDIP equipped with both the global MI attacks and the per-sample MI attacks successfully verify the ownership of all target ("Tar") and stolen models ("ME", "KD", and "FT") by exposing only \(5\) samples. In the case of more generalized models, such as Adult and Health, the number of exposed training samples is reduced to \(\frac{1}{100}-\frac{1}{10}\) of the basic VeriDIP. It is worth noting that the enhanced VeriDIP equipped with the global MI attack fails to prove ownership for the Adult database. We believe this is because the global MI attack is not powerful enough to extract useful privacy leakage fingerprints in such generalized models. 
The main reasons for the success of the enhanced solution are: * Leveraging the worst-case privacy leakage as the model fingerprint can significantly amplify the characteristics of the positive model that are different from the negative counterparts (see Figure 2); * The decision boundary for less private data is transferable \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Models} & \multicolumn{2}{c}{global} & \multicolumn{2}{c}{per-sample} \\ \cline{3-6} & & Basic & Enh & Basic & Enh \\ \hline \multirow{4}{*}{CIFAR-10} & TAR & 42 & 5 & 23 & 5 \\ & ME & 185 & 5 & 87 & 5 \\ & KD & 94 & 5 & 47 & 5 \\ & FT & 24 & 5 & 23 & 5 \\ \hline \multirow{4}{*}{FMNIST} & TAR & 27 & 5 & 17 & 5 \\ & ME & 170 & 5 & 75 & 5 \\ & KD & 125 & 5 & 80 & 5 \\ & FT & 23 & 5 & 15 & 5 \\ \hline \multirow{4}{*}{Adult} & TAR & – & – & 460 & 5 \\ & ME & – & – & 800 & 6 \\ & KD & – & – & 1600 & 70 \\ & FT & – & – & 430 & 5 \\ \hline \multirow{4}{*}{Health} & TAR & – & 83 & 250 & 8 \\ & ME & – & 148 & 2500 & 28 \\ \cline{1-1} & KD & – & 135 & 2200 & 125 \\ \cline{1-1} & FT & 3000 & 81 & 200 & 6 \\ \hline \hline \end{tabular} \end{table} TABLE VII: Exposed number of training samples \(n_{S}\) when \(\alpha=0.01\). Smaller \(n_{S}\) means better ownership verification performance. “-” means failure. Fig. 6: p-value against the number of exposed training samples \(n_{S}\). Black dotted line implies \(\alpha=0.01\). Fig. 7: Comparison between the enhanced VeriDIP equipped with the global MI attack \(\mathcal{V}_{\text{E}\text{.G}}\) (Dotted line with marker “\(\times\)”) and the enhanced VeriDIP equipped with the per-sample MI attack \(\mathcal{V}_{\text{E}\text{.P}}\) (Solid line with marker “\(\cdot\)”). (not easy to erase) in the process of model stealing. We then compare the performance of the enhanced VeriDIP equipped with the global MI attack (denoted as \(\mathcal{V}_{\text{E-G}}\)) with the enhanced VeriDIP equipped with the per-sample MI attack (denoted as \(\mathcal{V}_{\text{E-P}}\)) and plot the p-value against \(n_{S}\) in Figure 7. **Compared with the basic VeriDIP where \(\mathcal{V}_{\text{E}}\) is superior to \(\mathcal{V}_{\text{G}}\) for all tasks, the behavior of \(\mathcal{V}_{\text{E-P}}\) and \(\mathcal{V}_{\text{E-G}}\) is more complex in enhanced VeriDIP.** For instance, in Figure 7(a) and Figure 7(b), \(\mathcal{V}_{\text{E-G}}\) shows surprisingly better performance than \(\mathcal{V}_{\text{E-P}}\), but the opposite is true for the Health and Adult databases. Particularly for the Adult database (see Figure 7(c)), \(\mathcal{V}_{\text{E-G}}\) fails to identify all positive models. Investigating the attack ability of MI attacks on different types of databases is beyond the scope of this work. However, we can conclude that the enhanced VeriDIP equipped with the global MI attack is more than sufficient to prove ownership of models trained on CIFAR-10 and FMNIST databases. For models that are barely overfitted, such as those trained on the Adult and Health databases, the enhanced VeriDIP equipped with the per-sample MI attack is a better choice. #### 5.4.4 Comparisons with State-of-the-art Dataset Inference (DI) [23] is the most similar to our idea, but differs in terms of model fingerprint extraction methods. Therefore, we compare our verification performance and costs with DI both functionally and experimentally. The result are show in Table VIII and Table IX. 
We summarize the results in the following aspects: First, VeriDIP is applicable to tabular trained DNN models, while DI is not. DI uses adversarial noise as fingerprints, but finding the adversarial noise is not trivial for models trained on tabular data. Tabular data may contain a combination of continuous, discrete, and categorical features, making it difficult to calculate adversarial noise through gradient descent. VeriDIP, on the other hand, only requires querying the DNN model's prediction probability, making it applicable to all classifiers. Second, compared to DI, VeriDIP significantly reduces the number of required queries during ownership verification, making it immune to the detector attack [28]. DI requires querying the suspect model \(n_{S}\times n_{\text{adv}}\times T\) times to obtain a model fingerprint. However, this can raise suspicion from pirated APIs, leading to refusals to answer or adding noise to the responses. Here, \(n_{S}\) denotes the number of exposed training samples, \(n_{\text{adv}}\) is the number of repeated adversarial attacks per sample, and \(T\) is the number of queries for one adversarial attack. In the original setting of [28], \(n_{\text{adv}}=30\) and \(T=50\). Table IX lists the experimental results for identifying target models in CIFAR-10 and FMNIST. We do not provide the results for Adult and Health datasets because DI does not support them. Consequently, VeriDIP achieves similar or better performance with significantly fewer exposed training samples (two orders of magnitude less than DI). Third, VeriDIP can be directly linked to the definition of DP, as the privacy leakage estimated by MI attacks serves as a lower bound for the privacy budget \(\epsilon\) in DP (see analysis in Section 4.4). In contrast, DI leaves the connection to DP as an open question. #### 5.4.5 Differential Privacy Relationship In this section, we experimentally discuss the effectiveness of VeriDIP on DP machine learning models, which is also a remaining problem addressed in [23]. For this evaluation, we select the enhanced VeriDIP models \(\mathcal{V}\)E-P and \(\mathcal{V}\)E-G due to their improved performance. **Experiment setup.** We use the DP Adam optimizer [47] to train DP machine learning models and compose the privacy budget using RDP techniques [48]. In each iteration, we first clip gradient norm with the threshold \(C\), then add Gaussian noise with scale \(\sigma=\text{z}\ast C\) (see Table XIII) where \(\text{z}\) stands for the noise multiplier. We adjust different pairs of hyper-parameters \((C,\text{z})\) to trade off privacy vs. utility. For each dataset, we choose two privacy budget options for \((\epsilon,\delta)\), such that \((0.5,10^{-5})\) and \((1.0,10^{-5})\), where \(\delta\) is usually set to be the inverse of the number of training sets, as shown in [47]. These options are commonly used in training DP machine learning models. A smaller privacy budget \(\epsilon\) indicates a higher privacy protection level (yet lower model utility). The hyper-parameters that are related to training DP models and testing the accuracy of DP models are listed in Table XIII. Note that, the configuration of model stealing attacks are identical to the former's (see Section 5.1). Recall the theoretical analysis in Section 4.4, we bound the privacy budgets align with the VeriDIP's performance, for instance, \(\epsilon=0.1\) result in \(P>0.156\). 
Thus, we first \begin{table} \begin{tabular}{c c c c c} \hline \hline Database & (\(\epsilon\), \(\delta\)) & epoch & \((C,\text{z})\) & TestAcc \\ \hline \multirow{2}{*}{CIFAR-10} & \((1.0,10^{-5})\) & 60 & (5e-4.2,1) & \(84.79\%\) \\ & \((0.5,10^{-5})\) & 60 & (5e-4.4,1) & \(84.49\%\) \\ \hline \multirow{2}{*}{FMNIST} & \((1.0,10^{-5})\) & 19 & (5e-3,1,2) & \(90.54\%\) \\ & \((0.5,10^{-5})\) & 20 & (5e-3,1,9) & \(90.00\%\) \\ \hline \multirow{2}{*}{Health} & \((1.0,10^{-5})\) & 50 & (1e-3,4.9) & \(86.97\%\) \\ & \((0.5,10^{-5})\) & 50 & (1e-3,9.7) & \(86.92\%\) \\ \hline \multirow{2}{*}{Adult} & \((1.0,10^{-5})\) & 70 & (1e-3,7.9) & \(84.69\%\) \\ & \((0.5,10^{-5})\) & 60 & (1e-3,14.9) & \(84.73\%\) \\ \hline \hline \end{tabular} \end{table} TABLE IX: Hyper-parameters and test accuracy for DP models. \(\text{z}\): noise multiplier, \(C\):clipping threshold. \begin{table} \begin{tabular}{c c c c} \hline \hline & \begin{tabular}{c} Immune to \\ detector attack \\ \end{tabular} & \begin{tabular}{c} Support table- \\ trained models \\ \end{tabular} & \begin{tabular}{c} Directly \\ link to DP \\ \end{tabular} \\ \hline DI & no & no & no \\ Ours & yes & yes & yes \\ \hline \hline \end{tabular} \end{table} TABLE VIII: Functional comparison with Dataset Inference [23]. experiment with \(\epsilon=0.1\) and find all DP models experienced a substantial loss in functionality. Particularly for CIFAR-10, the \((0.1,10^{-5})\)-DP model achieved only \(76.71\%\) test accuracy, compared with the non-DP benchmark, it loses approximately \(10\%\) of the accuracy. In accordance with the theoretical analysis, none of these models can be verified for ownership using VeriDIP. However, protecting the copyright of DP models becomes less meaningful without preserving utility, which motivated us to focus on evaluating the effectiveness of VeriDIP on more useful DP models. Based on our analysis, when \(\epsilon=0.5\), the limitation on the p-value is already negligible. We then experiment with \(\epsilon=0.5\) and \(\epsilon=1.0\) and Table XI presents the main result for VeriDIP on \((0.5,10^{-5})\)-DP and \((1.0,10^{-5})\)-DP models. Additionally, Figure 8 illustrates the comparisons of p-values against \(n_{S}\) curves for these DP models and non-DP models. Note that the fine-tuning attack [10] fails to steal a functionally-preserving DNN model trained with Adam optimizer, which is why the fourth row of CIFAR-10 is empty. **VeriDIP is as effective on utility-preserving DP models as it is on non-DP models**. Comparing the model utility presented in Table XI and Table IV, we found that, by carefully choosing DP hyper-parameters, all DP models show comparable utility with non-DP baselines. From Table XI and Figure 8, we can see that the effectiveness of \(\mathcal{V}_{\text{E-G}}\) and \(\mathcal{V}_{\text{E-P}}\) on CIFAR-10 and FMNIST are hardly affected by the noise injected by DP. While on Adult and Health datasets, more strict privacy protection may increase the number of exposed training samples. In Table XI, the number of exposed samples \(n_{S}\) of \((0.5,10^{-5})\)-DP models is higher than that of \((1.0,10^{-5})\)-DP models. This indicates that there is a trade-off between privacy protection and copyright protection, especially for those barely overfitted models. Since there is a subtle balance between privacy protection and copyright protection in generalized models, we study the behavior of VeriDIP varying different DP hyper-parameters for Adult and Health datasets. 
In particular, We study two types of DP hyper-parameters: DP clipping threshold \(C\) and the number of training epochs, and analyze their influence on VeriDIP. **(1) DP clipping threshold \(C\).**\(C\) represents the clipping threshold for batch gradients in each training iteration. We conducted experiments with different values of \(C\) as it does not affect the value of \(\epsilon\) but impacts the training performance. We kept the noise multiplier z and the number of training epochs fixed for \(\epsilon=0.5\). The p-value against \(n_{S}\) curve comparisons are depicted in Figure 9. From the figures, we observe that certain choices of \(C\) lead to the failure of VeriDIP, such as \(C=10^{-1}\), \(C=10^{-2}\), and \(C=10^{-5}\) in Figure 9(a), and \(C=10^{-1}\) in Figure 9(b). Excessively large or small values of \(C\) have a detrimental effect on the effectiveness of VeriDIP. A large \(C\) introduces \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Models} & \multicolumn{2}{c}{\(\epsilon=0.5\)} & \multicolumn{2}{c}{\(\epsilon=1.0\)} \\ \cline{3-6} & & \(n_{S}\) & p-value & \(n_{S}\) & p-value \\ \hline \multirow{3}{*}{CIFAR-10} & TAR & 5 & \(10^{-4}\) & 5 & \(10^{-4}\) \\ & ME & 5 & \(10^{-3}\) & 5 & \(10^{-4}\) \\ & KD & 5 & \(10^{-3}\) & 5 & \(10^{-4}\) \\ & FT & – & – & – & – \\ \hline \multirow{3}{*}{FMNIST} & TAR & 5 & \(10^{-6}\) & 5 & \(10^{-6}\) \\ & ME & 5 & \(10^{-3}\) & 5 & \(10^{-3}\) \\ & KD & 5 & \(10^{-3}\) & 5 & \(10^{-3}\) \\ & FT & 5 & \(10^{-4}\) & 5 & \(10^{-6}\) \\ \hline \multirow{3}{*}{Adult} & TAR & 5 & \(10^{-3}\) & 5 & \(10^{-4}\) \\ & ME & 35 & \(10^{-3}\) & 25 & \(10^{-3}\) \\ & KD & 75 & \(10^{-3}\) & 55 & \(10^{-3}\) \\ & FT & 15 & \(10^{-3}\) & 5 & \(10^{-4}\) \\ \hline \multirow{3}{*}{Health} & TAR & 15 & \(10^{-4}\) & 15 & \(10^{-4}\) \\ & ME & 175 & \(10^{-3}\) & 55 & \(10^{-3}\) \\ \cline{1-1} & KD & 135 & \(10^{-3}\) & 75 & \(10^{-3}\) \\ \cline{1-1} & FT & 15 & \(10^{-3}\) & 5 & \(10^{-3}\) \\ \hline \hline \end{tabular} \end{table} TABLE XI: Verification performance of the enhanced VeriDIP on DP models. Fig. 8: Performance of The Enhanced VeriDIP \(\mathcal{V}_{\text{E-G}}\) and \(\mathcal{V}_{\text{E-P}}\) on DP IP-protected models. excessive noise due to the noise scale \(\sigma=\mathrm{z}*C\). Conversely, a small \(C\) restricts the gradient magnitude in each iteration, thereby affecting the model's learning process. Hence, we encourage model owners to explore various choices of \(C\) to determine the optimal value when training a DNN model with both privacy protection and copyright protection. **(2) Number of training epochs.** In addition to \(C\), the model trainer has two options to achieve the same privacy protection: (a) more training epochs but less noise for each iteration. (b) less training epochs but more noise for each iteration. Thus, we compare these options and the results are shown in Figure 10. As a result, we find that option (a) has better VeriDIP performance for the DNN models than option (b). To summarize, the enhanced VeriDIP is effective on DP-protected DNN models. Some privacy-preserving models may double or triple the number of exposed training samples in VeriDIP as a trade-off. Besides, carefully selecting the DP hyperparameters is crucial for model owners to simultaneously benefit from privacy protection and copyright protection. 
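To summarize the mechanism being tuned in this subsection, the sketch below shows a single differentially private update of the kind used here (per-sample gradient clipping to norm \(C\), followed by Gaussian noise of scale \(\sigma=\mathrm{z}*C\)), together with the minimum p-value implied by Equation (10). It is a schematic NumPy illustration with placeholder values for \(\sigma_{0}\) and \(\sigma_{1}\), not the DP Adam implementation of [47].

```python
import numpy as np
from scipy.stats import norm

def dp_noisy_gradient(per_sample_grads, C, z, rng):
    """One DP step: clip each per-sample gradient to norm C, average,
    and add Gaussian noise with scale sigma = z * C (z: noise multiplier)."""
    clipped = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12)) for g in per_sample_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, z * C / len(clipped), size=mean_grad.shape)
    return mean_grad + noise

def min_p_value(epsilon, n_s, sigma0=0.25, sigma1=0.25):
    """Eq. (10): lower bound on the p-value attainable against an epsilon-DP model.
    sigma0 and sigma1 are placeholder standard deviations of the attack outputs."""
    return 1.0 - norm.cdf((np.exp(epsilon) - 1.0) * np.sqrt(n_s)
                          / np.sqrt(sigma0**2 + sigma1**2))

rng = np.random.default_rng(0)
grads = rng.normal(size=(500, 8))                  # hypothetical per-sample gradients
print(dp_noisy_gradient(grads, C=1e-3, z=4.1, rng=rng)[:3])
print(min_p_value(0.1, 10), min_p_value(0.5, 10))  # stricter budgets force larger p-values
```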
## 6 Conclusion and Future Work Directions **Conclusion of This Paper.** The increasing prevalence of model-stealing attacks poses a significant threat to the protection of neural network models' copyrights. In this work, we propose a novel ownership testing framework for DNN models, VeriDIP, along with its enhanced version, to combat model plagiarism. VeriDIP leverages privacy leakage as a natural fingerprint for verifying DNN model ownership. The enhanced VeriDIP utilizes a reduced amount of private data to estimate the worst-case privacy leakage of models, serving as enhanced model fingerprints. Our comprehensive experiments demonstrate that the enhanced VeriDIP achieves a true positive rate of \(100\%\) and a false positive rate of \(0\) in accurately identifying positive models (victim models and their stolen copies) as opposed to negative models (independent models), requiring a minimum of 5 data samples during the verification process. Furthermore, the enhanced VeriDIP effectively addresses an open problem concerning the protection of the copyright of any utility-preserved differentially private models. **Future Work Directions.** We list the following potential future work directions for this paper. 1. Quantitative standard for the Number of Shadow Models Required. In this paper, in order to identify less private data for the enhanced VeriDIP, we trained \(100\) shadow models for each mentioned dataset. It is important to note that this empirical number of shadow models may vary depending on the specific datasets. Therefore, it would be valuable to propose a quantitative standard for determining the appropriate number of shadow models based on the characteristics of the given datasets. 2. Extending to other data domains. While our study primarily focuses on image and tabular data, future research can explore the applicability of VeriDIP to other data types and domains. This could include natural language processing, audio data, or even more specialized domains such as genomics or finance. 3. Efficiency improvement. Future work can focus on enhancing the efficiency of the VeriDIP framework by reducing the computation costs associated with finding less private data. These efforts will contribute to minimizing the computational overhead and making the framework more practical for real-world deployment.
2309.04740
Generic Mott-Hubbard phase diagram for extended Hubbard models without Umklapp scattering
We determine the ground-state phase diagram for the 1/r-Hubbard model with repulsive nearest-neighbor interaction at half band-filling using the density-matrix renormalization group (DMRG) method. Due to the absence of Umklapp scattering, the phase diagram displays finite regions for the three generic phases, namely, a Luttinger liquid metal for weak interactions, a Mott-Hubbard insulator for dominant Hubbard interactions, and a charge-density-wave insulator for dominant nearest-neighbor interactions. Up to moderate interaction strengths, the quantum phase transitions between the metallic and insulating phases are continuous, i.e., the gap opens continuously as a function of the interaction strength. We conclude that generic short-range interactions do not change the nature of the Mott transition qualitatively.
Florian Gebhard, Kevin Bauerbach, Örs Legeza
2023-09-09T09:58:22Z
http://arxiv.org/abs/2309.04740v1
# Generic Mott-Hubbard phase diagram for extended Hubbard models ###### Abstract We determine the ground-state phase diagram for the \(1/r\)-Hubbard model with repulsive nearest-neighbor interaction at half band-filling using the density-matrix renormalization group (DMRG) method. Due to the absence of Umklapp scattering, the phase diagram displays finite regions for the three generic phases, namely, a Luttinger liquid metal for weak interactions, a Mott-Hubbard insulator for dominant Hubbard interactions, and a charge-density-wave insulator for dominant nearest-neighbor interactions. Up to moderate interactions strengths, the quantum phase transitions between the metallic and insulating phases are continuous, i.e., the gap opens continuously as a function of the interaction strength. We conclude that generic short-range interactions do not change the nature of the Mott transition qualitatively. ## I Overview After a short introduction in Sect. I.1, we present in Sect. I.2 the generic Mott-Hubbard phase diagram for extended Hubbard models without Umklapp scattering, the central result of our work. The corresponding model and its ground-state properties are discussed in the remainder of this work, as outlined in Sect. I.3. ## II Introduction The Mott transition is one of the long-standing problems in condensed-matter many-body physics [1; 2]. As formalized in the Hubbard model [3; 4; 5], an electronic system with a single-band of width \(W\) and a purely local interaction of strength \(U\) will be a metal for weak interactions, \(W\ll U\), and an insulator for strong interactions, \(U\gg W\). As argued by Mott early on [6], there must be a metal-to-insulator transition, generically at \(U_{\rm c}\approx W\) when the two energy scales are comparable, irrespective of magnetic or charge order. The quantitative analysis of a quantum phase transition in an interacting many-particle system is notoriously difficult. Concomitantly, analytical solutions are scarce even for the simplest models and in one spatial dimension [2; 7; 8]. Numerical approaches in finite dimensions are hampered by finite-size effects so that the calculation of ground-state quantities is also best performed for one-dimensional model systems. In one dimension, the numerical density-matrix renormalization group (DMRG) method provides accurate data for large enough systems with the order of hundred lattice sites and particles [9; 10; 11; 12; 13]. In some respects, one-dimensional systems behave qualitatively different from their three-dimensional counterparts. Most importantly, they generically display the perfect-nesting instability because the two Fermi points at half band-filling are connected by half a reciprocal lattice vector. Umklapp scattering turns the system insulating as soon as the (effective) interaction of the particles becomes finite [14]. Therefore, \(U_{\rm c}=0^{+}\) is the generic situation [2; 7; 8], in contrast to Mott's expectations. Correspondingly, the phase diagram for the one-dimensional Hubbard model does not contain a finite metallic region. When the Hubbard model is extended by the inclusion of a nearest-neighbor interaction, the ground-state phase diagram becomes more varied but one can only study quantum phase transitions between Mott-Hubbard, charge-density-wave (CDW) insulator, and bond-order-wave (BOW) insulator [15; 16; 17]. For more information on density waves in strongly correlated quantum chains, see Ref. [18]. 
To avoid Umklapp scattering at weak coupling, one can investigate models with only right-moving electrons that display only one Fermi point. A known example is the \(1/r\)-Hubbard model with its linear dispersion relation within the Brillouin zone [2; 19; 20]. Indeed, as indicated analytically [19; 21] and recently corroborated using DMRG [22], the critical interaction strength for the Mott transition is finite in the \(1/r\)-Hubbard model. Therefore, we can study the competition of the metallic and insulating phases and the corresponding quantum phase transitions using the extended \(1/r\)-Hubbard model in one dimension. The resulting phase diagram should be generic in the sense that each phase covers a finite region in the ground-state phase diagram, as is expected for a three-dimensional system at half band-filling without Umklapp scattering. ### Phase diagram The phase diagram in Fig. 1 depicts the central result of our work. It shows the generic Mott-Hubbard phase diagram for extended Hubbard models without Umklapp scattering. Derived for the special case of the extended \(1/r\)-Hubbard model, the phase diagram displays finite regions for the generic phases of an interacting electron system with a single half-filled band of width \(W\equiv 1\) and with tunable local interaction \(U\) and nearest-neighbor interaction \(V\). As can be argued using weak-coupling and strong-coupling perturbation theory, there should be a metallic phase at weak interactions, \(U,V\ll W\), that becomes unstable against a Mott-Hubbard insulator for dominant Hubbard interaction, \(U\gg V,W\), or against a charge-density-wave (CDW) insulator for dominant nearest-neighbor interactions, \(V\gg U,W\). The critical interactions for the corresponding quantum phase transitions should be finite, the competing interactions being of the same order of magnitude. Indeed, when the Coulomb interactions are dominant, \(U,V\gg W\), the separation line between Mott-Hubbard insulator and CDW insulator should be \(V=U/2\). The corresponding curve is included as a dashed line in Fig. 1. For large \(U,V\), we find \(V_{\rm c}(U)\gtrsim U/2\), with small deviations in favor of the Mott-Hubbard insulator. For this reason, we only show the phase diagram for \(U\leq 1.6\). A bond-order wave might separate the two insulating phases, as is found in the one-dimensional extended Hubbard model [15; 16; 17]. Therefore, the line separating Mott-Hubbard insulator and charge-density-wave insulator should be taken as a guide to the eye only. In our study we focus on the transitions between the metallic Luttinger liquid and the two insulating phases. We determine \(V_{\rm c}(U)\) for fixed \(0\leq v=V/U\leq 0.7\) with increment \(\Delta v=0.1\), and for fixed \(U=0.2\); for the meaning of the error bars in Fig. 1, see Sect. IV. We note the following. * In the absence of a nearest-neighbor interaction, the Mott-Hubbard transition is known to occur at \(U_{\rm c}(V=0)=1\)[19; 2] which is well reproduced using DMRG [22]. The repulsive nearest-neighbor interaction _increases_ the critical interaction strength, i.e., the inclusion of the nearest-neighbor interaction stabilizes the _metallic_ phase. Apparently, the additional repulsive nearest-neighbor interaction softens the two-particle scattering potential that is purely local in the bare Hubbard model. As a major result we find that the Mott transition remains continuous in the presence of a nearest-neighbor interaction. 
We presume that short-range interactions that decrease as a function of the particle distance will not fundamentally alter this behavior.
* The transition from the Luttinger liquid metal to the charge-density-wave insulator is fairly common in the sense that even Hartree-Fock theory qualitatively reproduces the transition for not too large interactions. In Fig. 1, the corresponding Hartree-Fock prediction is shown as a dotted line. As usual, Hartree-Fock theory overestimates the stability of the ordered state and thus underestimates the critical interaction, \(V_{\rm c,CDW}^{\rm HF}(U)<V_{\rm c,CDW}(U)\). Since the metallic phase extends well beyond the line \(V=U/2\), there is no indication for a bond-order wave that might separate the Luttinger liquid and the charge-density-wave insulator.

We use a third-order spline interpolation through the data points to draw the phase transition lines in Fig. 1. The full lines depict continuous quantum phase transitions in the sense that the gaps open and close continuously at the same critical interaction when the transition is approached from the metallic and insulating sides, respectively. The endpoint of both continuous lines where all three phases meet deserves special attention. Unsurprisingly, finite-size corrections are most severe in this region of phase space, and the study of the region around the tricritical point is cumbersome and beyond the scope of our presentation.

Figure 1: Phase diagram of the one-dimensional extended \(1/r\)-Hubbard model; energies in units of the bandwidth, \(W=1\). Dots: estimate for the critical interaction, \(\bar{U}_{\rm c}\), with error bounds; continuous lines: spline interpolations through the dots as guide to the eye; dotted line: Hartree-Fock (HF) result for the transition between metal and charge-density-wave insulator.

### Outline

Our work is organized as follows. In Sect. II we define the Hubbard model with long-range electron transfers and onsite and nearest-neighbor Coulomb interactions. We introduce the ground-state properties of interest, namely, the ground-state energy, the two-particle gap, the momentum distribution, and the density-density correlation function from which we determine the Luttinger parameter in the metallic phase and the CDW order parameter. In Sect. III we present results for the ground-state properties and discuss their finite-size dependencies and extrapolations to the thermodynamic limit where appropriate. In Sect. IV we focus on the Mott-Hubbard transition in the presence of a nearest-neighbor interaction. We propose and discuss several methods to extract the critical interaction strength for the Mott transition based on the ground-state energy, the two-particle gap, the Luttinger parameter, and the structure factor whereby we study the Mott transition at fixed \(v\equiv V/U\) in the range \(0\leq v\leq 0.7\) (increment \(\Delta v=0.1\)) in units of the bandwidth, \(W\equiv 1\). In addition, we address the Mott transition as a function of \(V\) for fixed \(U=0.2\) and \(U=1.7\). Short conclusions, Sect. V, close our presentation. The Hartree-Fock calculations for the CDW transition are collected in the appendix.

## II Hubbard model with linear dispersion

### Hamiltonian

In this work, we address the \(1/r\)-Hubbard model [2; 19] with nearest-neighbor interactions \[\hat{H}=\hat{T}+U\hat{D}+V\hat{V} \tag{1}\] on a ring with \(L\) sites (\(L\): even). We discuss the kinetic energy and the Coulomb interaction terms separately.
#### ii.1.1 Kinetic energy

The kinetic energy describes the tunneling of electrons with spin \(\sigma=\uparrow,\downarrow\) along a ring with \(L\) sites, \[\hat{T} = \sum_{\begin{subarray}{c}l,m=1\\ l\neq m;\sigma\end{subarray}}^{L}t(l-m)\hat{c}_{l,\sigma}^{+}\hat{c}_{m,\sigma}\;, \tag{2}\] \[t(r) = (-{\rm i}t)\frac{(-1)^{r}}{d(r)}\;,\] \[d(r) = \frac{L}{\pi}\sin\left(\frac{\pi r}{L}\right)\;. \tag{3}\] The creation and annihilation operators \(\hat{c}_{l,\sigma}^{+}\), \(\hat{c}_{l,\sigma}\) for an electron with spin \(\sigma=\uparrow,\downarrow\) on lattice site \(l\) obey the usual anti-commutation relations for fermions. In Eq. (3), \(d(l-m)\) is the chord distance between the sites \(l\) and \(m\) on a ring. In the thermodynamic limit and for \(|l-m|\ll L\) fixed, we have \(d(l-m)=(l-m)+\mathcal{O}(1/L^{2})\), and the electron transfer amplitude between two sites decays inversely proportional to their distance (\(1/r\)-Hubbard model!). Since \(L\) is even, we have anti-periodic electron transfer amplitudes because \(d(L+r)=-d(r)\). Therefore, we must choose anti-periodic boundary conditions \[\hat{c}_{L+l,\sigma}=-\hat{c}_{l,\sigma} \tag{4}\] for the operators, too. With these boundary conditions, the kinetic energy operator is diagonal in Fourier space, \[\hat{C}_{k,\sigma}^{+} = \frac{1}{\sqrt{L}}\sum_{l=1}^{L}e^{{\rm i}kl}\hat{c}_{l,\sigma}^{+}\;,\] \[\hat{c}_{l,\sigma}^{+} = \frac{1}{\sqrt{L}}\sum_{k}e^{-{\rm i}kl}\hat{C}_{k,\sigma}^{+}\;,\] \[k = \frac{(2m+1)\pi}{L}\;,\;m=-\frac{L}{2},\ldots,\frac{L}{2}-1\;, \tag{5}\] so that \[\hat{T}=\sum_{k,\sigma}\epsilon(k)\hat{C}_{k,\sigma}^{+}\hat{C}_{k,\sigma}\;, \quad\epsilon(k)=tk\;. \tag{6}\] The dispersion relation of the \(1/r\)-Hubbard model is linear. We set \[t=\frac{1}{2\pi} \tag{7}\] so that the bandwidth is unity, \(W\equiv 1\). In this work, we focus on the case of a paramagnetic half-filled ground state where we have the same number of electrons per spin species, \(N_{\uparrow}=N_{\downarrow}\), that equals half the number of lattice sites, \(N_{\sigma}=L/2\) (\(\sigma=\uparrow,\downarrow\)).

#### ii.1.2 Coulomb interaction

The Coulomb interaction is parameterized by two terms in Eq. (1). The on-site (Hubbard) interaction [3; 4; 5] acts locally between two electrons with opposite spins, \[\hat{D}=\sum_{l=1}^{L}\hat{n}_{l,\uparrow}\hat{n}_{l,\downarrow}\;,\quad\hat{n}_{l,\sigma}=\hat{c}_{l,\sigma}^{+}\hat{c}_{l,\sigma}\;, \tag{8}\] where \(\hat{n}_{l,\sigma}\) counts the number of electrons with spin \(\sigma\) on site \(l\), and \(\hat{n}_{l}=\hat{n}_{l,\uparrow}+\hat{n}_{l,\downarrow}\) counts the number of electrons on site \(l\). The corresponding operators for the total number of electrons with spin \(\sigma=\uparrow,\downarrow\) are denoted by \(\hat{N}_{\sigma}=\sum_{l}\hat{n}_{l,\sigma}\), and \(\hat{N}=\hat{N}_{\uparrow}+\hat{N}_{\downarrow}\). To discuss the influence of the extended nature of the Coulomb interaction, we consider the case of pure nearest-neighbor interactions, \[\hat{V}=\sum_{l=1}^{L}(\hat{n}_{l}-1)(\hat{n}_{l+1}-1)\;, \tag{9}\] where we disregard the long-range parts of the Coulomb interaction for distances \(|l-m|\geq 2\). The model in Eq. (1) describes the 'extended' \(1/r\)-Hubbard model with on-site interaction \(U\) and nearest-neighbor interaction \(V\). As we shall show in this work, the Mott-Hubbard transition at half band-filling remains continuous in the presence of short-range interactions.
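As a quick consistency check, independent of the DMRG calculations described later, the hopping matrix of Eqs. (2)-(3) can be diagonalized numerically; the following Python sketch, with an arbitrarily chosen system size, confirms the linear dispersion of Eq. (6) on the anti-periodic momenta of Eq. (5).

```python
import numpy as np

# Single-particle check of Eqs. (2)-(7): the eigenvalues of the hopping matrix
# t(l - m) should coincide with eps(k) = t*k on the anti-periodic momenta of
# Eq. (5).  The system size below is an arbitrary choice.
L = 32                               # even number of lattice sites
t = 1.0 / (2.0 * np.pi)              # Eq. (7): bandwidth W = 2*pi*t = 1

def d(r):                            # chord distance, Eq. (3)
    return (L / np.pi) * np.sin(np.pi * r / L)

H = np.zeros((L, L), dtype=complex)  # hopping amplitudes t(r) = (-i t)(-1)^r / d(r)
for l in range(L):
    for m in range(L):
        if l != m:
            H[l, m] = (-1j * t) * (-1.0) ** (l - m) / d(l - m)

eigenvalues = np.sort(np.linalg.eigvalsh(H))

k = (2.0 * np.arange(-L // 2, L // 2) + 1.0) * np.pi / L    # Eq. (5)
dispersion = np.sort(t * k)                                  # Eq. (6)

print("max |eigenvalue - t*k| :", np.max(np.abs(eigenvalues - dispersion)))
print("finite-size bandwidth  :", eigenvalues[-1] - eigenvalues[0],
      "(tends to W = 1 for large L)")
```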
For not too large interactions and for \(V\lesssim U/2\), the model contains a transition from the Luttinger-liquid metal to the Mott-Hubbard insulator. For larger nearest-neighbor interactions, the model eventually describes transitions from the metallic state to a charge-density-wave (CDW) insulator. For strong interactions, \(U\gg W\), the model contains a transition from the Mott-Hubbard insulator to the CDW insulator around \(V\approx U/2\). We study several values for the ratio \(v=V/U\), namely, \(v=0,0.1,0.3,0.4,0.5,0.6,0.7\) for weak to strong nearest-neighbor interactions. Since we scan the value of \(U\), we must limit the number of values for \(v\) to keep the numerical effort within bounds when we include systems up to \(L_{\rm max}=80\) lattice sites; when finite-size effects are well behaved, e.g., for the ground-state energy, we limit our investigations to \(L=64\). Moreover, we scan \(V\) for fixed \(U=0.2\) and \(U=1.7\) to study the Mott transition as a function of the nearest-neighbor interaction. #### ii.1.3 Particle-hole symmetry Under the particle-hole transformation \[\hat{c}_{l,\sigma}\mapsto\hat{c}_{l,\sigma}^{+}\quad,\quad\hat{n}_{l,\sigma} \mapsto 1-\hat{n}_{l,\sigma}\;, \tag{10}\] the kinetic energy remains unchanged, \[\hat{T} \mapsto \sum_{\begin{subarray}{c}l,m=1\\ l\neq m;\sigma\end{subarray}}^{L}t(l-m)\hat{c}_{l,\sigma}\hat{c}_{m,\sigma}^{ +} \tag{11}\] \[= \sum_{\begin{subarray}{c}l,m=1\\ l\neq m;\sigma\end{subarray}}^{L}\left[-t(m-l)\right]\hat{c}_{l,\sigma}^{+}\hat{c }_{m,\sigma}=\hat{T}\] because \(t(-r)=-t(r)\). Furthermore, \[\hat{D}\mapsto\sum_{l=1}^{L}(1-\hat{n}_{l,\uparrow})(1-\hat{n}_{l,\downarrow} )=\hat{D}-\hat{N}+L\;, \tag{12}\] and \[\hat{V}\mapsto\hat{V}\;. \tag{13}\] Therefore, \(\hat{H}(N_{\uparrow},N_{\downarrow})\) has the same spectrum as \(\hat{H}(L-N_{\uparrow},L-N_{\downarrow})-U(2L-N)+LU\), where \(N=N_{\uparrow}+N_{\downarrow}\) is the particle number. ### Ground-state properties We are interested in the metal-insulator transition at half band-filling where the metallic Luttinger liquid for weak interactions turns into a paramagnetic Mott insulator for large interactions at some finite value \(U_{\rm c}(V)\) when \(V\) is small enough, or to a CDW insulator for strong nearest-neighbor interactions. The metal-insulator transition can be inferred from the finite-size extrapolation of the ground-state energy and of the two-particle gap [22]. Alternatively, the Luttinger parameter [23] and the finite-size extrapolation of the structure factor at the Brillouin zone boundary permit to determine the critical interaction strength. Moreover, the charge-density-wave state can be monitored by the CDW order parameter. In this section, we also introduce the momentum distribution for finite systems that is also accessible via DMRG. #### ii.2.1 Ground-state energy and two-particle gap We denote the ground-state energy by \[E_{0}(N,L;U,V)=\langle\Psi_{0}|\hat{H}|\Psi_{0}\rangle \tag{14}\] for given particle number \(N\), system size \(L\), and interaction parameters \(U,V\). Here, \(|\Psi_{0}\rangle\) is the normalized ground state of the Hamiltonian (1). We are interested in the thermodynamic limit, \(N,L\to\infty\) with \(n=N/L\) fixed. We denote the ground-state energy per site and its extrapolated value by \[e_{0}(N,L;U,V) = \frac{1}{L}E_{0}(N,L;U,V)\;,\] \[e_{0}(n;U,V) = \lim_{L\to\infty}e_{0}(N,L;U,V)\;, \tag{15}\] respectively. 
The two-particle gap is defined by \[\Delta_{2}(L;U,V)=\mu_{2}^{+}(L;U,V)-\mu_{2}^{-}(L;U,V)\;, \tag{16}\] where \[\mu_{2}^{-}(L;U,V) = E_{0}(L,L;U,V)-E_{0}(L-2,L;U,V)\;,\] \[\mu_{2}^{+}(L;U,V) = E_{0}(L+2,L;U,V)-E_{0}(L,L;U,V)\] are the chemical potentials for adding the last two particles to half filling and the first two particles beyond half filling, respectively. Due to particle-hole symmetry, we have \[\mu_{2}^{-}(L;U,V)=2U-\mu_{2}^{+}(L;U,V) \tag{18}\] so that \[\Delta_{2}(L;U,V)=2\mu_{2}^{+}(L;U,V)-2U \tag{19}\] and \[\Delta_{2}(U,V)=\lim_{L\to\infty}\Delta_{2}(L;U,V) \tag{20}\] in the thermodynamic limit. We always consider the spin symmetry sector \(S=S^{z}=0\). For this reason, we study the two-particle gap rather than the single-particle gap. The two added particles repel each other so that, in the thermodynamic limit, they are infinitely separated from each other. Therefore, we have \[\Delta_{2}(U,V)=2\Delta_{1}(U,V)\;, \tag{21}\] where \(\Delta_{1}(U,V)\) is the gap for single-particle excitations. For finite systems, we expect the interaction energy \[e_{\rm R}(L;U,V)=\Delta_{2}(L;U,V)-2\Delta_{1}(L;U,V)={\cal O}(1/L)>0 \tag{22}\] to be positive, of the order \(1/L\). We verified that the interaction energy vanishes in the thermodynamic limit for the case \(V=0\)[22]. #### ii.2.2 Momentum distribution We also study the spin-summed momentum distribution in the ground state at half band-filling, \(N=L\), \[n_{k}(L;U,V) = \langle\Psi_{0}|\hat{n}_{k,\uparrow}+\hat{n}_{k,\downarrow}| \Psi_{0}\rangle \tag{23}\] \[= \sum_{l,m;\sigma}e^{{\rm i}k(l-m)}P_{l,m;\sigma}\] with \(\hat{n}_{k,\sigma}=\hat{C}^{+}_{k,\sigma}\hat{C}_{k,\sigma}\) and the single-particle density matrix \(P_{l,m;\sigma}=\langle\Psi_{0}|\hat{c}^{+}_{l,\sigma}\hat{c}_{m,\sigma}|\Psi_ {0}\rangle\). Due to particle-hole symmetry we have \[n_{k}(L;U,V)=1-n_{-k}(L;U,V) \tag{24}\] at half band-filling. Therefore, it is sufficient to study wave numbers from the interval \(-\pi<k<0\). In contrast to our previous work [22], the slope of the momentum distribution at the band edge cannot be used to trace the Mott-Hubbard transition in the extended \(1/r\)-Hubbard model because the bound state moves away from the band edge for \(V>0\). #### ii.2.3 Density-density correlation function and Luttinger parameter Lastly, we address the density-density correlation function at half band-filling, \(N=L\), \[C^{\rm NN}(r,L;U,V)=\frac{1}{L}\sum_{l=1}^{L}\bigl{(}\langle\hat{n}_{l+r}\hat{ n}_{l}\rangle-\langle\hat{n}_{l+r}\rangle\langle\hat{n}_{l}\rangle\bigr{)}\;, \tag{25}\] where \(\langle\ldots\rangle\equiv\langle\Psi_{0}|\ldots|\Psi_{0}\rangle\). The limit \(L\gg r\gg 1\) for \(U,V\ll W\) is also accessible from field theory [24; 25; 26], \[C^{\rm NN}(r\gg 1;U,V)\sim-\frac{K(U,V)}{(\pi r)^{2}}+\frac{A(U,V)(-1)^{r}}{r ^{1+K}[\ln(r)]^{3/2}}+\ldots\;, \tag{26}\] where \(A(U,V)\) is a constant that depends on the interaction but not on the distance \(r\). We extract the Luttinger exponent \(K(U,V)\) from the structure factor, \[\tilde{C}^{\rm NN}(q,L;U,V)=\sum_{r=0}^{L-1}e^{-{\rm i}qr}C^{\rm NN}(r,L;U,V)\;, \tag{27}\] where the wave numbers are from momentum space, \(q=(2\pi/L)m_{q}\), \(m_{q}=-L/2,-L/2+1,\ldots,L/2-1\). By construction, \(\tilde{C}^{\rm NN}(q=0,L;U,V)=0\) because the particle number is fixed, \(N=L\) in the half-filled ground state. 
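To illustrate how Eqs. (25), (27), and (30) are evaluated in practice, the following sketch computes the structure factor and the finite-size Luttinger parameter from a density-density correlation function. As a test input we use the non-interacting correlations at \(U=V=0\), obtained here from Wick's theorem (an ingredient not spelled out in the text), for which the Luttinger parameter should come out as \(K=1\); the system size is an arbitrary choice.

```python
import numpy as np

# Evaluate Eqs. (25), (27), and (30) for the non-interacting system U = V = 0.
# There the connected density-density correlations follow from Wick's theorem,
#   C^NN(0) = 1/2 ,   C^NN(r != 0) = -2 |P(r)|^2 ,
# with the one-particle density matrix P(r) of the Fermi sea, and the Luttinger
# parameter should come out as K = 1.  The system size is an arbitrary choice.
L = 64                                                     # N = L (half band-filling)
k_occ = (2.0 * np.arange(-L // 2, 0) + 1.0) * np.pi / L    # occupied momenta, eps(k) < 0

r = np.arange(L)
P = np.exp(1j * np.outer(r, k_occ)).sum(axis=1) / L        # P(r) = <c^+_{l+r} c_l>

C = -2.0 * np.abs(P) ** 2                                  # Eq. (25) via Wick's theorem
C[0] = 0.5

m_q = np.arange(-L // 2, L // 2)                           # momenta of Eq. (27)
q = 2.0 * np.pi * m_q / L
Cq = np.array([np.sum(np.exp(-1j * qq * r) * C) for qq in q]).real

K = 0.5 * L * Cq[list(m_q).index(1)]                       # Eq. (30), q = 2*pi/L
print("Luttinger parameter K(L)       :", K)               # -> 1 for free fermions
print("max |C(q) - |q|/pi| over the BZ:", np.max(np.abs(Cq - np.abs(q) / np.pi)))
```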
In the thermodynamic limit, the structure factor \(\tilde{C}^{\rm NN}(q,L;U,V)\) remains of the order unity even in the CDW phase because we subtract the contributions of the long-range order in the definition (25). The transition to a charge-density-wave insulator can be monitored from the CDW order parameter. In this work, we do not study the standard CDW order parameter, \[D(L;U,V)=\frac{1}{L}\left|\sum_{r=0}^{L-1}(-1)^{r}\left(\langle\hat{n}_{r} \rangle-1\right)\right|\leq 1\;. \tag{28}\] Instead, we include all short-range contributions and address \[N_{\pi}(L;U,V)=\frac{1}{L}\sum_{r=0}^{L-1}(-1)^{r}\frac{1}{L}\sum_{l=0}^{L-1} \left(\langle\hat{n}_{r+l}\hat{n}_{l}\rangle-1\right)\;. \tag{29}\] When the charges are distributed homogeneously, \(\langle\hat{n}_{l}\rangle=1\), we have \(N_{\pi}(L;U,V)=\tilde{C}^{\rm NN}(\pi,L;U,V)/L\), and the order parameter vanishes in the metallic phase. More generally, in the thermodynamic limit we have \(N_{\pi}(U,V)=(D(U,V))^{2}\). In the \(1/r\)-Hubbard model with its long-range electron transfer, it is advantageous to analyze \(N_{\pi}(L;U,V)\) to facilitate a reliable finite-size analysis. When Eq. (26) is employed, it follows that the Luttinger parameter for finite systems, \[K(L;U,V)=\frac{L}{2}\tilde{C}^{\rm NN}(2\pi/L,L;U,V)\;, \tag{30}\] can be used to calculate the Luttinger parameter in the thermodynamic limit, \[K(U,V) = \lim_{L\to\infty}K(L;U,V) \tag{31}\] \[= \pi\lim_{q\to 0}\frac{\tilde{C}^{\rm NN}(q;U,V)}{q}\;,\] where we denote the structure factor in the thermodynamic limit by \(\tilde{C}^{\rm NN}(q;U,V)\). Using Eq. (31), the Luttinger exponent can be calculated numerically with very good accuracy [27]. The Luttinger parameter can be used to locate the metal-insulator transition in one spatial dimension. ## III Ground-state properties Before we investigate the Mott transition for the half-filled extended \(1/r\) Hubbard model in more detail in the next section, we present DMRG results for the ground-state energy, the two-particle gap, the momentum distribution, the structure factor, and the CDW order parameter. For the numerical calculations we employ a DMRG code that permits the treatment of arbitrary quantum system with long-ranged complex interactions. It uses non-Abelian symmetries and optimization protocols inherited from quantum information theory [28]. Further technical details of the DMRG implementation can be found in Ref. [22]. Note that our finite-size scaling analysis requires very accurate data. We obtain those by imposing strict accuracy settings in our DMRG code, and by restricting the largest system size to \(L_{\rm max}=80\) to limit the truncation errors. ### Ground-state energy For \(V=0\), the ground-state energy per site for finite system sizes is given by (\(n=N/L\), \(N\): even) [2; 19; 22] \[e_{0} = \frac{1}{4}n(n-1)+\frac{U}{4}n\] \[-\frac{1}{2L}\sum_{r=0}^{(N/2)-1}\sqrt{1+U^{2}-4U(2r+1-L/2)/L}\] with the abbreviation \(e_{0}\equiv e_{0}(N,L;U,V=0)\). In the thermodynamic limit and at half band-filling, \(n=1\), the ground-state energy per site becomes particularly simple, \[e_{0}(n=1;U\leq 1,V=0) = -\frac{1}{4}+\frac{U}{4}-\frac{U^{2}}{12}\;,\] \[e_{0}(n=1;U\geq 1,V=0) = -\frac{1}{12U}\;. \tag{33}\] The analytic expressions (32) and (33) are useful for a comparison with numerical data at \(V=0\). 
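As an illustration, the exact expressions can be evaluated directly; the sketch below (our own check, with arbitrary choices of \(U\) and of the system sizes) computes Eq. (32) at half band-filling and its distance from the thermodynamic limit, Eq. (33), whose decay with \(1/L\) reflects the finite-size exponents quoted below in Eq. (36).

```python
import numpy as np

# Evaluate the exact V = 0 expressions: Eq. (32) at half band-filling (N = L)
# versus the thermodynamic limit, Eq. (33).  Values of U and the system sizes
# are arbitrary choices; the finite-size correction shrinks roughly like 1/L^2
# away from U_c = 1 and like 1/L^(3/2) at U = 1, cf. Eq. (36).
def e0_finite(L, U):
    """Ground-state energy per site of the bare 1/r-Hubbard model, Eq. (32)."""
    n = 1.0                                   # half band-filling, N = L
    rr = np.arange(L // 2)
    s = np.sqrt(1.0 + U ** 2 - 4.0 * U * (2 * rr + 1 - L / 2) / L).sum()
    return 0.25 * n * (n - 1.0) + 0.25 * U * n - s / (2.0 * L)

def e0_bulk(U):
    """Thermodynamic limit at half band-filling, Eq. (33)."""
    return -0.25 + U / 4.0 - U ** 2 / 12.0 if U <= 1.0 else -1.0 / (12.0 * U)

for U in (0.5, 1.0, 2.0):
    for L in (8, 16, 32, 64, 128):
        print(f"U = {U:3.1f}  L = {L:4d}   e0(L) - e0(inf) = "
              f"{e0_finite(L, U) - e0_bulk(U):+.3e}")
```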
For finite \(V\), we can use first-order perturbation theory for weak interactions, \(U,V\ll 1\), to find \[e_{0}^{\rm PT}(U,V)=-\frac{1}{4}+\frac{U}{4}\left(1-\frac{8v}{\pi^{2}}\right) +\mathcal{O}(U^{2}) \tag{34}\] with \(v=V/U\) in the thermodynamic limit and at half band-filling. Note that Eq. (34) holds for all \(v\). We display the ground-state energy per site at half band-filling, \(e_{0}(L,L;U,V)\), as a function of the inverse system size (\(L=8,16,24,32,48,64\)) and various values of \(U\) in Fig. 2a (\(v=0.1\)), Fig. 2b (\(v=0.3\)), and Fig. 2c (\(v=0.5\)). For the extrapolation to the thermodynamic limit, we use the algebraic fit function \[e_{0}(L,L;U,V)=e_{0}(n=1;U,V)+a_{0}(U,V)\left(\frac{1}{L}\right)^{\gamma_{0}(U,V)}\;, \tag{35}\] where \(e_{0}(n=1;U,V)\) denotes the numerical estimate for the ground-state energy density in the thermodynamic limit and \(a_{0}(U,V)\) and \(\gamma_{0}(U,V)\) are the two other fit parameters. This extrapolation scheme is appropriate for \(V=0\)[22] because the ground-state energy per site scales with \((1/L)^{2}\) for \(U\neq 1\) and with \((1/L)^{3/2}\) for \(U=U_{\rm c}(V=0)=1\), as follows from Eq. (32). More generally, Figure 2: Ground-state energy per lattice site at half band-filling, \(e_{0}(L,L;U,V)\), for the extended \(1/r\)-Hubbard model as a function of \(1/L\) for \(L=8,16,24,32,48,64\) and various values for \(U\) for (a) \(v=0.1\), (b) \(v=0.3\), (c) \(v=0.5\). The continuous lines are fits to the algebraic fit function (35). The intercept of the extrapolation curves with the ordinate defines the extrapolation estimate \(e_{0}(n=1;U,V)\) in the thermodynamic limit. we assume for all \((U,V)\) \[\gamma_{0}(U,V)=\left\{\begin{array}{ll}2&\mbox{for}\;\;U\neq U_{\rm c}(V)\\ \frac{3}{2}&\mbox{for}\;\;U=U_{\rm c}(V)\end{array}\right.. \tag{36}\] These exponents apply for very large system sizes. We shall discuss the finite-size modifications in detail in Sect. IV. The extrapolated ground-state energies are shown in Fig. 3 together with the exact result for \(V=0\). For small interactions, the nearest-neighbor interaction in the particle-hole symmetric form decreases the ground-state energy because the Hartree contribution at half band-filling is subtracted in the definition of the interaction, and the Fock contribution is negative because of the exchange hole. Therefore, the linear term in the interaction \((U/4)(1-8v/\pi^{2})\), see Eq. (34), is smaller in the presence of a nearest-neighbor interaction. At large interactions, the ground-state energy approaches zero, \(\lim_{U\to\infty}e_{0}(n=1;U,V=vU)=0\), as long as the charge-density wave is absent. In the presence of a CDW, the ground-state energy is negative and proportional to \(U\), \(e_{0}(U\gg 1,V)=U(1/2-v)\). ### Two-particle gap For \(V=0\) the two-particle gap is known exactly for all system sizes [2; 19; 22], \[\Delta_{2}(L;U\geq 1,V=0)=U-1+\frac{2}{L}+\sqrt{(U-1)^{2}+\frac{4U}{L}}\;. \tag{37}\] In the thermodynamic limit, we find \[\Delta_{2}(U\geq 1,V=0)=2(U-1)\;. \tag{38}\] The gap opens linearly above the critical interaction strength, \(U_{\rm c}(U,V=0)=1\). Eq. (37) shows that the finite-size data approach the value in the thermodynamic Figure 3: Ground-state energy per lattice site at half band-filling in the thermodynamic limit, \(e_{0}(n=1;U,V)\), for the extended \(1/r\)-Hubbard model from the extrapolation to the thermodynamic limit in Fig. 2. The dashed lines represent first-order order perturbation theory for \(v=V/U=0.3,0.5,0.7\), see Eq. (34). 
The continuous line is the exact result for \(V=0\), \(e_{0}(n=1;U,V=0)\), see Eq. (33).

Figure 4: Two-particle gap \(\Delta_{2}(L;U,V)\) for the extended \(1/r\)-Hubbard model as a function of inverse system size for \(L=8,16,24,32,48,64\) and various values for \(U\) for (a) \(v=0.1\), (b) \(v=0.3\), (c) \(v=0.5\). The continuous lines are fits to the algebraic fit function (39). The intercept of the extrapolation curves with the ordinate defines the extrapolation estimate \(\Delta_{2}(U,V)\) for the two-particle gap.

limit algebraically in \(1/L\), \[\Delta_{2}(L;U,V)=\Delta_{2}(U,V)+a_{2}(U,V)\left(\frac{1}{L}\right)^{\gamma_{2}(U,V)} \tag{39}\] with \(\gamma_{2}(U\neq U_{\rm c},V=0)=1\) and \(\gamma_{2}(U=U_{\rm c},V=0)=1/2\). More generally, we assume for all \((U,V)\) \[\gamma_{2}(U,V)=\left\{\begin{array}{ll}1&\mbox{for}\ \ U\neq U_{\rm c}(V)\\ \frac{1}{2}&\mbox{for}\ \ U=U_{\rm c}(V)\end{array}\right.. \tag{40}\] As for the ground-state energy, these exponents apply for very large system sizes. We shall discuss the finite-size modifications in more detail in Sect. IV. In Fig. 4 we show the DMRG results for \(\Delta_{2}(L;U,V)\) for various values of \(U\) as a function of \(1/L\) for \(L=8,16,24,32,48,64\) for (a) \(v=0.1\), (b) \(v=0.3\), (c) \(v=0.5\). The lines are fits to the algebraic function in Eq. (39). The fits in Fig. 4 are seen to agree very well with the data, showing a steep decrease of the finite-size gap as a function of inverse system size. This indicates that large system sizes are required to obtain reasonable gap extrapolations. The extrapolated gaps become _smaller_ as a function of \(V\), i.e., the nearest-neighbor interaction _reduces_ the tendency to form a Mott-Hubbard insulator. The extrapolated gaps \(\Delta_{2}(U,V)\) are shown in Fig. 5 as a function of \(U\) for \(v=0\), \(v=0.1\), \(v=0.3\), and \(v=0.5\). Apparently, the nearest-neighbor interaction not only shifts the critical interaction to higher values, it also reduces the size of the gap in the Mott insulating phase. At first sight, the _increase_ of the critical interaction is counter-intuitive because one might argue that an additional repulsive nearest-neighbor Coulomb interaction should favor the insulating state, not the metallic state. From a wave-mechanical viewpoint, however, the repulsive nearest-neighbor interaction softens the two-particle scattering potential. Figuratively speaking, particles that are scattered by the weaker nearest-neighbor interaction \(V\) do not experience the stronger on-site interaction \(U\). For a quantitative analysis, see Sect. IV. When \(v=V/U\) is small, the change in the critical interaction strength is also small, and one might think of using perturbation theory around the bare \(1/r\)-Hubbard model. To test this idea, we consider \[C(L;U,V)=\frac{e_{0}(L,L;U,V)-e_{0}(L,L;U,V=0)}{V}\;. \tag{41}\] In the limit \(V\to 0\), leading-order perturbation theory gives \[\lim_{V\to 0}C(L;U,V)=C^{\rm NN}(r=1,L;U,V=0)\;, \tag{42}\] where \(C^{\rm NN}(r=1,L;U,V=0)\) is the nearest-neighbor density-density correlation function at half band-filling for the bare \(1/r\)-Hubbard model at finite system sizes \(L\), see Eq. (25). As an example, for \(v=0.3\) and \(U\lesssim 0.7\), we find that \(C^{\rm NN}(r=1,L;U,V=0)\) agrees fairly well with \(C(L;U,V)\) from Eq. (41). Around the Mott transition, however, the corrections become sizable, more noticeably for larger systems.
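The finite-size fit of Eq. (39) can be made concrete with the exact \(V=0\) gaps of Eq. (37). The sketch below (our own illustration; the grid of \(U\) values, the system sizes, and the fit settings are arbitrary choices) extracts the effective exponent \(\gamma_{2}(U)\) from a least-squares fit; it is close to \(1/2\) at \(U=U_{\rm c}(V=0)=1\) and grows with increasing \(U\), anticipating the analysis of Sect. IV.1.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrate the finite-size fit of Eq. (39) with the exact V = 0 gap of
# Eq. (37), Delta_2(L; U >= 1) = U - 1 + 2/L + sqrt((U-1)^2 + 4U/L).
# The effective exponent gamma_2(U) is smallest (close to 1/2) at U = 1
# and grows for larger U, cf. Fig. 9 and Sect. IV.1.  The U grid, system
# sizes, and fit settings are arbitrary choices.
sizes = np.array([16.0, 24.0, 32.0, 48.0, 64.0, 80.0])

def gap_exact(L, U):                                   # Eq. (37), quoted for U >= 1
    return U - 1.0 + 2.0 / L + np.sqrt((U - 1.0) ** 2 + 4.0 * U / L)

def fit_law(L, gap_inf, a2, gamma2):                   # Eq. (39)
    return gap_inf + a2 * (1.0 / L) ** gamma2

for U in (1.0, 1.05, 1.1, 1.2, 1.4):
    data = gap_exact(sizes, U)
    p0 = (2.0 * (U - 1.0), 1.0, 1.0)                   # Eq. (38) as starting value
    popt, _ = curve_fit(fit_law, sizes, data, p0=p0, maxfev=20000)
    print(f"U = {U:4.2f}:  Delta_2(inf) ~ {popt[0]:+.4f}   gamma_2 ~ {popt[2]:.3f}")
```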
Therefore, low-order perturbation theory around the limit \(V=0\) cannot be used to determine the critical interaction strength \(U_{\rm c}(V)\) reliably. ### Momentum distribution In Fig. 6 we show the momentum distribution from DMRG at half band-filling for \(L=64\) sites for various values of \(U\) and \(v=0.1\), \(v=0.3\), and \(v=0.5\) (from top to bottom). For small interactions, the momentum distribution resembles that of a Fermi liquid with all states \(-\pi<k<0\) occupied and all states \(0<k<\pi\) empty. For small \(U\), low-energy scattering processes are limited to the vicinity of the sole Fermi point \(k_{\rm F}=0\). Indeed, in the field-theoretical limit, \(U,V\ll 1\), the model reduces to a bare \(g_{4}\)-model of only right-moving particles [24]. This 'non-interacting Luttinger liquid' displays a jump discontinuity at \(k_{\rm F}\). However, the \(1/r\)-Hubbard model is defined on a lattice and the bandwidth is finite. Consequently, the second Fermi point at \(k_{\rm F,2}=-\pi\) starts to play a role when \(U\) becomes large, of the order of half the bandwidth. States near \(k_{\rm F,2}\) are depleted more quickly as a function of \(U\) than those deeper in the Brillouin zone. Therefore, as seen in Fig. 6, the momentum distribution develops a maximum around \(k=-\pi/2\), with a corresponding minimum around \(k=\pi/2\). These considerations show that the Luttinger parameter must deviate from unity, \(K(U,V)<1\), for all Figure 5: Two-particle gap \(\Delta_{2}(U,V)\) for the extended \(1/r\)-Hubbard model as a function of \(U\) for \(v=0.1\) (red dots), \(v=0.3\) (green dots), \(v=0.5\) (purple dots), extrapolated from finite-size data with up to \(L=64\) sites. The continuous line is the exact result in the thermodynamic limit for \(V=0\), \(\Delta_{2}(U,V=0)=2(U-1)\), see Eq. (38). \((U,V)\), even though corrections to unity are (exponentially) small for \(U,V\ll 1\). Therefore, the momentum distribution is a continuous function in the (extended) \(1/r\)-Hubbard model for all \(U,V>0\). In contrast to the case \(V=0\)[22], there is no Fano resonance discernible in the slope of the momentum distribution at \(k=-\pi\) as the slope is always positive at \(k=-\pi\). This indicates that the bound state for \(V=0\) moves away from the band edge for finite \(V>0\) and thus cannot be detected in the momentum distribution. Consequently, we cannot use the resonance to locate the metal-insulator transition in the extended \(1/r\)-Hubbard model. ### Structure factor and CDW order parameter Lastly, we show the structure factor from DMRG in Fig. 7 for \(v=0.1\), \(v=0.3\), and \(v=0.5\) (from top to bottom) for the extended \(1/r\)-Hubbard model at system sizes \(L=16,64\) below (left) and above (right) the Mott Figure 6: Momentum distribution \(n_{k}(L;U,V)\) from DMRG at half band-filling for the extended \(1/r\)-Hubbard model for \(L=64\) sites and for various values for \(U\) for \(v=0.1\), \(v=0.3\), and \(v=0.5\) (from top to bottom). Figure 7: Structure factor \(\bar{C}^{\rm NN}(q,L;U,V)\) for the extended \(1/r\)-Hubbard model for \(L=16,64\) below (left) and above (right) the Mott transition for (a) \(v=0.1\), (b) \(v=0.3\), (c) \(v=0.5\). transition. It is seen that the finite-size effects are fairly small but larger systems permit a much better resolution in momentum space. In comparison with the exact result for the non-interacting system, \[\tilde{C}^{\rm NN}(q,n=1;U=0,V=0)=\frac{|q|}{\pi}\;, \tag{43}\] we see that the local interaction reduces the charge fluctuations. 
This is expected because the suppression of double occupancies likewise reduces the number of holes and the charges are more homogeneously distributed in the system. Therefore, the charge correlations become smaller when we compare the left and right figure in the same row. The nearest-neighbor interaction counters the effect of the Hubbard interaction because nearest-neighbor pairs of a double occupancy and a hole are energetically favorable. Therefore, the charge correlations increase when we go from top to bottom in the left/right column, even though \(U\) also increases from top to bottom. When the nearest-neighbor interaction increases beyond a certain threshold value \(V_{\rm c}(U)\), the ground state displays charge-density-wave order. In Fig. 8(a) we show the charge-density-wave order parameter \(N_{\pi}(L;U,V=0.7U)\), see Eq. (29), as a function of \(1/L\) for various values of \(U\), and the extrapolated result \(N_{\pi}(U,V=0.7U)\) into the thermodynamic limit using a second-order polynomial fit in Fig. 8(b), \[N_{\pi}(L;U,V)=N_{\pi}(U,V)+\frac{N_{1}(U,V)}{L}+\frac{N_{2}(U,V)}{L^{2}}\;. \tag{44}\] Apparently, the CDW order parameter is continuous over the CDW transition. Close to the transition, \(U\gtrsim U_{\rm c}(V)\), \[N_{\pi}(U,V)=N_{0}\left[U-U_{\rm c}(V)\right]^{2\nu}\;, \tag{45}\] where \(\nu\) is the critical exponent for the CDW order parameter \(D(U,V)\). Note that we pass the CDW transition for a fixed ratio \(v=V/U\). To make use of Eq. (45), the critical interaction \(U_{\rm c}(V)\) must be known. In addition, the region of validity of Eq. (45) is unknown a priori. Typically, one has to study system parameters close to the transition to obtain a reliable estimate for \(\nu\). Therefore, very large system sizes might be necessary to reach the scaling limit, and we have to be satisfied with the result from Fig. 8(b) that the CDW transition at \(v=0.7\) is continuous with exponent \(\nu\leq 1/2\).

## IV Mott transition

In this section we determine the critical value for the Mott transition in the extended \(1/r\)-Hubbard model. We investigate the two-particle gap, the ground-state energy, the Luttinger parameter, and the structure factor at the Brillouin zone boundary to locate the critical interaction strength \(U_{\rm c}(V)\). The Mott transition remains continuous for all \(V/U\).

### Two-particle gap

In our previous work [22], we showed that the exponent \(\gamma_{2}(U)=\gamma_{2}(U,V=0)\) sensitively depends on \(U\) in the vicinity of the Mott-Hubbard transition, and the critical interaction for the \(1/r\)-Hubbard model, \(U_{\rm c}(V=0)=1\), was obtained with an accuracy of one per mil. To illustrate this result for the bare \(1/r\)-Hubbard model, in Fig. 9 we show the extrapolated gap exponent \(\gamma_{2}(U)\equiv\gamma_{2}(U,V=0)\) using the analytic expression (37) for various combinations of system sizes in the range \(L=8,16,24,32,48,64,80,96,128,256,512,1024,2048,4096\). The minimal value for \(\gamma_{2}(U)\) depends on the selected range of system sizes. The gap exponent in the thermodynamic limit, see Eq. (40), cannot be reproduced from finite-size studies but it is approached systematically with increasing system size. Furthermore, it can

Figure 8: (a) CDW order parameter \(N_{\pi}(L;U,V)\) for the extended half-filled \(1/r\)-Hubbard model as a function of \(1/L\) (\(L\leq 80\)) for \(v=0.7\) and various \(U\)-values. Lines are a second-order polynomial fit in \(1/L\), see Eq.
(44); (b) Extrapolated CDW order parameter \(N_{\pi}(U,V=0.7U)\) as a function of \(U\). The line is an algebraic fit to the data in the vicinity of the CDW transition, see Eq. (45), with \(U_{\rm c}(v=0.7)=0.6\), \(N_{0}=1\) and \(2\nu=0.3\). be seen from Fig. 9 that the inclusion of smaller system sizes such as \(L=8,16\) leads to stronger deviations so that the smallest system sizes should be discarded. Note, however, that the _position_ of the minimum and thus the critical interaction strength are very well reproduced in all cases. Therefore, the minimum of \(\gamma_{2}(U,V)\) permits to locate the Mott transition \(U_{\rm c}(V)\) fairly accurately. In Fig. 10 we display the exponent \(\gamma_{2}(U,V)\), as obtained from the fit of the finite-size data in the range \(16\leq L\leq 80\) to the algebraic function in Eq. (39). Also shown in the figure are the quartic fits around the minima which lead to the critical interactions \(U_{\rm c,gap}(V)\) listed in table 1. Note that the curves flatten out for increasing \(v\) so that it becomes more difficult to determine accurately the minima for \(v\to 0.5\). The comparison with the exact value for \(V=0\) shows that the gap exponent \(\gamma_{2}(U,V)\) provides a fairly accurate estimate for the critical interaction. The same accuracy can be obtained when using the ground-state energy exponent \(\gamma_{0}(U,V)\), as we shall show next. ### Ground-state energy As seen from Eq. (36), the \(1/L\) corrections to the ground-state energy density also permit to locate the Mott transition in the extended \(1/r\)-Hubbard model, in the same way as the two-particle gap. In Fig. 11 we show the exponent \(\gamma_{0}(U,V)\), as obtained from the fit of the finite-size data in the range \(16\leq L\leq 80\) to the algebraic function in Eq. (35). Also shown in the figure \begin{table} \begin{tabular}{r r r r r r} \hline \hline \(V/U\) & \(U_{\rm c,gap}(V)\) & \(U_{\rm c,gv}(V)\) & \(U_{\rm c,LL}(V)\) & \(U_{\rm c,af}(V)\) & \(\overline{U}_{\rm c}(V)\) \\ \hline 0 & 1.009 & 1.000 & 1.033 & 0.965 & 1.002 \\ 0.1 & 1.024 & 1.022 & 1.056 & 0.984 & 1.021 \\ 0.2 & 1.055 & 1.056 & 1.090 & 1.018 & 1.055 \\ 0.3 & 1.109 & 1.116 & 1.144 & 1.075 & 1.111 \\ 0.4 & 1.202 & 1.221 & 1.243 & 1.175 & 1.210 \\ 0.5 & 1.425 & 1.500 & 1.540 & 1.456 & 1.480 \\ 0.6 & 0.828 & 0.838 & 0.883 & 0.876 & 0.856 \\ 0.7 & 0.587 & 0.600 & 0.616 & 0.611 & 0.604 \\ \hline \hline \end{tabular} \end{table} Table 1: Critical interaction strengths for the extended \(1/r\)-Hubbard model, as obtained from the two-particle gap, the ground-state energy, the Luttinger parameter, and the structure factor for systems with \(16\leq L\leq 80\) lattice sites. For \(V=0\), the exact result in the thermodynamic is known [19], \(U_{\rm c}(V=0)=1\). Figure 10: Exponent \(\gamma_{2}(U,V)\) for the two-particle gap in the extended \(1/r\)-Hubbard model as a function of \(U\) for various values of \(v=V/U\), based on system sizes \(16\leq L\leq 80\). The minimum of the curve determines \(U_{\rm c,gap}(V)\). Figure 9: Extrapolated gap exponent \(\gamma_{2}(U)=\gamma_{2}(U,V=0)\) using the analytical expression of the two-particle gap in Eq. (37). Various system sizes are used, ranging from \(L=8,16,24,32,48,64,80,96,128,256,512,1024,2048,4096\). are the quartic fits around the minima which lead to the critical interactions \(U_{\rm c,gs}(V)\) listed in table 1. 
The critical interaction strengths obtained from the minima of \(\gamma_{0}(U,V)\) very well agree with the exact result at \(V=0\) and with the values obtained from the gap exponent \(\gamma_{2}(U,V)\) with deviations in the low percentage range. Therefore, we can be confident that we found reliable estimates for the critical interaction strength for the Mott transition. ### Luttinger parameter As an alternative to locate the Mott transition, we monitor the Luttinger parameter and determine \(U_{\rm c}(U,V)\) from the condition [24] \[K(U_{\rm c}(V),v)=\frac{1}{2} \tag{46}\] for fixed ratios \(v=V/U\), see also Ref. [22]. In Fig. 12 we show the Luttinger parameter \(K(L;U,V)\) from DMRG for the extended \(1/r\)-Hubbard model with nearest-neighbor interaction \(V=0.3U\) as a function of \(U\) for system sizes \(L=8,16,24,32,48,64\) including a second-order polynomial extrapolation to the thermodynamic limit. The intersection of the extrapolation into the thermodynamic limit with \(K_{\rm c}=1/2\) determines \(U_{\rm c}(V)\). To obtain a reliable estimate for the intersection we can either use the two data points closest to the transition and perform a linear interpolation, in this case \(U=1.1\) and \(U=1.2\). Alternatively, we use a four-parameter fit of the whole data set that employs the information that the Luttinger parameter deviates from unity by exponentially small terms for \(U,V\to 0\). Thus, we use \[K(U,V)=a+b\tanh(c+dU) \tag{47}\] to fit the extrapolated data for finite values of \(U\) to a continuous curve which is parameterized by \(a,b,c,d\) that depend on \(v\). Then, we solve Eq. (46) for \(U_{\rm c,LL}(V)\). The results are also listed in table 1. Alternatively, we could have solved Eq. (46) for each system size, and extrapolated the resulting system-size dependent critical interactions strengths to the thermodynamic limit. Since the results deviate more strongly from the exact value for \(V=0\), we refrain from pursuing this approach further. As seen from table 1, the critical value from Luttinger parameter systematically overestimate the correct interaction strengths by some three percent. A similar effect was found for the charge-density-wave transition in a one-dimensional model for spinless fermions with nearest-neighbor interactions ('\(t\)-\(V\)-model') [23]. Apparently, much larger systems are required to overcome this systematic error. In this work, we do not apply correction factors for a better fit but use the critical interaction strengths \(U_{\rm c,LL}(V)\) as an upper bound to the exact value \(U_{\rm c}(V)\). ### Structure factor and CDW order parameter For the \(1/r\)-Hubbard model, the finite-size corrections to the structure factor \(\tilde{C}_{\pi}(U,V)\equiv\tilde{C}(\pi;U,V)\), \[\tilde{C}_{\pi}(L;U,V)=\tilde{C}_{\pi}(U,V)+\frac{C_{1}(U,V)}{L}+\frac{C_{2}( U,V)}{L^{2}}\, \tag{48}\] Figure 12: Luttinger parameter \(K(L;U,V)\) from DMRG for the extended \(1/r\)-Hubbard model with nearest-neighbor interaction \(V=0.3U\) as a function of \(U\) for system sizes \(L=8,16,24,32,48,64\) including a second-order polynomial extrapolation to the thermodynamic limit. The intersection of the extrapolation with \(K_{\rm c}=1/2\) determines \(U_{\rm c}(V)\). Figure 13: Structure factor \(\tilde{C}_{\pi}(L;U,V)\) at \(q=\pi\) as a function of \(1/L\) for various values of \(U\) for the extended \(1/r\)-Hubbard model with nearest-neighbor interaction \(V=0.3U\) for system sizes \(L=8,16,24,32,48,64\). 
Lines are a second-order polynomial extrapolation to the thermodynamic limit, see Eq. (48). and the CDW order parameter \(N_{\pi}(L;U,V)\), see Eq. (44), permit to locate the critical interaction strength. In Fig. 13 we show the structure factor for \(v=0.3\) and various values of \(U\) as a function of inverse system size for \(L=8,16,24,32,48,64\). As can be seen from the figure, the coefficient in \(1/L\) changes its sign at the critical interaction strength, \[C_{1}(U_{\rm c,sf}(V),V)=0\;. \tag{49}\] To see this more clearly, in Fig. 14(a) we show the coefficient \(C_{1}(U,V)\) as a function of \(U\) for \(v=0.1\), \(v=0.3\), and \(v=0.5\), and fit the data to a Fano resonance, \[C_{1}^{\rm Fano}(U,V)=a(V)+b(V)\frac{[q_{\rm F}(V)\Gamma(V)+U-U_{\rm c}(V)]^{ 2}}{[\Gamma(V)]^{2}+[U-U_{\rm c}(V)]^{2}}\;. \tag{50}\] Analogously, we find the critical interaction strengths in the CDW phase from the \(1/L\) corrections to the CDW order parameter (29), see Eq. (44), in Fig. 14(b). As in our study of the \(1/r\)-Hubbard model [22], a bound state that interacts with the continuum shows up in physical quantities and thus contributes a Fano resonance to various physical quantities, with weight of the order \(1/L\). Using the Fano resonance formula and the conditions \(C_{1}(U_{\rm c,sf},V)=0=N_{1}(U_{\rm c,sf},V)\), the \(1/L\)-corrections of the structure factor and the CDW order parameter provide the estimate \(U_{\rm c,sf}(V)\) for the critical interaction. The resulting data are listed for various \(v\) in table 1. The critical interaction strength \(U_{\rm c,sf}(V)\) systematically underestimates the exact value for the Mott transition by a few percent. Together with the critical interaction strength from the Luttinger parameter \(U_{\rm c,LL}(V)\) we thus can set tight limits to \(U_{\rm c}(V)\). ### Critical interactions for fixed interaction ratios In table 1 we collect the results for the critical interaction strengths \(U_{\rm c}(V)\) obtained from the analysis of the two-particle gap, the ground-state energy, the Luttinger parameter, and the structure factor for \(v=V/U=0,0.1,0.2,0.3,0.4,0.5,0.6,0.7\), as obtained from the previous four subsections. We observe that * the arithmetic average of the four values, \(\overline{U}_{\rm c}\), reproduces the exact result at \(V=0\) with an accuracy of a few per mil; * the values for \(U_{\rm c,gs}(V)\) are close to the average for all \(V\), with a deviation below two percent. Therefore, the ground-state exponent alone provides a reliable estimate for \(U_{\rm c}(V)\) in all cases; * the estimates \(U_{\rm c,LL}(V)\), using the Luttinger parameter, and \(U_{\rm c,sf}\), using the structure factor, respectively, systematically overestimate and underestimate the critical interaction strength for the transition from the Luttinger liquid to the Mott-Hubbard insulator. Therefore, they provide natural bounds to \(U_{\rm c}(V)\) for \(v\leq 0.5\); * the transitions to the CDW insulator at \(v=0.6,0.7\) can be determined fairly accurately from all four approaches individually. In Fig. 1, we connect the data points for \(\overline{U}_{\rm c}(V)\) using a third-order spline interpolation. Error bars at the data points result from the overestimates and underestimates listed in table 1. In Fig. 1 we also include the results from the analysis for the Mott transition between the Luttinger liquid and the CDW insulator at fixed \(U=0.2\), as we discuss next. 
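For orientation, the phase-boundary lines of Fig. 1 can be reproduced approximately by a third-order spline through the averaged critical couplings of table 1. The sketch below is only a guide to the eye: the parametrization \(V_{\rm c}(U)\), the grouping of the points into two branches, and the inclusion of the fixed-\(U\) point at \(U=0.2\) are our own choices and need not coincide with the interpolation used for the published figure.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Guide-to-the-eye reconstruction of the phase-boundary lines of Fig. 1 from
# the averaged critical couplings of table 1.  The split into a metal/Mott
# branch (v <= 0.5) and a metal/CDW branch (v = 0.6, 0.7 plus the U = 0.2
# scan), and the parametrization V_c(U), are our own choices.
v_mott  = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
Uc_mott = np.array([1.002, 1.021, 1.055, 1.111, 1.210, 1.480])   # table 1
mott_boundary = CubicSpline(Uc_mott, v_mott * Uc_mott)            # V_c as a function of U

Uc_cdw = np.array([0.200, 0.604, 0.856])        # U = 0.2 scan, then v = 0.7 and v = 0.6
Vc_cdw = np.array([0.290, 0.423, 0.514])        # V_c = v * U_c
cdw_boundary = CubicSpline(Uc_cdw, Vc_cdw)

for U in np.linspace(1.00, 1.45, 6):
    print(f"metal / Mott-Hubbard boundary:  U = {U:.2f}   V_c ~ {float(mott_boundary(U)):.3f}")
for U in np.linspace(0.20, 0.85, 6):
    print(f"metal / CDW boundary         :  U = {U:.2f}   V_c ~ {float(cdw_boundary(U)):.3f}")
```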
### Transitions at fixed Hubbard interaction Lastly, we study the metal-to-insulator transition at fixed Hubbard interaction \(U\) as a function of \(V\), namely for \(U=0.2\) and \(U=1.7\). Figure 14: (a) Finite-size coefficient \(C_{1}(U,V)\) of the structure factor as a function of \(U\) for the extended \(1/r\)-Hubbard model for \(v=0.1\) and \(v=0.3\) (inset: \(v=0.5\)). (b) Finite-size coefficient \(N_{1}(U,V=0.7U)\) for the CDW order parameter, see Eqs. (29) and (44). Lines are fitted Fano resonance curves, see Eq. (50). Transition from Luttinger liquid to CDW insulator At \(U=0.2\), we find a transition from the Luttinger liquid metal to the CDW insulator at \(V_{\rm c}(U=0.2)=0.29\pm 0.01\). The analysis follows the route outlined in the previous subsections, and will not be repeated here. We increase \(V\) in steps of \(\Delta V=0.02\) around the transition. Using the coefficient \(\gamma_{0}\) from the ground-state energy, see Sect. IV.2, we find \(V_{\rm c,gs}(U=0.2)=0.286\), the coefficient \(\gamma_{2}\) from the two-particle gap in Sect. IV.1 leads to \(V_{\rm c,gap}(U=0.2)=0.280\), and the Luttinger parameter of Sect. IV.3 leads to \(V_{\rm c,LL}(U=0.2)=0.298\), almost identical to the values from the structure factor, see Sect. IV.4. This leads to the average value quoted above. Due to the absence of perfect nesting in the dispersion relation, it requires a finite interaction strength \(V\) to stabilize the CDW phase even at \(U=0\). Qualitatively, Hartree-Fock theory leads to the same result. Hartree-Fock theory systematically overestimates the stability of the CDW phase and thus underestimates \(V_{\rm c}(U)\), see Fig. 1. The analytical approach can be improved by including second-order corrections to Hartree-Fock theory, see, e.g., Refs. [23; 29]. This is beyond the purpose of our present analysis. #### iv.2.2 Transition from Mott-Hubbard to CDW insulator At \(U=1.7\), not included in the phase diagram in Fig. 1, we have a brief look at the transition from the Mott-Hubbard insulator to the CDW insulator. The results for the two-particle gap are shown in Fig. 15. They are corroborated by the behavior of the order parameter quantities \(C_{\pi}\) and \(N_{\pi}\). The analysis of the parameters \(\gamma_{0}\) and \(\gamma_{2}\) lead to quantitatively identical but less accurate results. For the one-dimensional extended Hubbard model it is known that the critical interaction is larger than \(U/2\). For the extended \(1/r\)-Hubbard model we also find \(V_{\rm c}=0.87\pm 0.01>1.7/2=0.85\) for the onset of the CDW. When expressed in units of the bandwidth, the offset of \(\delta_{\rm c}(U)=V_{\rm c}(U)-U/2\) agrees almost quantitatively with the value obtained from DMRG and QMC calculations for the one-dimensional extended Hubbard model, \(\delta_{\rm c}(U=1.7)\approx 0.02\), see Ref. [30]. The shift \(\delta_{\rm c}(U)\) can be determined analytically using higher-order strong-coupling perturbation [31]. Unfortunately, this program cannot be carried out for the extended \(1/r\)-Hubbard model because the exact ground state is not known for the effective spin model which is a linear combination of the Heisenberg model with nearest-neighbor interaction and the Haldane-Shastry model with \(1/r^{2}\)-exchange interaction [32; 33]. A variational strong-coupling approach that employs the Baeriswyl wave function [34; 35] can neither be carried out analytically because it requires the evaluation of \(\langle\bar{T}^{3}\rangle\) in the Gutzwiller-projected Fermi sea. 
In the one-dimensional extended Hubbard model, there is a bond-order-wave phase below a critical interaction strength \(U_{\rm tri}\) that separates the Mott-Hubbard and CDW insulators [15; 16; 17]. For \(U>U_{\rm tri}\), the transition from the Mott-Hubbard insulator to the CDW insulator is discontinuous. As can be seen from Fig. 15, we also find indications for the existence of a bond-order-wave phase. The charge gap of the Mott-Hubbard insulator closes around \(V_{\rm c}(U=1.7)\approx 0.87\) and reopens beyond \(V_{\rm c}(U=1.7)\) with a small value. The extrapolation of the gap remains linear as a function of \(1/L\) even at \(V=0.88\), as seen in Fig. 15(a). For larger values of \(V\), the gap drastically increases and the extrapolation displays a \(1/L^{2}\) behavior for large \(L\). The same behavior of the gap was observed for the one-dimensional extended Hubbard model at \(U=2W\)[16; 17] where it was numerically shown in detail that as a function of \(V\) the Mott-Hubbard insulator gives way to a bond-order-wave insulator before the CDW phase eventually takes over. Further investigations are necessary to corroborate the existence of a bond-order-wave phase in the vicinity of the CDW transition also for the extended \(1/r\)-Hubbard model. Note, however, that we do not expect a bond-order wave as an intermediate phase for small interactions because in the extended \(1/r\)-Hubbard model the metallic Luttinger liquid overrides a conceivable bond-order wave.

Figure 15: (a) Two-particle gap for the extended \(1/r\)-Hubbard model at \(U=1.7\) for various values of \(V\) as a function of \(1/L\) for \(L=8,16,32,64,80\). (b) Extrapolated two-particle gap as a function of \(V\).

## V Conclusions

In this work we applied the density-matrix renormalization group (DMRG) method to the half-filled extended \(1/r\)-Hubbard model where the electron transfer amplitudes decay proportionally to the inverse chord distance of two lattice sites on a ring. The model describes a linear dispersion within the Brillouin zone and thus provides an ideal case to study the Mott-Hubbard transition because it lacks Umklapp scattering. Therefore, the metal-to-insulator transitions occur at finite interaction strengths. Consequently, all generic phases, namely Luttinger-liquid metal, Mott-Hubbard insulator, and charge-density-wave insulator, occupy a finite region in the \((U,V)\) ground-state phase diagram, see Fig. 1. Mapping the quantum phase transition boundaries for the specific model is one of the main achievements of this work. To this end, we use DMRG data for up to \(L=80\) sites to calculate the ground-state energy, the two-particle gap, the momentum distribution, the Luttinger parameter, and the structure factor. The finite-size behavior of the ground-state energy, of the two-particle gap, and of the structure factor permits us to determine the critical interaction parameters for the instability of the Luttinger liquid metal against the Mott-Hubbard insulator and the charge-density-wave insulator, respectively. Moreover, we monitor the Luttinger parameter that also signals the breakdown of the Luttinger liquid metal at a metal-to-insulator transition. We tested the validity of our analysis against exact results for \(V=0\) for which analytic results for the ground-state energy and the gap exist for all interactions \(U\) and system sizes \(L\). The phase diagram in Fig. 1 shows that the nearest-neighbor interaction and the Hubbard interaction counteract each other.
On the one hand, the Mott transition shifts to larger values, i.e., a weak to moderate nearest-neighbor interaction stabilizes the Luttinger liquid metal. Apparently, the two-particle scattering interaction becomes smoother in position space and renders the total interaction less effective. On the other hand, as can readily be understood from classical considerations, the Hubbard interaction opposes the formation of a charge-density wave because, by definition, a CDW augments the particle density on the same lattice site. In contrast to the'standard' extended Hubbard model in one dimension, the absence of Umklapp scattering and the competition of both interactions leads to an extended metallic region in the phase diagram. The extrapolations suggest that there is a tri-critical point where all three phases touch. It will be interesting to analyze this region in phase space with higher accuracy, i.e., more data points in the \((U,V)\) parameter space close to \((U_{\rm tri},V_{\rm tri})\approx(1.5,0.75)\), and larger system sizes, \(L>80\). Moreover, a conceivable bond-order wave above the tri-critical point between Mott insulator and charge-density-wave insulator should be investigated in more detail. These tasks are left for future studies. ###### Acknowledgements. O.L. has been supported by the Hungarian National Research, Development, and Innovation Office (NKFIH) through Grant No. K134983, and TKP2021-NVA by the Quantum Information National Laboratory of Hungary. O.L. also acknowledges financial support from the Hans Fischer Senior Fellowship program funded by the Technical University of Munich - Institute for Advanced Study and from the Center for Scalable and Predictive methods for Excitation and Correlated phenomena (SPEC), which is funded as part of the Computational Chemical Sciences Program by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences at Pacific Northwest National Laboratory. ## Appendix A Hartree-Fock theory ### CDW Hartree-Fock Hamiltonian In Hartree-Fock theory, we decouple the two-particle interaction as follows, \[\hat{D}^{\rm HF}=\hat{D}^{\rm H} = \sum_{l}\langle\hat{n}_{l,\uparrow}\rangle\hat{n}_{l,\downarrow} +\hat{n}_{l,\uparrow}\langle\hat{n}_{l,\downarrow}\rangle-\langle\hat{n}_{l, \uparrow}\rangle\langle\hat{n}_{l,\downarrow}\rangle\;, \tag{15}\] \[\hat{V}^{\rm HF} = \hat{V}^{\rm H}+\hat{V}^{\rm F}\;,\] (16) \[\hat{V}^{\rm H} = \sum_{l}\left(\langle\hat{n}_{l}\rangle-1\right)\left(\hat{n}_{l +1}-1\right)\] (17) \[\qquad-\left(\langle\hat{n}_{l}\rangle-1\right)\left(\langle\hat{ n}_{l+1}\rangle-1\right)\;,\] \[\hat{V}^{\rm F} = \sum_{l,\sigma}\langle\hat{c}_{l,\sigma}^{+}\hat{c}_{l+1,\sigma} \rangle\hat{c}_{l,\sigma}\hat{c}_{l+1,\sigma}^{+}\] (18) \[\qquad-\langle\hat{c}_{l,\sigma}^{+}\hat{c}_{l+1,\sigma}\rangle \langle\hat{c}_{l,\sigma}\hat{c}_{l+1,\sigma}^{+}\rangle\;.\] Here, where \(\langle\hat{A}\rangle\) denotes the ground-state expectation value of the operator \(\hat{A}\), \[\langle\hat{A}\rangle\equiv\langle\Phi_{0}|\hat{A}|\Phi_{0}\rangle \tag{19}\] with \(|\Phi_{0}\rangle\) as the ground state of the Hartree-Fock Hamiltonian \(\hat{H}^{\rm HF}\), see below. We make the CDW Ansatz for the order parameter \[\langle\hat{n}_{l,\sigma}\rangle=\frac{1}{2}\left(1+(-1)^{l}\Delta\right) \tag{20}\] with the real CDW parameter \(\Delta\geq 0\), and introduce the abbreviation \[B=\langle\hat{c}_{l,\sigma}^{+}\hat{c}_{l+1,\sigma}\rangle={\rm i}b\;. 
\tag{21}\] Particle-hole symmetry implies that \(B\) is purely complex at half band-filling, i.e., \(b\) is real. Note that we disregard a possible bond-order wave (BOW) by assuming that \(B\) does not alternate from site to site. With these abbreviations, we can rewrite the Hartree-Fock interaction at half band-filling as \[\hat{D}^{\rm H} = \frac{L}{4}\left(1-\Delta^{2}\right)+\frac{\Delta}{2}\sum_{l, \sigma}(-1)^{l}\hat{n}_{l,\sigma}\;, \tag{100}\] \[\hat{V}^{\rm H} = L\Delta^{2}-2\Delta\sum_{l,\sigma}(-1)^{l}\hat{n}_{l,\sigma}\;,\] (101) \[\hat{V}^{\rm F} = 2Lb^{2}+{\rm i}b\sum_{l,\sigma}\left[\hat{c}_{l,\sigma}^{+}\hat {c}_{l+1,\sigma}-\hat{c}_{l+1,\sigma}^{+}\hat{c}_{l,\sigma}\right]. \tag{102}\] The resulting single-particle problem defines the Hartree-Fock Hamiltonian for a possible CDW ground state \[\hat{H}^{\rm HF}=\hat{T}+U\hat{D}^{\rm H}+V\left(\hat{V}^{\rm H}+\hat{V}^{\rm F }\right)\;. \tag{103}\] It has to be solved self-consistently, i.e., \(\Delta\) must be chosen such that the ground state fulfills Eq. (100). ### Diagonalization of the Hartree-Fock Hamiltonian In the CDW phase, the Hartree-Fock Hamiltonian is identical for both spin species, \(\hat{H}^{\rm HF}=\sum_{\sigma}\hat{H}^{\rm HF}_{\sigma}\). Dropping the spin index we must diagonalize \[\hat{H}_{\rm sf} = \sum_{k}\epsilon(k)\hat{C}_{k}^{+}\hat{C}_{k}+\left(\frac{U}{2}-2 V\right)\Delta\sum_{l}(-1)^{l}\hat{n}_{l} \tag{104}\] \[+{\rm i}b\sum_{l}\left[\hat{c}_{l}^{+}\hat{c}_{l+1}-\hat{c}_{l+1 }^{+}\hat{c}_{l}\right]+C\] for spinless fermions ('sf'), where \(C=UL/8(1-\Delta^{2})+LV\Delta^{2}/2+LVb^{2}\). In momentum space, the Hamiltonian reads \[\hat{H}_{\rm sf} = C+\,\sum_{k}{}^{\prime}\Bigl{[}\left(\epsilon(k)+b(k)\right)\hat {C}_{k}^{+}\hat{C}_{k}\] \[\qquad+\left(\epsilon(k+\pi)-b(k)\right)\hat{C}_{k+\pi}^{+}\hat{C }_{k+\pi}\Bigr{]}\] \[+\left(\frac{U}{2}-2V\right)\Delta\sum_{k}{}^{\prime}\left(\hat {C}_{k}^{+}\hat{C}_{k+\pi}-\hat{C}_{k+\pi}^{+}\hat{C}_{k}\right)\,,\] where the prime on the sum indicates the \(k\)-space region \(-\pi<k<0\) and \(b(k)=-2bV\sin(k)\geq 0\). We diagonalize \(\hat{H}_{\rm sf}\) with the help of the linear transformation \[\hat{C}_{k} = c_{k}\hat{\alpha}_{k}-s_{k}\hat{\beta}_{k}\;,\] \[\hat{C}_{k+\pi} = s_{k}\hat{\alpha}_{k}+c_{k}\hat{\beta}_{k}\;, \tag{105}\] where we abbreviate \(c_{k}\equiv\cos(\varphi_{k})\) and \(s_{k}=\sin(\varphi_{k})\). The mixed terms in \(\hat{H}_{\rm sf}\) vanish when we demand \[\tan(2\varphi_{k})=-\frac{(2V-U/2)\Delta}{b(k)+(\epsilon(k)+\epsilon(k+\pi))/ 2}\geq 0\;. \tag{106}\] The diagonal terms result in \[\hat{H}_{\rm sf}=\,\sum_{k}{}^{\prime}E_{\alpha}(k)\hat{\alpha}_{k}^{+}\hat{ \alpha}_{k}+E_{\beta}(k)\hat{\beta}_{k}^{+}\hat{\beta}_{k}+C \tag{107}\] with \[E_{\beta}(k) = \frac{1}{2}\left(\epsilon(k)+\epsilon(k+\pi)\right)\mp s(k)\] \[s(k) = \sqrt{\left[2V-\frac{U}{2}\right]^{2}\Delta^{2}+\left[b(k)+ \frac{\epsilon(k)-\epsilon(k+\pi)}{2}\right]^{2}}\] so that \(E_{\alpha}(k)<E_{\beta}(k)\) for all \(-\pi<k<0\). Therefore, the ground state contains only \(\alpha\)-particles, \[|\Phi_{0}\rangle=\prod_{-\pi<k<0}\hat{\alpha}_{k,\sigma}^{+}|{\rm vac}\rangle\;, \tag{108}\] where we re-introduced the spin index. ### Self-consistency equations and CDW transition The self-consistency equations (100) and (101) become \[\Delta = \Delta\int_{-\pi}^{0}\frac{{\rm d}k}{\pi}\frac{2V-U/2}{s(k)}\;, \tag{109}\] \[b = -\int_{-\pi}^{0}\frac{{\rm d}k}{2\pi}\frac{\sin(k)\left[b(k)+( \epsilon(k)-\epsilon(k+\pi))/2\right]}{s(k)}\] in the thermodynamic limit. 
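The sketch below is a direct transcription of the self-consistency integrals of Eq. (109), assuming \(\epsilon(k)=k/(2\pi)\), \(b(k)=-2bV\sin(k)\), and \(s(k)\) as defined together with the quasi-particle bands above. It only verifies that \(\Delta=0\), \(b=-1/\pi\) solves the equations (the metallic solution quoted in the next paragraph) and evaluates the gap-equation integral, which exceeds unity when a CDW solution exists; a production solver would embed these right-hand sides in a root finder. The couplings \(U\), \(V\) are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

# Transcription of the two integrals in Eq. (109); U and V are arbitrary choices.
U, V = 0.0, 0.3

def eps(k):                              # eps(k) = t k with t = 1/(2 pi), Eqs. (6)-(7)
    return k / (2.0 * np.pi)

def s_k(k, Delta, b):                    # s(k) as defined with the quasi-particle bands
    bk = -2.0 * b * V * np.sin(k)
    return np.sqrt((2.0 * V - 0.5 * U) ** 2 * Delta ** 2
                   + (bk + 0.5 * (eps(k) - eps(k + np.pi))) ** 2)

def gap_integral(Delta, b):              # factor multiplying Delta in Eq. (109)
    return quad(lambda k: (2.0 * V - 0.5 * U) / (np.pi * s_k(k, Delta, b)),
                -np.pi, 0.0)[0]

def rhs_b(Delta, b):                     # right-hand side of the b equation in Eq. (109)
    integrand = lambda k: np.sin(k) * (-2.0 * b * V * np.sin(k)
                  + 0.5 * (eps(k) - eps(k + np.pi))) / (2.0 * np.pi * s_k(k, Delta, b))
    return -quad(integrand, -np.pi, 0.0)[0]

print("b from Eq. (109) at Delta = 0  :", rhs_b(0.0, -1.0 / np.pi),
      "  (-1/pi =", -1.0 / np.pi, ")")
print("gap-equation integral, Delta=0 :", gap_integral(0.0, -1.0 / np.pi),
      " (> 1 signals a CDW instability)")
```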
The set \(\{\Delta=0,\,b=-1/\pi\}\) provides the solution for the Fermi sea of non-interacting particles. Within Hartree-Fock theory, the CDW transition is continuous. We seek a solution for \(\Delta=0^{+}\) and \(b=-1/\pi\) so that \(V_{\rm c}(U)\) must obey the equation \[\frac{1}{2V_{\rm c}(U)-U/2}=\int_{0}^{\pi}\frac{{\rm d}k}{\pi}\frac{1}{1/4+2V_{ \rm c}(U)\sin(k)/\pi}\;. \tag{110}\] Using Mathematica[36] and with the abbreviation \(a_{\rm c}=8V_{\rm c}/\pi\) we find \[\frac{1}{a_{\rm c}-2U/\pi}=\frac{\pi}{\sqrt{1-a_{c}^{2}}}-\frac{2}{\sqrt{1-a_{ c}^{2}}}\arctan\left(\frac{a_{c}}{\sqrt{1-a_{c}^{2}}}\right)\;. \tag{111}\] This equation must be solved numerically for given \(U\). For example, at \(U=0\) we find \(a_{c}\approx 0.394235\) so that \(V_{\rm c}(U=0)\approx 0.154816\) in Hartree-Fock theory. The resulting curve \(V_{\rm c}^{\rm HF}(U)\) is shown in Fig. 1.
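The transcendental equation (111) is easily solved numerically. The following sketch (our own check; the list of \(U\) values is an arbitrary choice, restricted to small \(U\) where, by our reading, the root satisfies \(a_{\rm c}<1\) so that the closed form applies) reproduces \(a_{\rm c}\approx 0.394235\) and \(V_{\rm c}^{\rm HF}(U=0)\approx 0.154816\).

```python
import numpy as np
from scipy.optimize import brentq

# Numerical solution of Eq. (111) for a_c = 8 V_c / pi, giving the Hartree-Fock
# phase boundary V_c^HF(U) (dotted line in Fig. 1).  We restrict ourselves to
# small U, where the root obeys a_c < 1; at U = 0 this reproduces
# a_c ~ 0.394235 and V_c^HF ~ 0.154816.
def f(a, U):
    root = np.sqrt(1.0 - a * a)
    lhs = 1.0 / (a - 2.0 * U / np.pi)
    rhs = np.pi / root - 2.0 * np.arctan(a / root) / root
    return lhs - rhs

for U in (0.0, 0.2, 0.4, 0.6):
    a_lo = 2.0 * U / np.pi + 1.0e-6      # the left-hand side diverges at a -> 2U/pi
    a_c = brentq(f, a_lo, 1.0 - 1.0e-6, args=(U,))
    print(f"U = {U:3.1f}:   a_c = {a_c:.6f}   V_c^HF(U) = {np.pi * a_c / 8.0:.6f}")
```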
2309.14553
Inverse non-linear problem of the long wave run-up on coast
The study of the process of catastrophic tsunami-type waves on the coast makes it possible to determine the destructive force of waves on the coast. In hydrodynamics, the one-dimensional theory of the run-up of non-linear waves on a flat slope has gained great popularity, within which rigorous analytical results have been obtained in the class of non-breaking waves. In general, the result depends on the characteristics of the wave approaching (or generated on) the slope, which is usually not known in the measurements. Here we describe a rigorous method for recovering the initial displacement in a source localised in an inclined power-shaped channel from the characteristics of a moving shoreline. The method uses the generalised Carrier-Greenspan transformation, which allows one-dimensional non-linear shallow-water equations to be reduced to linear ones. The solution is found in terms of Erd\'elyi-Kober integral operator. Numerical verification of our results is presented for the cases of a parabolic bay and an infinite plane beach.
Alexei Rybkin, Efim Pelinovsky, Oleksandr Bobrovnikov, Noah Palmer, Ekaterina Pniushkova, Daniel Abramowicz
2023-09-25T21:52:41Z
http://arxiv.org/abs/2309.14553v1
# Inverse non-linear problem of the long wave run-up on coast ###### Abstract The study of the process of catastrophic tsunami-type waves on the coast makes it possible to determine the destructive force of waves on the coast. In hydrodynamics, the one-dimensional theory of the run-up of non-linear waves on a flat slope has gained great popularity, within which rigorous analytical results have been obtained in the class of non-breaking waves. In general, the result depends on the characteristics of the wave approaching (or generated on) the slope, which is usually not known in the measurements. Here we describe a rigorous method for recovering the initial displacement in a source localised in an inclined power-shaped channel from the characteristics of a moving shoreline. The method uses the generalised Carrier-Greenspan transformation, which allows one-dimensional non-linear shallow-water equations to be reduced to linear ones. The solution is found in terms of Erdelyi-Kober integral operator. Numerical verification of our results is presented for the cases of a parabolic bay and an infinite plane beach. Alexei Rybkin, Efim Pelinovsky, Oleksandr Bobrovnikov, Noah Palmer, Ekaterina Pniushkova and Daniel Abramowicz Ekaterina Pniushkova and Daniel Abramowicz ## 1 Introduction With the devastating loss of life caused by tsunamis such as the Indian Ocean Tsunami in 2004 and the Tohoku Tsunami in 2011, predictive modelling of tsunami wave run-up is of great practical importance. In modern tsunami wave modelling, the shallow water, or long-wave, approximations are commonly used to predict inundation areas [10]. Such models require initial conditions to compute wave propagation. However, due to a lack of data on initial water displacement, models such as the Okada seismic model [11] are often used to generate the initial data. An alternative approach is to indirectly estimate characteristics of the tsunami source through various inversion methods. Implementations have been used to recover the initial height of the water at the source [1], the source location [12] as well as fault motion [13] to name just three. The latter, through the inversion of data gathered for the wave signal, is crucial in such fields as seismic hazard assessment. Through studying the accumulation of slip on each segment of a fault via the inverse problem, the prediction of earthquake recurrence intervals becomes increasingly more accurate. We suggest the recent review by K. Satake [13] and the sources therein for more details on tsunami inversion methods, including waveform inversion, inverse modelling for the purpose of examining the tsunami source, and the generation of tsunami inverse refraction diagrams. Note that most methods assume that wave propagation is linear, while the tsunami wave run-up is a notoriously non-linear phenomenon. Unfortunately, inverse problems for non-linear PDEs are intractable in general. That is why it is very important to identify a realistic class of bathymetries where non-linear inversion methods are available. This is the main goal of our contribution. At present, the calculations of the zones of flooding of the coast by tsunami waves are carried out within the framework of numerical codes from the source to the coast. Their testing is performed on a number of benchmarks well supported by experimental data. 
A special place here is occupied by the problem of the run-up of a non-linear long wave on a flat slope, which has a rigorous analytical solution in the class of non-breaking waves using the Carrier-Greenspan transformation, which makes it possible to reduce non-linear shallow-water equations to a linear wave equation with cylindrical symmetry (a particular class of the Euler-Poisson-Darboux equation) [Carrier & Greenspan (1958)]. Within the framework of this approach, the run-up of waves generated on the slope is considered (the initial displacement of the water surface is given). In this case, an initial condition of various forms is specified: soliton [Synolakis (1987)], Gaussian [Carrier et al. (2003)] and Lorentz [Pelinovsky and Mazova (1992)] pulses, \(N\)-wave [Tadepalli & Synolakis (1994)], cnoidal wave [Synolakis et al. (1988)], algebraic pulse [Dorbokhotov et al. (2017)], Okada solution [Tinti & Tonini (2005), Lovholt et al. (2012)]. Naturally, the specific characteristics of the run-up depend on the features of the shape of the initial displacement in the source. Therefore, attempts were made to parametrise these formulas in order to reduce the number of initial perturbation parameters [Didenkukova et al. (2008), Lovholt et al. (2012)]. Note that later it was possible to study a more general problem, such as taking into account the velocity of fluid movement in the source [Kanoglu & Synolakis (2006), Rybkin et al. (2019)] and solving the boundary problem when a wave approaching the coast is given [Antuono & Brocchini (2007), Aydin (2020)]. Similar approaches have also been developed for wave run-up in power-shaped bags [Hartle et al. (2021), Nicolsky et al. (2018), Rybkin et al. (2014), Rybkin et al. (2021), Shimozono (2016)]. In the papers cited above, the direct problem (the Cauchy problem) of the non-linear equations of the shallow water theory was solved. In this case, since there are no measurements of wave parameters in the shelf zone, model functions were used as initial conditions. In view of this, the inverse problem of restoring the initial conditions from the given (experimental or model) characteristics of the moving shoreline is of interest. This is especially important for fast estimates of tsunami waves in situations with uncertain wave properties during real events. In this work we consider the problem of restoring the shape of an incident wave from the known oscillations of a moving shoreline. This problem was first considered in [Rybkin et al. (2023)] and our work here is a generalisation of those results to a more diverse set of bathymetries. In this case, the following restrictions are imposed: the tsunami source is located on a slope at an arbitrary distance from the shoreline. Two configurations of the bottom relief are considered: a flat slope and an inclined parabolic channel. The solution of the inverse problem is found using the Abel transform in the class of non-breaking waves. Our work here is organised as follows. In section 2, we introduce the shallow water framework our model is built upon. In section 3 we give the statement of both the direct and inverse problem and introduce the Carrier-Greenspan hodograph transformation on which our method is based. We solve both the direct and inverse problems in section 3 through the derivation of what we call the shoreline equation, an equation relating the mechanical energy of the wave at the shoreline and the initial wave profile. 
Section 5 discusses the recovery of certain characteristics of the initial wave. In section 6 we give numerical verifications of our method. Finally, in section 7 we give some concluding remarks and discuss some potential future directions. ## 2 Shallow water equations (SWE) The shallow water equations (SWE) are a set of non-linear, hyperbolic PDEs which are commonly used to model tsunami wave run-up. The 2+1 (two spatial and one temporal derivative) SWE are a simplification of the Euler equations, a highly non-linear 3+1 system. They can be derived by truncating Taylor expansions of the non-linear terms and assuming no vorticity, small vertical velocity, and small depth/wavelength and wave height/depth ratios. The 2+1 SWE can be further reduced to a 1+1 system by assuming that the bathymetry is centred along the \(x\) axis and is uniformly inclined. For the bathymetries we are concerned with here (see Fig. 1), power-shaped bays with \(y\) cross-section \(|y|^{m}\), the non-linear SWE in non-dimensional units are given as \[\begin{split}\partial_{t}\eta+u\left(1+\partial_{x}\eta\right)+\frac{m}{m+1}\left(x+\eta\right)\partial_{x}u=0,\\ \partial_{t}u+u\partial_{x}u+\partial_{x}\eta=0,\end{split} \tag{2.1}\] where \(u(x,t)\) is the depth averaged flow velocity over the corresponding cross-section, and \(\eta(x,t)\) is the water displacement exceeding the unperturbed water level. The total perturbed water depth is given as \(H(x,t)=h(x)+\eta(x,t)\) along the \(x\) axis, where \(h(x)\) is the depth of the bay, and so in dimensionless units we simply have \(h(x)=x\). Typically, the SWE are written in dimensional units. The substitution \[x=(H_{0}/\alpha)\widetilde{x},\quad t=\sqrt{H_{0}/g}\,\widetilde{t}/\alpha,\quad\eta=H_{0}\widetilde{\eta},\quad u=\sqrt{H_{0}g}\ \widetilde{u}, \tag{2.2}\] where \(H_{0}\) is the characteristic height of the wave, \(\alpha\) is the slope of the bathymetry and \(g\) is the acceleration of gravity, relates the dimensionless variables used in (2.1) to their dimensional counterparts. The shoreline in the physical plane (i.e., the wet/dry boundary) is given by \[x+\eta(x,t)=0. \tag{2.3}\] The solution \(x_{0}(t)\) of equation (2.3) describes the run-up and draw-down of the tsunami wave. We consider the initial value problem (IVP) for (2.1) with typical initial conditions characterising the instantaneous bottom displacement (see for instance [Okada et al. (1985)]) \[\eta(x,0)=\eta_{0}(x), \tag{2.4}\] \[u(x,0)=0.\] While the choice of zero initial velocity may be restrictive in physical applications, there are good reasons for this choice. For earthquake-generated tsunamis it is typical to assume that the initial velocity is zero. Additionally, it is a convenient choice and a common simplification when constructing solutions to the SWE. ## 3 Statement of the direct and inverse problems In this paper we investigate the following direct problem: knowing the initial displacement of the water \(\eta_{0}(x)\) and assuming zero initial velocity of the water, we find the movement of the shoreline \(x_{0}(t)\). The direct problem was solved both numerically and analytically by many authors (see the references in the introduction). The corresponding inverse problem then consists of restoring the initial displacement of the water, assuming zero initial velocity and knowing the shoreline movement \(x_{0}(t)\) and the time of the earthquake, i.e., zero time.
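When working with field data it is convenient to have the scaling (2.2) available programmatically. The helper below is a small illustrative sketch (the function and variable names are ours, not the paper's).

```python
import numpy as np

G = 9.81  # acceleration of gravity, m/s^2

def to_dimensionless(x, t, eta, u, H0, alpha):
    """Apply the scaling of eq. (2.2): SI-unit quantities -> dimensionless variables."""
    return (alpha * x / H0,
            alpha * t / np.sqrt(H0 / G),
            eta / H0,
            u / np.sqrt(H0 * G))

def to_dimensional(x_nd, t_nd, eta_nd, u_nd, H0, alpha):
    """Invert the scaling of eq. (2.2): dimensionless variables -> SI units."""
    return ((H0 / alpha) * x_nd,
            np.sqrt(H0 / G) * t_nd / alpha,
            H0 * eta_nd,
            np.sqrt(H0 * G) * u_nd)
```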
It is worth noting that a non-linear inverse problem, as is the case with our problem, presents several challenges in terms of deriving a solution. The main difficulty is that the shoreline is moving. The Carrier-Greenspan transform allows one to reduce the original problem to a linear one on \(\mathbb{R}_{>0}\). Figure 1: Geometrical representations of a power-shaped bathymetry resembling the case \(m=2\). In **(a)** we have a cross sectional view of the \(xOz\) plane, in **(b)** a cross sectional view of the \(yOz\) plane, and in **(c)** a 3-dimensional view of the bay and an \(N\)-wave. ### Carrier-Greenspan Transform The Carrier-Greenspan (CG) hodograph transform, introduced in [Carrier & Greenspan (1958)], can be used to linearise (2.1) into a form which can then be solved using Hankel transforms [Courant & Hilbert]. We use the form of the CG transform, originally introduced for power-shaped bays in [Tuck & Hwang (1972)]: \[\varphi(\tau,\sigma)=u(x,t),\quad\sigma=x+\eta(x,t),\quad\psi(\tau,\sigma)=\eta(x,t)+u^{2}(x,t)/2,\quad\tau=t-u(x,t). \tag{3.1}\] Applying (3.1) to (2.1) yields the linear hyperbolic system \[\begin{split}\partial_{\tau}\psi+\frac{m}{m+1}\sigma\partial_{\sigma}\varphi+\varphi&=0,\\ \partial_{\tau}\varphi+\partial_{\sigma}\psi&=0,\end{split} \tag{3.2}\] which is often written as the second order equation \[\partial_{\tau}^{2}\psi=\frac{m}{m+1}\sigma\partial_{\sigma}^{2}\psi+\partial_{\sigma}\psi. \tag{3.3}\] We therefore obtain a linear hyperbolic equation (3.3) from a non-linear system (2.1). Physically, \(\sigma\) denotes wave height from the bottom, \(\tau\) is a delayed time, \(\varphi\) is the flow velocity, and \(\psi\) can be called the total energy. The CG transform's main benefit is that the moving shoreline \(x_{0}(t)\) is fixed at \(\sigma=0\). Nevertheless, the CG transform has some notable drawbacks; for one, the ICs become complicated in the hodograph coordinates, making standard techniques difficult to apply. However, by setting the initial velocity of the wave to be zero, that is \(u_{0}(x)=0\), one avoids this issue. While this premiss is restrictive, it is typical when considering earthquake-generated tsunamis. Thus, we assume this condition, which is equivalent to \(\varphi(0,\sigma)=0\), and so (2.4) becomes \[\varphi\Big{|}_{\Gamma}=0,\quad\psi\Big{|}_{\Gamma}=\psi_{0}(\sigma)=\eta_{0}(\gamma(\sigma)) \tag{3.4}\] where \(\Gamma\) is the vertical line \((0,\sigma)\) in the hodograph plane and \(x=\gamma(\sigma)\) solves \(\sigma=\eta(x,t)+x\). Additionally, the regular singularity at \(\sigma=0\) causes computational difficulties at the shoreline. Finally, we note that the transformation only works provided it is invertible, i.e. the wave does not break [Rybkin et al. (2021)], so we must assume this going forward. ## 4 The Shoreline Equation In this section we derive what we call the shoreline equation of an arbitrary power-shaped bay. Specifically, we derive an equation relating \(\psi(\tau,0)\), the energy of the water at the shoreline, and \(\psi(0,\sigma)=\psi_{0}(\sigma)\) the initial displacement of the water. Notably, the direct problem has previously been solved for power-shaped bathymetries (see, for instance, [Garayshin et al. (2016)], [Didenkulova & Pelinovsky (2011)]) and the inverse problem in the narrow case of a plane beach [Rybkin et al. (2023)].
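Before deriving the shoreline equation, it is useful to note that the non-breaking requirement above can be monitored numerically. The sketch below is our own restatement of that condition (the Jacobian criterion is not spelled out in the text): it evaluates the CG variables of (3.1) pointwise and the Jacobian determinant of the map \((x,t)\mapsto(\sigma,\tau)\), whose change of sign would indicate wave breaking.

```python
import numpy as np

def cg_variables(x, t, eta, u):
    """Pointwise Carrier-Greenspan variables of eq. (3.1)."""
    sigma = x + eta
    tau = t - u
    phi = u
    psi = eta + 0.5 * u**2
    return sigma, tau, phi, psi

def breaking_indicator(x, t, eta_xt, u_xt):
    """
    Determinant of d(sigma, tau)/d(x, t) for solution arrays eta_xt, u_xt of
    shape (len(t), len(x)).  The CG map stays invertible (non-breaking waves)
    as long as this determinant keeps a single sign on the wetted region.
    """
    deta_dx = np.gradient(eta_xt, x, axis=1)
    deta_dt = np.gradient(eta_xt, t, axis=0)
    du_dx = np.gradient(u_xt, x, axis=1)
    du_dt = np.gradient(u_xt, t, axis=0)
    # sigma = x + eta, tau = t - u
    return (1.0 + deta_dx) * (1.0 - du_dt) + deta_dt * du_dx
```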
Here the direct problem is solved both analytically, as follows in this section, and numerically, as can be seen in Section 6, to ensure that propagation of the wave is being accounted for as described in [Satake et al. (1987)]. Since the energy at the shoreline can be computed from the movement of the shoreline \(x_{0}(t)\), the shoreline equation allows us to easily solve the inverse problem and recover \(\eta_{0}(x)\) after converting back into physical space. We start with the bounded analytical solution to the initial value problem (3.3, 3.4), which is given in [Rybkin et al. (2021)]: \[\psi(\tau,\sigma)=\sigma^{-\frac{1}{2m}}\int_{0}^{\infty}2k\left(\int_{0}^{ \infty}\psi_{0}(s)s^{\frac{1}{2m}}J_{\frac{1}{m}}(2k\sqrt{s})\ \mathrm{d}s\right)\cos\left(\sqrt{\frac{m}{m+1}}k\tau\right)J_{\frac{1}{m}}(2 k\sqrt{\sigma})\ \mathrm{d}k, \tag{4.1}\] where \(J_{\frac{1}{m}}\) is the Bessel function of the first kind of order \(1/m\) and \(\Gamma(z)\) is the gamma function. Since \(J_{\nu}(z)=z^{\nu}2^{-\nu}/\Gamma\left(\nu+1\right)+o(1)\) as \(z\to+0\), we obtain \[\psi(\tau,0)=\frac{2}{\Gamma(\frac{1}{m}+1)}\int_{0}^{\infty}k^{\frac{1}{m}+1 }\left(\int_{0}^{\infty}\psi_{0}(s)s^{\frac{1}{2m}}J_{\frac{1}{m}}(2k\sqrt{s} )\ \mathrm{d}s\right)\cos\left(\sqrt{\frac{m}{m+1}}k\tau\right)\ \mathrm{d}k, \tag{4.2}\] which after the substitution \(\lambda=\sqrt{s}\) and \(\widehat{\psi}_{0}(\lambda)=\psi_{0}(\lambda^{2})\) becomes \[\psi(\tau,0)=\frac{4}{\Gamma(\frac{1}{m}+1)}\int_{0}^{\infty}k^{\frac{1}{m}+1} \left(\int_{0}^{\infty}\widehat{\psi}_{0}(\lambda)^{\frac{1}{m}+1}J_{\frac{1}{ m}}(2k\lambda)\ \mathrm{d}\lambda\right)\cos\left(\sqrt{\frac{m}{m+1}}k\tau\right)\ \mathrm{d}k. \tag{4.3}\] Now, define the modified Hankel transform as \[\left[\widehat{\mathcal{H}}_{\nu}f(r)\right](k)=k^{-\nu}\int_{0}^{\infty}f(r)J _{\nu}(kr)r^{\nu+1}\ \mathrm{d}r. \tag{4.4}\] Note that \[\left[\widehat{\mathcal{H}}_{\nu}f(r)\right](\lambda)=\lambda^{-\nu}\left[ \mathcal{H}_{\nu}r^{\nu}f(r)\right](\lambda), \tag{4.5}\] where \(\mathcal{H}_{\nu}\) is the standard Hankel transform, and so we observe that \(\widehat{\mathcal{H}}_{\nu}\) is self-inverse. So, applying (4.4) to (4.3) we have \[\psi(\tau,0)=\frac{2^{2+\frac{1}{m}}}{\Gamma(\frac{1}{m}+1)}\int_{0}^{\infty} k^{1+\frac{3}{m}}\left[\widehat{\mathcal{H}}_{\frac{1}{m}}\widehat{\psi}_{0}( \lambda)\right](2k)\cos\left(2\pi\xi k\right)\ \mathrm{d}k, \tag{4.6}\] where \(\xi=(\sqrt{m/(m+1)}\tau)/2\pi=:q^{-1}\tau\). Let \(g(k)=k^{1+\frac{1}{m}}\left[\widehat{\mathcal{H}}_{\frac{1}{m}}\widehat{\psi}_ {0}(\lambda)\right](2k)\) and denote \[\left[\mathcal{F}_{c}f(t)\right](\xi)=\int_{0}^{\infty}f(t)\cos(2\pi\xi t)\ \mathrm{d}t, \tag{4.7}\] as the Fourier cosine transform. Then we obtain \[\psi(\tau,0)=\frac{2^{2+\frac{1}{m}}}{\Gamma(\frac{1}{m}+1)}\int_{0}^{\infty} g(k)\cos(2\pi\xi k)\ \mathrm{d}k=\frac{2^{2+\frac{1}{m}}}{\Gamma(\frac{1}{m}+1)}\left[ \mathcal{F}_{c}g(k)\right](\xi)=\frac{2^{2+\frac{1}{m}}}{\Gamma(\frac{1}{m}+1 )}\left[\mathcal{F}_{c}k^{1+\frac{1}{2m}}\left[\widehat{\mathcal{H}}_{\frac{1 }{m}}\widehat{\psi}_{0}(\lambda)\right](2k)\right](\xi). \tag{4.8}\] So (4.8) allows us to solve the direct problem. 
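For completeness, (4.2) can also be evaluated with naive quadrature once \(\psi_{0}\) is specified. The sketch below is ours; the truncation limits and grid sizes are ad hoc assumptions that are only adequate for a smooth \(\psi_{0}\) localised well inside the integration range, and \(\psi_{0}\) is supplied directly as a function of the hodograph variable \(\sigma\).

```python
import numpy as np
from scipy.special import gamma, jv

def shoreline_energy(psi0, tau, m, s_max=20.0, k_max=40.0, n_s=2000, n_k=4000):
    """
    Naive quadrature of eq. (4.2): psi(tau, 0) from the initial condition psi_0(sigma).
    psi0 is a callable, tau an array of hodograph times; the truncation limits
    s_max and k_max are ad hoc choices.
    """
    s = np.linspace(0.0, s_max, n_s)
    k = np.linspace(0.0, k_max, n_k)
    # inner Hankel-type transform over s, evaluated for every k
    kernel = jv(1.0 / m, 2.0 * np.outer(k, np.sqrt(s)))            # shape (n_k, n_s)
    inner = np.trapz(psi0(s) * s**(1.0 / (2.0 * m)) * kernel, s, axis=1)
    # outer cosine transform over k
    c = np.sqrt(m / (m + 1.0))
    integrand = (k**(1.0 / m + 1.0) * inner)[:, None] * np.cos(c * np.outer(k, np.atleast_1d(tau)))
    return (2.0 / gamma(1.0 / m + 1.0)) * np.trapz(integrand, k, axis=0)

# example: small Gaussian initial displacement (psi_0 ~ eta_0 for small amplitudes)
# psi_tau0 = shoreline_energy(lambda s: 5e-3 * np.exp(-(s - 3.0)**2), np.linspace(0, 20, 200), m=2)
```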
For that one would need to find \(\psi_{0}(\sigma)\) from the Carrier-Greenspan transform as \[\eta_{0}\left(x\right)=\psi_{0}\left(\sigma\right),\quad\sigma=x+\eta_{0} \left(x\right), \tag{4.9}\] then compute two integral transforms, and finally return to the \(\left(x,t\right)\) space using the inverse CG transform, which at the shoreline becomes \[\psi\left(\tau,0\right)=-x_{0}\left(t\right)+\dot{x}_{0}\left(t\right)^{2}/2, \quad\tau=t-\dot{x}_{0}\left(t\right). \tag{4.10}\] ### Solution to the Inverse problem In this section we invert the transform given in (4.8) in order to solve the inverse problem. Applying the inverse Fourier cosine transform to (4.8) we obtain \[\frac{\Gamma(\frac{1}{m}+1)}{2^{2+\frac{1}{m}}}\psi(q\xi,0) =\left[\mathcal{F}_{c}g(k)\right](\xi) \tag{4.11}\] \[\frac{\Gamma(\frac{1}{m}+1)}{2^{2+\frac{1}{m}}}\left[\mathcal{F} _{c}^{-1}\psi(q\xi,0)\right](k) =k^{1+\frac{1}{m}}\left[\widehat{\mathcal{H}}_{\frac{1}{m}} \widehat{\psi}_{0}(\lambda)\right](2k)\] \[\frac{\Gamma(\frac{1}{m}+1)}{2^{2+\frac{1}{m}}k^{1+\frac{1}{2m}}} \left[\mathcal{F}_{c}^{-1}\psi(q\xi,0)\right](k) =\left[\widehat{\mathcal{H}}_{\frac{1}{m}}\widehat{\psi}_{0}( \lambda)\right](2k).\] Applying the inverse Hankel transform and utilising the identity [Gradshteyn & Ryzhik (2007)] \[J_{\nu}(z)=\frac{2\left(\frac{z}{2}\right)^{\nu}}{\Gamma\left(\nu+\frac{1}{2} \right)\sqrt{\pi}}\int_{0}^{1}\cos zs(1-s^{2})^{\nu-\frac{1}{2}}\ \mathrm{d}s\quad\text{for}\ z>0,\ \nu>-\frac{1}{2}, \tag{4.12}\] we obtain \[\widehat{\psi}_{0}(\lambda)=2\Gamma(1+1/m)\lambda^{-1/m}\int_{0}^{ \infty}k^{-1/m}\int_{0}^{\infty}\psi(q\xi,0)\cos 2\pi\xi k\ \mathrm{d}\xi J_{1/m}(2k \lambda)\ \mathrm{d}(2k)\\ =\frac{8\Gamma(1+1/m)}{\sqrt{\pi}\Gamma(1/m+1/2)}\int_{0}^{\infty }\int_{0}^{\infty}\psi(q\xi,0)\cos(2\pi\xi k)\ \mathrm{d}\xi\int_{0}^{1}\cos(2\pi\lambda s)(1-s^{2})^{1/m-1/2}\ \mathrm{d}s\ \mathrm{d}k\\ =\frac{2\Gamma(1+1/m)}{\lambda\Gamma(1/m+1/2)}\int_{0}^{1}(1-s^ {2})^{1/m-1/2}4\int_{0}^{\infty}\int_{0}^{\infty}\psi(q\xi,0)\cos 2\pi k\xi\ \mathrm{d}\xi\cos(2k \lambda s)\ \mathrm{d}k\ \mathrm{d}s\\ =\frac{2\sqrt{\pi}\Gamma(1+1/m)}{\lambda\Gamma(1/m+1/2)}\int_{0 }^{\lambda/\pi}\left(1-\frac{\pi^{2}}{\lambda^{2}}\xi\right)^{1/m-1/2}4\int_{ 0}^{\infty}\int_{0}^{\infty}\psi(q\xi,0)\cos 2\pi k\xi\ \mathrm{d}\xi\cos 2 \pi k\xi\ \mathrm{d}k\ \mathrm{d}\xi\\ =\frac{2\sqrt{\pi}\Gamma(1+1/m)}{\lambda\Gamma(1/m+1/2)}\int_{0 }^{\lambda/\pi}\left(1-\left(\frac{\pi\xi}{\lambda}\right)^{2}\right)^{1/m-1/2 }\psi(q\xi,0)\ \mathrm{d}\xi. \tag{4.13}\] Upon switching back to variable \(\sigma\) one obtains \[\psi_{0}(\sigma^{2})=\frac{2\sqrt{\pi}\Gamma(1+1/m)}{\sigma^{2}\Gamma(1/m+1/2 )}\int_{0}^{\sigma^{2}/\pi}\left(1-\left(\frac{\pi\xi}{\sigma^{2}}\right)^{2} \right)^{1/m-1/2}\psi(\tau,0)\ \mathrm{d}\xi. \tag{4.14}\] Now denote \[\left[\mathcal{A}_{\alpha}f(x)\right](s)=\int_{0}^{s}\frac{f(x)\ \mathrm{d}x}{(s^{2}-x^{2})^{\alpha}} \tag{4.15}\] as the singular Abel type integral of order \(\alpha\) as seen in [3]. After a straightforward substitution one obtains that \[\psi_{0}(\sigma^{2})=\frac{2\Gamma\left(\frac{m+1}{m}\right)}{\Gamma\left( \frac{1}{m}+\frac{1}{2}\right)\sqrt{\pi}}\sigma^{-\frac{\alpha}{m}}\left[ \mathcal{A}_{1/2-1/m}\psi\left(\frac{qr}{\pi},0\right)\right](\sigma^{2}). 
\tag{4.16}\] Now we can use (4.16) to solve the inverse problem as follows: from the shoreline movement \(x_{0}(t)\) we find \(\psi(\tau,0)=-x_{0}(t)+\dot{x}_{0}(t)^{2}/2\) and \(\tau=t-\dot{x}_{0}(t)\), after that from (4.16) we find \(\psi_{0}(\sigma)\), and finally we find \(\eta_{0}(x)=\psi_{0}(\sigma)\) and \(x=\sigma-\eta_{0}(x)\) ### Some Remarks The transform defined in (4.15) is in fact Erdelyi-Kober fractional integration operator (see for example [21]). This operator is closely connected [10] to the Euler-Poisson-Darboux (EPD) equation, which one can obtain from SWE (2.1) by taking in the CG transform (3.1) \(\sigma^{2}=x+\eta(x,t)\). Moreover, one can use the CG transform used in [11] to obtain the IBVP for the EPD equation, that solves the inverse problem, and after that use the technique laid out in [10] to solve it. The only disadvantage of this approach is that it only applies for \(m>2\), while our method works for any positive \(m\). In [10] Erdelyi claims, that this restriction can be relaxed to any positive \(m\), however we have not investigated that. It is worth noting that (4.15) has an inverse formula for \(\alpha\in(0,1)\)[3], given as \[\left[\mathcal{A}_{\alpha}^{-1}f(x)\right](s)=\frac{2\sin(\alpha\pi)}{\pi} \frac{\mathrm{d}}{\mathrm{d}s}\int_{0}^{s}\frac{xf(x)\ \mathrm{d}x}{(s^{2}-x^{2})^{1-\alpha}}. \tag{4.17}\] Thus, for all \(m>2\) we can invert (4.16) to obtain (after substituting \(s=\sigma^{2}\)) \[\psi(qr/\pi,0)=\frac{\sqrt{\pi}}{\Gamma(1+\frac{1}{m})\Gamma(\frac{1}{2}-\frac {1}{m})}\frac{\mathrm{d}}{\mathrm{d}r}\left[\mathcal{A}_{\frac{m+2}{2m}}s^{ \frac{\alpha}{m}}\psi_{0}(s)\right](r). \tag{4.18}\] This allows to solve the direct problem using one integral operator, rather then composing two Fourier transform for \(m>2\). ### Particular cases In the two most interesting cases, that is the case of the infinite plane beach corresponding to \(m=\infty\) and a parabolic bay for \(m=2\), our solution can be shown to reduce down to particularly nice forms. For \(m=\infty\) we have \(q=2\pi\) and so (4.16) easily simplifies to \[\psi_{0}(\sigma^{2})=\frac{2}{\pi}\left[\mathcal{A}_{\frac{1}{2}}\psi(2r,0) \right](\sigma^{2}). \tag{4.19}\] The substitution \(\sigma^{\prime}=\sigma^{2}\) and \(\tau^{\prime}=\tau/2\) turns (4.19) to the form obtained in [Rybkin et al. (2023)]. For \(m=2\) we have \(q=\pi\sqrt{6}\), and so (4.16) simplifies to \[\psi_{0}(\sigma^{2})=\frac{1}{\sigma^{2}}\left[\mathcal{A}_{0}\psi(\sqrt{6}r,0 )\right](\sigma^{2})=\frac{1}{\sigma^{2}}\int_{0}^{\sigma^{2}}\psi(\sqrt{6}r,0 )\ \mathrm{d}r. \tag{4.20}\] ## 5 Estimate of the shape of the incoming wave In this section we give the exact lower bound for the support of the initial water displacement in terms of the shoreline data. First we remind that for a scalar-valued function \(f:X\to\mathbb{C}\) the support is the set \(\mathrm{supp}\,f=\{x\in X\,|\,f(x)\neq 0\}\). For the case \(m=2\) we have \[\psi_{0}(\sigma^{2})=\sigma^{-2}\int_{0}^{\sigma^{2}}\psi(\sqrt{6}r,0)\ \mathrm{d}r, \tag{5.1}\] and so we deduce that \(\inf\mathrm{supp}\,\psi_{0}(\sigma^{2})=\inf\mathrm{supp}\,\psi\left(\sqrt{6 }r,0\right)\). For \(m>2\) we can use Titchmarsh's convolution theorem, which states (see [Titchmarsh (1926)] for details) that if \[\left(f*g\right)\Big{|}_{x\in(0,a)}=\left(\int_{0}^{x}f(t)g(x-t)\ \mathrm{d}t \right)\Big{|}_{x\in(0,a)}=0, \tag{5.2}\] and \(g(x)>0\) on \((0,a)\), then \(f(x)=0\) almost everywhere on \((0,a)\). 
From (4.18) we have \[\psi\left(\frac{qr}{\pi},0\right)=C(m)\frac{\mathrm{d}}{\mathrm{d}r}\left(\int_{0}^{r}\left(\psi_{0}(s)s^{\frac{m+2}{m}}(r+s)^{-\frac{m+2}{2m}}\right)(r-s)^{-\frac{m+2}{2m}}\ \mathrm{d}s\right), \tag{5.3}\] and so we deduce that \(\inf\mathrm{supp}\,\psi_{0}(s)\leq\inf\mathrm{supp}\,\psi\left(\frac{qr}{\pi},0\right)\). The inverse inequality immediately follows from (4.16), and so combining these results we obtain \(\inf\mathrm{supp}\,\psi_{0}(s)=\inf\mathrm{supp}\,\psi\left(\frac{qr}{\pi},0\right)\). So we can express the lower bound of the support of \(\psi_{0}(\sigma)\). Since \(\psi_{0}(\sigma)=\eta_{0}(x)\) and \(\sigma=x+\eta_{0}(x)\), we can obtain the exact lower bound of the support of \(\eta_{0}(x)\), namely \(\inf\mathrm{supp}\,\eta_{0}(x)=\inf\mathrm{supp}\,\psi_{0}(\sigma)\). In simple language this means that we can tell how far from the shore the displacement is located at the time of the earthquake. ## 6 Numerical Computations In this section we numerically verify our method for recovering \(\eta_{0}\) in cases where \(m\in\{1,2,3,+\infty\}\), that is, for inclined power-shaped bays of different shapes and an infinite sloping beach. In all bathymetries we consider an "\(N\)-wave" \[\eta_{0}(x)=2.5\times 10^{-3}e^{-3.5(x-1.9625)^{2}}-1.25\times 10^{-3}e^{-3.5(x-1.4)^{2}}, \tag{6.1}\] and a Gaussian wave \[\eta_{0}(x)=5\times 10^{-3}e^{-(x-3)^{2}}, \tag{6.2}\] with zero initial velocity. Existing code provided by Rybkin et al. [Rybkin et al. (2021)] was used to generate the shoreline data. We then implemented (4.19) and (4.20) respectively to recover the initial displacements. Comparison of the exact initial wave profiles and those predicted by our model can be seen in Figures 4 and 2. Corresponding shoreline movements can be seen in Figures 5 and 3. It is worth noting that when we consider the same initial displacement for various bathymetries, the amplitude of the shoreline movement decreases as \(m\) increases. It is common for a long tsunami wave to be masked by wind waves of higher frequency. Typically, the tsunami wave length is above 1 kilometre, while wind waves have lengths of 90 to 180 metres. The integral transform we derived cuts off high-frequency oscillations. To demonstrate this we consider a long wave with an added disturbance \[\eta_{0}(x)=5\times 10^{-3}e^{-(x-2)^{2}}+2.5\times 10^{-4}\sin(50x). \tag{6.3}\] Using the same methodology we are again able to recover the initial wave profile (see Fig. 6). We also consider noisy shoreline data and recover a smoother initial displacement (Figures 7 and 8). Figure 2: A comparison of an initial displacement of an \(N\)-wave with the displacement predicted by our model for varying power-shaped bays. The solid black line gives the exact initial displacement and the open circles denote the initial displacement predicted by our model. Figure 3: Estimated vertical shift for an initial \(N\)-wave displacement corresponding to various bathymetries, where \(R(t)=-x_{0}(t)/\alpha\), where \(\alpha=1\). Figure 4: A comparison of an initial displacement of a Gaussian wave with the displacement predicted by our model for varying power-shaped bays. The solid black line gives the exact initial displacement and the open circles denote the initial displacement predicted by our model. Figure 5: Estimated vertical shift for an initial Gaussian wave displacement corresponding to various bathymetries, where \(R(t)=-x_{0}(t)/\alpha\), where \(\alpha=1\).
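For the parabolic bay, the recovery chain used in this section fits in a few lines. The sketch below is our own simplified version, not the code of [Rybkin et al. (2021)]: it assumes uniformly sampled, non-breaking shoreline data, applies eq. (4.10) to obtain \(\psi(\tau,0)\), eq. (4.20) to obtain \(\psi_{0}\), and the relations (4.9) to return to physical variables.

```python
import numpy as np

def recover_eta0_parabolic_bay(t, x0, n_grid=400):
    """
    Recover eta_0(x) from a shoreline record x0(t) for a parabolic bay (m = 2).
    Assumes a non-breaking wave so that tau(t) is (essentially) monotone.
    """
    dx0 = np.gradient(x0, t)                     # shoreline velocity
    tau = t - dx0                                 # eq. (4.10)
    psi_shore = -x0 + 0.5 * dx0**2                # psi(tau, 0), eq. (4.10)

    # psi(sqrt(6) r, 0) as a function of r = tau / sqrt(6)
    r = tau / np.sqrt(6.0)
    order = np.argsort(r)
    r, psi_shore = r[order], psi_shore[order]

    sigma = np.linspace(1e-6, r.max(), n_grid)    # grid in the hodograph variable
    eta0 = np.empty_like(sigma)
    for i, s in enumerate(sigma):
        rr = np.linspace(0.0, s, 200)
        vals = np.interp(rr, r, psi_shore, left=0.0, right=0.0)
        eta0[i] = np.trapz(vals, rr) / s          # psi_0 from eq. (4.20)

    x = sigma - eta0                              # eq. (4.9): sigma = x + eta_0(x)
    return x, eta0
```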
Figure 6: Our model effectively cuts off any high frequency waves, as can be seen with a Gaussian initial wave profile (\(m=\infty\)). Figure 8: The model appears to reduce high frequency noise. Here we took the shoreline to be \(x_{0}(t)=5\times 10^{-3}e^{-3.5(t-1.9625)^{2}}-2.5\times 10^{-3}e^{-3.5(t-1.4)^{2}}+2.5\times 10^{-4}\pi\sin(50\pi t)\) and then computed \(\eta_{0}\) using (4.19), which recovered a relatively smooth curve, as seen above. Figure 7: While our numerical model doesn't remove noise found within the shoreline data, the model is able to cope with the existence of noise and recover its behaviour. In this case, our model begins by taking an analytical shoreline function with noise, given by \(x_{0}(t)=5\times 10^{-3}e^{-3.5(t-1.9625)^{2}}-2.5\times 10^{-3}e^{-3.5(t-1.4)^{2}}+2.5\times 10^{-4}\sin(50t)\). This \(x_{0}(t)\) is utilised to compute \(\eta_{0}\) through the inverse problem. Subsequently, the model verifies the validity of this result by checking that the shoreline obtained from this \(\eta_{0}\) through the direct problem matches the analytical \(x_{0}(t)\) used to begin the computations. (Done for the case \(m=2\).) ## 7 Conclusions We have put forth and solved an inverse problem for tsunami waves in power-shaped bays assuming zero initial velocity. While not considered here, we believe that a similar inverse problem can be treated in the case where the initial velocity is given as a function of the initial displacement, e.g., in the important case where \(u_{0}=-(2\sqrt{x+\eta_{0}}-2\sqrt{x})\). Indeed, our preliminary results show this to be possible in the plane beach bathymetry. We hope to return to this case in a future work. Our results here consider a tsunami wave with a source an arbitrary distance from the shoreline. However, this is a highly idealised situation. In practice dispersion can only be ignored when the wave is close to the shoreline. This suggests a more practical inverse problem where we have a finite bathymetry, that is \(x\leq L\) for \(L>0\), and attempt to recover the wave at \(L\). This is a boundary value problem and the techniques developed in [Antuono & Brocchini (2007)] and [Rybkin et al. (2021)] may be used to derive a shoreline equation. Finally we note that our inversion method can be readily adjusted to data read from a mareograph which is close to the shore. Indeed, using the angle of inclination, the mareograph readings can be converted into a displacement of the shoreline. ## 8 Acknowledgements This work was done as part of the 2023 summer REU program run by Dr. Alexei Rybkin and was supported by NSF grant DMS-2009980. Dr. Efim Pelinovsky acknowledges support from RSF 22-17-00153. Oleksandr Bobrovnikov acknowledges support from Alaska EPSCoR NSF award #OIA-1757348 and DMS-2009980. We also thank Dr. Ed Bueler for his valuable discussions of the problem with us. We also thank the UAF DMS for hosting us.
2301.00304
Sample-Efficient Unsupervised Domain Adaptation of Speech Recognition Systems A case study for Modern Greek
Modern speech recognition systems exhibit rapid performance degradation under domain shift. This issue is especially prevalent in data-scarce settings, such as low-resource languages, where diversity of training data is limited. In this work we propose M2DS2, a simple and sample-efficient finetuning strategy for large pretrained speech models, based on mixed source and target domain self-supervision. We find that including source domain self-supervision stabilizes training and avoids mode collapse of the latent representations. For evaluation, we collect HParl, a $120$ hour speech corpus for Greek, consisting of plenary sessions in the Greek Parliament. We merge HParl with two popular Greek corpora to create GREC-MD, a test-bed for multi-domain evaluation of Greek ASR systems. In our experiments we find that, while other Unsupervised Domain Adaptation baselines fail in this resource-constrained environment, M2DS2 yields significant improvements for cross-domain adaptation, even when only a few hours of in-domain audio are available. When we relax the problem to a weakly supervised setting, we find that independent adaptation of the audio using M2DS2 and of the language using simple LM augmentation techniques is particularly effective, yielding word error rates comparable to the fully supervised baselines.
Georgios Paraskevopoulos, Theodoros Kouzelis, Georgios Rouvalis, Athanasios Katsamanis, Vassilis Katsouros, Alexandros Potamianos
2022-12-31T22:57:30Z
http://arxiv.org/abs/2301.00304v1
# Sample-Efficient Unsupervised Domain Adaptation ###### Abstract Modern speech recognition systems exhibits rapid performance degradation under domain shift. This issue is especially prevalent in data-scarce settings, such as low-resource languages, where diversity of training data is limited. In this work we propose M2DS2, a simple and sample-efficient finetuning strategy for large pretrained speech models, based on mixed source and target domain self-supervision. We find that including source domain self-supervision stabilizes training and avoids mode collapse of the latent representations. For evaluation, we collect HParl, a \(120\) hour speech corpus for Greek, consisting of plenary sessions in the Greek Parliament. We merge HParl with two popular Greek corpora to create GREC-MD, a test-bed for multi-domain evaluation of Greek ASR systems. In our experiments we find that, while other Unsupervised Domain Adaptation baselines fail in this resource-constrained environment, M2DS2 yields significant improvements for cross-domain adaptation, even when a only a few hours of in-domain audio are available. When we relax the problem in a weakly supervised setting, we find that independent adaptation for audio using M2DS2 and language using simple LM augmentation techniques is particularly effective, yielding word error rates comparable to the fully supervised baselines. Unsupervised Domain Adaptation, Automatic Speech Recognition, Multi-Domain Evaluation, Greek Speech ## I Introduction Automatic Speech recognition (ASR) models have matured to the point where they can enable commercial, real-world applications, e.g., voice assistants, dictation systems, etc., thus being one of machine learning's success stories. However, the performance of ASR systems rapidly deteriorates when the test data domain differs significantly from the training data. Domain mismatches can be caused by differences in the recording conditions, such as environmental noise, room reverberation, speaker and accent variability, or shifts in the target vocabulary. These issues are extenuated in the case of low-resource languages, where diversity in the training data is limited due to poor availability of high-quality transcribed audio. Therefore, specialized domain adaptation approaches need to be employed when operating under domain-shift. Unsupervised Domain Adaptation (UDA) methods are of special interest, as they do not rely on expensive annotation of domain-specific data for supervised in-domain training. In contrast to supervised approaches, where the existence of labeled data would allow to train domain-specific models, UDA methods aim to leverage data in the absense of labels to improve system performance in the domain of interest [1, 2]. In the context of speech recognition the importance of UDA is extenuated, as the transcription and alignment process is especially expensive and time-consuming. Adaptation methods have been explored since the early days of ASR, at different levels of the system and different deployment settings [3]. UDA has been used to improve the robustness of ASR on a variety of recording conditions including far-field speech, environmental noise and reverberation [4, 5, 6]. Furthermore, UDA has been used for speaker adaptation, and to improve performance under speaker, gender and accent variability [7, 8]. 
UDA has also been employed for multilingual and cross-lingual ASR, in order to improve ASR models for low-resource languages [9], adapt to different dialects [10], and even train speech recognition systems for endangered languages [11]. Classical speech adaptation techniques involve feature-based techniques, e.g., speaker normalization [12], feature-based approaches [13, 14, 15], or multi-condition training [16]. Generally, traditional approaches require some knowledge about the target domain, and the domain mismatch, e.g., regarding the noise and reverberation variability [17], and require specific engineering for each adaptation scenario. Modern ASR pipelines, increasingly rely on end-to-end neural networks, e.g., [18, 19], or large pretrained models with self-supervised objectives [20, 21]. The key approaches employed for UDA of end-to-end ASR models can be grouped in three categories, namely, teacher-student learning [10], domain adversarial training [22], and target domain self-supervision [23]. The benefit of these techniques is that they do not require any special knowledge about the source or the target domain. This makes end-to-end UDA approaches versatile and able to be utilized in a larger array of adaptation scenarios. In particular, adaptation through self-supervision has been shown to be a robust, simple and efficient technique for adaptation of state-of-the-art speech models [24]. Here, we leverage in-domain self-supervision to propose the Mixed Multi-Domain Self-Supervision (M2DS2) finetuning strategy, enabling sample-efficient domain adaptation of wav2vec2 [20] based speech recognition models, even when available in-domain data are scarce. Our key contributions are organized as follows: 1. Inspired by recent advances on UDA for Natural Language Processing systems [45], we propose a finetuning strategy for speech models, where the self-supervised objective is based on a contrastive loss in Section III. Contrary to prior works, who leverage only in-domain self-supervision, we find that in this contrastive setting this leads to mode-collapse of the latent representations, and mixed source and target domain self-supervision is essential. We demonstrate this empirically in Section VII-B. 2. We collect and curate HParl, the largest publicly available1 speech corpus for Greek, collected from plenary sessions in the Greek Parliament between 2018 and 2022. We establish a data collection, pre-processing and alignment pipeline that can be used for continuous data integration, as the parliamentary proceedings get regularly uploaded. We provide a detailed description of our data collection process and the dataset statistics in Section IV-A. HParl is merged in Section IV with two popular Greek corpora (Logotyprogratia and CommonVoice) to create GREC-MD, a testbed for multi-domain evaluation of ASR systems in Greek. Footnote 1: We plan to release this version of HParl under the CC BY-NC 4.0 license upon publication. The other corpora used in this work are available through their respective distributors. 3. We demonstrate that, while other baselines fail at UDA in our resource-constrained setting, M2DS2 can improve model performance in the target domain in multiple adaptation scenarios in Section VII. Special emphasis is given in the sample efficiency of our approach in Section VII-A, where we demonstrate successful adaptation even when we reduce the available in-domain data. 4. 
When we relax the problem to a weakly supervised adaptation setting, where some in-domain text is available but the pairing between audio and text is unknown, we find that M2DS2 can be effectively combined with simple N-gram adaptation techniques to get comparable performance with the fully supervised baseline in Section VIII. Furthermore we find that a simple text augmentation approach, based on perplexity filtering of a large corpus can produce strong adaptation results, even for small amounts of in-domain text. Additionally, we provide a formulation of the UDA problem for ASR in Section II-A and link prior works to this formulation in Sections II-B, II-C and II-D. We provide detailed experimental settings for reproducibility in Section V, and an upper-bound estimation for UDA performance with fully supervised finetuning in Section VI. ## II Background We start by formally defining the Unsupervised Domain Adaptation (UDA) problem. Initially, we formulate the problem in a classification setting and then we extend it for speech recognition. We then provide an overview of different adaptation approaches in the literature, and link each approach to the UDA problem formulation. Table I presents a summary of the key adaptation settings and applications that are explored in the literature. We see, that a relatively small amount of methods, and their variants, is used to address multiple real-world ASR problems, for example, cross-lingual, accent, speaker and noise adaptation. Furthermore, while the majority of the works focus on the English language, there is an effort to explore other popular languages, e.g., Mandarin, and under-resourced languages, e.g., Ainu, Somali etc. ### _Problem Definition_ Formally, the problem of UDA can be defined as follows. Let \(X\subseteq\mathbb{R}^{n}\) be a real-valued space that consists of \(n\)-dimentional feature vectors \(x\in X\), and \(Y\) a finite set of labels \(y\in Y\), i.e., \(Y=\{1,2,\ldots,L\}\). Furthermore, assume two different distributions, i.e., the source domain distribution \(\mathcal{S}(x,y)\) and the target domain distribution \(\mathcal{T}(x,y)\), defined on the cartesian product \(X\times Y\). The goal is to train a model that learns a mapping between feature vectors \(x_{\mathcal{T}}\) to their respective labels \(y_{\mathcal{T}}\) for samples drawn from the target distribution \((x_{\mathcal{T}},y_{\mathcal{T}})\sim\mathcal{T}\). At training time we have access to samples from the source distribution \(\mathcal{S}(x,y)\) and the marginalized target distribution \(\mathcal{T}(x)\), i.e., no target labels are provided. We define the training dataset \(D\) as the concatenation of the source and target training sets, \(D=(D_{S},D_{T})\). \(D_{S}\) and \(D_{T}\) are defined as sequences of tuples, i.e., \[\begin{split} D_{S}&=\{(x_{i},y_{i})\,|\,(x_{i},y _{i})\sim\mathcal{S}(x,y),\,1\leq i\leq N\}\\ D_{T}&=\{(x_{i},\emptyset)\,|\,x_{i}\sim\mathcal{T} (x),\,1\leq i\leq M\},\end{split} \tag{1}\] where we draw \(N\) samples from \(\mathcal{S}(x,y)\) and \(M\) samples from \(\mathcal{T}(x)\). 
Finally, we augment tuples in \(D\) with a domain indicator function: \[\begin{split} D&=\{(x_{i},y^{\prime}_{i},\mathbb{1 }_{i})\,|\,1\leq i\leq N+M\}\\ \mathbb{1}_{i}&=\begin{cases}0&\text{if}\;\;x_{i} \sim\mathcal{S}(x),\\ 1&\text{if}\;\;x_{i}\sim\mathcal{T}(x).\end{cases}\\ y^{\prime}_{i}&=\begin{cases}y_{i}&\text{if}\;\;x_{i} \sim\mathcal{S}(x),\\ \emptyset&\text{if}\;\;x_{i}\sim\mathcal{T}(x).\end{cases}\end{split} \tag{2}\] #### Ii-A1 Unsupervised (Acoustic) Adaptation for ASR The above definition can be directly extended in the case of speech recognition, with some modifications. In detail, we modify the feature space \(X\), to be the set of (finite) sequences of real-valued feature vectors \((x_{k})_{k\in\mathbb{N}\setminus\{\infty\}}\in X\subseteq(\mathbb{R}^{n})^{*}\). Furthermore, the label space \(Y\) is modified to be the set of sequences \((y_{m})_{m\in\mathbb{N}\setminus\{\infty\}}\), where \(Y=(\{1,2,\ldots,L\})^{*}\) contains finite-length sequences over a finite lexicon. For CTC training we make the assumption that \(k>m\) for any sample \((x_{k},y_{m})\), i.e., feature sequences are longer than their respective label sequences [46]. The rest of the definitions need no modifications. #### Ii-A2 Unsupervised (Language) Adaptation for ASR Adaptation for ASR systems can also be performed at the language level, i.e., the label space. In this setting, we assume that the target domain samples are drawn from the marginalized target distribution \(\mathcal{T}(y)\). The target dataset \(D_{T}\) now consists of tuples in the form \((\emptyset,y_{i})\), where \(y_{i}\) is the label word sequence \((y_{m})_{m\in\mathbb{N}\setminus\{\infty\}}\) for the \(i\)-th sample. #### Ii-A3 Weakly supervised Adaptation for ASR The last setting we explore is the case were both audio and language in-domain samples are available, but the mapping between them is unknown. This situation can be encountered in real-world settings, e.g., in the case in-domain audio and text are collected independently. For example consider the case where audio clips from news casts are collected, along with contemporary newspaper articles. Another example is the case where long audio clips alongside with transcriptions are available, but no fine-grained time alignments2. In this case the target domain samples are drawn independently from the marginalized distributions \(\mathcal{T}(x)\) and \(\mathcal{T}(y)\), and the target dataset \(D_{T}\) consists of tuples in the form \((x_{i},\emptyset)\) and \((\emptyset,y_{i})\). Footnote 2: While a fully supervised in-domain dataset can be constructed in this case using long / forced alignment methods, this is not a focal point for the experimental part of this work. ### _Teacher-Student Models_ Teacher-Student learning or self-training, is one of the earliest methods in semi-supervised learning [47, 48, 49]. The key idea is to reduce the problem of unsupervised learning of the task at hand in the target domain to a supervised one. The general methodology is to train a teacher model \(g_{S}\) using the labeled data in the source domain \(D_{S}\), and then use this for inference on the target domain to produce pseudolabels \(\hat{y}_{i}=g_{S}(x_{i}),\,x_{i}\sim\mathcal{T}(x)\). The target domain dataset \(D_{T}\) is augmented with these silver labels, to contain tuples \((x_{i},\hat{y}_{i})\). Finally, a student model \(g_{T}\) is trained in a supervised fashion, using the augmented \(D_{T}\) or a combination of \(D_{S}\) and \(D_{T}\). 
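The teacher-student recipe just outlined reduces to a short loop. In the sketch below, the `transcribe` and `train_fn` interfaces, the number of rounds and the confidence threshold are illustrative placeholders of ours rather than details of any particular cited system; the confidence filter corresponds to the pseudolabel filtering strategies discussed in this section.

```python
def self_training(teacher, source_data, target_audio, train_fn, n_rounds=3, conf_threshold=0.9):
    """
    Generic teacher-student (pseudolabeling) loop for UDA of ASR.
    `model.transcribe(x)` is assumed to return (hypothesis, confidence);
    `train_fn(labeled_data)` is assumed to return a newly trained model.
    `source_data` and the pseudo-labeled target data are plain lists of (x, y) pairs.
    """
    model = teacher
    for _ in range(n_rounds):
        pseudo_labeled = []
        for x in target_audio:
            hyp, conf = model.transcribe(x)
            if conf >= conf_threshold:          # discard untrustworthy hypotheses
                pseudo_labeled.append((x, hyp))
        # the student is trained on source data plus the filtered pseudo-labeled target data
        model = train_fn(source_data + pseudo_labeled)
    return model
```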
This process is usually repeated, with the student model serving as the teacher model for the next iteration, until no further improvement is observed. More recently, soft target Teacher-Student learning has been explored for ASR [26, 31, 50], where the KL divergence between the teacher and student output label distributions is used as the loss function. Being trained only on the source domain data the teacher model is susceptible to error propagation. Filtering is a commonly used technique to achieve the right balance between the size of the target domain used for training the student model and the noise in the pseudolabels. Confidence scoring based on the likelihood is usually applied, discarding those utterances for which the hypothesized labels are untrustworthy [51]. In [25] dropout is used to measure the model uncertainty. The agreement between model predictions with and without dropout are used for confidence scoring. In [23] a multi-task training objective with a confidence loss is applied to minimise the binary cross entropy between the estimated confidence and the binary target sequence. In order to learn more robust and generalizable features from the teacher model, Noisy Student Training (NST) has been proposed in [52]. The teacher models generates pseudolabels for \(D_{T}\) while the student models are trained on a heavily augmented version of \(D_{T}\)[52]. In [52, 53] the augmentation of the input target data is performed with SpecAugment [54], while in [29] a spectrum frequency augmentation is performed. In [4] Teacher-Student learning with soft labels is introduced for ASR to tackle noisy, far-field, and children speech. In [5], this approach is extended for LF-MMI based models and used for noisy, far-field and bandwidth adaptation. In [29] a weighted sum of hard and soft target cross entropy losses is used for Japanese dialects and children speech adaptation. Ramabhadran et al. [31] propose a self-adaptive distillation, and a method for distilling from multiple teachers that is applied across several multilingual ASR systems for different language groups. A comparison between soft and hard targets for RNN-T models [19] showed that soft targets perform better when both the teacher and student models have the same architecture. Otherwise, hard targets are superior [50]. ### _Domain Adversarial Training_ Domain Adversarial Training (DAT) was initially introduced for image classification [55]. The key idea is to train a model that learns deep features that solve the task at hand in the source domain, while being invariant with respect to the domain shift. Concretely, the model is trained end-to-end using a combination of the supervised task loss \(L_{t}\), learned on \(D_{S}\), and the domain discrimination loss \(L_{a}\), i.e., \(L=L_{t}-\alpha L_{a}\). The loss \(L_{a}\) is binary cross-entropy, trained for domain discrimination using the tuples \((x_{i},\mathbbm{1}_{i})\). Notice the \(-\) sign in the loss indicates adversarial learning, i.e., the model should learn features that cannot discriminate between domains, while solving the task. In [6] DAT is employed for noise adaptation on a noise corrupted version of WSJ [56] as the target dataset. Using the Aurora-4 [57] dataset which has labels associated to the noise type, Serdyuk et al. [33] train an adversarial noise classifier. In [8] and [39] DAT is utilized for accent adaptation for Mandarin and English respectively. Anoop C.S. et al. 
[9] propose DAT, to address the scarcity of data in low-resource languages which share a common acoustic space with a high-resource language, namely Sanskrit and Hindi. They empirically demonstrate the effectiveness of adversarial training, presenting experiments with and without the reversal of the domain classification loss. ### _Leveraging In-domain Self-supervision_ These lines of work have roots in Natural Language Processing tasks [45, 58], and explore domain adaptation by leveraging the in-domain data \(D_{T}\) for self-supervised learning. The core focus is domain adaptation of large pre-trained models, e.g., [59], and self-supervision is achieved by use of the pre-training self-supervised loss \(L_{s}\). This process can either take part in stages, via continual pre-training [58], or by constructing a multitask objective \(L=L_{t}+\alpha L_{s}\), as in [45]. Continual Pre-Training (CPT) has been explored for adaptation of ASR models. Robust wav2vec2 [24] explores the effectiveness of CPT for domain adaptation, indicating the importance of utilizing unlabeled in-domain data. In CASTLE [42], CPT is combined with an online pseudolabeling strategy for domain adaptation of wav2vec2. Cross-dataset evaluation for popular English speech corpora indicates that CPT helps to reduce the error rate in the target domain. In [43] and [11] CPT is utilized for cross-lingual adaptation of wav2vec2 for Korean and Ainu respectively. Notably for Ainu, which is an endangered language, CPT has resulted in significant system improvement. DeHaven and Jayadev [44] compare CPT and pseudolabeling for adapting XLSR-53 to four under-resourced languages, i.e., Georgian, Somali, Tagalog and Farsi. They find that both approaches yield similar improvements, with CPT being the more computationally efficient approach. While CPT yields significant improvements in a variety of tasks, one common theme in these works is the assumption of hundreds or thousands of hours of available in-domain data, mostly from online resources, e.g., YouTube. This can be infeasible when we consider more niche adaptation settings, or possible privacy concerns, e.g., how would one collect \(1000\) hours of psychotherapy sessions in Greek? In this work, we explore domain adaptation methods in a more resource-constrained environment. ## III Domain Adaptation Through Multi-Domain Self-Supervision The proposed approach is based on end-to-end adaptation of a large pre-trained speech model during the finetuning phase, by including in-domain self-supervision. We extend UDALM [45], that has shown promise for NLP tasks, for adaptation of wav2vec2 based acoustic models, and specifically XLSR. We focus on the problem of UDA in the context of a low-resource language, i.e., Greek. The key finding of our exploration is that straight-forward extension of UDALM, i.e., by using only target domain self-supervision, underperforms in this setting, and use of both source and target domain data is essential for successful adaptation. In this section, first, we will present a quick overview of the XLSR-53 training procedure, and then we are going to outline the proposed domain adaptation approach, which is shown in Fig. 1. ### _Xlsr-53_ XLSR-53 [21] is a massively pre-trained speech model, trained on \(56,000\) hours of multilingual speech, covering \(53\) languages. The model is based on wav2vec2 [20], which is composed of a multi-layer convolutional feature encoder, that Fig. 1: Target-domain adaptation through self-supervision. 
In the left we see the general pre-training stage of XLSR-53 using the self-supervised loss \(L_{s}\). General pre-training is performed on \(56,000\) hours of audio in \(53\) languages. In the right, we see the proposed domain-adaptive finetuning stage, where the speech recognition task is learned using transcribed source domain data, while adaptation to the target domain is performed by including the self-supervised loss over (audio-only) source and target domain data. The convolutional feature encoder extracts audio features \(z_{t}\) from the raw audio, and a transformer context encoder maps the latent audio features to the output hidden states \(c_{t}\). Each latent feature \(z_{t}\) corresponds to \(25\) ms of audio with stride \(20\) ms. A contrastive objective \(L_{c}\) is used for pre-training. For this, product quantization [60] is applied to the features \(z_{t}\), and then a discrete approximation of \(z_{t}\) is obtained by sampling from a Gumbel-softmax distribution [61], to obtain discrete code vectors \(q_{t}\), organized into \(G=2\) codebooks with \(V=320\) vocabulary entries each. The contrastive loss aims to identify the correct code vector for a given time step, among a set of distractors \(Q_{t}\), obtained through negative sampling from other timesteps. To avoid mode collapse, a diversity loss \(L_{d}\) is included by maximizing the entropy over the averaged softmax distribution over the code vector entries \(\bar{p}_{g}\). The total loss is: \[L_{s}=\underbrace{-\log\frac{e^{\mathrm{sim}(c_{t},q_{t})}}{\sum_{\tilde{q}\sim Q_{t}}e^{\mathrm{sim}(c_{t},\tilde{q})}}}_{\text{Contrastive Loss}}\overbrace{-\frac{1}{GV}\sum_{g=1}^{G}\sum_{v=1}^{V}\bar{p}_{g,v}\log(\bar{p}_{g,v})}^{\text{Diversity Loss}} \tag{3}\] where \(\mathrm{sim}(\cdot,\cdot)\) denotes the (temperature-scaled) similarity between the context representation and a candidate code vector. ### _Domain Adaptive Finetuning for Contrastive Learning of Speech Representations_ Fig. 1 shows the proposed finetuning process. The key intuition is that we want the model to synergistically learn the task at hand (in our case ASR), while being adapted to the target domain by in-domain self-supervision. In the left we see the general pre-training stage of XLSR-53, which is pre-trained on \(56\)K hours of multilingual audio corpora using the contrastive pre-training objective. In the right we see the proposed finetuning stage, which is inspired by [45]. During finetuning we form a mixed objective function: \[L=L_{CTC}(x_{s},y_{s})+\alpha L_{s}(x_{s})+\beta L_{s}(x_{t}), \tag{4}\] where \((x_{s},y_{s})\sim\mathcal{S}(x,y)\), \(x_{t}\sim\mathcal{T}(x)\), \(L_{CTC}\) is the CTC objective function, optimized using transcribed source domain data, and \(L_{s}\) is the contrastive loss from Eq. (3). We scale the contribution of each term using hyper-parameters \(\alpha\) and \(\beta\). Note that contrary to [45], who use only in-domain self-supervision, we leverage both source and target domain samples for the mixed self-supervision. We find that this is essential in our case to avoid mode collapse, i.e., the model using only a few of the available discrete code vectors. Simultaneous self-supervision on both the source and target data alleviates mode collapse by anchoring the target code vector space to have a similar structure as the source code vectors. Hence we refer to this approach as Mixed Multi-Domain Self-Supervision (M2DS2). ## IV The GREC-MD Corpus For our experiments we compose a speech corpus for the Greek language that is suitable for multi- and cross-domain evaluation. The GREC-MD corpus contains \(206\) hours of Greek speech. Audio is segmented into individual utterances and each utterance is paired with its corresponding transcription.
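In practice, Eq. (4) amounts to a one-line change in a standard CTC finetuning step. The sketch below is not the authors' released implementation: `model.ctc_loss` and `model.contrastive_loss` are assumed hooks standing in for a wav2vec2-style model that exposes its supervised and pre-training objectives, and the default values of `alpha` and `beta` are placeholders rather than values from the paper.

```python
import torch  # assumed framework: any autograd library exposing .backward() would do

def m2ds2_step(model, optimizer, src_batch, tgt_batch, alpha=1.0, beta=1.0):
    """One finetuning step with the mixed objective of Eq. (4)."""
    optimizer.zero_grad()
    # supervised CTC objective on transcribed source-domain audio
    l_ctc = model.ctc_loss(src_batch["audio"], src_batch["labels"])
    # self-supervised contrastive objective on source *and* target audio;
    # keeping the source-domain term is what distinguishes M2DS2 from
    # target-only self-supervision and helps avoid codebook mode collapse
    l_ss_src = model.contrastive_loss(src_batch["audio"])
    l_ss_tgt = model.contrastive_loss(tgt_batch["audio"])
    loss = l_ctc + alpha * l_ss_src + beta * l_ss_tgt
    loss.backward()
    optimizer.step()
    return {"ctc": float(l_ctc), "ss_src": float(l_ss_src), "ss_tgt": float(l_ss_tgt)}
```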
Table II summarizes the included sub-corpora, as well as the train, development and test splits. The dataset is constructed with three core principles in mind: 1. **Data Volume**: We collect the largest publicly available speech recognition corpus for the Greek language, able to scale to hundreds of hours of transcribed audio. 2. **Temporal Relevance**: Language changes over time. We aim at an up-to-date corpus that encompasses the latest terms and topics that appear in daily speech. 3. **Multi-Domain Evaluation:** Single domain evaluation can lead to misleading estimations of the expected performance for ASR models. For example, state-of-the-art ASR models [27] achieve under \(5\%\) Word Error Rate (WER) on Librispeech [62] test sets, but this is an over-estimation of system performance in the field. This is attenuated when considering different acoustic conditions or terminology. We consider multi-domain evaluation essential when developing and deploying real-world ASR models. To satisfy the first two points, we collect data from a public, continuously updated resource, i.e., the Hellenic Parliament Proceedings, where recordings of the parliamentary sessions are regularly uploaded. The benefit of using this resource is the straight-forward collection of a continuously growing, multi-speaker corpus of transcribed audio that is always up-to-date, as the parliamentary discussions revolve around current affairs. We refer to this corpus as HParl. For the multi-domain evaluation, we merge HParl with two publicly available corpora, that have different acoustic and language characteristics. We refer to the merged, multi-domain corpus as GREC-MD. In this Section, we will describe the collection and curation process of HParl, and present the relevant statistics for the experiments. ### _Collection and Curation of HPArl_ Modern technological advances allow for more direct government transparency, through the commodification of storage and internet speeds. In this spirit, the records of plenary sessions of the Hellenic Parliament are made publicly available, for direct access through a webpage3. The available video recordings date back to 2015. For each plenary session, a video recording is uploaded, along with a full transcription that is recorded verbatim, and in real time by the parliament secretaries. For the creation of HPArl, we build a web-crawler that can traverse and download the video recordings, along with the transcriptions from the official website. The collection process is parallelized over multiple threads, and parameterized by a range of dates and, optionally, a target corpus size in GB or in hours. For this version of HPArl, we collect the plenary sessions in four date ranges, as described in Table III. The majority of the collected sessions are from 2019, but we also include sessions from 2018 and 2022 to include coverage of different topics. The individual components of the HPArl curation pipeline are: Audio Pre-processing, Text Pre-processing, Alignment, Post-processing, and dataset Splitting. Footnote 3: [https://www.helleniciparliament.gr/en/](https://www.helleniciparliament.gr/en/) #### Iii-A1 Audio Pre-processing Fig. 2 shows the layout of the Hellenic Parliament Chamber. Plenary sessions mainly take place in this room, or in the secondary House Chamber that has similar setup but is smaller in size. Because of the room and microphone characteristics, the captured audio in the video streams contains reverberation, due to sound reflections. 
We employ a light preprocessing pipeline by passing the input video streams through FFmpeg, and converting them to monophonic, lossless audio format at \(16000\) Hz sampling rate. The resulting audio is not passed through any de-reverberation or speech enhancement software. The resulting audio files have a minimum, average and maximum duration of \(6\) minutes, \(6\) hours and \(16\) hours respectively. #### Iii-A2 Text Pre-processing The text files contain full, word-by-word transcriptions of the speeches and questions asked by members of the audience, as well as extra annotations made by the parliament secretaries. Some annotations are relevant, i.e., the speaker name, while others are plain text descriptions of events happening during the session and need to be filtered out (e.g., "The session is interrupted for a 15 minute break"). We use a rule-based system, based on regular expressions, that filters the unnecessary information, keeping only the transcriptions and the speaker names. The speaker labels are created by transliterating their names and roles from Greek to Greeklish using the "All Greek to Me!" tool [63]. Text is lower-cased and normalized to remove multiple whitespaces. The result is a text file containing the raw transcriptions, and a mapping from speaker labels to their respective text parts. #### Iii-A3 Alignment and Segmentation The primary challenge of exploiting the plenary sessions for ASR purposes is the length of the plenary recordings, as their durations vary from \(6\) minutes to \(16\) hours. However, data samples used to train ASR are generally less than \(30\) seconds long. Computational challenges have limited the length of training utterances for HMM-GMM models [64], and continue to do so in contemporary neural network models. Therefore, we need to segment the sessions into smaller pieces more suitable for ASR training. A second challenge is posed by mismatches between audio and transcripts. Parliamentary proceedings do not fully capture everything that is said during the parliamentary sessions, and do not account for speech disfluencies. In order to obtain smaller, clean segments that are suitable for ASR training, we follow the segmentation procedure proposed by [65]. Initially the raw recordings are segmented into \(\sim 30\) second segments and the transcriptions are split into smaller segments of approximately \(1000\) words called _documents_. Each segment is decoded using a seed acoustic model trained on the Logotypografia corpus [66] and a 4-gram biased LM trained on the corresponding transcription of each recording. The best path transcript of each segment is obtained and paired with the best matching _document_ via TF-IDF similarity. Finally, each hypothesis is aligned with the transcription using Smith-Waterman alignment [67] to select the best matching sub-sequence of words. The above method yields a list of text utterances, with their corresponding start and end times in the source audio files. The procedure yields \(120\) hours of usable segmented utterances out of the original \(303\) hours of raw audio, or a ratio of \(39.6\%\). #### Iii-A4 Post-processing After the segments are extracted, we filter out extremely short segments (less than \(2\) words). Moreover, the iterative alignment algorithm may replace some intermediate words with a <spoken-noise> tag. When this tag is inserted, we match the surrounding text with the raw transcriptions and re-insert the missing words. 
Furthermore, we match each segment to its corresponding speaker label. Segments without a speaker label are discarded. Lastly, speakers are associated with their gender based on name suffixes, using a simple, Greek language-specific rule: speaker names which end in -α, -η, -ω or -ις are classified as female, while the rest as male. We format the segments, speaker and gender mappings in the standard folder structure used by the Kaldi speech recognition toolkit [36]. Fig. 2: Overview of the Hellenic Parliament Chamber. The chamber has an amphitheatrical shape and can accommodate approximately \(400-450\) people. The positions of the key speakers, i.e., the current speaker and the parliament president, are annotated in the image. #### Iii-A5 Data Splitting We provide an official train - development - test split. The development set contains \(3\) plenary sessions, one from 2018, one from 2019 and one from 2022, resulting in \(9\) hours of segmented speech. Similarly, the test set contains one session from each year, resulting in \(11\) hours of segmented speech. The remaining \(99\) hours of segmented speech are assigned to the training set. ### _Including corpora from different domains_ We merge HParl with two publicly available corpora to create GREC-MD for multi-domain evaluation. #### Iv-B1 Common Voice Common Voice (CV) [68] is a crowdsourced, multi-lingual corpus of dictated speech, created by Mozilla. The data collection is performed by use of a web app or an iPhone app. Contributors are presented with a prompt and are asked to read it. The prompts are taken from public domain sources, i.e., books, Wikipedia, user submitted prompts and other public corpora. The maximum prompt length is \(15\) words. A rating system is built into the platform, where contributors can upvote or downvote submitted <audio, transcript> pairs. A pair is considered valid if it receives two upvotes. Speaker-independent train, development and test splits are provided. The dataset is open to the research community, released under a permissive Creative Commons license (CC0). In this work, we use version 9.0 of CV, accessed on April 27, 2022. We keep only the valid utterances, i.e., \(16\) hours of speech from \(325\) contributors (\(19-49\) years old, \(67\%\) male / \(23\%\) female). #### Iv-B2 Logotypografia Logotypografia [66] is one of the first corpora for Large Vocabulary Continuous Speech Recognition in Greek. The dataset contains \(33,136\) newscast utterances, or \(72\) hours of speech. The utterances were collected from \(125\) speakers (\(55\) male, \(70\) female), who were staff of the popular "Eleftherotypia" newspaper in Greece, under varied acoustic conditions. Approximately one third of the utterances were collected in a sound-proof room, one third in a quiet room and the last third in an office room. The average utterance duration is \(7.8\) seconds. The transcriptions contain several speech and non-speech events (e.g., <cough>), lower-cased Greek words and stress marks. Numbers are expanded to full words. We use the whole dataset, and perform light preprocessing of the transcriptions by discarding the annotated events and punctuation. We hence refer to each dataset by the abbreviations: HParl: HP, CommonVoice: CV, Logotypografia: LG. ## V Experimental Settings For our experiments we use the following hyper-parameter settings, unless explicitly stated otherwise. For model training, we use the AdamW optimizer [69] with learning rate \(0.0003\). 
We apply warmup for the first \(10\%\) of the maximum training steps, and a linear learning rate decay after that. Models are finetuned for a maximum of \(10000\) steps. For speech recognition training, we make use of the Connectionist Temporal Classification (CTC) loss [70], optimized using the available transcribed data in each scenario. Validation runs every \(500\) steps on the development set, and early stopping is employed on the development CTC loss with patience \(5\). Batch size is set to \(8\) during finetuning for all scenarios, except for M2DS2. In the case of M2DS2 we create mixed batches of size \(12\), containing \(4\) transcribed source domain samples and \(8\) unlabeled target domain samples, and train for \(10,000\) CTC updates. For memory reasons we split the mixed batches into mini-batches of \(4\) and interleave them during model training. Gradients are accumulated over \(3\) interleaved batches. For the self-supervised objective, we create masks of maximum timestep length \(10\), with masking probability \(0.4\). We weigh the contributions of the source and target domain contrastive objectives, and bring them to the same order of magnitude as the CTC loss, by setting \(\alpha=0.01\) and \(\beta=0.02\). The convolutional feature encoder is kept frozen for all experiments. Our code is based on the huggingface 4 implementation of XLSR. For all experiments we resample the audio files to \(16\) kHz and downsample to single channel audio. We exclude utterances in the training set that are longer than \(12\) seconds. All experiments are run on a single NVIDIA RTX 3090 GPU, with mixed precision training. Footnote 4: [https://huggingface.co/docs/transformers/](https://huggingface.co/docs/transformers/) For the language model training, we create a large corpus for the Greek language using a subset of the Greek part of CC-Net [71] (approximately \(11\) billion tokens) and combine it with \(1.5\) billion tokens from the Greek version of Wikipedia and the Hellenic National Corpus (HNC) [72]. During preprocessing, we remove all punctuation and accents, deduplicate lines and convert all letters to lowercase. We will refer to this corpus as the Generic Greek Corpus (GGC). We train a 4-gram language model on GGC using KenLM [73] and prune bigrams, trigrams and four-grams with counts less than \(3\), \(5\) and \(7\) respectively. We incorporate the n-gram LMs at inference time using the pyctcdecode framework5. We use language model rescoring over a beam search decoder with \(13\) beams. Footnote 5: [https://github.com/kensho-technologies/pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) The evaluation metric is the Word Error Rate (WER) over the target test set. For assessing the adaptation effectiveness we also report the relative WER improvement over the unadapted baseline in appropriate scenarios, which is defined in Eq. (5). We refer to this metric as Relative Adaptation Improvement (RAI) for the rest of this paper: \[RAI=-\frac{WER_{adapted}-WER_{unadapted}}{WER_{unadapted}}\times 100\% \tag{5}\] The minus sign is included so that RAI takes negative values when the adaptation fails, i.e., when \(WER_{unadapted}<WER_{adapted}\). ## VI Supervised In-Domain Training In the first set of experiments, we explore the performance of supervised finetuning of XLSR-53 for each domain. This will give an upper bound estimation for UDA performance. We finetune XLSR-53 on CV, HP and LG (separately) and perform in-domain evaluation on the respective test sets. 
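For concreteness, the following is a minimal sketch of the LM-rescored decoding setup described in Section V, assuming a CTC-finetuned XLSR-53 checkpoint and a binarized 4-gram KenLM file are available; the paths and names here are placeholders rather than the exact scripts used for our experiments.

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pyctcdecode import build_ctcdecoder

# Placeholder paths: a CTC-finetuned XLSR-53 checkpoint and a binarized 4-gram KenLM.
CKPT = "path/to/xlsr53-finetuned-greek"
model = Wav2Vec2ForCTC.from_pretrained(CKPT).eval()
processor = Wav2Vec2Processor.from_pretrained(CKPT)

# Vocabulary ordered by token id, as expected by the decoder.
vocab = [tok for tok, _ in sorted(processor.tokenizer.get_vocab().items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(vocab, kenlm_model_path="ggc_4gram.bin")

def transcribe(waveform_16khz):
    # Frame-level log-probabilities from the acoustic model.
    inputs = processor(waveform_16khz, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        log_probs = torch.log_softmax(model(inputs.input_values).logits, dim=-1)
    # LM-rescored beam search with 13 beams, as in the settings above.
    return decoder.decode(log_probs[0].cpu().numpy(), beam_width=13)
```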
Results are summarized in Table IV. The first row indicates the performance of greedy decoding, while in the second row we report the performance of the beam search decoder, rescored using the scores of the 4-gram GGC language model. We observe that the greedy decoding performance is under \(30\) WER for both HP and CV, while for LG we achieve \(\sim 32\) WER. This makes sense, as LG is the most diverse dataset, with respect to the included acoustic conditions. Furthermore, we observe that the incorporation of a language model results in an impressive WER reduction on CV, followed by HP and then LG. While CV includes relatively simple phrases with common vocabulary, HP and LG contain more specialized terminology. ## VII Unsupervised Domain Adaptation Using In-Domain Audio Here, we evaluate the effectiveness of M2DS2 for UDA. We compare with three baselines: 1. **Source Only Training (SO):** We perform supervised finetuning of XLSR-53 (CTC) using only the source-domain data, and run decoding on the target domain test set. No in-domain data are used for adaptation. 2. **Continual Pre-Training (CPT):** We perform a pre-training phase using the loss in Eq. (3) on the target domain train set, to create adapted versions of XLSR. Pre-training is run for \(20000\) steps with batch size \(4\). Only the audio is used, without transcriptions. The adapted checkpoints are then finetuned using the CTC loss on the source domain transcribed data. Evaluation is performed on the target test set. 3. **Pseudolabeling (PSL):** We finetune XLSR-53 using the source domain data with CTC loss. Then we run inference with the source model, to extract silver transcriptions for the target domain training set. We use the silver transcriptions for supervised finetuning on the target domain. In Table V we compare M2DS2 with the SO, CPT and PSL baselines for six adaptation scenarios, i.e., cross-dataset evaluation between the three datasets in GREC-MD. The left half corresponds to greedy decoding, while for the right half we use the 4-gram LM trained on GGC. First, we observe the SO model performance. The SO models are the finetuned models from Table IV, evaluated in out-of-domain settings. We see that out-of-domain evaluation results in a large performance hit, e.g., while in the CV9 \(\rightarrow\) CV9 in-domain setting we achieve \(29.33\) WER, in the CV9 \(\rightarrow\) HP out-of-domain setting we get \(69.55\) WER. This confirms that for real-world ASR tasks, multi-domain evaluation is essential. Second, we observe that in most adaptation scenarios both CPT and PSL fail to surpass the SO (unadapted) baseline. In the case of CPT, we hypothesize that this is due to the relatively data-constrained nature of our setting. In the best-case scenario, we have \(99\) hours of available target domain audio, which is not enough to perform a distinct CPT stage. Note that most works in the literature use \(\sim 1000\) hours of target audio for CPT. In the case of PSL, the poor performance is due to the quality of the silver labels created by the seed model. While the performance would improve with more elaborate approaches (e.g., confidence filtering), in challenging adaptation scenarios PSL approaches are limited by the SO model's performance. Lastly, we observe that M2DS2 is the only approach among our baselines that manages to achieve a positive RAI in most adaptation scenarios, by consistently outperforming the SO baseline by significant margins. This improvement is even more pronounced when we include a LM during inference. 
One exception in this pattern is the HP \(\rightarrow\) LG scenario, where the SO baseline achieves the best performance. We attribute this to the fact that we performed minimal hyper-parameter tuning during model development. Fig. 3: Performance of M2DS2 (blue line) for the LG \(\rightarrow\) CV setting, when reducing the amount of available target samples to \(50\%\), \(25\%\), and \(10\%\) of the original dataset (horizontal axis). SO performance is indicated with the orange line. Vertical axis: WER, Horizontal axis: target audio percentage (\(100\%\to 0\%\)). ### _The sample efficiency of M2DS2_ One key observation in the literature, and in our experiments, is that CPT requires a large amount of un-transcribed target domain audio. This raises the question: can we leverage self-supervision for domain adaptation in data-constrained settings? In Fig. 3 we evaluate the performance of M2DS2 when we reduce the amount of target domain audio. Specifically, we focus on the scenario of LG \(\rightarrow\) CV. The full training corpus of CV contains \(12\) hours of audio. We train M2DS2 with \(50\%\), \(25\%\) and \(10\%\) of the available samples, or \(6\), \(3\) and \(1.2\) hours of audio respectively, and plot the resulting WER on the target (CV) test set. In all cases, the full source (LG) training corpus is used. We observe that M2DS2 achieves lower WER than the SO baseline, even with only \(3\) hours of target domain audio. While CPT can suffer from catastrophic forgetting, as do most multi-stage training approaches, M2DS2 avoids this issue, being a single-stage approach with a mixed task-specific and self-supervised objective. This provides a promising avenue for adaptation, when collection of in-domain recordings is expensive or infeasible. ### _The importance of Multi-Domain Self-Supervision_ In Section III-B we argue that it is essential to include both source and target domain data for the self-supervised objective of M2DS2. To illustrate the effect of this approach, we train two versions of M2DS2 for the LG \(\rightarrow\) CV scenario. For the first version we set \(\alpha=0.01\), while for the second we set \(\alpha=0\), removing the second term of Eq. (4). We extract the code vectors for the first \(100\) samples of both LG and CV, and flatten them across the time steps, resulting in \(60000\times 768\) code vectors corresponding to individual timesteps. We plot these code vectors using T-SNE [74] in Fig. 4 for both models. We see that when we do not include the source domain self-supervision, the code vector space collapses into a few tight clusters, and most audio segments correspond to just a few code vectors. This is a visual clue that indicates the mode collapse problem. When we include the source domain term, we see that the code vector space has more structure, and coverage of the space is more complete, both for CV (target domain) and LG (source domain). Experimentally, we train M2DS2 with \(\alpha=0\) for all source / target domain pairs and we find that the mode collapse is destructive for target domain performance. During our experiments we got WER in the range \(80-99\), indicating failure to converge to acceptable solutions across all scenarios. The simple inclusion of both source and target domain self-supervision stabilizes training, avoids mode collapse and leads to successful unsupervised adaptation between domains. 
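To make the single-stage training recipe concrete, the following is a minimal sketch of one update with the mixed objective of Eq. (4). The `model` wrapper exposing both a CTC loss and the wav2vec 2.0 self-supervised (contrastive plus diversity) loss is a hypothetical interface, not the exact implementation used here; in practice it can be assembled from the HuggingFace Wav2Vec2ForCTC and Wav2Vec2ForPreTraining heads sharing a single encoder.

```python
import torch

ALPHA, BETA = 0.01, 0.02  # weights from Section V

def m2ds2_step(model, optimizer, src_batch, tgt_batch):
    # Supervised CTC term on transcribed source-domain utterances.
    l_ctc = model.ctc_loss(src_batch["audio"], src_batch["labels"])  # hypothetical method
    # Self-supervised term on source-domain audio ...
    l_src = model.self_supervised_loss(src_batch["audio"])           # hypothetical method
    # ... and on unlabeled target-domain audio, which anchors the code vector space.
    l_tgt = model.self_supervised_loss(tgt_batch["audio"])
    loss = l_ctc + ALPHA * l_src + BETA * l_tgt  # Eq. (4)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

In the actual experiments the mixed batches are additionally split into interleaved mini-batches with gradient accumulation, as described in Section V.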
Fig. 4: T-SNE scatter plots of code vectors extracted from M2DS2 without source domain self-supervision (top) and with source domain self-supervision (bottom) for LG (red) and CV (teal). Fig. 5: Language-only adaptation for LG \(\rightarrow\) HP using the SO model finetuned on LG. In-domain text data range from 11M tokens (left) to 110K tokens (right). Blue/dashed: Baseline with generic LM. Purple/circles: Biased LM. Orange/diamonds: Augmented LM. ## VIII Unsupervised and Weakly Supervised Language Adaptation When small amounts of in-domain textual data are available, simple N-gram LM adaptation techniques can be very effective. In this brief set of experiments, we first explore the unsupervised language adaptation setting, where no in-domain audio is used, and then we relax the problem to the weakly supervised setting, where M2DS2 is combined with the adapted N-gram LMs. These settings are described in Sections II-A2 and II-A3 respectively. We explore two approaches for LM adaptation: biased LMs and in-domain data augmentation. To create biased LMs, we train a 4-gram LM on the available in-domain data. Then we replace the generic LM trained on GGC. For LM data augmentation we follow a perplexity filtering approach similar to [71]. We first train a biased LM using available target domain text, and then use it to calculate the perplexity of each line in the GGC corpus. We keep the \(10\%\) of the lines with the lowest perplexity. Then we train a 4-gram LM on the augmented "in-domain" corpus and use it for inference. Fig. 5 shows the performance of the SO LG \(\rightarrow\) HP model with biased and augmented LMs, as we reduce the amount of available in-domain text data from \(100\%\) to \(1\%\) of the in-domain transcriptions (11M tokens to \(110\)K tokens respectively). As a baseline we include the LG \(\rightarrow\) HP SO model in combination with the generic LM trained on GGC. We observe that the use of biased LMs can lead to successful adaptation when an adequate amount of in-domain text data is available. On the other hand, the LM augmentation approach results in successful adaptation, even with very small amounts of in-domain text. In Table VI we see the results of LM adaptation, combined with the M2DS2 LG \(\rightarrow\) CV model. To demonstrate the sample efficiency of the approach, we use the variant that was trained using only \(25\%\) of the target domain audio (3 hours). We compare with M2DS2 combined with the 4-gram GGC LM for inference. We draw similar conclusions, i.e., the use of biased LMs performs well for sufficient text data. When we use augmented LMs we can leverage very small amounts of in-domain text. ## IX Discussion & Conclusions In this work, we have explored Unsupervised and Weakly Supervised Domain Adaptation of ASR systems in the context of an under-resourced language, i.e., Greek. We focus on domain adaptation through in-domain self-supervision for XLSR-53, a state-of-the-art multilingual ASR model. Specifically, we adopt a mixed task and self-supervised objective, inspired by NLP, and show that using only in-domain self-supervision can lead to mode collapse of the representations created by the contrastive loss of XLSR-53. Therefore, we propose the use of mixed task and multi-domain self-supervision, M2DS2, where the contrastive loss leverages both the source and target domain audio data. For evaluation we create and release HParl, the largest to-date public corpus of transcribed Greek speech (\(120\) hours), collected from the Greek Parliamentary Proceedings. 
HParl is combined with two other popular Greek speech corpora, i.e., Logotypografia and CommonVoice, for multi-domain evaluation. In our experiments, we find that while most UDA baselines fail in our low-resource setting, the proposed mixed task and multi-domain self-supervised finetuning strategy yields significant improvements for the majority of adaptation scenarios. Furthermore, we focus our ablations on showcasing the sample efficiency of the proposed finetuning strategy, and demonstrating the necessity of including both source and target domain data for self-supervision. Finally, we show that M2DS2 can be combined with simple language model adaptation techniques in a relaxed weakly supervised setting, where we achieve significant performance improvements with a few hours of in-domain audio and a small, unpaired in-domain text corpus. More concretely, in Table VII we present a summary of the discussed unsupervised and weakly supervised adaptation combinations, for different amounts of available in-domain audio and text. Note that for the weakly supervised scenarios, the in-domain audio and text are unpaired. We see that when no in-domain data are available, including an n-gram LM trained on large corpora is recommended. Furthermore, when in-domain audio is available, following a mixed multi-domain finetuning strategy using M2DS2 can yield significant WER reductions, even for a few hours of audio. When small amounts of in-domain text are available, using a corpus augmentation strategy, e.g., perplexity filtering, can produce adapted LMs and yield small improvements to the final WER. In the case of sufficient amounts of unpaired in-domain text and audio, independent adaptation of XLSR-53 using the audio data and the n-gram LM using the text data can yield comparable performance to a fully supervised finetuning pipeline. ## X Future Work In the future we plan to explore the effectiveness of the proposed adaptation strategy for other languages and different adaptation settings, e.g., accent or cross-lingual adaptation. Of special interest is the investigation of the effectiveness of our approach for endangered languages, e.g., Pomak. Furthermore, we plan to explore the combination of in-domain self-supervision with other popular UDA techniques, e.g., teacher-student models, adversarial learning, and data augmentation approaches. On the language adaptation side, we plan to explore multi-resolution learning, which has shown promise for ASR [75], and investigate more elaborate end-to-end weakly supervised adaptation methods. Finally, we plan to expand our study in a multimodal setting, where both audio and video are available, e.g., lip reading.
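As a concrete illustration of the perplexity-filtering augmentation used for the adapted language models in Section VIII, the following is a minimal sketch; the file names are placeholders, and a biased 4-gram KenLM model trained on the available in-domain text is assumed to already exist.

```python
import kenlm

# Placeholders: a biased 4-gram LM trained on in-domain text, and the generic
# corpus (GGC) with one normalized sentence per line.
biased_lm = kenlm.Model("biased_4gram.bin")

scored = []
with open("ggc.txt", encoding="utf-8") as fh:
    for line in fh:
        line = line.strip()
        if line:
            # Lower perplexity under the biased LM means closer to the target domain.
            scored.append((biased_lm.perplexity(line), line))

# Keep the 10% of lines with the lowest perplexity as the augmented "in-domain"
# corpus; a new 4-gram LM is then trained on it (e.g., with KenLM's lmplz).
scored.sort(key=lambda t: t[0])
keep = [line for _, line in scored[: max(1, len(scored) // 10)]]
with open("ggc_augmented.txt", "w", encoding="utf-8") as fh:
    fh.write("\n".join(keep))
```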
2309.07249
Averages of completely multiplicative functions over the Gaussian integers -- a dynamical approach
We prove a pointwise convergence result for additive ergodic averages associated with certain multiplicative actions of the Gaussian integers. We derive several applications in dynamics and number theory, including: (i) Wirsing's theorem for Gaussian integers: if $f\colon \mathbb{G} \to \mathbb{R}$ is a bounded completely multiplicative function, then the following limit exists: $$\lim_{N \to \infty} \frac{1}{N^2} \sum_{1 \leq m, n \leq N} f(m + {\rm i} n).$$ (ii) An answer to a special case of a question of Frantzikinakis and Host: for any completely multiplicative real-valued function $f: \mathbb{N} \to \mathbb{R}$, the following limit exists: $$\lim_{N \to \infty} \frac{1}{N^2} \sum_{1 \leq m, n \leq N} f(m^2 + n^2).$$ (iii) A variant of a theorem of Bergelson and Richter on ergodic averages along the $\Omega$ function: if $(X,T)$ is a uniquely ergodic system with unique invariant measure $\mu$, then for any $x\in X$ and $f\in C(X)$, $$\lim_{N\to\infty}\frac{1}{N^2}\sum_{1 \leq m, n \leq N} f(T^{\Omega(m^2 + n^2)}x)=\int_Xf \ d\mu.$$
Sebastián Donoso, Anh N. Le, Joel Moreira, Wenbo Sun
2023-09-13T18:37:02Z
http://arxiv.org/abs/2309.07249v2
# Averages of completely multiplicative functions over the Gaussian integers - a dynamical approach ###### Abstract. We prove a pointwise convergence result for additive ergodic averages associated with certain multiplicative actions of the Gaussian integers. We derive several applications in dynamics and number theory, including: 1. Wirsing's theorem for Gaussian integers: if \(f\colon\mathbb{G}\to\mathbb{R}\) is a bounded completely multiplicative function, then the following limit exists: \[\lim_{N\to\infty}\frac{1}{N^{2}}\sum_{1\leq m,n\leq N}f(m+\mathrm{i}n).\] 2. An answer to a special case of a question of Frantzikinakis and Host: for any completely multiplicative real-valued function \(f:\mathbb{N}\to\mathbb{R}\), the following limit exists: \[\lim_{N\to\infty}\frac{1}{N^{2}}\sum_{1\leq m,n\leq N}f(m^{2}+n^{2}).\] 3. A variant of a theorem of Bergelson and Richter on ergodic averages along the \(\Omega\) function: if \((X,T)\) is a uniquely ergodic system with unique invariant measure \(\mu\), then for any \(x\in X\) and \(f\in C(X)\), \[\lim_{N\to\infty}\frac{1}{N^{2}}\sum_{1\leq m,n\leq N}f(T^{\Omega(m^{2}+n^{2}) }x)=\int_{X}f\ d\mu.\] S.D. was partially funded by Centro de Modelamiento Matematico (CMM) FB210005 BASAL funds for centers of excellence from ANID-Chile and ANID/Fondecyt/1200897. W.S. was partially supported by the NSF Grant DMS-2247331. * 3.6 Proof of Lemma 3.5 * 4 Applications * 4.1 Comparing the averages of two multiplicative functions * 4.2 Proof of Theorem C * 4.3 Proofs of Theorems A, 1.2 and B * 4.4 Proof of Theorem D * 5 Open questions * A Some estimates * B A counterexample with non-dilated Folner sequences ## 1. Introduction ### Wirsing theorems for Gaussian integers A function \(f:\mathbb{N}\to\mathbb{C}\) is called _multiplicative_ if \(f(mn)=f(m)f(n)\) whenever \(m,n\in\mathbb{N}\) are co-prime; and \(f\) is _completely multiplicative_ if this relation holds for every \(m,n\in\mathbb{N}\). The statistical behavior of multiplicative functions is a central topic in analytic number theory and many important theorems in this area can be recast into the language of multiplicative functions. For instance, the Liouville function \(\lambda:\mathbb{N}\to\{-1,1\}\) defined as \(\lambda(n)=(-1)^{\Omega(n)}\), where \(\Omega(n)\) is the number of prime factors of \(n\) counting with multiplicities, is a completely multiplicative function. It is well known that the Prime Number theorem is equivalent to \[\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\lambda(n)=0 \tag{1.1}\] (see, for example, [2, Page 96]). The harder part of establishing (1.1) is to show that the limit exists. Generalizing this fact, Erdos and Wintner [8] conjectured that for any multiplicative function \(f:\mathbb{N}\to\{-1,1\}\), the limit \[\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}f(n) \tag{1.2}\] exists. Around 1961, theorems of Delange [6] and Wirsing [30] gave a satisfactory answer for multiplicative functions with non-zero mean values. The general case of the Erdos-Wintner conjecture, which contains a proof of the Prime Number Theorem, was only established by Wirsing [31] in 1967. In fact, Wirsing's theorem states that the limit (1.2) exists for any bounded real valued multiplicative function. A celebrated result of Halasz [16] in 1968 further extended the analysis to complex valued functions, where the picture is complicated by the fact that the limit (1.2) does not always exist. There are several possible ways to strengthen (1.1) or Wirsing's theorem. 
For example, a conjecture by Chowla [2, Page 96] states that if \(P\in\mathbb{Z}[x]\) is a polynomial satisfying \(P\neq cQ^{2}\) for every \(c\in\mathbb{Z},Q\in\mathbb{Z}[x]\), then \[\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}\lambda(P(n))=0. \tag{1.3}\] This conjecture is still wide open despite a large number of developments seen in the last decade. A survey of much of the progress can be found in [9] and references therein. Averages of multiplicative functions over one variable, such as in (1.3), are notoriously hard to analyze. However, their multivariate counterpart seems to be more tractable. In this direction, another conjecture, also attributed to Chowla states that if \(P\in\mathbb{Z}[x,y]\) and \(P\neq cQ^{2}\) for every \(c\in\mathbb{Z},Q\in\mathbb{Z}[x,y]\) then \[\lim_{N\to\infty}\frac{1}{N^{2}}\sum_{m,n=1}^{N}\lambda(P(m,n))=0. \tag{1.4}\] (See [19, Equation (1.2)].) When \(\deg(P)=2\), this conjecture was verified by Helfgott [18], based on ideas of de la Vallee-Poussin [3, 4]. Helfgott later extended his analysis to cover the case \(\deg(P)=3\) in [19]. More recently, Green and Tao [15] established (1.4) when \(P\) is a product of pairwise independent linear forms. In place of the Liouville function \(\lambda\), a similar question can be asked about an arbitrary completely multiplicative function. In this direction, Frantzikinakis and Host [12] established the analogous statement to (1.4) for any "aperiodic" multiplicative function and for a class of polynomials \(P\) which includes all products of pairwise independent linear forms. They later posed the following question, which was a major motivator for our current paper. **Question 1.1** ([11, Page 91]).: Let \(f:\mathbb{N}\to\mathbb{R}\) be a real valued bounded completely multiplicative function and let \(P\in\mathbb{Z}[x,y]\) be a homogeneous polynomial with values on the positive integers. Does the limit \[\lim_{N\to\infty}\frac{1}{N^{2}}\sum_{m,n=1}^{N}f(P(m,n))\] exist? Already in [11], Frantzikinakis and Host provided a positive answer to Question 1.1 in the special case when the polynomial \(P\) is a product of linear forms. Shortly after, Klurman and Mangerel [20] obtained a concrete, effective formula for the averages in that case. A related work was also done by Matthiesen in [24]. Nevertheless, when \(P\) is not a product of linear forms, the answer to Question 1.1 remains elusive, and even solving it for specific functions \(f\) poses significant challenges. Our first main result answers Question 1.1 for the polynomial \(P(m,n)=m^{2}+n^{2}\) and for an arbitrary completely multiplicative function \(f\). Given a function \(f:\{1,\dots,N\}\to\mathbb{C}\), we write \(\mathbb{E}_{1\leq m,n\leq N}\,f(x)\) for the average \(\frac{1}{N^{2}}\sum_{1\leq m,n\leq N}f(x)\). More generally, if \(A\) is a finite set and \(f:A\to\mathbb{C}\) is a function on \(A\), we use \(\mathbb{E}_{x\in A}f(x)\) as a shorthand notation for \(\frac{1}{|A|}\sum_{x\in A}f(x)\). **Theorem A**.: _Let \(f:\mathbb{N}\to\mathbb{R}\) be a bounded completely multiplicative function. Then the average_ \[\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{1\leq m,n\leq N}f(m^{2}+n^{2})\] _exists and equals_ \[\frac{1}{2-f(2)}\cdot\prod_{\begin{subarray}{c}p\text{ prime}\\ p\equiv 1\bmod 4\end{subarray}}\left(\frac{p-1}{p-f(p)}\right)^{2}\cdot \prod_{\begin{subarray}{c}p\text{ prime}\\ p\equiv 3\bmod 4\end{subarray}}\frac{p^{2}-1}{p^{2}-f(p)^{2}}. 
\tag{1.5}\] A main tool used in the proof of Frantzikinakis and Host's result [11] mentioned above is a structure theorem of multiplicative functions developed in [12] which roughly speaking says that any bounded multiplicative function can be decomposed into a component that resembles a periodic function and a Gowers-uniform error term (see also [28, 29]). Although this structure theorem provides an effective way to deal with multiplicative functions along linear forms, it does not seem to help for general higher degree polynomials. An alternative approach to handle certain higher degree polynomials is to realize them as norm forms of number fields. In this paper we focus on the polynomial \(P(m,n)=m^{2}+n^{2}\), which can naturally be viewed as the norm function over the Gaussian integers \(\mathbb{G}:=\{m+\mathrm{i}n:m,n\in\mathbb{N}\}\). Recall that the norm function over \(\mathbb{G}\) is \(\mathcal{N}(m+\mathrm{i}n)=m^{2}+n^{2}\) and it satisfies \(\mathcal{N}(ab)=\mathcal{N}(a)\mathcal{N}(b)\) for any \(a,b\in\mathbb{G}\). Therefore, given a completely multiplicative function \(f:\mathbb{N}\to\mathbb{R}\), the composition \(f\circ\mathcal{N}\) is a completely multiplicative function from the set of non-zero Gaussian integers \(\mathbb{G}^{*}\) to \(\mathbb{R}\). Using this observation, we are able to derive Theorem A from a version of Wirsing's theorem for Gaussian integers. **Theorem 1.2**.: _If \(f:\mathbb{G}^{*}\to\mathbb{R}\) is a real-valued bounded completely multiplicative function, then the limit_ \[\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{1\leq m,n\leq N}f(m+\mathrm{i}n)\] _exists._ It is possible to identify the limit in Theorem 1.2 as an Euler product. To write this product, denote by \(\mathbb{P}\) the set of Gaussian primes and by \(\mathbb{P}_{1}\) the set of Gaussian primes in the first quadrant. (We discuss basic properties of Gaussian primes in Section 2). Given a bounded completely multiplicative function \(f:\mathbb{G}^{*}\to\mathbb{C}\), we define \(P(f)\) as \[P(f)\coloneqq\prod_{p\in\mathbb{P}_{1}}\frac{\mathcal{N}(p)-1}{\mathcal{N}(p) -f(p)}. \tag{1.6}\] We remark that the infinite product defining \(P(f)\) does not necessarily converge for every bounded completely multiplicative function \(f:\mathbb{G}^{*}\to\mathbb{C}\); however, it does when \(f\) takes real values (see Lemma 4.5). At this stage we move from averages over squares to the more general situation of averages over what we call _dilated Folner sequences_. This concept will be introduced and discussed in detail in Section 2.3; for now, it suffices to say that examples of dilated Folner sequences include the sequence of squares \(\Phi_{N}=\{m+\mathrm{i}n\in\mathbb{G}^{*}:1\leq m,n\leq N\}\) and the sequence of discs \(\Phi_{N}=\{m+\mathrm{i}n\in\mathbb{G}^{*}:0<m^{2}+n^{2}<N^{2}\}\). We can now formulate a stronger version of Theorem 1.2. **Theorem B**.: _If \(f:\mathbb{G}^{*}\to\mathbb{R}\) is a real-valued bounded completely multiplicative function satisfying \(f(\mathrm{i})=1\) and \((\Phi_{N})_{N\in\mathbb{N}}\) is a dilated Folner sequence, then \(\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(n)\) exists and equals \(P(f)\)._ _Remark 1.3_.: The assumption \(f(\mathrm{i})=1\) is important in Theorem B: for instance if \(\Phi\) is a finite set invariant under multiplication by \(\mathrm{i}\) (such as a centered square or a disc), the average \(\operatorname*{\mathbb{E}}_{n\in\Phi}f(n)\) equals \(0\) whenever \(f(\mathrm{i})\neq 1\). 
However, the condition \(f(\mathrm{i})=1\) is not necessary for the limit to exist. In Theorem 4.7 below we derive from Theorem B a more general version without the assumption that \(f(\mathrm{i})=1\). _Remark 1.4_.: Theorem B is false for general (i.e., "non-dilated") Folner sequences. More precisely, in Section 2.3, we show that there exists a completely multiplicative function \(f:\mathbb{G}^{*}\to\{-1,1\}\) and an additive Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\) such that \(\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(n)\) does not exist. Furthermore, a modification of an argument by Fish [10] shows that if one considers a random completely multiplicative function \(f:\mathbb{G}^{*}\to\{-1,1\}\), then almost surely, \(f\) will contain all finite patterns of \(-1\) and \(1\). Details are given in Appendix B. Halasz's theorem mentioned above, which contains Wirsing's theorem as a special case, was extended to arbitrary function fields by Granville, Harper, and Soundararajan [14]. On the other hand, as far as we know, there is no analogue of Wirsing's and Halasz's theorems in the number field setting. Theorem B partially fills this gap because it can be seen as an analogue of Wirsing's theorem for completely multiplicative functions on the Gaussian integers. In fact, we prove a more general result, which partially explains why the restriction that \(f\) takes on real values is convenient in Wirsing's theorem. For \(z\in\mathbb{C}\setminus\{0\}\), let \(\operatorname{Arg}(z)\in[-\pi,\pi)\) denote its argument, and define \(\operatorname{Arg}(0)=0\). Any real number \(x\) has \(\operatorname{Arg}(x)=0\) or \(-\pi\), so Theorem B is a special case of the following theorem. **Theorem C**.: _Let \(f:\mathbb{G}^{*}\to\mathbb{C}\) be a bounded completely multiplicative function such that \(f(\mathrm{i})=1\) and \(\operatorname{Arg}(f(\mathbb{P}))\) is finite. Then for every dilated Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\), the limit \(\lim_{N\to\infty}\mathbb{E}_{n\in\Phi_{N}}\,f(n)\) exists and equals \(P(f)\)._ Similarly to Remark 1.3, it is possible to remove from Theorem C the assumption that \(f(\mathrm{i})=1\) (see Theorem 4.7). ### Ergodic averages along \(\Omega(m^{2}+n^{2})\) in uniquely ergodic systems To prove Theorems B and C, instead of following the more classical approach of Wirsing, Delange, and Halasz, we opt to use a dynamical approach, as developed in the recent work of Bergelson and Richter [1]. Recall that, for a natural number \(n\in\mathbb{N}\), \(\Omega(n)\) is the number of prime factors of \(n\) counted with multiplicities. A _uniquely ergodic system_ is a pair \((X,T)\) where \(X\) is a compact metric space, \(T:X\to X\) is a continuous map and there exists a unique Borel probability measure \(\mu\) on \(X\) satisfying \(\mu(T^{-1}A)=\mu(A)\) for every Borel set \(A\subset X\). The following result was proved in [1]. **Theorem 1.5** ([1, Theorem A]).: _Let \((X,T)\) be a uniquely ergodic system with the unique invariant measure \(\mu\). For any \(x\in X\) and any continuous function \(f:X\to\mathbb{C}\),_ \[\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{1\leq n\leq N}f(T^{\Omega(n)}x)= \int_{X}f\ d\mu.\] Theorem 1.5, when applied to the two-points system, reduces to (an equivalent form of) the prime number theorem. Applying it to finite systems, one recovers a theorem of Pillai [25] and Selberg [27] stating that \(\Omega(n)\) is equally distributed over all residue classes mod \(q\) for all \(q\in\mathbb{N}\). 
Theorem 1.5 also contains as a special case (when applying it to irrational rotations on the torus) the Erdos-Delange Theorem [5], which complements the results above by stating that for any irrational number \(\alpha\), the sequence \((\Omega(n)\alpha)_{n\in\mathbb{N}}\) is uniformly distributed mod \(1\). Here, a sequence \((a(n))_{n\in\mathbb{N}}\) of real numbers is called _uniformly distributed mod \(1\)_ if for every interval \(I\subset[0,1)\), \[\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{1\leq n\leq N}1_{I}(a(n)\bmod 1 )=|I|.\] Our next theorem is an analogue of Theorem 1.5 along sums of two squares. **Theorem D**.: _Let \((X,T)\) be a uniquely ergodic system with the unique invariant measure \(\mu\). Then for any \(x\in X\) and any \(f\in C(X)\),_ \[\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{1\leq m,n\leq N}f(T^{\Omega(m^{2} +n^{2})}x)=\int_{X}f\ d\mu.\] A sequence \((a(m,n))_{m,n\in\mathbb{N}}\) of two parameters is called _uniformly distributed mod \(1\)_ if for every interval \(I\subset[0,1)\), \[\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{1\leq m,n\leq N}1_{I}(a(m,n)\bmod 1 )=|I|.\] Taking \((X,T)\) to be the rotation by \(q\) points, Theorem D implies that \(\Omega(m^{2}+n^{2})\) is equally distributed over all residue classes mod \(q\). When applied to the rotation by an irrational \(\alpha\) on the torus \(\mathbb{R}/\mathbb{Z}\), we have \((\Omega(m^{2}+n^{2})\alpha)\) is uniformly distributed mod \(1\). By applying Theorem D to unipotent affine transformations on tori (following the approach of Furstenberg in [13, pages 67 - 69]), we obtain the following corollary: **Corollary 1.6**.: _For \(Q\in\mathbb{R}[x]\), \(Q(\Omega(m^{2}+n^{2}))_{m,n\in\mathbb{N}}\) is uniformly distributed mod \(1\) if and only if at least one of the coefficients of \(Q\) is irrational._ ### The main theorem The main technical result of this paper, stated below, is an ergodic theorem involving additive averages for multiplicative actions of the Gaussian integers. This theorem, albeit somewhat complicated to formulate, is the main ingredient in the proofs of Theorems C and D. **Theorem E**.: _Let \(X\) be a compact metric space and let \(\mathcal{T}\) denote the semigroup of all continuous transformations \(T:X\to X\) under composition. Let \(\tau:(\mathbb{G}^{*},\times)\to\mathcal{T}\) be a semigroup homomorphism such that \(\tau(\mathbb{P})\) is finite and let \(T_{1},\dots,T_{k}\in\tau(\mathbb{P})\) be the complete list of those transformations satisfying_ \[\sum_{p\in\mathbb{P}:\tau(p)=T_{j}}\frac{1}{\mathcal{N}(p)}=\infty. \tag{1.7}\] _Suppose that there exists a unique Borel probability measure \(\mu\) on \(X\) such that for all \(j\in[k]\) and every Borel set \(A\subset X\), \(\mu(T_{j}^{-1}A)=\mu(A)\). Then for any \(x\in X\), \(F\in C(X)\) and dilated Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\),_ \[\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}F(\tau(n)x)=\int_{X} F\ d\mu.\] Theorem E is a direct analogue of [1, Theorem B], which was formulated for "finitely generated, strongly uniquely ergodic, multiplicative dynamical systems". While drawing inspiration from [1, Theorem B], the proof of Theorem E contains some major differences due to the more intricate geometry inherent in \(\mathbb{G}\), a rank two additive group, in contrast to the simpler rank one group, \(\mathbb{Z}\). Moreover, we must handle arbitrary dilated Folner sequences in \(\mathbb{G}^{*}\), which are not as rigid as the sequence of intervals \(\{1,\dots,N\}\) considered in [1] (see Remark 2.2). 
We also emphasize that the proof of Theorem E is dynamical and combinatorial in nature, and the only number theoretic input needed (besides a version of the Turan-Kubilius inequality) is some control on the distribution of the primes in \(\mathbb{G}\), which already follows from the works of Landau [21] and Hecke [17]. Theorem E involves taking additive averages on a multiplicative dynamical system and is reminiscent of the results in our previous paper [7] regarding measure preserving actions of \((\mathbb{N},\times)\). To put this in perspective, as opposed to the main results in [7], which dealt with arbitrary multiplicative actions, in Theorem E we require the assumption that \(\tau(\mathbb{P})\) is finite. On the other hand, the conclusion in Theorem E is significantly stronger than the results in [7]. ### Outline of the article In Section 2, we set up notation and present some basic facts about Gaussian integers, the prime number theorem in \(\mathbb{G}\), and define dilated Folner sequences. The proof of Theorem E occupies Section 3. A short proof of Theorem D is given in Section 4.4 and Theorem C is proved in Section 4.2. Some natural open questions are listed in Section 5. Lastly, the appendix contains some technical estimates which are needed for the proof of Theorem E and an argument showing that a random multiplicative function taking values in \(\{-1,1\}\) almost surely contains all finite patterns of \(-1\) and \(1\). **Acknowledgements.** The authors would like to thank Vitaly Bergelson and Florian Richter for helpful discussions about their paper [1]. ## 2. Background ### Notation The absolute value of a complex number \(z\) is written \(|z|\), and its argument is denoted by \(\operatorname{Arg}(z)\in\mathbb{R}/(2\pi\mathbb{Z})\), usually identified with \([-\pi,\pi)\). We also use the convention that \(\operatorname{Arg}(0)=0\). As usual, we denote by \(\operatorname{Re}z\) and \(\operatorname{Im}z\) the real and imaginary parts of \(z\in\mathbb{C}\). We let \(S^{1}:=\{z\in\mathbb{C}\colon|z|=1\}\subset\mathbb{C}\) be the unit circle and \(\mathbb{D}:=\{z\in\mathbb{C}:|z|\leq 1\}\) be the unit disk. We denote by \(\mathbb{G}\) the ring of Gaussian integers \(\{a+b\mathrm{i}:a,b\in\mathbb{Z}\}\subset\mathbb{C}\), and we use \(\mathbb{G}^{*}:=\mathbb{G}\backslash\{0\}\) for the set of non-zero Gaussian integers. The norm of a Gaussian integer \(n\) is defined by \(\mathcal{N}(n)=|n|^{2}\). The greatest common divisor of two non-zero Gaussian integers \(m,n\) is only unique up to multiplication by a unit, so we avoid using \(\gcd(m,n)\) on its own; however the norm \(\mathcal{N}(\gcd(m,n))\) is well defined. Given \(A\subset\mathbb{C}\) and \(z\in\mathbb{C}\), we define \(A\pm z=\{a\pm z:a\in A\}\), and \(zA=\{za:a\in A\}\). Depending on \(A\) and \(z\), we assign different meanings to \(A/z\): if \(A\subset\mathbb{C}\) and \(z\in\mathbb{C}\setminus\{0\}\), define \(A/z=\{a/z:a\in A\}\) _unless_ \(A\subset\mathbb{G}\) and \(z\in\mathbb{G}^{*}\), in which case we define \(A/z=\{x\in\mathbb{G}:xz\in A\}\). This distinction will be clear from the context. When \(A\) is a finite set, we denote by \(|A|\) its cardinality. Given a subset \(A\subset\mathbb{G}^{*}\), denote by \[\log(A):=\sum_{n\in A}\frac{1}{\mathcal{N}(n)}\] its _logarithmic weight_. If \(\log(A)=\infty\) we say that \(A\) is a _divergent set_. Otherwise we say that \(A\) is a _convergent set_. 
It is clear that every divergent set is infinite, and whenever a divergent set is partitioned into finitely many sets, at least one of them must be divergent. For a non-empty finite set \(A\) and function \(f:A\to\mathbb{C}\), we set: \[\operatorname*{\mathbb{E}}_{n\in A}f(n)\coloneqq\frac{1}{|A|}\sum_{n\in A}f(n).\] If \(A\subset\mathbb{G}^{*}\), then we set: \[\operatorname*{\mathbb{E}}_{n\in A}^{\log}f(n)\coloneqq\frac{1}{\log(A)}\sum_ {n\in A}\frac{f(n)}{\mathcal{N}(n)}.\] Given two functions \(f,g:\mathbb{G}^{*}\to\mathbb{C}\), we use the following notations: * \(f(x)=O(g(x))\) or \(f(x)\ll g(x)\) means there is a positive constant \(C\) such that \(|f(x)|<Cg(x)\) for all \(x\in\mathbb{G}^{*}\). * \(f(x)=o(g(x))\) indicates that \[\lim_{\mathcal{N}(x)\to\infty}\frac{f(x)}{g(x)}=0.\] * \(f(x)\sim g(x)\) means \[\lim_{\mathcal{N}(x)\to\infty}\frac{f(x)}{g(x)}=1.\] ### Distribution of Gaussian primes A Gaussian prime is an element \(p\in\mathbb{G}^{*}\) which cannot be decomposed as \(p=ab\) where \(a,b\) are non-unit Gaussian integers. We use \(\mathbb{P}\) to denote the set of Gaussian primes and \(\mathbb{P}_{1}=\{p\in\mathbb{P}:\operatorname{Re}p>0,\operatorname{Im}p\geq 0\}\) for the restriction of \(\mathbb{P}\) to the first quadrant. A Gaussian integer \(a+bi\) in the first quadrant is a Gaussian prime if and only if either: * \(b=0\) and \(a\) is a prime in \(\mathbb{N}\) of the form \(4n+3\), or * \(b>0\) and \(a^{2}+b^{2}\) is a prime number in \(\mathbb{N}\) (which will not be of the form \(4n+3\)). Note that the units of \(\mathbb{G}\) are \(\{1,\mathrm{i},-1,-\mathrm{i}\}\) and hence the first quadrant is a natural fundamental domain for their action on \(\mathbb{G}\). In particular, \(\mathbb{P}\) is invariant under multiplication by \(\mathrm{i}\). In our proofs we make crucial use of known results about the distribution \(\mathbb{P}\). In analogy to the prime number theorem, Landau [21] proved that \[\big{|}\{p\in\mathbb{P}:\mathcal{N}(p)<N\}\big{|}\sim\frac{N}{\log N}\text{ as }N\to\infty. \tag{2.1}\] Landau's result only estimates the number of primes in a disk around the origin. This was later extended by Hecke [17] who showed that the number of primes in a "slice" (or sector) of the complex plane is proportional to its amplitude. **Theorem 2.1** ([17], see also [26]).: _For any interval \(I\subset[0,\pi/2]\),_ \[\frac{|\{p\in\mathbb{P}:\mathcal{N}(p)<N\text{ and }\operatorname{Arg}(p)\in I\}|}{N /\log N}\to\frac{|I|}{2\pi}\text{ as }N\to\infty.\] As a corollary of Theorem 2.1 it follows that for any interval \(I\subset[-\pi,\pi)\) and any \(0\leq a<b\), \[\frac{|\{p\in\mathbb{P}:aN\leq\mathcal{N}(p)\leq bN\text{ and }\operatorname{Arg}(p) \in I\}|}{N/\log N}\to\frac{|I|}{2\pi}(b-a)\text{ as }N\to\infty. \tag{2.2}\] We will use (2.2) to estimate the amount of primes in dilations of certain small neighborhoods of \(1\). While we could use disks centered at \(1\), in view of (2.2) it is more convenient to use the following annulus sectors: For each \(\varepsilon\in(0,1)\) define \[S_{\varepsilon}:=\big{\{}z\in\mathbb{C}:1-\varepsilon<|z|<1+\varepsilon,\ \operatorname{Arg}(z)\in(-\pi\varepsilon,\pi\varepsilon)\,\big{\}}. 
\tag{2.3}\] For each \(\varepsilon\in(0,1)\) and \(n\in\mathbb{G}^{*}\), since \(\mathcal{N}(n)=|n|^{2}\), we have \[nS_{\varepsilon}\cap\mathbb{G}=\Big{\{}m\in\mathbb{G}:(1-\varepsilon)^{2} \mathcal{N}(n)<\mathcal{N}(m)<(1+\varepsilon)^{2}\mathcal{N}(n),\ \operatorname{Arg}(m)\in\operatorname{Arg}(n)+\big{(}-\pi \varepsilon,\pi\varepsilon)\Big{\}}.\] Using (2.2), we deduce that for every \(\varepsilon>0\), \[\lim_{n\in\mathbb{G}^{*}\mathcal{N}(n)\to\infty}\frac{\big{|}\mathbb{P}\cap nS _{\varepsilon}\big{|}}{\mathcal{N}(n)/\log\mathcal{N}(n)}=\frac{2\pi \varepsilon(4\varepsilon)}{2\pi}=4\varepsilon^{2}. \tag{2.4}\] ### Dilated Folner sequences Given a function \(f:\mathbb{Z}\to\mathbb{C}\) it is natural to consider its Cesaro average \(\mathbb{E}_{1\leq n\leq N}\,f(n)\) over the initial interval \(\{1,\dots,N\}\), or the average \(\mathbb{E}_{-N\leq n\leq N}\,f(n)\) over the centered interval \(\{-N,\dots,N\}\). However, when given a function \(f:\mathbb{G}\to\mathbb{C}\), there are several natural ways to average \(f\): one can consider, for instance, averages over disks \(\mathbb{E}_{\mathcal{N}(n)<N}\,f(n)\) or over the squares \(\mathbb{E}_{1\leq m,n\leq N}\,f(m+n\!\mathrm{i})\) or \(\mathbb{E}_{-N\leq m,n\leq N}\,f(m+n\!\mathrm{i})\). To simultaneously address all these averaging schemes, we make use of the notion of (additive) _Folner sequences_; these are sequences \(\Phi=(\Phi_{N})_{N\in\mathbb{N}}\) of finite subsets of \(\mathbb{G}\) such that for every \(n\in\mathbb{G}\), \[\lim_{N\to\infty}\frac{\left|(\Phi_{N}+n)\triangle\Phi_{N}\right|}{\left|\Phi_ {N}\right|}=0.\] _Remark 2.2_.: Another reason we work with abstract Folner sequences, as opposed to simply use the squares \(\Phi_{N}:=\{n\in\mathbb{G}^{*}:0<\operatorname{Re}n,\operatorname{Im}n\leq N\}\) is that we often have to consider the average over the set \(\Phi_{N}/a\), for some \(a\in\mathbb{G}\). Unlike the situation in \(\mathbb{Z}\), the set \(\Phi_{N}/a\) does not necessarily equal \(\Phi_{M}\) for some other \(M\). Therefore, even if one is interested solely in averages over squares, it is necessary to consider more general Folner sequences. As already mentioned in the introduction, not every Folner sequence works in our results. Indeed, by [10, Theorem 1.1], there is a real-valued completely multiplicative function \(g\colon\mathbb{N}\to\{-1,1\}\) such that both \(g^{-1}(\{1\})\) and \(g^{-1}(\{-1\})\) contain arbitrarily long intervals. Consider \(f\colon\mathbb{G}^{*}\to\{-1,1\}\) defined as \(f(n)=g(\mathcal{N}(n))\) for \(n\in\mathbb{G}^{*}\). It follows that \(f^{-1}(\{1\})\) and \(f^{-1}(\{-1\})\) contain Folner sequences \((\Phi_{N}^{+})_{N\in\mathbb{N}}\) and \((\Phi_{N}^{-})_{N\in\mathbb{N}}\) in \(\mathbb{G}\), respectively. Letting \(\Phi_{2N}=\Phi_{N}^{+}\) and \(\Phi_{2N+1}=\Phi_{N}^{-}\), we see \(\mathbb{E}_{n\in\Phi_{N}}\,f(n)\) does not converge. There are plenty of functions \(f\) as above. One can extend Fish's theorem [10] on random multiplicative functions from \(\mathbb{Z}^{*}\) to \(\mathbb{G}^{*}\) to show that most completely multiplicative functions \(f:\mathbb{G}^{*}\to\{-1,1\}\) contains all finite patterns of \(-1\) and \(1\). This is a result of independent interest, but because it is somewhat distinct from the primary content of the article, we put it in Appendix B. Due to the above reason, in Theorems B, C, and E, we must restrict to a subclass of Folner sequences obtained from dilating an open set. 
**Definition 2.3**.: A sequence \((\Phi_{N})_{N\in\mathbb{N}}\) of finite subsets of \(\mathbb{G}^{*}\) is called a _dilated Folner sequence_ if there exists a Jordan measurable1 set \(U\subset\mathbb{C}\) and a sequence of positive real numbers \((k_{N})_{N\in\mathbb{N}}\) such that \(k_{N}\to\infty\) as \(N\to\infty\) and \(\Phi_{N}=\mathbb{G}^{*}\cap k_{N}U\). Footnote 1: A set \(U\subset\mathbb{C}\) is Jordan measurable if and only if it is bounded and its boundary has measure zero. This is equivalent to the indicator function \(1_{U}\) being Riemann integrable. We stress that the definition does not require \(U\) to contain \(0\) and it is easy to check that every dilated Folner sequence is indeed an additive Folner sequence. _Example 2.4_.: 1. Two natural dilated Folner sequences are the squares \(\Phi_{N}:=\{n\in\mathbb{G}^{*}:0<\operatorname{Re}n,\operatorname{Im}n\leq N\}\) and the disks \(\Phi_{N}=\{z\in\mathbb{G}^{*}:\mathcal{N}(z)\leq N^{2}\}\). 2. It is immediate from the definition that any subsequence of a dilated Folner sequence is a dilated Folner sequence. 3. If \((\Phi_{N})_{N\in\mathbb{N}}\) is a dilated Folner sequence in \(\mathbb{G}\) and \(n\in\mathbb{G}\), then the sets \(\Phi_{N}=\Phi_{N}/n\) also form a dilated Folner sequence. 4. The sequence of shifted disks \(\Phi_{N}=N^{2}+(N\mathbb{D}\cap\mathbb{G})\) is not a dilated Folner sequence. It remains an interesting question whether the Folner sequence in part (iv) of the example satisfies the conclusion of Theorem E (see also the related Question 5.5). The main property that distinguishes dilated Folner sequences is captured in the following lemma and, roughly speaking, states that dilated Folner sequences are unchanged under small multiplicative perturbations. **Lemma 2.5**.: _If \((\Phi_{N})_{N\in\mathbb{N}}\) is a dilated Folner sequence, then for every \(\delta>0\), there exists \(\varepsilon>0\) such that whenever \(a,b\in\mathbb{G}^{*}\) satisfy \(b\in aS_{\varepsilon}\), then_ \[\lim_{N\to\infty}\frac{|\Phi_{N}/a\triangle\Phi_{N}/b|}{|\Phi_{N}/a|}<\delta.\] Proof.: Denote by \(m\) the Lebesgue measure on \(\mathbb{C}\). If \(V\subset\mathbb{C}\) is a Jordan measurable set and \(t\) denotes a real parameter, then \(|\mathbb{G}^{*}\cap tV|/t^{2}\to m(V)\) as \(t\to\infty\). Indeed, \[\lim_{t\to\infty}\frac{|\mathbb{G}^{*}\cap tV|}{t^{2}}=\lim_{t\to\infty}\frac{ 1}{t^{2}}\sum_{n\in G^{*}}1_{V}(n/t)=\lim_{\varepsilon\to 0}\varepsilon^{2} \sum_{n\in G^{*}}1_{V}(n\varepsilon).\] The last sum is a Riemann sum for \(1_{V}\) and since \(1_{V}\) is Riemann integrable, the limit exists and equals \(\int_{\mathbb{C}}1_{V}\,\mathrm{d}m=m(V)\). Now let \(U\subset\mathbb{C}\) be a Jordan measurable set and \((k_{N})\) be a sequence of real numbers such that \(\Phi_{N}=k_{N}U\cap\mathbb{G}^{*}\). Then \(\Phi_{N}/a=\{n\in\mathbb{G}^{*}:n\in k_{N}U/a\}=k_{N}\frac{U}{a}\cap G^{*}\). Since the set \(\frac{U}{a}\) is Jordan measurable, it follows that \(|\Phi_{N}/a|/k_{N}^{2}\to m(\frac{U}{a})=m(U)/\mathcal{N}(a)\). Similarly, \(\Phi_{N}/a\cap\Phi_{N}/b=k_{N}\Big{(}\frac{U}{a}\cap\frac{U}{b}\Big{)}\cap G^ {*}\), so \(|\Phi_{N}/a\cap\Phi_{N}/b|/k_{N}^{2}\to m\Big{(}\frac{U}{a}\cap\frac{U}{b} \Big{)}=m\Big{(}U\cap\frac{a}{b}U\Big{)}/\mathcal{N}(a)\). It follows that \[\lim_{N\to\infty}\frac{|\Phi_{N}/a\cap\Phi_{N}/b|}{|\Phi_{N}/a|}=\frac{m(U \cap(a/b)U)}{m(U)}=1-o_{a/b\to 1}(1).\] We use Lemma 2.5 only to prove Lemma 3.7. ## 3. Proof of Theorem E In this section we prove Theorem E. 
Throughout the section we fix a compact metric space \(X\), we let \(\mathcal{T}\) denote the semigroup of continuous functions \(T:X\to X\), and we fix a semigroup homomorphism \(\tau:(\mathbb{G},\times)\to\mathcal{T}\) such that \(\tau(\mathbb{P})\) is finite. We also fix the transformations \(T_{1},\ldots,T_{k}\in\mathcal{T}\) such that each set \(\mathbb{P}\cap\tau^{-1}(T_{i})\) is divergent and the remainder set \(\{p\in\mathbb{P}:\tau(p)\notin\{T_{1},\ldots,T_{k}\}\}\) is convergent. We say that a Borel probability measure \(\nu\) on \(X\) is an _additively empirical measure_ if there exists \(x\in X\) and some dilated Folner sequence \(\Phi\) such that \(\nu\) is a weak\({}^{*}\) limit \[\nu=\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}\tau(n)\delta_{x},\] where \(\delta_{x}\) denotes the Dirac measure at the \(x\), and \(\tau(n)\delta_{x}\) denotes the pushforward of \(\delta_{x}\) under the transformation \(\tau(n)\in\mathcal{T}\). The first step of the proof of Theorem E is to reduce it to the following statement, whose proof occupies most of the section. **Theorem 3.1**.: _For every additively empirical measure \(\nu\) and every \(j\in[k]\), \(T_{j}\nu=T_{j}^{2}\nu\)._ Proof of Theorem E assuming Theorem 3.1.: Let \(\mu\) be the unique probability measure on \(X\) invariant under all the maps \(T_{j}\) for \(j\in[k]\). We need to show that for each \(x\in X\), \(f\in C(X)\) and every dilated Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\), \[\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(\tau(n)x)=\int_{X} F\operatorname{d}\mu.\] This is equivalent to the statement that \(\mu\) is the unique additively empirical measure. Since \(\mu\) is the unique probability measure on \(X\) invariant under each \(T_{j}\), our task will be complete if we show that any additively empirical measure \(\nu\) on \(X\) satisfies \(T_{j}\nu=\nu\) for every \(j\in[k]\). This would follow directly from Theorem 3.1 if each \(T_{j}\) were invertible, but that is not necessarily true. Nevertheless, it is still possible to apply Theorem 3.1 and the special nature of additively empirical measures to conclude that \(T_{j}\nu=\nu\) for every \(j\in[k]\). Fix \(j\in[k]\). Let \(x\in X\) and let \(\Phi=(\Phi_{N})_{N\in\mathbb{N}}\) be a dilated Folner sequence such that \(\nu=\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}\tau(n)\delta_{x}\). Fix \(f\in C(X)\) and \(\varepsilon>0\). We will use a version of the Turan-Kubilius inequality for Gaussian integers to compare the averages of \(f(\tau(n)x)\) with the averages of \(f(T_{j}\tau(n)x)\); for completeness we formulate and prove the version we need in Appendix A. Let \(B\subset\mathbb{P}\cap\tau^{-1}(T_{j})\) be such that \(\log(B)>4/\varepsilon^{2}\) and apply Lemma A.1 with \(a(n)=f(\tau(n)x)\) together with Lemma A.2 to conclude that \[\limsup_{N\to\infty}\left|\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(\tau(n)x )-\operatorname*{\mathbb{E}}_{p\in B}\operatorname*{\mathbb{E}}_{n\in\Phi_{N }/p}f(T_{j}\tau(n)x)\right|<\varepsilon. \tag{3.1}\] On the other hand, applying Lemma A.1 with \(a(n)=f(T_{j}\tau(n)x)\) together with Lemma A.2, it follows that \[\limsup_{N\to\infty}\left|\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(T_{j} \tau(n)x)-\operatorname*{\mathbb{E}}_{p\in B}\operatorname*{\mathbb{E}}_{n \in\Phi_{N}/p}f(T_{j}^{2}\tau(n)x)\right|<\varepsilon. 
\tag{3.2}\] Note that, for every \(p\in\mathbb{P}\), the sequence \((\Phi_{N}/p)_{N\in\mathbb{N}}\) is a dilated Folner sequence, and hence any accumulation point of the measures \(\operatorname*{\mathbb{E}}_{n\in\Phi_{N}/p}\tau(n)\delta_{x}\) is an additively empirical measure \(\tilde{\nu}\). Using Theorem 3.1 we deduce that \(T_{j}\tilde{\nu}=T_{j}^{2}\tilde{\nu}\), which implies that \[\forall p\in\mathbb{P},\qquad\lim_{N\to\infty}\left|\operatorname*{\mathbb{E}}_{n\in\Phi_{N}/p}f(T_{j}\tau(n)x)-\operatorname*{\mathbb{E}}_{n\in\Phi_{N}/p}f(T_{j}^{2}\tau(n)x)\right|=0.\] Combining this with (3.1) and (3.2), we conclude that \[\limsup_{N\to\infty}\left|\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(\tau(n)x)-\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(T_{j}\tau(n)x)\right|<2\varepsilon.\] Taking \(\varepsilon\to 0\) it follows that \[\int_{X}f\operatorname{d}\nu=\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(\tau(n)x)=\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(T_{j}\tau(n)x)=\int_{X}f\operatorname{d}(T_{j}\nu).\] Since \(f\) was arbitrary we conclude that \(\nu=T_{j}\nu\), and this finishes the proof. ### Roadmap of the proof of Theorem 3.1 Our main (and only) tool to show that \(T\nu=R\nu\) for some transformations \(T,R\in\mathcal{T}\) is the following result, which is inspired by [1, Proposition 2.1]. **Lemma 3.2**.: _Suppose that \(T,R\in\mathcal{T}\) commute and for every \(\varepsilon>0\) there are a finite set \(F\subset\mathbb{G}^{*}\) and an injective map \(\alpha:F\to\mathbb{G}^{*}\) such that_ * \(\alpha(n)\in nS_{\varepsilon}\) _for all_ \(n\in F\)_, where_ \(S_{\varepsilon}\) _is defined in (_2.3_),_ * \(\tau(F)=\{T\}\) _and_ \(\tau(\alpha(F))=\{R\}\)_._ * \(\underset{n,m\in F}{\mathbb{E}}\mathcal{N}(\gcd(n,m))<1+\varepsilon\) _and_ \(\underset{n,m\in\alpha(F)}{\mathbb{E}}\mathcal{N}(\gcd(n,m))<1+\varepsilon\)_._ _Then \(T\nu=R\nu\) for every empirical measure \(\nu\)._ Lemma 3.2 is proved, using a variant of the Turan-Kubilius inequality, in Section 3.2 below. Unfortunately, Lemma 3.2 requires strong assumptions, and in particular we cannot use it directly with \((T,R)=(T_{j},T_{j}^{2})\), as the hypotheses are not satisfied in general. We will instead use Lemma 3.2 to show that for every \(j\) there exists \(i\) such that \(T_{j}\nu=T_{i}\nu\) and also \(T_{j}^{2}\nu=T_{i}\nu\). In view of the first condition in Lemma 3.2 it is convenient to introduce the following notion: **Definition 3.3**.: Let \(D\subset\mathbb{G}^{*}\) and \(\varepsilon>0\). A map \(\alpha:D\to\mathbb{G}^{*}\) is an \(\varepsilon\)_-map_ if \(\alpha(d)\in dS_{\varepsilon}\) for every \(d\in D\). In the following lemma we find, for each \(j\in[k]\), some \(i\in[k]\) (which might be equal to \(j\)) satisfying a certain technical property. In the sequel we show that whenever \(j\) and \(i\) satisfy this technical property, then both \(T_{j}\nu=T_{i}\nu\) and \(T_{j}^{2}\nu=T_{i}\nu\) for every empirical measure \(\nu\). **Lemma 3.4**.: _Let \((X,\tau)\) and \(T_{1},\dots,T_{k}\) be as above.
Then for every \(j\in[k]\) there exists \(i\in[k]\) satisfying the following property._ \[\begin{split}&\text{For every $\varepsilon>0$ there exist subsets $F,D\subset\mathbb{P}\cap\tau^{-1}(T_{j})$, where $F$ is finite with}\\ &\log(F)>1/\varepsilon\text{ and $D$ is divergent, and an injective $\varepsilon$-map $\alpha:FD\to\mathbb{P}\cap\tau^{-1}(T_{i})$.}\end{split}\] ( \[\star\] ) The next lemma shows that if \(i\) and \(j\) satisfy the property (\(\star\)) described in Lemma 3.4, then \(T_{i}\nu=T_{j}\nu\) for every empirical measure \(\nu\). **Lemma 3.5**.: _Let \((X,\tau)\) and \(T_{1},\dots,T_{k}\) be as above and let \(\nu\) be an empirical measure. Suppose \(i,j\in[k]\) and for every \(\varepsilon>0\) there exist a divergent set \(D\subset\mathbb{P}\cap\tau^{-1}(T_{j})\), some number \(m\in\mathbb{G}^{*}\) and an injective \(\varepsilon\)-map \(\alpha:mD\to\mathbb{P}\cap\tau^{-1}(T_{i})\). Then \(T_{i}\nu=T_{j}\nu\)._ Finally, the next lemma shows that whenever \(i\) and \(j\) satisfy the property (\(\star\)) described in Lemma 3.4, then \(T_{i}\nu=T_{j}^{2}\nu\) for every empirical measure \(\nu\). **Lemma 3.6**.: _Let \((X,\tau)\) and \(T_{1},\dots,T_{k}\) be as above and let \(\nu\) be an empirical measure. If \(i,j\in[k]\) satisfy (\(\star\)), then \(T_{i}\nu=T_{j}^{2}\nu\)._ The proofs of the various lemmas above are postponed to future subsections. For now we verify that together they imply Theorem 3.1. Proof of Theorem 3.1.: Given \(j\in[k]\), use Lemma 3.4 to find \(i\in[k]\) so that property (\(\star\)) holds. Then the conditions of Lemma 3.5 hold (taking an arbitrary \(m\in F\)) and hence \(T_{i}\nu=T_{j}\nu\). Finally, Lemma 3.6 implies that \(T_{i}\nu=T_{j}^{2}\nu\) and hence we conclude that \(T_{j}\nu=T_{i}\nu=T_{j}^{2}\nu\), finishing the proof. ### Proof of Lemma 3.2 First, we need a technical lemma, which makes use of the fact that the Folner sequences we consider are dilated. **Lemma 3.7**.: _Let \(\delta>0\) and let \(\Phi=(\Phi_{N})_{N\in\mathbb{N}}\) be a dilated Folner sequence. Then there exists \(\varepsilon\in(0,\delta)\) such that for every finite subset \(F\) of \(\mathbb{G}^{*}\), every \(\varepsilon\)-map \(\alpha:F\to\mathbb{G}^{*}\) and every function \(a:\mathbb{G}^{*}\to\mathbb{C}\) bounded by \(1\), we have_ \[\limsup_{N\to\infty}\left|\mathop{\mathbb{E}}_{p\in F}\mathop{\mathbb{E}}_{n\in \Phi_{N}/p}a(n)-\mathop{\mathbb{E}}_{q\in\alpha(F)}\mathop{\mathbb{E}}_{n\in \Phi_{N}/q}a(n)\right|\leq\delta.\] Proof.: Using Lemma 2.5, there exists \(\varepsilon>0\) such that \[\limsup_{N\to\infty}\frac{\left|\big{(}\Phi_{N}/p\big{)}\triangle\big{(}\Phi_ {N}/q\big{)}\right|}{\big{|}\Phi_{N}/p\big{|}}\leq\frac{\delta}{6}. \tag{3.3}\] whenever \(q\in pS_{\varepsilon}\). If needed we can make \(\varepsilon\) smaller so that \(|z-1|^{2}<\delta/3\) whenever \(z\in S_{\varepsilon}\). Now if \(\alpha:F\to\mathbb{G}^{*}\) is an \(\varepsilon\)-map and \(a:\mathbb{G}^{*}\to\mathbb{D}\), then for every \(p\in F\), by (3.3), we have that \[\limsup_{N\to\infty}\left|\mathop{\mathbb{E}}_{n\in\Phi_{N}/p}a(n)-\mathop{ \mathbb{E}}_{n\in\Phi_{N}/\alpha(p)}a(n)\right|\leq\frac{\delta}{3}. \tag{3.4}\] The conclusion now follows directly from Lemma A.3 with \(w(p)=1/\mathcal{N}(p)\), \(v(p)=1/\mathcal{N}(\alpha(p))\), \(f(p)=\mathbb{E}_{n\in\Phi_{N}/p}\,a(n)\) and \(g(p)=\mathbb{E}_{n\in\Phi_{N}/\alpha(p)}\,a(n)\). The proof of Lemma 3.2 uses a version of the Turan-Kubiliys inequality for Gaussian integers. 
The precise version we need and its proof are provided in Appendix A for completeness. Proof of Lemma 3.2.: Let \(T,R\in\mathcal{T}\) be two commuting transformations satisfying the hypotheses of the lemma. Let \(\nu\) be an arbitrary additively empirical measure and let \(f\in C(X)\) with \(\|f\|_{\infty}\leq 1\). We need to show that \[\int_{X}f\circ T\,\mathrm{d}\nu=\int_{X}f\circ R\,\mathrm{d}\nu.\] Since \(\nu\) is an additively empirical measure, it suffices to show that for every \(x\in X\), every dilated Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\) and every \(\delta>0\), \[\limsup_{N\to\infty}\left|\mathop{\mathbb{E}}_{n\in\Phi_{N}}f(\tau(n)Tx)-\mathop{\mathbb{E}}_{n\in\Phi_{N}}f(\tau(n)Rx)\right|<4\delta. \tag{3.5}\] Fix \(\delta>0\) and let \(\varepsilon\in(0,\delta)\) be given by Lemma 3.7. Let \(F\subset\tau^{-1}(T)\) and \(\alpha\colon F\to\mathbb{G}^{*}\) satisfy the hypotheses of the lemma. Note that if \(p\in\alpha(F)\subset\tau^{-1}(R)\), then for any \(n\in\mathbb{G}^{*}\), \(\tau(np)=\tau(n)R\). Therefore, by applying the Turan-Kubilius inequality (Lemma A.1) with \(a(n)=f(\tau(n)Tx)\) and \(B=\alpha(F)\), we have \[\limsup_{N\to\infty}\left|\mathop{\mathbb{E}}_{n\in\Phi_{N}}f(\tau(n)Tx)-\mathop{\mathbb{E}}_{p\in\alpha(F)}\mathop{\mathbb{E}}_{n\in\Phi_{N}/p}f(\tau(n)TRx)\right|^{2}\leq\mathop{\mathbb{E}}_{p,q\in\alpha(F)}^{\log}\mathcal{N}(\gcd(p,q))-1\leq\varepsilon\leq\delta. \tag{3.6}\] Similarly, by using Lemma A.1 with \(a(n)=f(\tau(n)Rx)\) and \(B=F\), we get \[\limsup_{N\to\infty}\left|\mathop{\mathbb{E}}_{n\in\Phi_{N}}f(\tau(n)Rx)-\mathop{\mathbb{E}}_{p\in F}\mathop{\mathbb{E}}_{n\in\Phi_{N}/p}f(\tau(n)TRx)\right|^{2}\leq\mathop{\mathbb{E}}_{p,q\in F}^{\log}\mathcal{N}(\gcd(p,q))-1\leq\varepsilon\leq\delta. \tag{3.7}\] By Lemma 3.7, we have \[\limsup_{N\to\infty}\left|\mathop{\mathbb{E}}_{p\in\alpha(F)}\mathop{\mathbb{E}}_{n\in\Phi_{N}/p}f(\tau(n)TRx)-\mathop{\mathbb{E}}_{p\in F}\mathop{\mathbb{E}}_{n\in\Phi_{N}/p}f(\tau(n)TRx)\right|\leq 2\delta. \tag{3.8}\] Relations (3.6), (3.7), and (3.8) give (3.5), finishing the proof. ### Proof of Lemma 3.6 We will use Lemma 3.2 directly with \(T=T_{j}^{2}\) and \(R=T_{i}\). Let \(\varepsilon>0\). Without loss of generality we can assume that \(\varepsilon<1\). Using (\(\star\)), we can find sets \(F,D\subset\mathbb{P}\cap\tau^{-1}(T_{j})\) such that \(F\) is finite with \(\log(F)>1/\varepsilon\), \(D\) is divergent, and there is an injective \(\varepsilon\)-map \(\alpha:FD\to\mathbb{P}\cap\tau^{-1}(T_{i})\). Since \(F\) is finite and \(D\) is divergent, we can find a finite set \(F_{1}\subset D\) which is disjoint from \(F\) and satisfies \(\log(F_{1})>1/\varepsilon\). Let \(F_{2}=FF_{1}\). The restriction of \(\alpha\) to \(F_{2}\) is an \(\varepsilon\)-map, and by construction \(\tau(F_{2})=\{T_{j}^{2}\}\) and \(\tau(\alpha(F_{2}))=\{T_{i}\}\). One can compute that \[\log(\alpha(F_{2}))=\sum_{n\in F_{2}}\frac{1}{\mathcal{N}(\alpha(n))}\geq\sum_{n\in F_{2}}\frac{1}{4\mathcal{N}(n)}=\frac{\log(F_{2})}{4}=\frac{\log(F)\log(F_{1})}{4}\geq\frac{1}{4\varepsilon^{2}}.\] Finally, using Lemma A.2, we deduce that \[\operatorname*{\mathbb{E}}_{n,m\in F_{2}}^{\log}\mathcal{N}(\gcd(n,m))<(1+8\varepsilon)^{2}<1+80\varepsilon,\qquad\text{ and }\qquad\operatorname*{\mathbb{E}}_{n,m\in\alpha(F_{2})}^{\log}\mathcal{N}(\gcd(n,m))<1+16\varepsilon^{2}.\] Since \(\varepsilon\) can be taken arbitrarily small, we meet the conditions of Lemma 3.2, and hence conclude that \(T_{j}^{2}\nu=T_{i}\nu\) as desired.
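The estimates just used revolve around the logarithmic weight \(\log(F)=\sum_{p\in F}1/\mathcal{N}(p)\) and the averages \(\mathbb{E}^{\log}\mathcal{N}(\gcd(\cdot,\cdot))\) controlled by Lemma A.2. The following Python sketch is an illustration only (all helper functions and the cutoff are ours); it computes both quantities for the first-quadrant Gaussian primes of norm at most \(2000\) and checks the first bound of Lemma A.2 numerically.

```python
# Numerical check of the quantity bounded in Lemma A.2, part 1: for a finite
# set F of pairwise non-associate Gaussian primes, the logarithmically
# weighted average of N(gcd(n, m)) over n, m in F is at most 1 + 4/log(F).

def is_rational_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def N(z):  # field norm of the Gaussian integer z = (a, b), i.e. a + bi
    return z[0] * z[0] + z[1] * z[1]

def gmod(a, b):  # remainder of a modulo b in Z[i] (division with rounding)
    nb = N(b)
    re = a[0] * b[0] + a[1] * b[1]   # real part of a * conj(b)
    im = a[1] * b[0] - a[0] * b[1]   # imaginary part of a * conj(b)
    q = (round(re / nb), round(im / nb))
    return (a[0] - q[0] * b[0] + q[1] * b[1], a[1] - q[0] * b[1] - q[1] * b[0])

def ggcd(a, b):  # Euclidean algorithm in Z[i]
    while N(b) != 0:
        a, b = b, gmod(a, b)
    return a

def is_gaussian_prime(a, b):
    # first-quadrant Gaussian primes: either b = 0 and a is a rational prime
    # congruent to 3 mod 4, or b > 0 and a^2 + b^2 is a rational prime
    if b == 0:
        return is_rational_prime(a) and a % 4 == 3
    return is_rational_prime(a * a + b * b)

BOUND = 2000
F = [(a, b) for a in range(1, 50) for b in range(0, 50)
     if a * a + b * b <= BOUND and is_gaussian_prime(a, b)]

logF = sum(1 / N(p) for p in F)
avg = sum(N(ggcd(n, m)) / (N(n) * N(m)) for n in F for m in F) / logF ** 2
print("log(F) =", logF)
print("E^log N(gcd) =", avg, "  bound:", 1 + 4 / logF)
```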
### Sparse sets By assumption, when restricted to primes, \(\tau\) takes only finitely many values. Moreover, \(\tau(p)\in\{T_{1},\dots,T_{k}\}\) for all primes \(p\) outside a convergent set. Throughout the proof we also need to deal with numbers that are not primes, and we seek to find injective \(\varepsilon\)-maps into sets of primes. For that it is important to know the value of \(\tau\) on the primes "near" a given \(n\). For this purpose, it is convenient to introduce the following colorings of \(\mathbb{G}^{*}\): for each \(\delta>0\) we define \(\chi=\chi_{\delta}:\mathbb{G}^{*}\to\{0,1,\dots,k\}\) by letting \(\chi(n)\) be an arbitrary \(\ell\in[k]\) satisfying \[\big{|}\mathbb{P}\cap nS_{\delta}\cap\tau^{-1}(T_{\ell})\big{|}\geq\frac{3.5 \delta^{2}\mathcal{N}(n)}{k\log\mathcal{N}(n)}.\] If no such \(\ell\in[k]\) exists, we let \(\chi(n)=0\). Note that, in view of the prime number theorem (2.4), for each \(\delta>0\) only finitely many \(n\in\mathbb{G}^{*}\) get colored \(0\). We want to build injective \(\delta\)-maps from certain monochromatic divergent sets \(D\subset\chi^{-1}(\ell)\) to the set \(\mathbb{P}\cap\tau^{-1}(T_{\ell})\). To get injectivity we need to make sure that the elements of \(D\) are not too clumped together; indeed some divergent sets have no divergent subset with an injective \(\delta\)-map into the primes (see Example 3.10 below). This forces us to introduce the notion of sparse sets: **Definition 3.8**.: For \(\delta>0\), we say an infinite set \(D\subset\mathbb{G}\) is _\(\delta\)-sparse_ if \[|D\cap nS_{3\delta}|<\frac{\mathcal{N}(n)}{\log\mathcal{N}(n)}\left(\frac{3 \delta^{2}}{k}+o_{\mathcal{N}(n)\to\infty}(1)\right). \tag{3.9}\] In the next lemma, sparcity is used to find \(\delta\)-maps into sets of primes. **Lemma 3.9**.: _Let \(\delta>0\) and suppose \(D\subset\mathbb{G}^{*}\) is an infinite \(\delta\)-sparse set such that \(\chi_{\delta}(D)=\{j\}\) for some \(j\in[k]\). Then there exists a co-finite subset \(D^{\prime}\subset D\) and an injective \(\delta\)-map \(\alpha:D^{\prime}\to\mathbb{P}\cap\tau^{-1}(T_{j})\)._ Proof.: Since \(D\) is \(\delta\)-sparse, there exists a co-finite subset \(D^{\prime}\subset D\) such that \[|D^{\prime}\cap nS_{3\delta}|<\frac{3.5\delta^{2}\mathcal{N}(n)}{k\log \mathcal{N}(n)}\text{ for every }n\in\mathbb{G}^{*}. \tag{3.10}\] For convenience denote \(\chi:=\chi_{\delta}\) and enumerate \(D^{\prime}=\{n_{1},n_{2},\dots\}\). Since \(\chi(n_{1})=j\), there is some prime \(p_{1}\in\mathbb{P}\cap\tau^{-1}(T_{j})\) such that \(p_{1}\in n_{1}S_{\delta}\). Let \(\alpha(n_{1})=p_{1}\). Continuing in this manner, we seek to define recursively, for each \(i>1\), \(\alpha(n_{i})\) to be any prime \(p_{i}\in\mathbb{P}\cap\tau^{-1}(T_{j})\setminus\{\alpha(n_{1}),\ldots,\alpha(n_{ i-1})\}\) with \(p_{i}\in n_{i}S_{\delta}\). We claim that such \(p_{i}\) exists. Since \(\chi(n_{i})=j\), there are at least \(\frac{3.5\delta^{2}\mathcal{N}(n_{i})}{k\log\mathcal{N}(n_{i})}\) many primes \(p\) in \(\tau^{-1}(T_{j})\cap n_{i}S_{\delta}\). Let \[F_{i}:=\big{\{}n_{t}:t<i\text{ and }\alpha(n_{t})\in n_{i}S_{\delta}\big{\}}.\] For each \(n_{t}\in F_{i}\), we have \(\alpha(n_{t})\in n_{t}S_{\delta}\cap n_{i}S_{\delta}\), and so this intersection is non-empty. Thus, \[n_{t}\in\alpha(n_{t})S_{\delta}^{-1}\subset n_{i}S_{\delta}S_{\delta}^{-1} \subset n_{i}S_{3\delta}.\] It follows that \(F_{i}\subset D^{\prime}\cap n_{i}S_{3\delta}\). 
Since \(D^{\prime}\) satisfies (3.10), we have \(|F_{i}|<\frac{3.5\delta^{2}\mathcal{N}(n_{i})}{k\log\mathcal{N}(n_{i})}\). Hence, there must be a prime \(p\) other than \(\alpha(n_{1}),\ldots,\alpha(n_{i-1})\) belonging to \(\tau^{-1}(T_{j})\cap n_{i}S_{\delta}\). It is not hard to see that \(\alpha\) is an injective d-map on \(D^{\prime}\). Unfortunately, it is not true that every divergent set contains a sparse divergent subset, as shown by the next example. _Example 3.10_.: Fix \(\delta>0\), \(A\subset\mathbb{G}^{*}\), and let \[E_{A}:=\bigcup_{n\in A}\big{(}\mathbb{G}\cap nS_{\delta}\big{)}.\] It is not difficult to compute that \(\log\big{(}\mathbb{G}\cap nS_{\delta}\big{)}\approx\delta^{2}\). Therefore, if the sets \(nS_{\delta}\) with \(n\in A\) are pairwise disjoint then \(E_{A}\) is divergent as long as \(A\) is infinite. On the other hand, if \(S\) is a \(\delta\)-sparse set, then \[\log(S\cap nS_{\delta})\ll_{\delta}\frac{1}{\mathcal{N}(n)}\big{|}S\cap nS_{ \delta}\big{|}\ll_{\delta}\frac{1}{k\log\mathcal{N}(n)},\] so if \(\sum_{n\in A}\frac{1}{\log\mathcal{N}(n)}<\infty\), then any \(\delta\)-sparse subset of \(E_{A}\) must be convergent. However, the next lemma shows that any divergent set of primes has a sparse divergent subset. In fact, we need something stronger. **Lemma 3.11**.: _Let \(D\subset\mathbb{P}\) be a divergent set, let \(F\subset\mathbb{G}^{*}\) be a finite set and let \(\delta\in(0,1/3)\). Suppose that_ \[\forall n,m\in F,\qquad n\in mS_{10\delta}\quad\Rightarrow\quad n=m. \tag{3.11}\] _Then there exists a divergent subset \(D^{\prime}\subset D\) such that the product set \(FD^{\prime}\) satisfies_ \[\forall n\in\mathbb{G}^{*},\qquad|FD^{\prime}\cap nS_{3\delta}|<\frac{3\delta^ {2}\mathcal{N}(n)}{k\log\mathcal{N}(n)}. \tag{3.12}\] _In particular, \(FD^{\prime}\) is \(\delta\)-sparse._ Proof.: Consider the set \(S_{1}:=(F\cup\{1\})S_{3\delta}\) and let \(S_{2}=S_{1}S_{1}^{-1}\) and \(S=S_{2}S_{2}^{-1}\). Before we proceed with the proof we make some observations that will be useful. Note that \(1\in S_{1}\) so \(S_{1}\subset S_{2}\subset S\). The set \(F\) is finite, so \(C_{0}:=\sup\{|z|^{2}:z\in S\}\) is a positive and finite number that depends only on \(\delta\) and \(F\). The inverse \(S^{-1}\) equals \(S\), so for \(n,m\in\mathbb{G}\) the statements \(n\in mS\) and \(m\in nS\) are equivalent. Since \(S\) is bounded, it follows from (2.1) that \[\forall n\in\mathbb{G}^{*},\qquad\big{|}\mathbb{P}\cap nS\big{|}\leq C_{1} \frac{\mathcal{N}(n)}{\log\mathcal{N}(n)}\] for some constant \(C_{1}>0\) that depends only on \(\delta\) and \(F\). Moreover, for any \(m,n\in\mathbb{G}\) with \(m\in nS\), we have \(\mathcal{N}(m)\geq\mathcal{N}(n)/C_{0}\). So \[\forall n\in\mathbb{G}^{*},\qquad\log\big{(}\mathbb{P}\cap nS\big{)}=\sum_{p\in \mathbb{P}\cap nS}\frac{1}{\mathcal{N}(p)}\leq\frac{\left|\mathbb{P}\cap nS \right|}{\mathcal{N}(n)/C_{0}}\leq\frac{C_{0}C_{1}}{\log\mathcal{N}(n)}.\] Therefore, for any divergent set \(D_{0}\subset\mathbb{P}\), any subset \(D_{1}\subset D_{0}\) satisfying \(D_{0}\subset D_{1}S\) must satisfy \(\sum_{d\in D_{1}}\frac{1}{\log\mathcal{N}(d)}=\infty\). We are now ready to proceed with the proof. Construct \(D^{\prime}\) using a greedy algorithm as follows: Enumerate the elements of \(D=\{d_{1},d_{2},\dots\}\) so that \(\mathcal{N}(d_{i})\leq\mathcal{N}(d_{i+1})\). 
Let \(D^{\prime}_{0}=\emptyset\) and for each \(i\in\mathbb{N}\) let \(D^{\prime}_{i}:=D^{\prime}_{i-1}\cup\{d_{i}\}\) if it satisfies (3.12), or \(D^{\prime}_{i}:=D^{\prime}_{i-1}\) otherwise. Take \(D^{\prime}=\bigcup D^{\prime}_{i}\). Note that \(D^{\prime}\) satisfies (3.12), but for any \(d\in D\setminus D^{\prime}\), the set \(D^{\prime}\cup\{d\}\) does not. We will show that the set \(D^{\prime}\) thus constructed is divergent, and this will finish the proof. Let \(D_{0}=D\setminus D^{\prime}\). If \(D_{0}\) is not divergent then \(D^{\prime}\) must be divergent, finishing the proof, so we will now assume that \(D_{0}\) is a divergent subset of \(\mathbb{P}\). Using again a greedy algorithm, we may find a maximal subset \(D_{1}\subset D_{0}\) satisfying \[\forall n,m\in D_{1},\qquad n\in mS\quad\Rightarrow\quad n=m. \tag{3.13}\] Since \(D_{1}\) is maximal we have \(D_{0}\subset D_{1}S\), so it follows from the observations at the beginning of the proof that \(\sum_{d\in D_{1}}\frac{1}{\log\mathcal{N}(d)}=\infty\). We claim that for each \(d\in D_{1}\), \(\log(D^{\prime}\cap dS_{2})\geq\frac{C_{2}}{\log\mathcal{N}(d)}\), where \(C_{2}>0\) is a constant that only depends on \(\delta\), \(F\) and \(k\). Note that (3.13) and the fact that \(S=S_{2}S_{2}^{-1}\) imply that the sets \(dS_{2}\) with \(d\in D_{1}\) are pairwise disjoint. Therefore if we prove this claim, then we finish the proof that \(\log(D^{\prime})=\infty\). Since \(d\in D_{1}\subset D\setminus D^{\prime}\) and \(D^{\prime}\) is a maximal subset of \(D\) satisfying (3.12), it follows that \(\tilde{D}:=D^{\prime}\cup\{d\}\) does not satisfy (3.12) and hence there exists \(n\in\mathbb{G}^{*}\) such that the set \(A:=F\tilde{D}\cap nS_{3\delta}\) satisfies \[|A|\geq\frac{3\delta^{2}\mathcal{N}(n)}{k\log\mathcal{N}(n)}.\] Observe that necessarily \(n\in FdS_{3\delta}^{-1}\); indeed since \(D^{\prime}\) does satisfy (3.12), there exists \(a_{0}\in A\cap Fd\). Since \(A\subset nS_{3\delta}\), it follows that \(n\in a_{0}S_{3\delta}^{-1}\subset FdS_{3\delta}^{-1}\). For each \(a\in A\), there exists \(\tilde{a}\in\tilde{D}\) such that \(a\in F\tilde{a}\). In view of (3.11) (and the fact that \(A\subset nS_{3\delta}\)), the map \(a\mapsto\tilde{a}\) is injective, so \(a_{0}\) is the only element of \(A\) with \(\tilde{a_{0}}=d\). Letting \(\tilde{A}:=\big{\{}\tilde{a}:a\in A\big{\}}\setminus\{d\}\) we have \(|\tilde{A}|=|A|-1\) and \(\tilde{A}\subset D^{\prime}\). Moreover, \[\tilde{A}\subset AF^{-1}\subset nS_{3\delta}F^{-1}\subset FdS_{3\delta}^{-1}S_ {3\delta}F^{-1}\subset dS_{2},\] so \(\tilde{A}\subset D^{\prime}\cap dS_{2}\) and we've reduced the claim to showing that \(\log(\tilde{A})\geq\frac{C_{2}}{\log\mathcal{N}(d)}\). By removing from \(D\) a finite set (depending only on \(\delta,k\) and \(F\)), we may assume that \(\mathcal{N}(n)\) is large enough so that \(|A|>2\), and hence \(|\tilde{A}|\geq|A|/2\). Also, since \(\tilde{A}\subset nS_{3\delta}F^{-1}\subset nS\), it follows that for every \(\tilde{a}\in\tilde{A}\), we have the estimate \(\mathcal{N}(\tilde{a})\leq\mathcal{N}(n)C_{0}\). Similarly, from \(n\in FdS_{3\delta}^{-1}\subset dS\) it follows that \(\mathcal{N}(n)\leq\mathcal{N}(d)C_{0}\). By further removing a finite set (depending only on \(\delta,k\) and \(F\)) from \(D\), we may assume that \(\mathcal{N}(n)\leq\mathcal{N}(d)^{2}\). 
We conclude that \[\log(\tilde{A})=\sum_{\tilde{a}\in\tilde{A}}\frac{1}{\mathcal{N}(\tilde{a})}\geq\frac{|\tilde{A}|}{\max\{\mathcal{N}(\tilde{a}):\tilde{a}\in\tilde{A}\}}\geq\frac{|A|}{2C_{0}\mathcal{N}(n)}\geq\frac{3\delta^{2}}{2C_{0}k\log\mathcal{N}(n)}\geq\frac{3\delta^{2}}{4C_{0}k\log\mathcal{N}(d)}. \tag{3.14}\] This proves the claim with \(C_{2}=\frac{3\delta^{2}}{4C_{0}k}\). Applying Lemma 3.11 for the set \(F=\{1\}\), we deduce the following: **Corollary 3.12**.: _For every \(\delta>0\), any divergent subset of \(\mathbb{P}\) has a \(\delta\)-sparse divergent subset._ ### Proof of Lemma 3.4 Let \((X,\tau),T_{1},\ldots,T_{k}\) and \(j\in[k]\) be as in the statement of the lemma. Let \(I\subset[k]\) be the set of those \(i\in[k]\) for which \((\star)\) does not hold. We want to show that \(I\) is not all of \([k]\). For each \(\ell\in I\) there is some \(\varepsilon_{\ell}\) for which \((\star)\) fails. Take \(\varepsilon>0\) smaller than all the \(\varepsilon_{\ell}\). We will find some \(i\in[k]\) for which \((\star)\) holds with this choice of \(\varepsilon\). By construction, such \(i\) will not be in \(I\), finishing the proof. With the choice of \(\varepsilon\) from the previous paragraph, take a finite set \(\tilde{F}\subset\mathbb{P}\cap\tau^{-1}(T_{j})\) with \(\log(\tilde{F})>k/\varepsilon\). Pick \(\delta>0\) sufficiently small so that the ratio between any two distinct elements of \(\tilde{F}\) is outside \(S_{10\delta}\). We may also require that \(\delta<\min\{1/3,\varepsilon\}\). Consider the coloring \(\chi=\chi_{\delta}:\mathbb{G}^{*}\to\{0,\ldots,k\}\) described in Section 3.4. Recall that \(\chi(n)=0\) for only finitely many \(n\in\mathbb{G}^{*}\). Therefore, for each sufficiently large \(n\in\mathbb{P}\cap\tau^{-1}(T_{j})\), we may find a subset \(F_{n}\subset\tilde{F}\) with \(\log(F_{n})>1/\varepsilon\) and such that \(nF_{n}\) is monochromatic with a color in \([k]\). Since there are only finitely many possible subsets of \(\tilde{F}\), we can find a subset \(F\subset\tilde{F}\) with \(\log(F)>1/\varepsilon\) and a divergent set \(D_{1}\subset\mathbb{P}\cap\tau^{-1}(T_{j})\) such that \(F_{n}=F\) for every \(n\in D_{1}\). We may then find a color \(i\in[k]\) and a divergent subset \(D_{2}\subset D_{1}\) such that \(\chi(Fd)=\{i\}\) for all \(d\in D_{2}\). In other words, \(\chi(FD_{2})=\{i\}\) for this choice of \(D_{2}\). We can now apply Lemma 3.11 to find a subset \(D\subset D_{2}\) which is divergent and such that \(FD\) is \(\delta\)-sparse. Finally, we apply Lemma 3.9 to find an injective \(\delta\)-map \(\alpha:FD^{\prime}\to\mathbb{P}\cap\tau^{-1}(T_{i})\) for some co-finite subset \(D^{\prime}\) of \(D\). We conclude that \((\star)\) holds with the choice of \(\varepsilon\) in the first paragraph, which implies that \(i\notin I\) and hence finishes the proof. ### Proof of Lemma 3.5 In this section, we fix a function \(\lfloor\cdot\rfloor:\mathbb{C}\to\mathbb{G}\) satisfying \(\big|\lfloor z\rfloor-z\big|<1\) for all \(z\in\mathbb{C}\). Given a set \(S\subset\mathbb{C}\), we denote by \(\lfloor S\rfloor\) the set \(\{\lfloor s\rfloor:s\in S\}\). **Lemma 3.13**.: _Let \(\delta>0\) and let \(D\subset\mathbb{G}\) be a \(\delta\)-sparse set.
If \(z\in\mathbb{C}\) with \(\left|z\right|\geq 1\), then \(\left\lfloor zD\right\rfloor\) is also \(\delta\)-sparse._ Proof.: For each \(n\in\mathbb{G}^{*}\), note that \[\left\{u\in\mathbb{C}:\left\lfloor u\right\rfloor\in nS_{3\delta}\right\}\subset nS_{3\delta}+\mathbb{D},\] and hence for all \(d\in D\) with \(\left\lfloor zd\right\rfloor\in nS_{3\delta}\), we have that \[d\in\frac{n}{z}S_{3\delta}+\frac{1}{\left|z\right|}\mathbb{D}.\] Since the number of lattice points in \[\left(\frac{n}{z}S_{3\delta}+\frac{1}{\left|z\right|}\mathbb{D}\right)\setminus\left\lfloor\frac{n}{z}\right\rfloor S_{3\delta}\] can be bounded from above by \(C\sqrt{\mathcal{N}(n)}\) for some constant \(C\) that only depends on \(\delta\), we conclude that \[\left|\lfloor zD\rfloor\cap nS_{3\delta}\right| \leq \left|D\cap\left\lfloor\frac{n}{z}\right\rfloor S_{3\delta}\right|+C\sqrt{\mathcal{N}(n)}\] \[\leq \frac{\mathcal{N}\left(\lfloor n/z\rfloor\right)}{\log\mathcal{N}\left(\lfloor n/z\rfloor\right)}\left(\frac{3\delta^{2}}{k}+o_{\mathcal{N}(n)\to\infty}(1)\right)\] \[\leq \frac{\mathcal{N}(n)}{\log\mathcal{N}(n)}\left(\frac{3\delta^{2}}{k}+o_{\mathcal{N}(n)\to\infty}(1)\right).\] Proof of Lemma 3.5.: Let \(i,j\in[k]\) satisfy the hypothesis of Lemma 3.5. Fix an empirical measure \(\nu\) and denote by \(K=\{\ell\in[k]:T_{\ell}\nu=T_{j}\nu\}\); we want to show that \(i\in K\). For each pair \((\ell,\ell^{\prime})\in[k]^{2}\) with \(\ell\in K\) and \(\ell^{\prime}\notin K\) we have \(T_{\ell}\nu=T_{j}\nu\neq T_{\ell^{\prime}}\nu\). Therefore, in view of Lemma 3.2, for each such pair, there exists some \(\varepsilon_{\ell,\ell^{\prime}}>0\) for which the conditions in Lemma 3.2 do not hold. Let \[\varepsilon=\min\{\varepsilon_{\ell,\ell^{\prime}}:\ell\in K,\ell^{\prime}\in[k]\setminus K\}.\] Using the fact that there are only finitely many such pairs, we deduce that \(\varepsilon>0\). Moreover, in view of Lemma A.2, \(\varepsilon\) satisfies the following property: \[\begin{array}{l}\text{Whenever }\ell\in K\text{ and }\ell^{\prime}\in[k],\text{ if there is an injective }\varepsilon\text{-map between a divergent}\\ \text{subset of }\mathbb{P}\cap\tau^{-1}(T_{\ell})\text{ and }\mathbb{P}\cap\tau^{-1}(T_{\ell^{\prime}})\text{, then also }\ell^{\prime}\in K.\end{array}\] (P) Next let \(\delta>0\) be much smaller than \(\varepsilon\) (the exact magnitude of \(\delta\) will be determined later as a function of \(\varepsilon\) and \(k\)). By the assumption of the lemma, there exists a divergent set \(D\subset\mathbb{P}\cap\tau^{-1}(T_{j})\), some \(m\in\mathbb{G}^{*}\) and an injective \(\delta\)-map \(\alpha:mD\to\mathbb{P}\cap\tau^{-1}(T_{i})\). Let \(z\in S_{\delta}\) and \(r\in\mathbb{N}\) be such that \(z^{r}=m\). Note that \(|z|\geq 1\). Also let \(\chi=\chi_{\delta}\) be the coloring introduced in Section 3.4. In view of Corollary 3.12, we can replace \(D\) with a divergent subset which is \(\delta\)-sparse. Lemma 3.13 then implies that \(\lfloor z^{t}D\rfloor\) is \(\delta\)-sparse for all \(t\in\{0,1,\ldots,r\}\). We want that, for each \(t\in\{0,\ldots,r\}\), the map \(\beta_{t}\colon\lfloor z^{t}D\rfloor\to\lfloor z^{t+1}D\rfloor\), \(\lfloor z^{t}n\rfloor\mapsto\lfloor z^{t+1}n\rfloor\) is a well defined, injective map.
Since \(\left|\lfloor z_{1}\rfloor-\lfloor z_{2}\rfloor\right|>|z_{1}-z_{2}|-2\) for any \(z_{1},z_{2}\in\mathbb{C}\), by replacing \(D\) with a divergent subset satisfying \(|d_{1}-d_{2}|>2r+2\) for every pair of distinct \(d_{1},d_{2}\in D\), we make sure that indeed each of the maps \(\beta_{t}\) is injective and well defined. Since \(z\in S_{\delta}\), after removing from \(D\) a finite set if necessary, each of the maps \(\beta_{t}\) is a \((2\delta)\)-map. Using the fact that being divergent is a partition regular property, we can once again replace \(D\) with a divergent subset so that the set \(\lfloor z^{t}D\rfloor\) is \(\chi\)-monochromatic for each \(t\in\{0,1,\ldots,r\}\). Let \(\ell_{t}\in[k]\) be the (single) color of \(\lfloor z^{t}D\rfloor\) for each \(t\in\{1,\ldots,r\}\). For convenience of notation, let \(\ell_{0}:=j\), so that \(\ell_{0}\in K\) trivially. We will prove by induction that \(\ell_{t}\in K\) for all \(t\in\{0,\ldots,r\}\). By Lemma 3.9, after passing to a co-finite subset of \(D\) if necessary, for each \(t\in\{0,\ldots,r\}\) there exists an injective \(\delta\)-map \(\tilde{\alpha}_{t}:\lfloor z^{t}D\rfloor\to\mathbb{P}\cap\tau^{-1}(T_{\ell_{t}})\) (for \(t=0\) we can take \(\tilde{\alpha}_{0}\) to be the identity map). Making \(\delta\) small enough in terms of \(\varepsilon\), it follows that \(\tilde{\alpha}_{t+1}\circ\beta_{t}\circ\tilde{\alpha}_{t}^{-1}\) is an injective \(\varepsilon\)-map between a divergent subset of \(\mathbb{P}\cap\tau^{-1}(T_{\ell_{t}})\) and \(\mathbb{P}\cap\tau^{-1}(T_{\ell_{t+1}})\) (where we only define \(\tilde{\alpha}_{t}^{-1}\) on those numbers which have a (unique) pre-image). In view of (P), it follows by induction that \(\ell_{t}\in K\) for all \(t\in\{0,\ldots,r\}\). In particular \(\ell_{r}\in K\). Since \(z^{r}D=mD\) and \(\alpha:mD\to\mathbb{P}\cap\tau^{-1}(T_{i})\) is an injective \(\delta\)-map, it follows that \(\alpha\circ\tilde{\alpha}_{r}^{-1}\) is an injective \(\varepsilon\)-map between a divergent subset of \(\mathbb{P}\cap\tau^{-1}(T_{\ell_{r}})\) and \(\mathbb{P}\cap\tau^{-1}(T_{i})\) (again we only define \(\tilde{\alpha}_{r}^{-1}\) on those numbers which have a (unique) pre-image). Using (P) one final time, we conclude that \(i\in K\) as desired. ## 4. Applications In this section we deduce from Theorem E other results formulated in the introduction. For a function \(f\colon\mathbb{G}^{*}\to\mathbb{C}\), we say that \(\mathbb{E}(f)\) exists if \(\mathbb{E}_{n\in\Phi_{N}}\,f(n)\) converges to the same limit for every dilated Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\). In this case, we use \(\mathbb{E}(f)\) to denote the common limit. ### Comparing the averages of two multiplicative functions In this section we present some preliminary estimates needed for the proof of Theorem C. **Lemma 4.1**.: _For any dilated Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\), there exist constants \(C,N_{0}>0\) such that if \(f,g:\mathbb{G}^{*}\to\mathbb{C}\) are completely multiplicative functions bounded by \(1\), then for all \(N\geq N_{0}\),_ \[\left|\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}\big(f(n)-g(n)\big)\right|<C\sum_{p\in\mathbb{P}}\frac{|f(p)-g(p)|}{\mathcal{N}(p)}.\] Proof.: Since \((\Phi_{N})_{N\in\mathbb{N}}\) is a dilated Folner sequence we have \(\Phi_{N}=\mathbb{G}^{*}\cap k_{N}U\), where \(U\) is a Jordan measurable subset of \(\mathbb{C}\) and \((k_{N})_{N\in\mathbb{N}}\) is a sequence of positive real numbers satisfying \(k_{N}\to\infty\) as \(N\to\infty\).
Let \(R>0\) be such that the disk \(R\mathbb{D}\) contains \(U\). Note that \(\Phi_{N}\subset\mathbb{G}^{*}\cap k_{N}R\mathbb{D}\) for all \(N\) and \[\lim_{N\to\infty}\frac{|\Phi_{N}|}{|\mathbb{G}^{*}\cap k_{N}R\mathbb{D}|}=\frac{\mathrm{m}(U)}{\mathrm{m}(R\mathbb{D})}>0,\] where \(\mathrm{m}\) denotes the Lebesgue measure. Thus there are positive constants \(\rho,N_{0}\) such that \(|\Phi_{N}|>\rho|\mathbb{G}^{*}\cap k_{N}R\mathbb{D}|\) for all \(N\geq N_{0}\). It follows that for any \(a\in\mathbb{G}^{*}\), \[|\Phi_{N}/a|\leq|(\mathbb{G}^{*}\cap k_{N}R\mathbb{D})/a|\leq\frac{2|\mathbb{G}^{*}\cap k_{N}R\mathbb{D}|}{\mathcal{N}(a)}\leq\frac{2|\Phi_{N}|}{\rho\mathcal{N}(a)}. \tag{4.1}\] Let \(f,g:\mathbb{G}^{*}\to\mathbb{C}\) be completely multiplicative functions bounded by \(1\). First, assume \(f(q)=g(q)\) for all primes \(q\in\mathbb{P}\) except for \(q=p\). We will show that for all \(N\geq N_{0}\), \[\left|\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}\big(f(n)-g(n)\big)\right|<\frac{C|f(p)-g(p)|}{\mathcal{N}(p)}. \tag{4.2}\] Indeed, since \(f(n)=g(n)\) if \(p\nmid n\), we have \[\left|\sum_{n\in\Phi_{N}}\big(f(n)-g(n)\big)\right| =\left|\sum_{k=0}^{\infty}\sum_{\begin{subarray}{c}n\in\Phi_{N}\\ p^{k}\|n\end{subarray}}\big(f(n)-g(n)\big)\right|=\left|\sum_{k=0}^{\infty}\sum_{\begin{subarray}{c}n\in\Phi_{N}/p^{k}\\ p\nmid n\end{subarray}}\big(f(np^{k})-g(np^{k})\big)\right|\] \[=\left|\sum_{k=0}^{\infty}\big(f(p^{k})-g(p^{k})\big)\sum_{\begin{subarray}{c}n\in\Phi_{N}/p^{k}\\ p\nmid n\end{subarray}}f(n)\right|\leq\sum_{k=1}^{\infty}\big|f(p^{k})-g(p^{k})\big|\cdot\big|\Phi_{N}/p^{k}\big|.\] Using (4.1) and dividing by \(|\Phi_{N}|\), we get that \[\left|\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}\big(f(n)-g(n)\big)\right| <\sum_{k=1}^{\infty}\frac{|f(p)^{k}-g(p)^{k}|}{\rho\mathcal{N}(p)^{k}}=\frac{|f(p)-g(p)|}{\rho\mathcal{N}(p)}\sum_{k=1}^{\infty}\frac{|\sum_{j=0}^{k-1}f(p)^{j}g(p)^{k-1-j}|}{\mathcal{N}(p)^{k-1}}\] \[\leq\frac{|f(p)-g(p)|}{\rho\mathcal{N}(p)}\sum_{k=1}^{\infty}\frac{k}{\mathcal{N}(p)^{k-1}}\leq\frac{|f(p)-g(p)|}{\rho\mathcal{N}(p)}\sum_{k=1}^{\infty}\frac{k}{2^{k-1}}\leq\frac{4|f(p)-g(p)|}{\rho\mathcal{N}(p)},\] establishing (4.2) with \(C=4/\rho\). For arbitrary functions \(f\) and \(g\), we can apply (4.2) by changing \(f\) at one prime at a time. More precisely, fix \(N>N_{0}\) and let \(\{p_{1},\ldots,p_{s}\}\subset\mathbb{P}\) be a list of pairwise coprime primes containing (up to a unit) every prime divisor of an element of \(\Phi_{N}\). Define completely multiplicative functions \(f_{j}:\mathbb{G}^{*}\to\mathbb{C}\) inductively as follows: \(f_{0}=f\) and for each \(1\leq j\leq s\), let \[f_{j}(p)=\begin{cases}f_{j-1}(p)\text{ for }p\in\mathbb{P}\setminus\{p_{j}\}\\ g(p)\text{ for }p=p_{j}.\end{cases}\] Since \(f_{j}(p)=f_{j-1}(p)\) for all primes \(p\) except for \(p=p_{j}\), using (4.2) we deduce that \[\left|\underset{n\in\Phi_{N}}{\mathbb{E}}(f_{j}(n)-f_{j-1}(n))\right|<\frac{C|f(p_{j})-g(p_{j})|}{\mathcal{N}(p_{j})}.\] By construction, \(f_{s}(n)=g(n)\) for all \(n\in\Phi_{N}\). Thus, by the triangle inequality, \[\left|\underset{n\in\Phi_{N}}{\mathbb{E}}(f(n)-g(n))\right|\leq\sum_{j=1}^{s}\left|\underset{n\in\Phi_{N}}{\mathbb{E}}(f_{j-1}(n)-f_{j}(n))\right|<C\sum_{j=1}^{s}\frac{|f(p_{j})-g(p_{j})|}{\mathcal{N}(p_{j})}<C\sum_{p\in\mathbb{P}}\frac{|f(p)-g(p)|}{\mathcal{N}(p)}.\] **Lemma 4.2**.: _Let \(f,g:\mathbb{G}^{*}\to\mathbb{C}\) be completely multiplicative functions bounded by \(1\).
Suppose \(\mathbb{E}(g)\) exists, \(f(\mathrm{i})=g(\mathrm{i})\), and \(f(q)=g(q)\) for all primes \(q\) except possibly a single prime \(q=p\). Then \(\mathbb{E}(f)\) exists and equals_ \[\frac{1-g(p)/\mathcal{N}(p)}{1-f(p)/\mathcal{N}(p)}\cdot\mathbb{E}(g).\] Proof.: Fix an arbitrary dilated Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\). We have \[\sum_{n\in\Phi_{N}}f(n) = \sum_{n\in\Phi_{N}}f(n)\sum_{k=0}^{\infty}1_{p^{k}\|n}=\sum_{k=0}^{\infty}f(p)^{k}\left(\sum_{n\in\Phi_{N}/p^{k}}g(n)1_{p\nmid n}\right)\] \[= \sum_{k=0}^{\infty}f(p)^{k}\left(\sum_{n\in\Phi_{N}/p^{k}}g(n)-g(p)\sum_{n\in\Phi_{N}/p^{k+1}}g(n)\right)\] \[= \sum_{n\in\Phi_{N}}g(n)+\big(f(p)-g(p)\big)\sum_{k=1}^{\infty}f(p)^{k-1}\sum_{n\in\Phi_{N}/p^{k}}g(n).\] Since \(|\Phi_{N}/p^{k}|/|\Phi_{N}|\to\frac{1}{\mathcal{N}(p)^{k}}\) as \(N\to\infty\), it follows that, after dividing by \(|\Phi_{N}|\) and taking the limit as \(N\to\infty\) above (and making use of the dominated convergence theorem), \[\lim_{N\to\infty}\underset{n\in\Phi_{N}}{\mathbb{E}}f(n)=\mathbb{E}(g)+\frac{f(p)-g(p)}{\mathcal{N}(p)}\cdot\sum_{k=1}^{\infty}\frac{f(p)^{k-1}}{\mathcal{N}(p)^{k-1}}\,\mathbb{E}(g)=\frac{\mathcal{N}(p)-g(p)}{\mathcal{N}(p)-f(p)}\,\mathbb{E}(g).\] In particular, \(\lim_{N\to\infty}\mathbb{E}_{n\in\Phi_{N}}f(n)\) exists and its value is independent of the Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\), so \(\mathbb{E}(f)\) exists. **Lemma 4.3**.: _Let \(f,g:\mathbb{G}^{*}\to\mathbb{C}\) be completely multiplicative functions bounded by \(1\) satisfying \(f(\mathrm{i})=g(\mathrm{i})\). Suppose \(\mathbb{E}(g)\) exists and_ \[\sum_{p\in\mathbb{P}_{1}}\frac{|g(p)-f(p)|}{\mathcal{N}(p)}<\infty. \tag{4.3}\] _Then \(\mathbb{E}(f)\) exists and equals_ \[\mathbb{E}(g)\cdot\prod_{p\in\mathbb{P}_{1}}\frac{1-g(p)/\mathcal{N}(p)}{1-f(p)/\mathcal{N}(p)}.\] Proof.: First note that \[\prod_{p\in\mathbb{P}_{1}}\frac{1-g(p)/\mathcal{N}(p)}{1-f(p)/\mathcal{N}(p)}=\prod_{p\in\mathbb{P}_{1}}\left(1+\frac{f(p)-g(p)}{\mathcal{N}(p)-f(p)}\right) \tag{4.4}\] and hence converges in view of (4.3) and the fact that \(|f(p)|\leq 1\leq\mathcal{N}(p)/2\) for every \(p\in\mathbb{P}_{1}\). Fix \(\varepsilon>0\) and find a finite set \(F\subset\mathbb{P}_{1}\) such that \[\left|\prod_{p\in F}\frac{1-g(p)/\mathcal{N}(p)}{1-f(p)/\mathcal{N}(p)}-\prod_{p\in\mathbb{P}_{1}}\frac{1-g(p)/\mathcal{N}(p)}{1-f(p)/\mathcal{N}(p)}\right|<\varepsilon\qquad\text{and}\qquad\sum_{p\in\mathbb{P}_{1}\setminus F}\frac{|g(p)-f(p)|}{\mathcal{N}(p)}<\varepsilon.\] Let \(h:\mathbb{G}^{*}\to\mathbb{C}\) be the completely multiplicative function satisfying \(h(p)=f(p)\) for \(p\in F\), \(h(p)=g(p)\) for \(p\in\mathbb{P}_{1}\setminus F\), and \(h(\mathrm{i})=1\). Applying Lemma 4.2 inductively \(|F|\) times, it follows that \(\mathbb{E}(h)\) exists and equals \[\mathbb{E}(g)\cdot\prod_{p\in F}\frac{1-g(p)/\mathcal{N}(p)}{1-f(p)/\mathcal{N}(p)}.\] On the other hand, in view of Lemma 4.1, for every dilated Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\), there is a constant \(C>0\) such that \(\big|\,\mathbb{E}_{n\in\Phi_{N}}(h(n)-f(n))\big|\leq C\varepsilon\) for all sufficiently large \(N\). It follows that \[\limsup_{N\to\infty}\left|\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(n)-\mathbb{E}(g)\cdot\prod_{p\in\mathbb{P}_{1}}\frac{1-g(p)/\mathcal{N}(p)}{1-f(p)/\mathcal{N}(p)}\right|<\big(C+|\,\mathbb{E}(g)|\big)\varepsilon.\] The conclusion now follows by taking \(\varepsilon\to 0\).
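The Euler-factor correction in Lemma 4.2 is easy to test numerically. In the Python sketch below (an illustration only; the encoding of Gaussian integers as integer pairs and all function names are ours) we take \(g\equiv 1\), so that \(\mathbb{E}(g)=1\), and flip the value of \(f\) at the single Gaussian prime \(p=1+2\mathrm{i}\), i.e. \(f(p)=-1\), \(f(\mathrm{i})=1\) and \(f(q)=1\) at every other prime. Lemma 4.2 predicts \(\mathbb{E}(f)=\frac{1-1/5}{1+1/5}=\frac{2}{3}\), and the empirical averages over large disks agree with this value.

```python
# Numerical illustration of Lemma 4.2: g = 1 and f differs from g only at the
# Gaussian prime p = 1 + 2i (norm 5), where f(p) = -1.  The predicted mean
# value is (N(p) - 1)/(N(p) - f(p)) = 4/6 = 2/3.

def norm(z):
    return z[0] * z[0] + z[1] * z[1]

def exact_quotient(a, b):
    """Return a/b if b divides a in Z[i], otherwise None."""
    nb = norm(b)
    re = a[0] * b[0] + a[1] * b[1]   # real part of a * conj(b)
    im = a[1] * b[0] - a[0] * b[1]   # imaginary part of a * conj(b)
    if re % nb == 0 and im % nb == 0:
        return (re // nb, im // nb)
    return None

def f(n, p=(1, 2)):
    """f(n) = (-1)^(multiplicity of p in n): the completely multiplicative
    function with f(p) = -1 and f = 1 at i and at every prime not associate
    to p."""
    sign = 1
    while True:
        q = exact_quotient(n, p)
        if q is None:
            return sign
        sign, n = -sign, q

R = 150  # radius of the disk over which we average
total = count = 0
for a in range(-R, R + 1):
    for b in range(-R, R + 1):
        if 0 < a * a + b * b <= R * R:
            total += f((a, b))
            count += 1
print(total / count, "vs predicted", 2 / 3)
```

Flipping \(f\) at several primes instead reproduces the product of the corresponding Euler factors, in line with Lemma 4.3.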
**Corollary 4.4**.: _Let \(g:\mathbb{G}^{*}\to[0,1]\) be a completely multiplicative function such that \(\sum_{p\in\mathbb{P}_{1}}\frac{|1-g(p)|}{\mathcal{N}(p)}=\infty\). Then \(\mathbb{E}(g)=0\)._ Proof.: For each \(M\in\mathbb{N}\), let \(g_{M}:\mathbb{G}^{*}\to[0,1]\) be the completely multiplicative function satisfying \(g_{M}(\mathrm{i})=1\), \(g_{M}(p)=g(p)\) whenever \(\mathcal{N}(p)\leq M\) and \(g_{M}(p)=1\) otherwise. Using Lemma 4.3 to compare \(g_{M}\) with the constant \(1\) function we conclude that \(\mathbb{E}(g_{M})\) exists and satisfies \[\mathbb{E}(g_{M})=\prod_{\begin{subarray}{c}p\in\mathbb{P}_{1}\\ \mathcal{N}(p)\leq M\end{subarray}}\frac{1-1/\mathcal{N}(p)}{1-g(p)/\mathcal{N}(p)}\leq\prod_{\begin{subarray}{c}p\in\mathbb{P}_{1}\\ \mathcal{N}(p)\leq M\end{subarray}}\left(1-\frac{1-g(p)}{\mathcal{N}(p)}\right).\] Since \(\sum_{p\in\mathbb{P}_{1}}\frac{|1-g(p)|}{\mathcal{N}(p)}=\infty\) it follows that \(\lim_{M\to\infty}\mathbb{E}(g_{M})=0\). On the other hand, for all \(M\) we have \(g_{M}(n)\geq g(n)\geq 0\), so we conclude that \(\mathbb{E}(g)\) exists and equals \(0\). ### Proof of Theorem C Let \(f:\mathbb{G}^{*}\to\mathbb{C}\) be a bounded completely multiplicative function. Recall from (1.6) the infinite product \(P(f)\): \[P(f)\coloneqq\prod_{p\in\mathbb{P}_{1}}\frac{\mathcal{N}(p)-1}{\mathcal{N}(p)-f(p)}.\] We first show that, if \(\operatorname{Arg}(f(\mathbb{P}))\) is finite, then \(P(f)\) exists. **Lemma 4.5**.: _Let \(f:\mathbb{G}^{*}\to\mathbb{C}\) be a completely multiplicative function bounded by \(1\) and suppose \(\operatorname{Arg}(f(\mathbb{P}_{1}))\) is finite. Then \(P(f)\) exists and has norm at most 1; moreover, \(P(f)=0\) if and only if_ \[\sum_{p\in\mathbb{P}_{1}}\frac{|1-f(p)|}{\mathcal{N}(p)}=\infty.\] Proof.: We will use the easy-to-check fact that \(\prod(1+a_{i})\) converges for any bounded complex-valued sequence \((a_{i})\) for which \(\sum|a_{i}|\) is finite. Since \[\frac{\mathcal{N}(p)-1}{\mathcal{N}(p)-f(p)}=1-\frac{1-f(p)}{\mathcal{N}(p)-f(p)},\qquad\text{ and }\qquad\left|\frac{1-f(p)}{\mathcal{N}(p)-f(p)}\right|\leq 2\left|\frac{1-f(p)}{\mathcal{N}(p)}\right|\] it follows that \(P(f)\) exists whenever \(\sum_{p\in\mathbb{P}_{1}}\frac{|1-f(p)|}{\mathcal{N}(p)}<\infty\). On the other hand, if \(\sum_{p\in\mathbb{P}_{1}}\frac{|1-f(p)|}{\mathcal{N}(p)}=\infty\) and \(\operatorname{Arg}(f(\mathbb{P}))\) is finite, there must exist \(r<1\) such that \(Q:=\{p\in\mathbb{P}_{1}:\operatorname{Re}f(p)<r\}\) is divergent. We will show that, in this case, \(P(f)=0\) (and in particular it exists). Since \(\left|\frac{\mathcal{N}(p)-1}{\mathcal{N}(p)-f(p)}\right|\leq 1\) for all \(p\), it suffices to show that \(\prod_{p\in Q}\frac{\mathcal{N}(p)-1}{\mathcal{N}(p)-f(p)}=0\). Note that \[\frac{\mathcal{N}(p)-1}{\mathcal{N}(p)-f(p)}=1-\frac{1-f(p)}{\mathcal{N}(p)-f(p)}\qquad\text{ and }\qquad\operatorname{Re}\frac{1-f(p)}{\mathcal{N}(p)-f(p)}\geq\operatorname{Re}\frac{1-f(p)}{2\mathcal{N}(p)}\geq\frac{1-r}{2\mathcal{N}(p)}\] for every \(p\in Q\). Because \(Q\) is divergent, the desired conclusion now follows from the fact that \(\prod(1-a_{i})=0\) whenever \((a_{i})\) is a sequence of complex numbers satisfying \(\sum\operatorname{Re}a_{i}=\infty\) and \(\sum|a_{i}|^{2}<\infty\); here \(a_{p}=\frac{1-f(p)}{\mathcal{N}(p)-f(p)}\) satisfies \(|a_{p}|\leq 4/\mathcal{N}(p)\), so the second condition holds. By appealing to Theorem E, we first prove Theorem C in the case when \(f\) takes values in the unit circle.
**Lemma 4.6**.: _Let \(f:\mathbb{G}^{*}\to\mathbb{C}\) be a completely multiplicative function taking values in the unit circle such that \(f(\mathrm{i})=1\) and \(f(\mathbb{P})\) is finite. Then \(\mathbb{E}(f)\) exists and equals \(P(f)\)._ Proof.: Since \(f(\mathbb{P})\) is finite, \(f(\mathbb{P}_{1})\) is also finite. If the set \(\{p\in\mathbb{P}_{1}:f(p)\neq 1\}\) is convergent, then the conclusion follows immediately from applying Lemma 4.3 with \(g\equiv 1\). Otherwise, because \(f(\mathbb{P}_{1})\) is finite, it follows from Lemma 4.5 that \(P(f)=0\). Therefore, our goal is to show that, when the set \(\{p\in\mathbb{P}_{1}:f(p)\neq 1\}\) is divergent, \(\mathbb{E}(f)\) exists and equals 0. We do this by invoking Theorem E. Let \(X_{0}\) be the (finite) set of those points \(z\in S^{1}\) in the unit circle for which the set \(\{p\in\mathbb{P}_{1}:f(p)=z\}\) is divergent and let \(X\) be the closed subgroup of \(S^{1}\) generated by \(X_{0}\). Using the assumption that the set \(\{p\in\mathbb{P}_{1}:f(p)\neq 1\}\) is divergent, we deduce that \(X\) is not the singleton \(\{1\}\). We "remove" the exceptional primes by considering the completely multiplicative function \(g:\mathbb{G}^{*}\to\mathbb{C}\) defined by \(g(p)=f(p)\) for any prime \(p\in\mathbb{P}_{1}\) with \(f(p)\in X_{0}\), \(g(p)=1\) for any other prime \(p\), and \(g(\mathrm{i})=1\). Since \(f(\mathbb{P}_{1})\) is finite, it follows that \(f\) and \(g\) only differ on a convergent set of primes and hence, using again Lemma 4.3, it suffices to show that \(\mathbb{E}(g)\) exists and equals 0. Since \(X\) is the closed group generated by \(g(\mathbb{P}_{1})\), the Haar measure \(\mu\) on \(X\) is the unique measure on \(X\) preserved under all the maps \(\tau(n):x\mapsto g(n)x\) for \(n\in\mathbb{G}^{*}\). Letting \(F:X\to\mathbb{C}\) be the identity function and \(x=1\), it follows from Theorem E that \(\mathbb{E}(g)\) exists and equals \(\int_{X}z\,\mathrm{d}\mu(z)\). Since \(\mu\) is the Haar measure on a subgroup of \(S^{1}\) other than the trivial group \(\{1\}\), the integral is 0, finishing the proof. We are now ready to prove Theorem C. Proof of Theorem C.: Let \(f:\mathbb{G}^{*}\to\mathbb{C}\) be a bounded completely multiplicative such that the set \(\operatorname{Arg}(f(\mathbb{P}))\) is finite. If \(\sum_{p\in\mathbb{P}_{1}}\frac{1-|f(p)|}{\mathcal{N}(p)}=\infty\), then by Corollary 4.4 we have \(\mathbb{E}(|f|)=0\) which implies that \(\mathbb{E}(f)=0\). Since \(|1-f(p)|\geq 1-|f(p)|\), we also have \(\sum_{p\in\mathbb{P}_{1}}\frac{|1-f(p)|}{\mathcal{N}(p)}=\infty\) and hence by Lemma 4.5, \(P(f)=0\) as well. If \(\sum_{p\in\mathbb{P}_{1}}\frac{1-|f(p)|}{\mathcal{N}(p)}<\infty\), then we can approximate \(f\) by the completely multiplicative function taking values in the unit circle \(g(n):=e^{\mathrm{i}\operatorname{Arg}(f(n))}\). Indeed, \[\sum_{p\in\mathbb{P}_{1}}\frac{|g(p)-f(p)|}{\mathcal{N}(p)}=\sum_{p\in \mathbb{P}_{1}}\frac{|1-f(p)/g(p)|}{\mathcal{N}(p)}=\sum_{p\in\mathbb{P}_{1}} \frac{1-|f(p)|}{\mathcal{N}(p)}<\infty. \tag{4.5}\] Moreover, since \(\operatorname{Arg}(f(\mathbb{P}_{1}))\) is finite, the range \(g(\mathbb{P}_{1})\) is finite, and so Lemma 4.6 implies that \(\mathbb{E}(g)\) exists and equals \(P(g)\). By Lemma 4.3, \(\mathbb{E}(f)\) also exists and equals \(P(f)\). From Theorem C, one can easily deduce an analogous result for arbitrary bounded completely multiplicative function \(f\) without the restriction that \(f(\mathrm{i})=1\). 
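Before turning to the general statement, here is a numerical illustration of Theorem C (a sketch only, not used anywhere in the paper). Consider the Gaussian analogue of the Liouville function, \(\lambda(n)=(-1)^{\Omega(n)}\), where \(\Omega(n)\) is the number of Gaussian prime factors of \(n\) counted with multiplicity; it is completely multiplicative, \(\lambda(\mathrm{i})=1\), and \(\lambda(p)=-1\) for every prime \(p\), so \(\operatorname{Arg}(\lambda(\mathbb{P}))\) is finite and, by Lemma 4.5, \(P(\lambda)=0\). Theorem C therefore predicts that the averages of \(\lambda\) along any dilated Folner sequence tend to \(0\). The Python sketch below (all helper names ours) estimates the average over a disk, reading \(\Omega(n)\) off the factorization of the norm: rational primes congruent to \(3\) modulo \(4\) are inert and contribute half of their (necessarily even) exponent, all other primes contribute their full exponent.

```python
# Empirical average of the Gaussian Liouville function lambda(n) = (-1)^Omega(n)
# over a disk; Theorem C predicts a limit of P(lambda) = 0.

def big_omega(a, b):
    """Number of Gaussian prime factors of a + bi, with multiplicity, computed
    from the factorization of the norm a^2 + b^2:
      - primes r = 2 or r = 1 (mod 4) contribute their full exponent,
      - primes r = 3 (mod 4) are inert, appear to an even exponent in the
        norm, and contribute half of it."""
    n = a * a + b * b
    omega, r = 0, 2
    while r * r <= n:
        e = 0
        while n % r == 0:
            n //= r
            e += 1
        if e:
            omega += e // 2 if r % 4 == 3 else e
        r += 1
    if n > 1:
        # the leftover prime factor of the norm is 2 or = 1 (mod 4), since a
        # prime = 3 (mod 4) cannot divide a sum of two squares to an odd power
        omega += 1
    return omega

R = 120
total = count = 0
for a in range(-R, R + 1):
    for b in range(-R, R + 1):
        if 0 < a * a + b * b <= R * R:
            total += (-1) ** big_omega(a, b)
            count += 1
print("average of lambda over the disk of radius", R, ":", total / count)
```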
Let \(Q_{1}=\{z\in\mathbb{C}:\operatorname{Re}\left(z\right)>0,\operatorname{Im}\left(z\right)\geq 0\}\) denote the first quadrant of the complex plane \(\mathbb{C}\). Let \(Q_{2}=\mathrm{i}Q_{1},Q_{3}=\mathrm{i}^{2}Q_{1}\) and \(Q_{4}=\mathrm{i}^{3}Q_{1}\) be the other quadrants. Let \(\mathrm{m}\) be the Lebesgue measure on \(\mathbb{C}\) and note that since \(f\) is completely multiplicative, \(f(\mathrm{i})\) must be either \(1,-1,\mathrm{i}\) or \(-\mathrm{i}\). **Theorem 4.7**.: _Let \(f:\mathbb{G}^{*}\to\mathbb{C}\) be a bounded completely multiplicative function such that \(\operatorname{Arg}(f(\mathbb{P}))\) is finite. If \((\Phi_{N})_{N\in\mathbb{N}}\) is a dilated Folner sequence corresponding to the Jordan measurable set \(U\), then the limit \(\lim_{N\to\infty}\mathbb{E}_{n\in\Phi_{N}}\,f(n)\) exists and equals_ \[P(f)\cdot\sum_{k=0}^{3}\frac{\operatorname{m}(U\cap Q_{k+1})}{\operatorname{m}(U)}\cdot f(\mathrm{i})^{k}.\] Proof.: If \((\Phi_{N})_{N\in\mathbb{N}}\) is a dilated Folner sequence in the first quadrant \(Q_{1}\), the value of \(f(\mathrm{i})\) does not affect the average \(\mathbb{E}_{n\in\Phi_{N}}\,f(n)\). Therefore, in this case, Theorem C implies that \[\lim_{N\to\infty}\mathbb{E}_{n\in\Phi_{N}}\,f(n)=P(f).\] Now, let \((\Phi_{N})_{N\in\mathbb{N}}\) be an arbitrary dilated Folner sequence in \(\mathbb{G}\) and \(U\) be a Jordan measurable set associated to \((\Phi_{N})_{N\in\mathbb{N}}\). For \(k\in\{0,1,2,3\}\), we have \[\mathbb{E}_{n\in\Phi_{N}\cap Q_{k+1}}\,f(n)=\mathbb{E}_{n\in\Phi_{N}\cap Q_{k+1}}\,f(\mathrm{i}^{k})f((-\mathrm{i})^{k}n)=f(\mathrm{i})^{k}\mathbb{E}_{n\in(-\mathrm{i})^{k}(\Phi_{N}\cap Q_{k+1})}\,f(n)\] which converges to \(f(\mathrm{i})^{k}P(f)\) as \(N\to\infty\). Here we use the fact that \(((-\mathrm{i})^{k}(\Phi_{N}\cap Q_{k+1}))_{N\in\mathbb{N}}\) is a dilated Folner sequence in the first quadrant. Lastly, observe that \[\lim_{N\to\infty}\mathbb{E}_{n\in\Phi_{N}}\,f(n)=\sum_{k=0}^{3}\frac{\operatorname{m}(U\cap Q_{k+1})}{\operatorname{m}(U)}\cdot\lim_{N\to\infty}\mathbb{E}_{n\in\Phi_{N}\cap Q_{k+1}}\,f(n)\] and so our theorem follows. ### Proofs of Theorems A, 1.2 and B The implications Theorem C\(\Rightarrow\) Theorem B\(\Rightarrow\) Theorem 1.2 are easy and have been outlined in the introduction. To prove Theorem A, given a bounded completely multiplicative function \(f:\mathbb{N}\to\mathbb{R}\), we apply Theorem 1.2 to the function \(f\circ\mathcal{N}:\mathbb{G}^{*}\to\mathbb{R}\). As mentioned in Section 2.1, a Gaussian integer \(a+b\mathrm{i}\) is in \(\mathbb{P}_{1}\) if and only if either: * \(b=0\) and \(a=p\) is a prime in \(\mathbb{N}\) of the form \(4n+3\), or * \(b>0\) and \(a^{2}+b^{2}=p\) is a prime number in \(\mathbb{N}\) (which will not be of the form \(4n+3\)). In the first case, the norm of the Gaussian prime \(a+b\mathrm{i}\) is \(p^{2}\) and in the second case, the norm is \(p\). For each integer prime \(p\) of the form \(4n+1\), there are two Gaussian primes in \(\mathbb{P}_{1}\) associated to it: \(a+b\mathrm{i}\) and \(b+a\mathrm{i}\). Now it is simple to check that \(P(f\circ\mathcal{N})\) equals the expression in (1.5). ### Proof of Theorem D Suppose \((X,T)\) is a uniquely ergodic system with the unique invariant measure \(\mu\). Denote by \(\mathcal{T}\) the semigroup of all continuous self-maps of \(X\).
For \(n\in\mathbb{G}^{*}\), define \[\tau(n)=T^{\Omega(\mathcal{N}(n))}.\] Then \(\tau:(\mathbb{G}^{*},\times)\to\mathcal{T}\) is a semigroup homomorphism because \[\tau(mn)=T^{\Omega(\mathcal{N}(mn))}=T^{\Omega(\mathcal{N}(m)\mathcal{N}(n))}= T^{\Omega(\mathcal{N}(m))+\Omega(\mathcal{N}(n))}=\tau(m)\circ\tau(n).\] Gaussian primes \(p\in\mathbb{P}\) fall into two categories: either \(\mathcal{N}(p)\) is a prime in \(\mathbb{Z}\) which is \(\equiv 1\bmod 4\) or \(\mathcal{N}(p)\) is the square of a prime in \(\mathbb{Z}\) which is \(\equiv 3\bmod 4\). The latter category forms a convergent set of Gaussian primes (those that are either real or purely imaginary), so for all \(p\in\mathbb{P}\) outside a convergent set we have that \(\mathcal{N}(p)\) is an integer prime, and hence \(\tau(p)=T\). Therefore, by Theorem E, for any \(x\in X\) and \(F\in C(X)\), \[\lim_{N\to\infty}\frac{1}{N^{2}}\sum_{m,n=1}^{N}F(T^{\Omega(m^{2}+n^{2})}x)= \lim_{N\to\infty}\frac{1}{N^{2}}\sum_{z\in\Phi_{N}}F(\tau(z)x)=\int_{X}F\ d\mu,\] where \((\Phi_{N})_{N\in\mathbb{N}}\) is the dilated Folner sequence given by \(\Phi_{N}:=\{z\in\mathbb{G}^{*}\colon 0<\operatorname{Re}z,\operatorname{Im}z \leq N\}\). ## 5. Open questions As mentioned in the introduction, one of the main motivations for this paper was the question of Frantzikinakis and Host, Question 1.1, which roughly speaking asks whether the Cesaro average of a real valued bounded completely multiplicative function over a homogeneous polynomial of two variables must always exist. Our Theorem A gives a positive answer for the specific polynomial \(P(m,n)=m^{2}+n^{2}\), and while our approach might be adaptable to handle other norm forms, the case of general polynomials remains out of reach. In a similar vein, using Theorem D as motivation, we ask whether the polynomial \(P(m,n)=m^{2}+n^{2}\) can be replaced with more general polynomials: **Question 5.1**.: Let \((X,T)\) be a uniquely ergodic system. Let \(P\in\mathbb{Z}[x,y]\) be a homogeneous polynomial taking values on the positive integers and let \(F\in C(X)\). Does the limit \[\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{1\leq m,n\leq N}F(T^{\Omega(P(m,n ))}x)\] exist? In [22], Loyd showed that an analogue of Birkhoff pointwise ergodic theorem along \(\Omega(n)\) is false. More precisely, in every non-atomic ergodic system \((X,\mu,T)\), there is a measurable set \(A\subset X\) such that for \(\mu\)-almost every \(x\in X\), \[\limsup_{N\to\infty}\,\operatorname*{\mathbb{E}}_{1\leq n\leq N}1_{A}(T^{ \Omega(n)}x)=1\text{ and }\liminf_{N\to\infty}\operatorname*{\mathbb{E}}_{1\leq n\leq N}1_{A}(T^{ \Omega(n)}x)=0.\] In light of Theorem D, we can ask a similar question regarding the averages along \(\Omega(m^{2}+n^{2})\): **Question 5.2**.: Let \((X,\mu,T)\) be a non-atomic ergodic system. Is it true that there exists a measurable set \(A\subset X\) such that for \(\mu\)-almost every \(x\in X\), \[\limsup_{N\to\infty}\operatorname*{\mathbb{E}}_{1\leq m,n\leq N}1_{A}(T^{ \Omega(m^{2}+n^{2})}x)=1\text{ and }\liminf_{N\to\infty}\operatorname*{\mathbb{E}}_{1\leq m,n\leq N}1_{A}(T^{ \Omega(m^{2}+n^{2})}x)=0?\] In the same paper, Loyd [22] proved that Bergelson and Richter's result (Theorem 1.5) would be false in general if we removed the unique ergodicity assumption. It was shown that there exists an ergodic system \((X,\mu,T)\), a \(\mu\)-generic point \(x\in X\) and a continuous function \(F\in C(X)\) such that \[\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{n\in[N]}F(T^{\Omega(n)}x)\] does not exist. 
This result suggests the following question: **Question 5.3**.: Does there exist an ergodic system \((X,\mu,T)\), a \(\mu\)-generic point \(x\in X\) and a continuous function \(F\in C(X)\) such that \[\operatorname*{\mathbb{E}}_{1\leq m,n\leq N}F(T^{\Omega(m^{2}+n^{2})}x)\] does not converge as \(N\to\infty\)? Theorems B and C are true for dilated Folner sequences. However, as discussed in Section 2.3 and also Appendix B, the conclusions of these theorems do not hold for every additive Folner sequence \((\Phi_{N})\) in \(\mathbb{G}\). This raises the following open-ended question: **Question 5.4**.: Which additive Folner sequences \((\Phi_{N})_{N\in\mathbb{N}}\) in \(\mathbb{G}^{*}\) are such that for every bounded real-valued completely multiplicative function \(f:\mathbb{G}^{*}\to\mathbb{R}\), the averages \(\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(n)\) converge? The sequence \(\Phi_{N}=N+\{n\in\mathbb{G}:|\operatorname{Re}n|,|\operatorname{Im}n|<N\}\) is a dilated Folner sequence corresponding to the open set \(U=\{z\in\mathbb{C}:|\operatorname{Re}z-1|,|\operatorname{Im}z|<1\}\). On the other hand, if we shift each square by \(N^{1+\varepsilon}\) we destroy this property. Thus, a concrete instance of Question 5.4 is the following: **Question 5.5**.: Is there \(\varepsilon>0\) such that, letting \(\Phi_{N}=N^{1+\varepsilon}+\{n\in\mathbb{G}:|\operatorname{Re}n|,|\operatorname{Im}n|<N\}\), for every bounded completely multiplicative function \(f:\mathbb{G}^{*}\to\mathbb{R}\) the average \(\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}f(n)\) converges? In a similar direction, and in light of Theorem B and a recent result of Matomaki and Radziwill on averages of multiplicative functions in short intervals [23], we ask the following question: **Question 5.6**.: Let \(f:\mathbb{G}^{*}\to\mathbb{R}\) be a bounded completely multiplicative function and let \(\Phi_{N}=\{n:|\operatorname{Re}n|,|\operatorname{Im}n|\leq N\}\). Define \(P(f)\) as in (1.6). Is it true that \[\lim_{H\to\infty}\lim_{N\to\infty}\operatorname*{\mathbb{E}}_{n\in\Phi_{N}}\left|\operatorname*{\mathbb{E}}_{h\in n+\Phi_{H}}f(h)-P(f)\right|=0?\] More generally, does this hold for any dilated Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\)? In the last three questions, it also makes sense to ask whether the restriction that the completely multiplicative functions involved are real-valued can be relaxed to allow completely multiplicative functions \(f:\mathbb{G}^{*}\to\mathbb{C}\) for which the set \(\operatorname{Arg}(f(\mathbb{P}))\) is finite. ## Appendix A Some estimates **Lemma A.1** (Turan-Kubilius).: _Let \((\Phi_{N})_{N\in\mathbb{N}}\) be a Folner sequence in \((\mathbb{G},+)\) and let \(B\subset\mathbb{G}^{*}\) be finite and non-empty. For any function \(a\colon\mathbb{G}\to\mathbb{C}\) bounded by 1, we have that_ \[\limsup_{N\to\infty}\left|\mathop{\mathbb{E}}_{n\in\Phi_{N}}a(n)-\mathop{\mathbb{E}}_{p\in B}^{\log}\mathop{\mathbb{E}}_{n\in\Phi_{N}/p}a(pn)\right|\leq\left(\mathop{\mathbb{E}}_{p\in B}^{\log}\mathop{\mathbb{E}}_{q\in B}\mathcal{N}\big(\gcd(p,q)\big)-1\right)^{1/2}.\] Proof.: Denote by \(d(A):=\lim_{N\to\infty}\mathbb{E}_{n\in\Phi_{N}}\,1_{A}(n)\) when the limit exists.
Note that, for every \(p,q\in\mathbb{G}^{*}\), \[d(p\mathbb{G})=\frac{1}{\mathcal{N}(p)}\qquad\text{ and }\qquad d(p\mathbb{G}\cap q \mathbb{G})=\frac{\mathcal{N}(\gcd(p,q))}{\mathcal{N}(p)\mathcal{N}(q)}.\] For any finite set \(B\subset\mathbb{G}^{*}\) and any \(n\in\mathbb{G}\), \[\left|\mathop{\mathbb{E}}_{p\in B}^{\log}1-\mathcal{N}(p)\cdot 1_{p \mathbb{G}}(n)\right|^{2} = \mathop{\mathbb{E}}_{p\in B}^{\log}\mathop{\mathbb{E}}_{q\in B }^{\log}(1-\mathcal{N}(p)\cdot 1_{p\mathbb{G}}(n))(1-\mathcal{N}(q)\cdot 1_{q \mathbb{G}}(n))\] \[= \mathop{\mathbb{E}}_{p\in B}^{\log}\mathop{\mathbb{E}}_{q\in B }^{\log}1+\mathcal{N}(p)\mathcal{N}(q)1_{p\mathbb{G}\cap q\mathbb{G}}(n)- \mathcal{N}(p)1_{p\mathbb{G}}(n)-\mathcal{N}(q)1_{q\mathbb{G}}(n)\] so averaging over \(n\) we deduce that \[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}\left|\mathop{\mathbb{E}} _{p\in B}^{\log}1-\mathcal{N}(p)\cdot 1_{p\mathbb{G}}(n)\right|^{2}=\mathop{ \mathbb{E}}_{p\in B}^{\log}\mathop{\mathbb{E}}_{q\in B}\mathcal{N}(\gcd(p,q) )-1\] An application of the Cauchy-Schwarz inequality then yields the estimate \[\limsup_{N\to\infty}\left|\mathop{\mathbb{E}}_{n\in\Phi_{N}}a(n)-\mathop{ \mathbb{E}}_{p\in B}^{\log}\mathcal{N}(p)\mathop{\mathbb{E}}_{n\in\Phi_{N}}a (n)1_{p\mathbb{G}}(n)\right|\leq\left(\mathop{\mathbb{E}}_{p\in B}^{\log} \mathop{\mathbb{E}}_{q\in B}\mathcal{N}\big{(}\gcd(p,q)\big{)}-1\right)^{1/2}.\] (A.1) Finally notice that \[\mathop{\mathbb{E}}_{n\in\Phi_{N}}a(n)1_{p\mathbb{G}}(n)=\frac{1}{|\Phi_{N}|} \sum_{n\in\Phi_{N}\cap p\mathbb{G}}a(n)=\frac{|\Phi_{N}\cap p\mathbb{G}|}{| \Phi_{N}|}\mathop{\mathbb{E}}_{n\in\Phi_{N}/p}a(np),\] and since \(d(p\mathbb{G})=1/\mathcal{N}(p)\), (A.1) is equivalent to the desired conclusion. To achieve the last condition in Lemma 3.2 we use the following estimates. **Lemma A.2**.: _Let \(F_{1},F_{2}\subset\mathbb{P}\) be finite sets of primes. Then_ 1. \(\mathop{\mathbb{E}}_{n,m\in F_{1}}^{\log}\mathcal{N}(\gcd(n,m))<1+\frac{4}{ \log(F_{1})}\)_._ 2. \(\mathop{\mathbb{E}}_{n,m\in F_{1}F_{2}}^{\log}\mathcal{N}(\gcd(n,m))<\left(1+ \frac{8}{\log(F_{1})}\right)\left(1+\frac{8}{\log(F_{2})}\right)\)_._ Proof.: 1. Denote by \(U=\{1,i,-1,-i\}\) the group of units on \(\mathbb{G}\). Note that for \(m,n\in\mathbb{P}\), \[\mathcal{N}(\gcd(m,n))=\begin{cases}\mathcal{N}(m)\text{ if }m\in nU\\ 1\text{ otherwise.}\end{cases}\] Therefore, for every \(n\in F_{1}\), \[\mathop{\mathbb{E}}_{m\in F_{1}}^{\log}\mathcal{N}(\gcd(n,m))=\frac{1}{\log( F_{1})}\sum_{m\in F_{1}}\frac{\mathcal{N}(\gcd(n,m))}{\mathcal{N}(m)}\leq\frac{1}{ \log(F_{1})}\left(4+\sum_{m\in F_{1}}\frac{1}{\mathcal{N}(m)}\right)=1+\frac{4 }{\log(F_{1})}\] Averaging in \(n\in F_{1}\) yields the desired result. 2. Given \(n,m\in F_{1}F_{2}\) we can decompose \(n=n_{1}n_{2}\) and \(m=m_{1}m_{2}\) with \(n_{i},m_{i}\in F_{i}\). 
Note that \[\mathcal{N}(\gcd(n,m))=\begin{cases}\mathcal{N}(m)&\text{if }\{m_{1},m_{2}\} \subset n_{1}U\cup n_{2}U\\ \mathcal{N}(m_{i})&\text{if }m_{i}\in n_{1}U\cup n_{2}U\text{ and }m_{3-i}\notin n_{1}U\cup n_{2}U\\ 1&\text{otherwise}\end{cases}\] For a fixed \(n\in F_{1}F_{2}\), partition \(F_{1}F_{2}\) into the following 3 sets: \[A_{1}=\{m\in F_{1}F_{2}:\mathcal{N}(\gcd(n,m))=\mathcal{N}(m)\},\qquad A_{2}: =\{m\in F_{1}F_{2}:\mathcal{N}(\gcd(n,m))=\mathcal{N}(m_{i})\},\] \[A_{3}:=\{m\in F_{1}F_{2}:\mathcal{N}(\gcd(n,m))=1\}.\] We can then estimate the sum \[\sum_{m\in F_{1}F_{2}}\frac{\mathcal{N}(\gcd(m,n))}{\mathcal{N}(m)} = \left(\sum_{m\in A_{1}}+\sum_{m\in A_{2}}+\sum_{m\in A_{3}} \right)\frac{\mathcal{N}(\gcd(m,n))}{\mathcal{N}(m)}\] \[\leq 64+8\big{(}\log(F_{1})+\log(F_{2})\big{)}+\log(F_{1}F_{2}).\] Dividing by \(\log(F_{1}F_{2})=\log(F_{1})\log(F_{2})\) we conclude that for each \(n\in F_{1}F_{2}\) we have \[\mathop{\mathbb{E}}_{m\in F_{1}F_{2}}\mathcal{N}(\gcd(m,n))\leq\left(1+\frac{ 8}{\log(F_{1})}\right)\left(1+\frac{8}{\log(F_{2})}\right).\] Averaging over \(n\in F_{1}F_{2}\) yields the desired result. The following estimate is needed in the main text; we include its short proof for completeness. **Lemma A.3**.: _Let \(F\) be a non-empty finite set and let \(v,w:F\to\mathbb{R}^{>0}\) and \(f,g:F\to\mathbb{C}\). Suppose that \(|f(p)|\leq 1\) for all \(p\in F\) and that there exists \(\delta>0\) such that for each \(p\in F\),_ \[\left|\frac{w(p)}{v(p)}-1\right|<\delta,\qquad\text{ and }\qquad\big{|}f(p)-g(p) \big{|}<\delta.\] _Then_ \[\left|\frac{\sum_{p\in F}w(p)f(p)}{\sum_{p\in F}w(p)}-\frac{\sum_{p\in F}v(p )g(p)}{\sum_{p\in F}v(p)}\right|<3\delta.\] Proof.: All sums in the proof are over \(p\in F\), so we will omit this subscript. First note that \[\left|\frac{\sum v(p)g(p)}{\sum v(p)}-\frac{\sum_{p\in F}v(p)f(p)}{\sum v(p)} \right|\leq\frac{1}{\sum v(p)}\sum v(p)\big{|}g(p)-f(p)\big{|}<\delta.\] (A.2) Second we have \(\big{|}w(p)-v(p)\big{|}\leq\delta v(p)\) and hence \[\left|\frac{\sum v(p)f(p)}{\sum v(p)}-\frac{\sum w(p)f(p)}{\sum v(p)}\right| =\frac{\left|\sum\big{(}v(p)-w(p)\big{)}f(p)\big{|}}{\sum v(p)}\leq\frac{ \sum\delta v(p)\big{|}f(p)\big{|}}{\sum v(p)}\leq\delta.\] (A.3) Finally we have \[\left|\frac{\sum w(p)f(p)}{\sum w(p)}-\frac{\sum w(p)f(p)}{\sum v(p)}\right| = \left|\sum w(p)f(p)\right|\cdot\left|\frac{1}{\sum w(p)}-\frac{1}{ \sum v(p)}\right|\] \[\leq \left(\sum w(p)\right)\cdot\left|\frac{1}{\sum w(p)}-\frac{1}{ \sum v(p)}\right|\] \[= \frac{\left|\sum v(p)-w(p)\right|}{\sum v(p)}\leq\frac{\delta\sum v (p)}{\sum v(p)}=\delta.\] Together with (A.2) and (A.3) this yields the desired conclusion. ## Appendix B A counterexample with non-dilated Folner sequences A function \(f:\mathbb{G}^{*}\to\{-1,1\}\) is called _normal_ if every finite pattern of \(-1\) and \(1\) appears in \(f\) at the correct frequency; that is for all \(k\in\mathbb{N}\), all distinct \(h_{1},\ldots,h_{k}\in\mathbb{G}\), and \(\varepsilon_{1},\ldots,\varepsilon_{k}\in\{-1,1\}\), the set \[S:=\left\{n\in\mathbb{G}:f(n+h_{j})=\varepsilon_{j}\text{ for all }j=1,\ldots,k\right\}\] satisfies \[\lim_{N\to\infty}\frac{\left|S\cap N\mathbb{D}\right|}{\left|\mathbb{G}\cap N \mathbb{D}\right|}=\frac{1}{2^{k}},\] where we write \(f(0)=0\) for convenience. 
In this section, to demonstrate the necessity of dilated Folner sequences in Theorems B, C, and E, we show that a "random" completely multiplicative function \(f:\mathbb{G}^{*}\to\{-1,1\}\) is almost surely normal, which implies that along some additive Folner sequences it is indistinguishable from the constant \(1\) sequence. The analogous result for multiplicative functions on \(\mathbb{Z}\) was proved by Fish [10] and that proof extends to the Gaussian integer setting without major difficulties; we provide a proof here for completeness. **Lemma B.1** ([10, Lemma 2.2]).: _Let \((a_{n})_{n\in\mathbb{N}}\) be a bounded sequence. Let \(T_{N}=\frac{1}{N}\sum_{n=1}^{N}a_{n}\) and \(t\in\mathbb{C}\). The following are equivalent:_ 1. \(\lim_{N\to\infty}T_{N}=t\)_,_ 2. _There exists a sequence of increasing indices_ \((N_{j})_{j\in\mathbb{N}}\) _such that_ \(N_{j+1}/N_{j}\to 1\) _and_ \(\lim_{j\to\infty}T_{N_{j}}=t\)_._ **Proposition B.2**.: _Define a random completely multiplicative function \(f:\mathbb{G}^{*}\to\{-1,1\}\) as follows: let \(f(i)=1\) and, for every \(p\in\mathbb{P}\), let \(f(p)=1\) or \(-1\) with probability \(1/2\) each. For convenience write \(f(0)=0\). Then almost surely,_ \[\lim_{N\to\infty}\underset{\begin{subarray}{c}n\in\mathbb{G}\\ \mathcal{N}(n)<N\end{subarray}}{\mathbb{E}}f(n+h_{1})\cdots f(n+h_{k})=0\] _for all \(k\in\mathbb{N}\) and pairwise distinct \(h_{1},\ldots,h_{k}\in\mathbb{G}\)._ Proof.: In this proof, we use \(\mathbb{E}\) for averages and \(E\) for expected values of random variables. Fix \(k\) and distinct \(h_{1},\ldots,h_{k}\in\mathbb{G}\). For \(n\in\mathbb{G}\), define \[\xi(n):=(n+h_{1})\cdots(n+h_{k})\] and \[\phi(n):=f(\xi(n)),\] and for \(N\in\mathbb{N}\), define \[T_{N}=\mathop{\mathbb{E}}_{\begin{subarray}{c}n\in\mathbb{G}\\ \mathcal{N}(n)<N\end{subarray}}\phi(n).\] Our goal is to show \(T_{N}\to 0\) almost surely. In order to do this, we will show \[\sum_{N=1}^{\infty}E(T_{N^{40}}^{2})<\infty.\] Note that \[T_{N}^{2}=\mathop{\mathbb{E}}_{\begin{subarray}{c}x,y\in\mathbb{G}\\ \mathcal{N}(x)<N\\ \mathcal{N}(y)<N\end{subarray}}\phi(x)\phi(y).\] Therefore, by the linearity of expectation, we get \[E(T_{N}^{2})=\mathop{\mathbb{E}}_{\begin{subarray}{c}x,y\in\mathbb{G}\\ \mathcal{N}(x)<N\\ \mathcal{N}(y)<N\end{subarray}}E(\phi(x)\phi(y)).\] We have that \[E(\phi(x)\phi(y))=\begin{cases}1\text{ if }\xi(x)\xi(y)\text{ is a square}\\ 0\text{ otherwise.}\end{cases}\] Thus, in order to bound \(E(T_{N}^{2})\), we need to bound the number of pairs \((x,y)\in B_{N}(0)\) such that \(\xi(x)\xi(y)\) is a square. For any \(x\in\mathbb{G}^{*}\cap N\mathbb{D}\), write \(\xi(x)=q_{1}q_{2}\cdots q_{\ell}m^{2}\) where \(q_{1},q_{2},\ldots,q_{\ell}\) are distinct primes. Define \(h(x)=\ell\), the number of prime divisors of the square-free component of \(\xi(x)\). Let \(D\) be the set of all possible Gaussian integers that divide at least two numbers in \(\{x+h_{1},\ldots,x+h_{k}\}\). Thus \(D\) is the set of all possible divisors of \(h_{j_{1}}-h_{j_{2}}\) for \(1\leq j_{1}<j_{2}\leq k\). For a finite subset \(S\subset\mathbb{G}\), denote by \(m(S)\) the product of all elements in \(S\) and define \(m(\varnothing)=1\). With the above \(x\), if \(\xi(x)\xi(y)\) is a square, there exist \(S_{1}\subset D\) and \(S_{2}\subset\{q_{1},\ldots,q_{\ell}\}\) such that \(y=m(S_{1})m(S_{2})n^{2}\) for some \(n\in\mathbb{G}\).
Thus, the number of \(y\in B_{N}(0)\) such that \(\xi(x)\xi(y)\) is a square is at most \[2^{|D|}\cdot 2^{h(x)}|B_{\sqrt{N}}(0)|\ll 2^{h(x)}\sqrt{N}.\] Thus \[E(T_{N}^{2})\ll\frac{1}{N^{3/2}}\sum_{x\in\mathbb{G}\cap N\mathbb{D}}2^{h(x)}.\] For any positive integer \(M\), if all prime divisors of \(\xi(x)\) have norms greater than \(M\), then \[h(x)<\log_{M}(\mathcal{N}(x+h_{k}))^{k}=k\log_{M}\mathcal{N}(x+h_{k}).\] Fix an \(M\) such that \(M>2^{k/0.45}\) and let \(C\) be the number of primes whose norms do not exceed \(M\). Then for any \(x\in\mathbb{G}\), \[h(x)<C+k\log_{M}\mathcal{N}(x+h_{k}).\] It follows that \[2^{h(x)}<2^{C}\cdot 2^{k\log_{M}\mathcal{N}(x+h_{k})}<2^{C}\cdot\mathcal{N}(x+h_{k})^{0.45}.\] Therefore, \[E(T_{N}^{2})\ll\frac{1}{N^{3/2}}\sum_{x\in N\mathbb{D}}\mathcal{N}(x+h_{k})^{0.45}\ll\frac{1}{N^{0.05}},\] and so \[\sum_{N=1}^{\infty}E(T_{N^{40}}^{2})\ll\sum_{N=1}^{\infty}\frac{1}{N^{2}}<\infty.\] Thus \(T_{N^{40}}\to 0\) almost surely and by Lemma B.1, \(T_{N}\to 0\) almost surely. The next lemma is about the equivalence of Chowla's conjecture and the normality of the Liouville function. This result is well-known in \(\mathbb{Z}\) and its proof in \(\mathbb{G}\) is the same. However, since we could not locate the exact proof in the literature, we include it for completeness. **Lemma B.3**.: _If \(f:\mathbb{G}\to\{-1,1\}\) satisfies_ \[\lim_{N\to\infty}\mathop{\mathbb{E}}_{\begin{subarray}{c}n\in\mathbb{G}\\ \mathcal{N}(n)<N\end{subarray}}f(n+h_{1})\cdots f(n+h_{k})=0\] (B.1) _for every \(k\in\mathbb{N}\) and distinct \(h_{1},\ldots,h_{k}\in\mathbb{G}\), then \(f\) is normal._ Proof.: Let \(X=\{-1,1\}^{\mathbb{G}}\) and \(T\) be the \(\mathbb{G}\)-action on \(X\) defined by \(T_{g}(x(n))=x(n+g)\) for all \(g\in\mathbb{G}\) and \(x\in X\). Let \(\mu\) be a weak\({}^{*}\)-limit of the sequence of measures \[\mathop{\mathbb{E}}_{\begin{subarray}{c}g\in\mathbb{G}\\ \mathcal{N}(g)<N\end{subarray}}\delta_{T_{g}f}\] where \(\delta_{T_{g}f}\) is the Dirac measure at \(T_{g}f\). Define a function \(F:X\to\{-1,1\}\) by \(F(x)=x(0)\) for all \(x\in X\). Then by (B.1) and the definition of \(\mu\), for all \(k\in\mathbb{N}\) and distinct \(h_{1},\ldots,h_{k}\in\mathbb{G}\), \[\int_{X}T^{h_{1}}F\cdots T^{h_{k}}F\ d\mu=0=\left(\int_{X}F\ d\mu\right)^{k}.\] Since \(C(X)\) contains a dense subset which is generated by the family of functions of the form \(T^{h_{1}}F\cdots T^{h_{k}}F\), we deduce that \(\mu\) is the Bernoulli measure on \(X\). Thus for any \(k\in\mathbb{N}\), distinct \(h_{1},\ldots,h_{k}\in\mathbb{G}\), and \(\varepsilon_{1},\ldots,\varepsilon_{k}\in\{-1,1\}\), the cylinder set \[C:=\{x\in X:x(h_{j})=\varepsilon_{j}\text{ for all }j\in[k]\}\] has measure \(\mu(C)=1/2^{k}\). By the definition of \(\mu\), this means that the pattern of \(-1,1\) appears in \(f\) at the correct frequency. Proposition B.2 and Lemma B.3 imply the following proposition: **Proposition B.4**.: _Define a random completely multiplicative function \(f:\mathbb{G}^{*}\to\{-1,1\}\) as follows: let \(f(\mathrm{i})=1\) and, for every prime \(p\in\mathbb{P}_{1}\), let \(f(p)=1\) or \(-1\) with probability \(1/2\) each. Then almost surely, \(f\) is normal._
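To make Propositions B.2 and B.4 concrete, the following Python sketch (not part of the original argument; the helper names such as `gaussian_primes` and `gfactor` are ours) samples a random completely multiplicative \(f:\mathbb{G}^{*}\to\{-1,1\}\) on a finite disk by assigning independent signs to first-quadrant Gaussian primes, and then evaluates the correlation average \(\mathbb{E}_{\mathcal{N}(n)<N}f(n)f(n+1)\). The almost-sure convergence to \(0\) is asymptotic, so on a small disk one should only expect a value close to zero, not exactly zero.

```python
from math import isqrt
import random

# Gaussian integers are modelled as pairs (a, b) of ints, standing for a + b*i.
def gnorm(z):
    a, b = z
    return a * a + b * b

def gdivmod(u, v):
    """Nearest-integer division in Z[i]: returns (q, r) with u = q*v + r."""
    a, b = u
    c, d = v
    n = c * c + d * d
    x, y = a * c + b * d, b * c - a * d          # u * conj(v)
    q = (round(x / n), round(y / n))
    r = (a - (q[0] * c - q[1] * d), b - (q[0] * d + q[1] * c))
    return q, r

def divides(v, u):
    return gdivmod(u, v)[1] == (0, 0)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

def gaussian_primes(maxnorm):
    """One first-quadrant representative per Gaussian prime of norm <= maxnorm."""
    ps = []
    for a in range(isqrt(maxnorm) + 1):
        for b in range(isqrt(maxnorm - a * a) + 1):
            n = a * a + b * b
            if n > 1 and (is_prime(n) or (b == 0 and a % 4 == 3 and is_prime(a))):
                ps.append((a, b))
    return sorted(ps, key=gnorm)

def gfactor(z, primes):
    """Prime factorization of z by trial division (unit factors are discarded)."""
    out = []
    for p in primes:
        if gnorm(p) > gnorm(z):
            break
        while gnorm(z) > 1 and divides(p, z):
            out.append(p)
            z = gdivmod(z, p)[0]
    return out

random.seed(1)
R = 30                                               # sample n with N(n) < R**2
primes = gaussian_primes(2 * (R + 2) ** 2)
sign = {p: random.choice((-1, 1)) for p in primes}   # f(p) = +/-1, independently

def f(z):
    """Random completely multiplicative f, with f(units) = 1 and f(0) = 0."""
    if z == (0, 0):
        return 0
    val = 1
    for p in gfactor(z, primes):
        val *= sign[p]
    return val

h = (1, 0)
pts = [(a, b) for a in range(-R, R + 1) for b in range(-R, R + 1)
       if 0 < gnorm((a, b)) < R * R]
avg = sum(f(n) * f((n[0] + h[0], n[1] + h[1])) for n in pts) / len(pts)
print(f"E_(N(n)<{R*R}) f(n)f(n+1) ~ {avg:+.4f}")
```

Re-running with different seeds, different shifts \(h\), or a larger radius \(R\) gives a rough empirical check of the \(k=2\) case of Proposition B.2.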
2309.16344
Epistemic Logic Programs: a study of some properties
Epistemic Logic Programs (ELPs), extend Answer Set Programming (ASP) with epistemic operators. The semantics of such programs is provided in terms of world views, which are sets of belief sets, i.e., syntactically, sets of sets of atoms. Different semantic approaches propose different characterizations of world views. Recent work has introduced semantic properties that should be met by any semantics for ELPs, like the Epistemic Splitting Property, that, if satisfied, allows to modularly compute world views in a bottom-up fashion, analogously to ``traditional'' ASP. We analyze the possibility of changing the perspective, shifting from a bottom-up to a top-down approach to splitting. We propose a basic top-down approach, which we prove to be equivalent to the bottom-up one. We then propose an extended approach, where our new definition: (i) is provably applicable to many of the existing semantics; (ii) operates similarly to ``traditional'' ASP; (iii) provably coincides under any semantics with the bottom-up notion of splitting at least on the class of Epistemically Stratified Programs (which are, intuitively, those where the use of epistemic operators is stratified); (iv) better adheres to common ASP programming methodology.
Stefania Costantini, Andrea Formisano
2023-09-28T11:08:37Z
http://arxiv.org/abs/2309.16344v1
###### Abstract Epistemic Logic Programs (ELPs) extend Answer Set Programming (ASP) with epistemic operators. The semantics of such programs is provided in terms of _world views_, which are sets of belief sets, i.e., syntactically, sets of sets of atoms. Different semantic approaches propose different characterizations of world views. Recent work has introduced semantic properties that should be met by any semantics for ELPs, like the _Epistemic Splitting Property_, that, if satisfied, allows to modularly compute world views in a bottom-up fashion, analogously to "traditional" ASP. We analyze the possibility of changing the perspective, shifting from a bottom-up to a top-down approach to splitting. We propose a basic top-down approach, which we prove to be equivalent to the bottom-up one. We then propose an extended approach, where our new definition: (i) is provably applicable to many of the existing semantics; (ii) operates similarly to "traditional" ASP; (iii) provably coincides under any semantics with the bottom-up notion of splitting at least on the class of _Epistemically Stratified Programs_ (which are, intuitively, those where the use of epistemic operators is stratified); (iv) better adheres to common ASP programming methodology. Under consideration in Theory and Practice of Logic Programming. Stefania Costantini, DISIM - Universita dell'Aquila, via Vetoio, L'Aquila, Italy, and Gruppo Nazionale per il Calcolo Scientifico - INdAM, Roma, Italy ([email protected]); Andrea Formisano, DMIF - Universita di Udine, via delle Scienze 206, Udine, Italy, and Gruppo Nazionale per il Calcolo Scientifico - INdAM, Roma, Italy ([email protected]). Keywords: Answer Set Programming, Epistemic Logic Programs, Epistemic Splitting. ## 1 Introduction Epistemic Logic Programs (ELPs, in the following just _programs_, if not explicitly stated differently) were first introduced in [1, 2], and extend Answer Set Programs, defined under the Answer Set Semantics [1], with _epistemic operators_ that are able to introspectively "look inside" a program's own semantics, which is defined in terms of its _answer sets_ (cf. (Fandinno et al., 2022) for a historical review of research on this topic). In fact, \(\mathbf{K}A\) means that (ground) atom \(A\) is true in every answer set of the program \(\Pi\) where \(\mathbf{K}A\) occurs. Related operators that can be defined in terms of \(\mathbf{K}\) are the _possibility operator_ \(\mathbf{M}\) (not treated in this paper) where \(\mathbf{M}A\) means that \(A\) is true in some of the answer sets of \(\Pi\), and the _epistemic negation operator_ **not**, where **not**\(A\) expresses that \(A\) _is not provably true_, meaning that \(A\) is false in at least one answer set of \(\Pi\). The semantics of ELPs is provided in terms of _world views_: instead of a unique set of answer sets (a unique "world view" in the new terminology) like in Answer Set Programming (ASP), there is now a set of such sets. Each world view consistently satisfies (according to a given semantics) the epistemic expressions that appear in a given program. Many semantic approaches for ELPs have been introduced beyond the seminal work of Gelfond and Przymusinska (Gelfond and Przymusinska, 1991), among which we mention (Gelfond, 2011; Truszczynski, 2011; Farinas del Cerro et al., 2015; Shen and Eiter, 2016; Kahl and Leclerc, 2018; Su, 2019; Cabalar et al., 2019; Costantini and Formisano, 2022; Su, 2021).
Recent work extends to Epistemic Logic Programming notions that have already been defined for ASP, and that might prove useful in ELPs as well. In particular, Cabalar et al. consider _splitting_ (introduced for ASP in (Lifschitz and Turner, 1994)), which allows a program to be seen as divided ("split") into two parts, the "top" and "bottom" in a principled way, i.e., atoms occurring in the bottom can occur only in the body of rules in the top. This allows the answer sets of the program to be computed incrementally, in the following way: compute the answer sets of the bottom part, and use them (one by one) to simplify the top part; then, compute the answer sets of the simplified top part; finally, the answer sets of the overall program are obtained as the union of each answer set of the bottom with the corresponding answer sets of the simplified top (such a procedure can be iterated, i.e., the top and the bottom could in turn be split). Cabalar et al. then extend to ELPs the concept of splitting and the method of incremental calculation of the semantics (here, it is the world views that must be calculated). This is achieved by defining a notion of _Epistemic Splitting_, where top and bottom are defined with respect to the occurrence of epistemic operators, and a corresponding _Epistemic Splitting Property_, which is fulfilled by a semantics if it allows the world views to be computed bottom-up (a precise definition is seen below). Further, Cabalar et al. adapt properties of ASP to ELPs, which are implied by this property, namely, the fact that adding constraints leads to reduce the number of answer sets (_Subjective Constraint Monotonicity_), and _Foundedness_, meaning that atoms composing answer sets cannot have been derived through cyclic positive dependencies. Finally, they define the class of _Epistemically Stratified Programs_ that, according to (Cabalar et al., 2021, Th. 2), admit a unique world view (these programs are those where, intuitively, the use of epistemic operators is stratified). In substance, Cabalar et al. establish the properties that in their view a semantics should fulfill, and then they compare the existing semantics with respect to these properties. In this paper, we explore a different stance: we analyze the possibility of changing the perspective about how to exploit a splitting, shifting from a bottom-up to a top-down approach. This applies in the first place to the Epistemic Splitting Property, of which we propose a reformulation allowing world views to be computed top-down. We then propose a substantial extension of the Epistemic Splitting Property, leading to a new approach that: 1. is applicable to many of the existing semantics, while few of them fulfill the Epistemic Splitting Property as originally formulated; 2. operates similarly to splitting in "traditional" ASP; 3. provably coincides under any semantics with the bottom-up notion of splitting on a significant class of programs, including at least those which are _epistemically stratified_; 4. is compatible with common ASP programming practice, where one defines a problem solution (that would constitute the top) that will be merged with a problem instance (that would constitute the bottom). The paper is organized as follows. In Sections 2 and 3 we recall ASP and ELPs. Section 4 reports some definitions from (Cabalar et al. 2021) concerning useful properties of ELPs. In Section 5 we introduce some observations on ELPs that lead to formulate our proposal, treated in detail in Section 6. 
In Section 7 we state our main theorem and a relevant corollary. Finally, in Section 8 we conclude. ## 2 Answer Set Programming and Answer Set Semantics One can see an answer set program (for short, ASP program) as a set of statements that specify a problem, where each answer set represents a solution compatible with this specification. A _consistent_ ASP program has one or more answer sets, while an _inconsistent_ one has no answer sets, meaning that no solution can be found. Several well-developed freely available _answer set solvers_ exist that compute the answer sets of a given program. Syntactically, an ASP program \(\Pi\) is a collection of _rules_ of the form \[A_{1}|\ldots|A_{g}\leftarrow\ L_{1},\ldots,L_{n}.\] where each \(A_{i}\), \(0\leq i\leq g\), is an atom and \(|\) indicates disjunction, and the \(L_{i}\)s, \(0\leq i\leq n\), are literals (i.e., atoms or negated atoms of the form _not_\(A\)). The left-hand side and the right-hand side of the rule are called _head_ and _body_, respectively. A rule with an empty body is called a _fact_. As usual, the symbols \(\top\) and \(\bot\) denote the true and the false Boolean constants, respectively. The notation \(A\,|\,B\) indicates disjunction, usable only in rule heads and, so, in facts. A rule with an empty head (or, equivalently, with head \(\bot\)), of the form \(\,\gets L_{1},...,L_{n}\). or \(\bot\gets L_{1},...,L_{n}\)., is a _constraint_, stating that literals \(L_{1},\ldots,L_{n}\) are not allowed to be simultaneously true in any answer set; the impossibility of fulfilling such a requirement is one of the reasons that makes a program inconsistent. All extensions of ASP not explicitly mentioned above are not considered in this paper. We implicitly refer to the _ground_ version of \(\Pi\), which is obtained by replacing in all possible ways the variables occurring in \(\Pi\) with the constants occurring in \(\Pi\) itself, and is thus composed of ground atoms, i.e., atoms that contain no variables. The _answer set_ (or _stable model_) semantics can be defined in several ways (Lifschitz 2010; Costantini and Formisano 2015). However, answer sets of a program \(\Pi\), if any exist, are the supported minimal classical models of the program interpreted as a first-order theory in an obvious way. The original definition from (Gelfond and Lifschitz 1988), introduced for programs where rule heads were limited to be single atoms, was in terms of the _GL-Operator_ \(\Gamma\). Given a set of atoms \(I\) and a program \(\Pi\), \(\Gamma_{\Pi}(I)\) is defined as the least Herbrand model of the program \(\Pi^{I}\), namely, the Gelfond-Lifschitz reduct of \(\Pi\) w.r.t. \(I\). The program \(\Pi^{I}\) is obtained from \(\Pi\) by: 1. removing all rules which contain a negative literal \(\mathit{not}\,A\,\) such that \(A\in I\); and 2. removing all negative literals from the remaining rules. Since \(\Pi^{I}\) is a positive program, the least Herbrand model is guaranteed to exist and can be computed via the standard immediate consequence operator (Lloyd 1987). Then, \(I\) is an answer set whenever \(\Gamma_{\Pi}(I)=I\). This definition is then extended to the general case, involving disjunctive heads, by defining \(I\) to be an answer set of \(\Pi\) if it is a minimal model (w.r.t. set inclusion) of \(\Pi^{I}\). ## 3 Epistemic Logic Programs Epistemic Logic Programs (ELPs) extend the syntax of ASP programs by introducing, in the body of rules, so-called _subjective literals_ (w.r.t.
the usual _objective literals_).1 Such new literals are constructed via the _epistemic operator_\(\mathbf{K}\) (disregarding without loss of generality the other epistemic operators). An ELP program is called _objective_ if no subjective literals occur therein, i.e., it is an ASP program. A constraint involving (also) subjective literals is called a _subjective constraint_, whereas one involving objective literals only is an _objective constraint_. Footnote 1: Nesting of subjective literals is not considered here. Let \(At\) be the set of atoms occurring (within either objective or subjective literals) in a given program \(\Pi\), and \(\mathit{Atoms}(r)\) be the set of atoms occurring in rule \(r\). By some abuse of notation, we denote by \(\mathit{Atoms}(X)\) the set of atoms occurring in \(X\), whatever \(X\) is (a rule, a program, an expression, etc.). Let \(\mathit{Head}(r)\) be the head of rule \(r\) and \(\mathit{Body}_{obj}(r)\) (resp., \(\mathit{Body}_{subj}(r)\)) be the (possibly empty) set of objective (resp., subjective) literals occurring in the body of \(r\). For simplicity, we often write \(\mathit{Head}(r)\) and \(\mathit{Body}_{obj}(r)\) in place of \(\mathit{Atoms}(\mathit{Head}(r))\) and \(\mathit{Atoms}(\mathit{Body}_{obj}(r))\), respectively, when the intended meaning is clear from the context. We call _subjective rules_ those rules whose body is made of subjective literals only. Literal \(\mathbf{K}A\) intuitively means that the (ground) atom \(A\) is true in every answer set of the given program \(\Pi\) (it is a _cautious consequence_ of \(\Pi\)). Since, as it turns out, whatever the semantic account one will choose there can be several sets of answer sets (called _world views_), the actual meaning of \(\mathbf{K}A\) is that \(A\) is true in every answer set of some world view of \(\Pi\). Each world view thus determines the truth value of all subjective literals in a program. There are several semantic approaches to ELPs, dictating in different ways how one finds the world views of a given program. Although all such approaches provide the same results in a set of basic examples, they (obviously) differ in others. Formally, a semantics \(\mathcal{S}\) is a function mapping an ELP program into sets of world views, i.e., sets of sets of objective literals, where if \(\Pi\) is an objective program, then the unique member of \(\mathcal{S}(\Pi)\) is the set of stable models of \(\Pi\). Otherwise, each member of \(\mathcal{S}(\Pi)\) is an \(\mathcal{S}\)_-world view_ of \(\Pi\). (We will often write "world view" in place of "\(\mathcal{S}\)-world view" whenever mentioning the specific semantics will be irrelevant.) For an \(\mathcal{S}\)-world view \(W\) and a literal \(\mathbf{K}L\), we write \(W\models\mathbf{K}L\) if \(L\) is true in all elements of \(W\). For instance, for program \(\{a\!\leftarrow\!not\,b,\ b\!\leftarrow\!not\,a,\ e\!\leftarrow\!not\,{\bf K}f,\ f\! \leftarrow\!not\,{\bf K}e\}\), every semantics returns two world views: \(\{\{a,e\},\{b,e\}\}\), where \({\bf K}e\) is true and \({\bf K}f\) is false, and \(\{\{a,f\},\{b,f\}\}\) where \({\bf K}f\) is true and \({\bf K}e\) is false. The presence of two answer sets in each world view is due to the cycle on objective atoms, whereas the presence of two world views is due to the cycle on subjective atoms (in general, the existence and number of world views are related to such cycles, see (Costantini 2019) for a detailed discussion). 
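As a concrete companion to the notions recalled in Sections 2 and 3, the following Python sketch (our own illustration, not an implementation used in the cited works; all function names are ours) computes the answer sets of a normal objective program by brute force, via the GL reduct, and then enumerates world views of a small ELP by guessing which \(\mathbf{K}\)-atoms hold and keeping only self-supporting guesses, in the spirit of the G91 construction. It reproduces the two world views of the four-rule example program discussed above; disjunctive heads and other extensions are deliberately left out.

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def least_model(positive_rules):
    """Least Herbrand model of a positive normal program (immediate consequences)."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= m and head not in m:
                m.add(head)
                changed = True
    return m

def answer_sets(rules, atoms):
    """Stable models of an objective normal program, by checking Gamma(I) = I."""
    out = []
    for cand in map(set, powerset(atoms)):
        reduct = [(h, set(pos)) for h, pos, neg in rules if not (set(neg) & cand)]
        if least_model(reduct) == cand:
            out.append(cand)
    return out

def world_views(elp, atoms):
    """Guess the true K-atoms, simplify, keep self-supporting guesses (G91-like).
    Rules are (head, pos_body, neg_body, K_body, notK_body); normal rules only."""
    katoms = {a for (_, _, _, kp, kn) in elp for a in (*kp, *kn)}
    views = []
    for guess in map(set, powerset(katoms)):
        objective = [(h, pos, neg) for h, pos, neg, kp, kn in elp
                     if set(kp) <= guess and not (set(kn) & guess)]
        ws = answer_sets(objective, atoms)
        if ws and all((a in guess) == all(a in s for s in ws) for a in katoms):
            views.append(ws)
    return views

# The example of Section 3:  a <- not b.  b <- not a.  e <- not K f.  f <- not K e.
elp = [('a', (), ('b',), (), ()),
       ('b', (), ('a',), (), ()),
       ('e', (), (), (), ('f',)),
       ('f', (), (), (), ('e',))]
for wv in world_views(elp, {'a', 'b', 'e', 'f'}):
    print([sorted(s) for s in wv])      # {{a,e},{b,e}} and {{a,f},{b,f}}
```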
## 4 Epistemic Logic Programs: Useful Properties As argued by Cabalar et al., it would be useful if ELPs would enjoy, _mutatis mutandis_, properties similar to those of ASP programs. Hence, in their works, such useful properties are outlined and adapted, as we report (almost literally) below. Drawing inspiration from the _Splitting Theorem_(Lifschitz and Turner 1994), an analogous property is defined for ELPs: Definition 4.1 (Epistemic splitting set (Cabalar et al. 2021, Def. 4)): A set of atoms \(U\subseteq At\) is said to be an epistemic splitting set of a program \(\Pi\) if for any rule \(r\) in \(\Pi\) one of the following conditions hold: 1. \(\mathit{Atoms}(r)\subseteq U\); 2. \((\mathit{Body}_{obj}(r)\cup\mathit{Head}(r))\ \cap\ U=\emptyset\). An epistemic splitting of \(\Pi\) is a pair \(\langle B_{U}(\Pi),T_{U}(\Pi)\rangle\) such that \(B_{U}(\Pi)\cap T_{U}(\Pi)=\emptyset\) and \(B_{U}(\Pi)\cup T_{U}(\Pi)=\Pi\), and also, such that all rules in \(B_{U}(\Pi)\) satisfy condition (1) and all rules in \(T_{U}(\Pi)\) satisfy condition (2). Intuitively, condition (2) means that the top program \(T_{U}(\Pi)\) may refer to atoms in \(U\) which occur as heads of rules in the bottom \(B_{U}(\Pi)\), only through epistemic operators. Epistemic splitting can be used, similarly to "traditional" Lifschitz&Turner splitting, for iterative computation of world views. Indeed, Cabalar et al. (2021) propose to compute first the world views of the bottom program \(B_{U}(\Pi)\) and, for each of them, simplify the corresponding subjective literals in the top part. Given an epistemic splitting set \(U\) for \(\Pi\) and a set of interpretations \(W\), they define the subjective reduct of the top with respect to \(W\) and signature \(U\), denoted by \(E_{U}(\Pi,W)\). This operator considers all subjective literals \(L\) occurring in \(T_{U}(\Pi)\), such that the atoms occurring in them belong to \(B_{U}(\Pi)\). In particular, \(L\) will be substituted by \(\top\) in \(E_{U}(\Pi,W)\) if \(W\models L\), and by \(\bot\) otherwise. Thus, \(E_{U}(\Pi,W)\) is a version of \(T_{U}(\Pi)\) where some subjective literal, namely those referring to the bottom part of the program, have been simplified as illustrated. Definition 4.2 ((Cabalar et al. 2021, Def. 5)): Given a semantics \(\mathcal{S}\), a pair \(\langle W_{b},W_{t}\rangle\) is said to be an \(\mathcal{S}\)-solution of \(\Pi\) with respect to an epistemic splitting set \(U\) if \(W_{b}\) is an \(\mathcal{S}\)-world view of \(B_{U}(\Pi)\) and \(W_{t}\) is an \(\mathcal{S}\)-world view of \(E_{U}(\Pi,W_{b})\). The definition is parametric w.r.t. \(\mathcal{S}\), as each different semantics \(\mathcal{S}\) will define in its own way the \(\mathcal{S}\)-solutions for a given \(U\) and \(\Pi\). Definition 4.3: The WBT operation \(W_{b}\sqcup W_{t}\) on sets of propositional interpretations \(W_{b}\) and \(W_{t}\) is defined as follows: \[W_{b}\sqcup W_{t}=\{I_{b}\cup I_{t}|I_{b}\in W_{b}\wedge I_{t}\in W_{t}\}.\] We report from [1] the definition of the following property: Property 4.1 (Epistemic Splitting Property (ESP)): A semantics \(\mathcal{S}\) satisfies the epistemic splitting property if for any epistemic splitting set \(U\) of any program \(\Pi\): \(W\) is an \(\mathcal{S}\)-world view of \(\Pi\) iff there is an \(\mathcal{S}\)-solution \(\langle W_{b},W_{t}\rangle\) of \(\Pi\) w.r.t. \(U\) such that \(W=W_{b}\sqcup W_{t}\). 
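The two conditions of Definition 4.1 and the \(\sqcup\) operation of Definition 4.3 are easy to mechanize. The short Python sketch below is an illustration under our own encoding of rules as triples of head atoms, objective-body atoms, and atoms occurring under \(\mathbf{K}\) (it is not code from the cited works): it checks whether a set \(U\) is an epistemic splitting set, produces one admissible top/bottom split following the convention adopted later in Section 6 (subjective constraints go to the top), and combines world views with \(\sqcup\). The two-rule program used at the end has the shape of program \(\Pi_{0}\) discussed in Section 5.

```python
def is_epistemic_splitting_set(U, rules):
    """Definition 4.1: each rule either has all its atoms inside U, or has head
    and objective body disjoint from U (only K-literals may refer to U)."""
    for head, obj, subj in rules:
        cond1 = (set(head) | set(obj) | set(subj)) <= U
        cond2 = not ((set(head) | set(obj)) & U)
        if not (cond1 or cond2):
            return False
    return True

def split(U, rules):
    """One admissible splitting: subjective constraints and rules satisfying
    condition (2) form the top, the remaining rules form the bottom."""
    top, bottom = [], []
    for head, obj, subj in rules:
        to_top = (not head and subj) or not ((set(head) | set(obj)) & U)
        (top if to_top else bottom).append((head, obj, subj))
    return bottom, top

def wbt(Wb, Wt):
    """Definition 4.3: pairwise unions of the belief sets of the two world views."""
    return [set(Ib) | set(It) for Ib in Wb for It in Wt]

# a | b.        <- not K a.        (the shape of program Pi_0 of Section 5)
rules = [(('a', 'b'), (), ()),
         ((), (), ('a',))]
U = {'a', 'b'}
print(is_epistemic_splitting_set(U, rules))   # True
print(split(U, rules))                        # bottom: a | b;  top: the constraint
print(wbt([{'a'}, {'b'}], [set()]))           # [{'a'}, {'b'}]
```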
Then, under a semantics that satisfies ESP, world views of the entire program are obtainable as the union of world views of the bottom with world views of a simplified version of the top. The Epistemic Splitting Property implies _Subjective Constraint Monotonicity_, i.e., for any epistemic program \(\Pi\) and any subjective constraint \(r\), \(W\) is a world view of \(\Pi\cup\{r\}\) iff both \(W\) is a world view of \(\Pi\) and \(W\) satisfies \(r\). As discussed in [1], many semantics do not satisfy the ESP property, which is in fact satisfied only by the very first semantics of ELPs, proposed in [1] and thus called G91 (and in some of its generalizations), and by Founded Autoepistemic Equilibrium Logic (FAEEL), defined in [1]. Another interesting property is _foundedness_. Again, such a notion has been extended from objective programs (see [1, Def. 15]). Intuitively, a set \(X\) of atoms is _unfounded_ w.r.t. an (objective) program \(\Pi\) and an interpretation \(I\), if for every \(A\in X\) there is no rule \(r\) in \(\Pi\) by which \(A\) might be derived, without incurring in positive circularities and without forcing the derivation of more than one atom from the head of a disjunctive rule (see, e.g., [1] for a formal definition). For ELPs, one has to consider that unfoundedness can originate also from positive dependencies on positive subjective literals, like, e.g., in the program \(A\leftarrow\mathbf{K}A\). Among the existing semantics, only FAEEL satisfies foundedness. An interesting class of programs admitting a unique world view is characterized by the following definition. Definition 4.4 (Epistemic Stratification [1, Def. 6]): We say that an ELP \(\Pi\) is epistemically stratified if we can assign an integer mapping \(\lambda:At\to N\) to each atom (occurring in the program) such that: * \(\lambda(a)=\lambda(b)\) for any rule \(r\in\Pi\) and atoms \(a,b\in(\mathit{Atoms}(r)\setminus\mathit{Body}_{subj}(r))\), and * \(\lambda(a)>\lambda(b)\) for any pair of atoms \(a,b\) for which there exists a rule \(r\in\Pi\) with \(a\in(\mathit{Head}(r)\cup\mathit{Body}_{obj}(r))\) and \(b\in\mathit{Body}_{subj}(r)\). ## 5 Observations The subdivision of an ELP into layers suggests that, in the upper layer, epistemic literals referring to the lower layer may be aimed at performing some kind of meta-reasoning about that layer. If the epistemic splitting property is enforced, however, meta-level reasoning is in practice prevented. This is so because if the semantics satisfies such property, then, it is the lower layer that determines the truth value of the subjective literals that connect the two layers. In fact, according to Property 4.1, through the simplification w.r.t. the answer sets of the lower layer, the upper layer is strongly (maybe sometimes too strongly) constrained. For instance, let us consider the program \(\Pi_{0}=\{a\,|\,b,\ \bot\gets not\,\mathbf{K}a\}\). We can see that, while the lower level \(\{a\,|\,b\}\), considered as a program _per se_, has the unique world view \(\{\{a\},\{b\}\}\), the overall program has no world views. In fact, \(\mathbf{K}a\) does not hold in \(\{\{a\},\{b\}\}\), thus the constraint is violated. Notice, however, that the world view \(\{\{a\}\}\) is instead accepted by some semantics, such as those defined in [11] and in [10], that do not satisfy the epistemic splitting property. 
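Definition 4.4 asks for a level mapping \(\lambda\), and whether one exists can be decided mechanically: atoms forced to the same level are merged, and the strict inequalities must then form an acyclic relation between the resulting classes. The Python sketch below is our own decision procedure for the definition (rules are again encoded as triples of head, objective-body and \(\mathbf{K}\)-body atoms; it is not taken from the cited papers); it returns such a mapping when one exists and `None` otherwise.

```python
def stratification(rules):
    """Search for a level mapping as in Definition 4.4; return it, or None."""
    atoms = {a for r in rules for part in r for a in part}
    parent = {a: a for a in atoms}

    def find(a):                      # union-find over atoms forced to equal levels
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for head, obj, subj in rules:     # first bullet: same level outside the K-body
        same = [a for a in set(head) | set(obj) if a not in subj]
        for a, b in zip(same, same[1:]):
            parent[find(a)] = find(b)

    above = {}                        # second bullet: head/objective atoms sit
    for head, obj, subj in rules:     # strictly above the subjective-body atoms
        for a in set(head) | set(obj):
            for b in subj:
                above.setdefault(find(a), set()).add(find(b))

    levels = {}

    def level(c, path=()):
        if c in path:                 # cycle of strict constraints: no mapping exists
            raise ValueError
        if c not in levels:
            levels[c] = 1 + max((level(b, path + (c,)) for b in above.get(c, ())),
                                default=0)
        return levels[c]

    try:
        return {a: level(find(a)) for a in atoms}
    except ValueError:
        return None

# b <- K a.   a <- not c.   <- not K b.     (stratified: lambda(b) > lambda(a))
print(stratification([(('b',), (), ('a',)), (('a',), ('c',), ()), ((), (), ('b',))]))
# e <- not K f.   f <- not K e.             (not stratified: e and f force a cycle)
print(stratification([(('e',), (), ('f',)), (('f',), (), ('e',))]))
```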
This world view may be seen as corresponding to an approach where the upper layer, in order to retain consistency, "requires" the lower layer to entail \(a\), which is absolutely feasible by choosing \(a\) over \(b\) in the disjunction. From this perspective, the knowledge modeled by the upper layer is not just used to reject potential world views of the bottom level, but, instead, can affect the way in which they are composed, by filtering out some of the answer sets. This situation is reminiscent of what actually happens for ASP: consider the plain ASP program \(\{a\,|\,b,\ c\gets a,\ \gets not\,c\}\), which has unique answer set \(\{a,c\}\), originating from the answer set \(\{a\}\) of the lower layer \(\{a\,|\,b\}\). We follow (for a long time) the line, amply represented in the literature, in which meta-reasoning is aimed not only at "observing" lower layer(s) but also at trying to influence them (cf. [11] for a survey on meta-reasoning in Computational Logic); this by suitably enlarging and/or restricting, as an effect of meta-rules application, the set of possible consequences of such layer(s). We discuss at length this point of view, also proposing technical solutions and several examples, in [11]. In addition, let us notice that a common approach in logical declarative modeling of a problem consists of formalizing the problem domain as the "top" part of a program/theory. Then, such top part will be joined with a specific "bottom", representing the problem instance at hand, that may vary and might be, in general, unknown while defining the top. Below is an example of what we mean (over-simplified and in "skeletal form" for the sake of conciseness), taken from the realm of digital investigations, that the authors have been studying in the context of the Action COST CA17124 DIGital FORensics: evidence Analysis via intelligent Systems and Practices (DigForASP). In the example, an investigation **must** be concluded with a judgment, that can be: * of innocence if in no plausible scenario (i.e., in no answer set) evidence can be found of an involvement; * of demonstrable guilt if in every possible scenario, the evidence of guilt can be found; * of presumed innocence otherwise. Clearly, the specification of the legal rules that can be used to draw conclusions, and then the details of each specific case will be modularly added whenever needed to this general "top" part. Thus, one can see a program composed of three layers: the top, and a bottom that can be further split into a middle layer containing legal rules, and the lowest layer with details of the case (see (Costantini, 2019) for more examples taken from this field). The top layer is as follows: \[\begin{array}{l}\textit{judgement}\leftarrow\textit{guilty}.\\ \textit{judgement}\leftarrow\textit{presumed\_innocent}.\\ \textit{judgement}\leftarrow\textit{innocent}.\\ \leftarrow\textit{not}\ \mathbf{K}\ \textit{judgement}.\\ \textit{guilty}\leftarrow\textit{provably\_guilty}.\\ \textit{presumed\_innocent}\leftarrow\textit{not\ provably\_guilty}.\\ \textit{provably\_guilty}\leftarrow\mathbf{K}\ \textit{sufficient\_evidence\_against}.\\ \textit{innocent}\leftarrow\mathbf{K}\ \textit{not\ sufficient\_evidence\_against}.\\ \end{array}\] Hence, a study of how the semantics of any resulting overall program might be built is in order here, as in many other practical cases: think, for example, of a top part comprising ontological definitions reusable in several application contexts. 
In fact, being able to compute and check a program's semantics only in dependence on each specific instance does not seem to be elaboration-tolerant. Therefore, we tried to understand whether the concept of splitting might be applied top-down, and how the existing semantics would behave in the new perspective. ## 6 Our Proposal Let us proceed step by step towards the new definition of _Top-down Epistemic Splitting Property_. We first reformulate definitions related to ESP so that splitting can also be applied top-down, to obtain what we call Top-down Epistemic Splitting Property - Basic (TDESPB), showing that a semantics satisfies TDESPB if and only if it satisfies ESP. Thus, TDESPB provides a way of coping with incremental computation of world views more suitable to the examples mentioned earlier. We then perform some extensions, to obtain a more general Top-down Epistemic Splitting Property (TDESP) that holds for a wider range of semantic approaches. ### Preliminaries and Key Definitions In our approach, the notion of splitting set remains the same, save for some details concerning subjective constraints. We need, in fact, to introduce preliminary assumptions on constraints. Notice that subjective literals may either occur in a subjective constraint directly or affect a constraint's satisfaction through indirect dependencies, such as, e.g., in the program \(\bot\gets a.\ a\leftarrow\mathbf{K}p\) (see (Dix, 1995) for a formal definition of direct and indirect dependencies). Without loss of generality, we exclude here indirect dependencies concerning subjective literals involved in constraints. Also, notice that, as is well-known, a constraint can be represented as a unary odd cycle, that, e.g., for \(\bot\leftarrow\mathbf{K}p\) would be of the form \(a\gets\textit{not}\,a,\mathbf{K}p\) (with \(a\) introduced as a fresh atom), or even (as discussed in depth in (Costantini, 2006)) as an odd cycle of any arity, of which \(\mathbf{K}p\) is the unique _handle_. For the sake of simplicity, we consider subjective constraints in their plain form, namely, as in \(\bot\leftarrow\mathbf{K}p\). Notice also that, according to the definition of splitting provided in (Cabalar et al., 2021), subjective constraints can be placed at either of two adjacent levels. For convenience concerning definitions that will be introduced later, we impose, again without loss of generality, that both subjective rules satisfying condition (2) of the definition of Epistemic Splitting Set (Definition 4.1) and subjective constraints are put in \(T_{U}(\Pi)\). We now proceed to introduce the key definitions on which our approach is based. **Definition 6.1**: _Given a semantics \(\mathcal{S}\), a program \(\Pi\), and an epistemic splitting \(\langle B_{U}(\Pi),T_{U}(\Pi)\rangle\) of \(\Pi\), according to the definition of Epistemic Splitting Set, let \(F_{U}(\Pi)\) denote the set of all subjective literals \(\mathbf{K}L\) occurring in \(T_{U}(\Pi)\) (even in negative form \(not\,\mathbf{K}L\)) and referring to \(B_{U}(\Pi)\) (in the sense that the atom involved in \(\mathbf{K}L\) occurs in \(B_{U}(\Pi)\) but not in \(T_{U}(\Pi)\)), together with their negations \(not\,\mathbf{K}L\)._ Intuitively, subjective literals in \(F_{U}(\Pi)\) constitute the "interface" between the top and bottom parts. Notice that \(\mathit{Atoms}(F_{U}(\Pi))\subseteq U\). **Definition 6.2**: _Let \(\Pi\) be a program and let \(F_{U}(\Pi)=\{\mathbf{K}L_{1},\ldots,\mathbf{K}L_{z},not\,\mathbf{K}L_{1},\ldots,not\,\mathbf{K}L_{z}\}\).
Let, moreover, \(f_{U}(\Pi)=\{kl_{1},\ldots,kl_{z},nkl_{1},\ldots,nkl_{z}\}\) be a set of fresh atoms. The detached version \(T^{\prime}_{U}(\Pi)\) of \(T_{U}(\Pi)\) is the program consisting of:_ * _the rules obtained from rules in_ \(T_{U}(\Pi)\) _by substituting each occurrence of the subjective literal_ \(\mathbf{K}L_{i}\in F_{U}(\Pi)\) _or_ \(not\,\mathbf{K}L_{i}\in F_{U}(\Pi)\) _by the corresponding fresh atom_ \(kl_{i}\in f_{U}(\Pi)\) _or_ \(nkl_{i}\in f_{U}(\Pi)\)_, for each_ \(i\in\{1,\ldots,z\}\) _(where_ \(kl_{i}\) _and_ \(nkl_{i}\) _are in turn called the detached form of_ \(\mathbf{K}L_{i}\) _and_ \(not\,\mathbf{K}L_{i}\)_, resp.);_ _and_ * _the facts_ \(kl_{i}\mid nkl_{i}\)_, for each_ \(i\in\{1,\ldots,z\}\)_._ We introduced \(T^{\prime}_{U}(\Pi)\) in order to model the connection between \(T_{U}(\Pi)\) and \(B_{U}(\Pi)\) w.r.t. the top-down perspective. Thus, we need to define the notion of world views of the detached version \(T^{\prime}_{U}(\Pi)\) of a program under the assumption that the fresh atoms \(kl_{i}\) and \(nkl_{i}\) represent the epistemic literals connecting the top and bottom parts of the program. As seen below, these world views not necessarily coincide with the world views of \(T^{\prime}_{U}(\Pi)\) if considered as an epistemic program by itself. Recall that a disjunction between an epistemic literal \(\mathbf{K}L\) and its negation \(not\,\mathbf{K}L\) determines, as discussed in [10], two world views, one entailing \(\mathbf{K}L\) and the other one entailing \(not\,\mathbf{K}L\). With respect to the subjective literals in \(F_{U}(\Pi)\), in defining the detached version \(T^{\prime}_{U}(\Pi)\) of a program \(T_{U}(\Pi)\) we encoded the potential existence of such alternative world views by means of the disjunctions \(kl_{i}\mid nkl_{i}\), for \(i\in\{1,\ldots,z\}\). In computing the world views of the detached version \(T^{\prime}_{U}(\Pi)\), we start by considering \(T^{\prime}_{U}(\Pi)\) as a regular epistemic program (forgetting for the moment that the fresh atoms \(kl_{i}\) and \(nkl_{i}\) stand for epistemic literals) thus obtaining the corresponding collection of world views \(\mathcal{W}\). Note in fact that \(T^{\prime}_{U}(\Pi)\) does not contain subjective literals referring to the bottom \(B_{U}(\Pi)\), but it may contain "local" epistemic literals that may determine the existence of several world views (or just one if there are no such local epistemic literals). The answer sets in each \(W\in\mathcal{W}\) might however contain some of the atoms \(kl_{i}\)s and \(nkl_{i}\)s. In this case, each \(W\in\mathcal{W}\) has to be split into two world views, say \(W_{1}\) and \(W_{2}\), the former composed of the answer sets in \(W\) that contain \(kl_{1}\), and the latter composed by those answer sets of \(W\) that contain \(nkl_{1}\). This step must be repeated by considering the pair \(kl_{2}/nkl_{2}\) in order to split both \(W_{1}\) and \(W_{2}\), and so on, for each \(i\in\{1,\ldots,z\}\). (Observe that the order of splits does not matter.) We consider the resulting collection of sets of atoms as the world views of the detached version \(T^{\prime}_{U}(\Pi)\). An example of this process will be given at the end of Section 6.2. In summary: **Definition 6.3** (_World views of \(T^{\prime}_{U}(\Pi)\), or Interface World Views_): Let \(W^{1},\ldots,W^{n}\) be the world views of \(T^{\prime}_{U}(\Pi)\) according to a given semantics \(\mathcal{S}\). 
The Interface World Views of \(T^{\prime}_{U}(\Pi)\) are obtained as follows: for every \(W^{j}\), \(j\leq n\), \(W^{j}=\{S^{j}_{1},\ldots,S^{j}_{v}\}\) for some \(v\geq 0\), and for every disjunction \(kl_{i}\mid nkl_{i}\), \(i\in\{1,\ldots,z\}\), occurring in \(T^{\prime}_{U}(\Pi)\), split \(W^{j}\) into \(W^{j}_{1}\) and \(W^{j}_{2}\), the former composed of the sets \(S^{j}_{h}\in W^{j}\) such that \(kl_{i}\in S^{j}_{h}\), the latter composed of the sets \(S^{j}_{f}\in W^{j}\) such that \(nkl_{i}\in S^{j}_{f}\), \(f\in\{1,\ldots,v\}\). Repeat the splitting over the resulting world views, and iterate the process until splitting is no longer possible, i.e., no resulting world view contains both \(kl_{r}\) and \(nkl_{r}\), for any \(r\in\{1,\ldots,z\}\). The denomination "Interface World Views" indicates that they have been obtained in the perspective of a merge with world views of the bottom, as seen below. For the sake of conciseness, though by some abuse of notation, we will call Interface World Views simply 'world views'. **Proposition 6.1**: There exists a bijection between world views of \(T_{U}(\Pi)\) and world views of \(T^{\prime}_{U}(\Pi)\). _Proof_ Given a world view (Interface World View, to be precise) \(W^{\prime}_{j}\) of the epistemic program \(T^{\prime}_{U}(\Pi)\), a world view \(W_{j}\) for \(T_{U}(\Pi)\) is equal to \(W_{j}=\{X\setminus f_{U}(\Pi)\,|\,X\in W^{\prime}_{j}\}\). In fact, the procedure for obtaining Interface World Views takes into account the fact that each epistemic literal represented by an atom in \(f_{U}(\Pi)\) can be potentially either true or false. Vice versa, \(W^{\prime}_{j}\) is obtained from \(W_{j}\) by adding to it some subset of \(f_{U}(\Pi)\). For each such world view \(W_{j}\) of \(T_{U}(\Pi)\), Def. 6.4 below identifies the set of subjective literals that are relevant in extending \(W_{j}\) to a world view of the entire \(\Pi\). These are those that in the detached version of \(T_{U}(\Pi)\) have been assumed to be true to obtain \(W_{j}\) as a world view. **Definition 6.4** (_Epistemic Top-down Requisite Set_): Let \(\langle B_{U}(\Pi),T_{U}(\Pi)\rangle\) be an epistemic splitting for a program \(\Pi\), \(W^{\prime}_{j}\) be a world view of \(T^{\prime}_{U}(\Pi)\), and let \(W_{j}=\{X\setminus f_{U}(\Pi)\,|\,X\in W^{\prime}_{j}\}\). The set \(ES_{T_{U}(\Pi)}(W_{j})=\{\mathbf{K}L_{h}\,|\,W^{\prime}_{j}\models kl_{h}\}\cup\{not\,\mathbf{K}L_{h}\,|\,W^{\prime}_{j}\not\models kl_{h}\}\) is the _(epistemic top-down) requisite set_ for \(W_{j}\) (w.r.t. \(\langle B_{U}(\Pi),T_{U}(\Pi)\rangle\)). Now we partition the _requisite set_, identifying two relevant subsets (technical reasons for doing so will be seen below). **Definition 6.5**: Given \(f_{U}(\Pi)=\{kl_{1},\ldots,kl_{z},nkl_{1},\ldots,nkl_{z}\}\) and the above definition of requisite set \(ES_{T_{U}(\Pi)}(W_{j})\), w.r.t. an epistemic splitting \(\langle B_{U}(\Pi),T_{U}(\Pi)\rangle\), let set \(S\) include those \(kl_{i}/nkl_{i}\) that occur in some constraints in \(T^{\prime}_{U}(\Pi)\).
We split the requisite set \(ES_{T_{U}(\Pi)}(W_{j})\) as the union of the following two (disjoint) sets: * the _epistemic top-down constraint set_: \[EC_{T_{U}(\Pi)}(W_{j})=\left(\{\mathbf{K}L_{i}\ |kl_{i}\in S\}\cup\{not\, \mathbf{K}L_{i}\ |nkl_{i}\in S\}\right)\cap ES_{T_{U}(\Pi)}(W_{j})\] * the _requirement set_: \[RQ_{T_{U}(\Pi)}(W_{j})=\left(\{\mathbf{K}L_{i}\,|kl_{i}\in f_{U}(\Pi)\backslash S \}\cup\{not\,\mathbf{K}L_{i}\ |nkl_{i}\in f_{U}(\Pi)\backslash S\}\right)\cap ES_{T_{U}(\Pi)}(W_{j}).\] There is an important reason for distinguishing these two subsets. Namely, the literals in \(EC_{T_{U}(\Pi)}(W_{j})\), if not entailed in some world view of the bottom part of the program, lead to a constraint violation and cause the non-existence of world views of \(\Pi\) extending \(W_{j}\). Thus, \(EC_{T_{U}(\Pi)}(W_{j})\) expresses prerequisites on which epistemic literals must be entailed in a world view of \(B_{U}(\Pi)\), so that such world view can be merged with \(W_{j}\) in order to obtain a world view of \(\Pi\). Instead, literals in \(RQ_{T_{U}(\Pi)}(W_{j})\), can be usefully exploited, as seen below, to drive the selection of which world view of the bottom can be combined with a given world view of the top. For all the three sets (requisite set, constraint set, and requirement set) one can possibly list only the epistemic literals of \(F_{U}(\Pi)\) required to be true, all the others implicitly required to be false. Given a world view \(W\) of \(T_{U}(\Pi)\) and considering literals belonging to \(EC_{T_{U}(\Pi)}(W)\) which occur in the bodies of rules in \(B_{U}(\Pi)\), we introduce a simplification that can be performed and will turn out to be useful later on. **Definition 6.6** (_Top-down Influence_): Given a world view \(W\) of \(T_{U}(\Pi)\), and its corresponding top-down constraint set \(EC_{T_{U}(\Pi)}(W)\), the \(W\)-tailored version \(B_{U}^{W}(\Pi)\) of \(B_{U}(\Pi)\) is obtained by substituting in \(B_{U}(\Pi)\) all literals \(\mathbf{K}L\in EC_{T_{U}(\Pi)}(W)\) by \(L\). The intuition behind the above definition is that, if \(\mathbf{K}A\) is in \(EC_{T_{U}(\Pi)}(W)\), then \(A\) must necessarily belong to every answer set of a world view of the bottom that can be possibly merged with \(W\) in order to obtain a world view of the overall program \(\Pi\). Hence, it is indifferent that in the body of rules of \(B_{U}(\Pi)\) it occurs \(A\) rather than \(\mathbf{K}A\), if \(\mathbf{K}A\in EC_{T_{U}(\Pi)}(W)\). Substituting \(\mathbf{K}A\) with \(A\) can, however, be useful, as discovered during the development of the G11 (Gelfond, 2011) and K15 semantics (Kahl et al., 2015), to "break" unwanted positive cycles among subjective literals, that might lead to _unfounded_ world views (cf. (Cabalar et al., 2021, Def. 15)). In our approach, the notion of top-down influence provides, as seen by examples in the next section, an alternative perspective on how a world view of the bottom is obtained, and, in a sense, a re-interpretation of the notion of foundedness (to be formally elaborated in future work). In the top-down approach that we are going to propose, the world views of a given program \(\Pi\) are obtained as a combination of world views of the top and world views of the bottom, like in the bottom-up approach. In the basic version of the Top-down Epistemic Splitting Property, presented in Section 6.2, there is only a change of perspective and a simple condition to drive the combination via the WBT operation (cf. Definition 4.3). 
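The chain of Definitions 6.2-6.5 can be followed mechanically. The Python sketch below is again our own illustration: `detach`, `stable_models`, `interface_world_views` and `requisite_sets` are hypothetical names, and the sketch covers only the special case in which every subjective literal of the top refers to the bottom, so that the detached top is an ordinary (possibly disjunctive) ASP program. It builds the detached version of a top program, computes its answer sets by brute force, groups them into Interface World Views as in Definition 6.3, and extracts the sets \(EC\) and \(RQ\) of Definitions 6.4 and 6.5. Running it on the top layer of the worked example at the end of Section 6.2 reproduces the four requirement sets listed there.

```python
from itertools import chain, combinations

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_model(I, rules):
    # rules are (head_atoms, positive_body, negative_body); an empty head is falsity
    return all(not (set(p) <= I and not (set(n) & I)) or (set(h) & I)
               for h, p, n in rules)

def stable_models(rules, atoms):
    """Brute force: I is stable iff it is a minimal model of the GL reduct w.r.t. I."""
    out = []
    for I in map(set, subsets(atoms)):
        reduct = [(h, p, ()) for h, p, n in rules if not (set(n) & I)]
        if is_model(I, reduct) and not any(
                is_model(set(J), reduct) for J in subsets(I) if set(J) != I):
            out.append(frozenset(I))
    return out

def detach(top):
    """Definition 6.2: K l becomes the fresh atom 'k'+l, not K l becomes 'nk'+l,
    and a disjunctive fact  kl | nkl  is added for every such l."""
    katoms = {l for _, _, _, kp, kn in top for l in (*kp, *kn)}
    rules = [(h, tuple(p) + tuple('k' + l for l in kp) + tuple('nk' + l for l in kn), n)
             for h, p, n, kp, kn in top]
    rules += [(('k' + l, 'nk' + l), (), ()) for l in sorted(katoms)]
    return rules, sorted(katoms)

def interface_world_views(det, katoms):
    """Definition 6.3: group the answer sets of the detached top by which of the
    fresh atoms kl / nkl they contain."""
    atoms = {a for r in det for part in r for a in part}
    fresh = {'k' + l for l in katoms} | {'nk' + l for l in katoms}
    groups = {}
    for s in stable_models(det, atoms):
        groups.setdefault(frozenset(s & fresh), []).append(s)
    return list(groups.values())

def requisite_sets(wv, det, katoms):
    """Definitions 6.4 and 6.5: the requisite set, split into EC and RQ."""
    es = {('K' if all('k' + l in s for s in wv) else 'not K', l) for l in katoms}
    constrained = {l for h, p, n in det if not h for l in katoms
                   if 'k' + l in p + n or 'nk' + l in p + n}
    ec = {(m, l) for m, l in es if l in constrained}
    return ec, es - ec

# Top layer of the example at the end of Section 6.2:
#   f <- K a.     e <- K c.     <- not K p.
top = [(('f',), (), (), ('a',), ()),
       (('e',), (), (), ('c',), ()),
       ((), (), (), (), ('p',))]
det, katoms = detach(top)
for wv in interface_world_views(det, katoms):
    ec, rq = requisite_sets(wv, det, katoms)
    print([sorted(s) for s in wv], ' EC =', sorted(ec), ' RQ =', sorted(rq))
```

Note that the sketch lists the \(not\,\mathbf{K}\) members of each requirement set explicitly, whereas the text adopts the convention of writing only the literals required to be true.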
In the definition of the more general Top-down Epistemic Splitting Property, presented in Section 6.3, one can notice two relevant changes: (i) the notion of top-down influence is exploited in the definition of candidate world views; (ii) a subset of a world view of the bottom (i.e., some of the answer sets occurring therein) may be cut out, so as to enable the merging via WBT with a "compatible" world view of the top. Preliminarily: **Definition 6.7**: Given a set \(E\) of epistemic literals and a set of sets of atoms \(W\), we say that \(W\)_fulfills_\(E\) iff \(\forall\,\mathbf{K}L\in E,W\models L\) and \(\forall\,not\,\mathbf{K}L\in E,W\not\models L\). ### Top-down Epistemic Splitting Property - Basic (Tdespb) **Definition 6.8** (Candidate World View - Basic Version): Given an epistemic splitting \(\langle B_{U}(\Pi),T_{U}(\Pi)\rangle\) for a program \(\Pi\), let \(W_{T}\) be a world view of \(T_{U}(\Pi)\) and let \(W_{B}\) be a world view of \(B_{U}(\Pi)\) that fulfills \(EC_{T_{U}(\Pi)}(W_{T})\) such that \(W_{B}\) also fulfills \(RQ_{T_{U}(\Pi)}(W_{T})\) (overall, \(W_{B}\) fulfills the requisite set \(ES_{T_{U}(\Pi)}(W_{T})\)). Then, \[W=W_{B}\sqcup W_{T}=\{I_{b}\cup I_{t}|I_{b}\in W_{B}\wedge I_{t}\in W_{T}\}\] is a _candidate world view_ for \(\Pi\) (obtained from \(W_{T}\) and \(W_{B}\)). It is possible that no world views of the bottom comply with the conditions posed by world views of the top: in such case, \(\Pi\) has no candidate world views. We can now state a property that, if satisfied by a semantics, allows world views to be computed top-down: **Definition 6.9** (Top-down Epistemic Splitting Property - Basic Version (Tdespb)): A semantics \(\mathcal{S}\) satisfies _basic top-down epistemic splitting_ if any candidate world view of \(\Pi\) according to Definition 6.8 is indeed a world view of \(\Pi\) under \(\mathcal{S}\). Below we show that TDESPB is equivalent to the Epistemic Splitting Property by (Cabalar et al., 2021), in the sense that both definitions are satisfied by the same semantic approaches and thus characterize the same world views. **Theorem 6.1** (Equivalence ESP - Tdespb): A semantics \(\mathcal{S}\) satisfies TDESPB if and only if \(\mathcal{S}\) satisfies the Epistemic Splitting Property ESP as defined in Definition 4.1. **Proof** _If part._ Assume that a given semantics \(\mathcal{S}\) satisfies TDESPB. To show that \(\mathcal{S}\) satisfies ESP as well, it suffices to observe that the couple \(\langle W_{B},W_{T}\rangle\) according to Definition 6.8 is a \(\mathcal{S}\)-solution as required by the definition of ESP. In fact, \(W_{B}\) is an \(\mathcal{S}\)-world view of the bottom \(B_{U}(\Pi)\). It remains to be seen that \(W_{T}\) is an \(\mathcal{S}\)-world view of \(E_{U}(\Pi,W_{B})\), i.e., that, after simplifying \(T_{U}(\Pi)\) w.r.t. the subjective literals entailed by \(W_{B}\), one would have \(W_{T}\) among the world views. By Definition 6.8, \(W_{B}\) fulfills the requisite set \(ES_{T_{U}(\Pi)}(W_{T})\), leading \(W_{B}\sqcup W_{T}\) to be a world view of the overall program. This means, according to Definition 6.4, that \(W_{B}\) entails all the subjective literals of the form \(\mathbf{K}A\) and \(not\,\mathbf{K}A\), that, in the detached version of \(T_{U}(\Pi)\) (Definition 6.2) have been assumed to be true (in their detached form) in order to obtain \(W_{T}\) as a world view (according to \(\mathcal{S}\)). 
Thus, if one were to simplify \(T_{U}(\Pi)\) into \(E_{U}(\Pi,W_{B})\) by considering exactly those subjective literals as true and all the others as false, one would trivially obtain \(W_{T}\) as the world view of \(E_{U}(\Pi,W_{B})\). _Only if part._ Assume that a given semantics \(\mathcal{S}\) satisfies ESP. This means that there exists an \(\mathcal{S}\)-solution \(\langle W_{B},W_{T}\rangle\) that, via WBT, gives rise to the world views of the program. To be an \(\mathcal{S}\)-solution, \(W_{B}\) must be a world view of the bottom, and \(W_{T}\) a world view of \(E_{U}(\Pi,W_{B})\), i.e., of the top simplified w.r.t. \(W_{B}\). To find the correspondence with TDESPB, we have to ascertain that \(\langle W_{B},W_{T}\rangle\) gives rise to candidate world views in the sense of Definition 6.8. To do so, we put into \(ES_{T_{U}(\Pi)}(W_{T})\) the subjective literals, among those entailed by \(W_{B}\), that are employed to perform such simplification, so as to exactly fulfill the conditions posed in Definition 6.4. The equivalence stated by Theorem 6.1 implies that the world views of a program can be determined by composing the world views of the various layers into which the program can be split, by proceeding either bottom-up, according to the original definition, or top-down, according to our new definition. We will now illustrate the approach, and its similarities and differences w.r.t. ASP, by means of an example. Consider the following sample ASP program. \[\begin{array}{l}f\gets a.\\ e\gets c.\\ \bot\gets not\,p.\\ a\gets p.\\ a\gets q.\\ p\gets not\,q.\\ q\gets not\,p.\\ c.\end{array}\] A possible split according to Lifschitz & Turner can be: \[\begin{array}{l}\mbox{\it Top part}\\ f\gets a.\\ e\gets c.\\ \bot\leftarrow\mbox{\it not}\,p.\\ \mbox{\it Bottom part}\\ a\gets p.\\ a\gets q.\\ p\gets not\,q.\\ q\gets not\,p.\\ c.\end{array}\] Notice that the unique answer set of this program is \(S=\{c,p,a,e,f\}\). The answer sets of the bottom part are: \(S1=\{c,p,a\}\), \(S2=\{c,q,a\}\). The answer set of the top part, assuming \(p\) true (otherwise the constraint is violated), is \(S3?=\{e?,f?\}\), the question mark meaning that either of the two atoms can be true, according to the selected answer set of the bottom. In this simple case, we have to choose \(S1\), which makes \(p\) true, and, by imagining adding atoms in \(S1\) as new facts in the top part, we get both \(e\) and \(f\), thus obtaining the answer set \(S\). Let us now consider the top part as a standalone program: \[\begin{array}{l}f\gets a.\\ e\gets c.\\ \bot\leftarrow not\,p.\end{array}\] This program in itself is inconsistent, but knowing that it is intended as the top part of a wider program, we can set the requirements for any bottom part, in the form of what we can call _Epistemic top-down Constraint set_ \(EC=\{p\}\), i.e., \(p\) must be true in an answer set of the bottom, for the top to be consistent. If we enrich the top as follows: \[\begin{array}{l}f\gets a.\\ e\gets c.\\ \bot\leftarrow not\,p.\end{array}\] \[\begin{array}{l}p\ |\ nop.\\ a\ |\ noa.\\ c\ |\ noc.\end{array}\] we can compute all possible answer sets for the top part, by simulating possible values for atoms coming from the (still unknown) bottom. Each such simulation, e.g., assuming \(a\) true and \(c\) false, gives rise to a _Requisite Set RQ_.
Then, given a specific bottom program that one intends to add to the top, each answer set \(M\) of the bottom that fulfills \(EC\) can be combined with all the answer sets of the top that are compatible, in the sense that \(M\) entails all literals in the corresponding \(RQ\). Let us now consider an ELP with a very similar structure. \[\begin{array}{l}\textit{Top part}\\ f\leftarrow\mathbf{K}a.\\ e\leftarrow\mathbf{K}c.\\ \bot\leftarrow not\,\mathbf{K}p\end{array}\] \[\begin{array}{l}\textit{Bottom part}\\ a\gets p.\\ a\gets q.\\ p\gets not\,\mathbf{K}q.\\ q\gets not\,\mathbf{K}p.\\ c.\end{array}\] Let us first proceed bottom-up, as dictated by the ESP definition. The world views of the bottom, according to any existing semantics, are: \(W1=\{\{c,p,a\}\}\), \(W2=\{\{c,q,a\}\}\). Below is the top part simplified w.r.t. \(W1\), with a unique resulting world view \(\{\{e,f\}\}\). \[\begin{array}{l}f.\\ e.\end{array}\] The top part simplified w.r.t. \(W2\) is reported below, with no world views as the constraint is violated: \[\begin{array}{l}\mbox{\it Top part w.r.t. W2}\\ \mbox{$f$}.\\ \mbox{$e$}.\\ \mbox{$\bot$}\leftarrow\top\end{array}\] Therefore, the unique world view of the overall program is, by the WBT operation which reduces here to a simple union, \(W=\{\{c,p,a,e,f\}\}\). Let us now apply the notions related to the top-down splitting property TDESPB that we presented above. We have the following detached version of the top part: \[\begin{array}{l}f\leftarrow ka.\\ e\leftarrow kc.\\ \bot\leftarrow nkp\\ ka\ |\ nka.\\ kc\ |\ nkc.\\ kp\ |\ nkp.\end{array}\] Seen as an epistemic program by itself, this program has a unique world view (indeed, it is a standard ASP program), which is \[\{\{kp,nka,nkc\},\ \{kp,ka,nkc,f\},\ \{kp,nka,kc,e\},\ \{kp,ka,kc,e,f\}\}.\] By splitting this set of sets three times (w.r.t. the pairs \(ka/nka\), \(kc/nkc\), and \(kp/nkp\)) as described in Section 6.1, Definition 6.3, we obtain the world views of the detached version: \(\{\{kp,nka,nkc\}\}\), \(\{\{kp,nka,kc,e\}\}\), \(\{\{kp,ka,nkc,f\}\}\), and \(\{\{kp,ka,kc,e,f\}\}\). From them, one determines the _epistemic top-down constraint set_ which is, clearly, \(EC=\{\mbox{\bf K}p\}\), stating that the unique constraint must be satisfied. Any **compatible** world view of a bottom should satisfy one of the \(RQ^{i}\)'s, \(i\leq 4\), i.e., the requirement sets, listed below (cf. Definitions 6.4 and 6.5). To each \(RQ^{i}\) there corresponds a world view of the top (indicated on the right) to be united to those world views of the bottom that satisfy \(RQ^{i}\) (if any). \[\begin{array}{ll}\mbox{$RQ^{1}$}=\emptyset,&W_{T}^{1}=\{\emptyset\}\\ \mbox{$RQ^{2}$}=\{\mbox{\bf K}c\},&W_{T}^{2}=\{\{e\}\}\\ \mbox{$RQ^{3}$}=\{\mbox{\bf K}a\},&W_{T}^{3}=\{\{f\}\}\\ \mbox{$RQ^{4}$}=\{\mbox{\bf K}c,\mbox{\bf K}a\},&W_{T}^{4}=\{\{e,f\}\}\\ \end{array}\] Recalling that the world views of the bottom are \(W1=\{\{c,p,a\}\}\) and \(W2=\{\{c,q,a\}\}\), we can see that \(W2\) does not fulfill \(EC\) and so must be discarded, while \(W1\) fulfills \(EC\) and also \(RQ^{4}\), thus leading, by the WBT operation which reduces here to a simple union, to the (unique) world view of the overall program \(W=\{\{c,p,a,e,f\}\}\). It is immediate to verify that the result obtained via the bottom-up and the top-down approach is indeed the same.
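For completeness, the last step of the example, i.e., the selection and combination of Definition 6.8, can also be spelled out in a few lines of Python. The data below is transcribed from the example above (with the implicitly false literals of each requirement set made explicit); `fulfills` mirrors Definition 6.7 and `wbt` the operation of Definition 4.3. The names and the encoding are, again, ours.

```python
# World views of the bottom, the constraint set EC, and the four requirement
# sets with their corresponding world views of the top (from the example above).
W1 = [{'c', 'p', 'a'}]
W2 = [{'c', 'q', 'a'}]
EC = {('K', 'p')}
tops = [({('notK', 'a'), ('notK', 'c')}, [set()]),       # RQ^1
        ({('notK', 'a'), ('K', 'c')},    [{'e'}]),        # RQ^2
        ({('K', 'a'), ('notK', 'c')},    [{'f'}]),        # RQ^3
        ({('K', 'a'), ('K', 'c')},       [{'e', 'f'}])]   # RQ^4

def fulfills(W, lits):
    """Definition 6.7: W |= K a for each ('K', a), W |/= K a for each ('notK', a)."""
    return all(all(a in s for s in W) == (mode == 'K') for mode, a in lits)

def wbt(Wb, Wt):
    """Definition 4.3."""
    return [Ib | It for Ib in Wb for It in Wt]

for name, Wb in (('W1', W1), ('W2', W2)):
    if not fulfills(Wb, EC):
        print(name, 'discarded: it does not fulfill EC')
        continue
    for rq, Wt in tops:
        if fulfills(Wb, rq):
            print(name, '+ top with RQ', sorted(rq), '->',
                  [sorted(s) for s in wbt(Wb, Wt)])
# Output: W1 combines with RQ^4, giving {{a, c, e, f, p}}; W2 is discarded.
```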
### Top-down Epistemic Splitting Property (TDESP) In this subsection, we will extend previous definitions to a more general form, so as to be able to characterize in a top-down fashion the world views obtained according to many semantic approaches presented in the literature, other than G91 and FAAEL, such as, e.g., those proposed by [22, 23]; in fact, they do not enjoy the basic property TDESPB illustrated above. We introduce a different way of computing candidate world views, where, in the absence of a world view of the bottom that fulfills the set \(EC\) relative to the top, one can select a subset of such a world view. This, as we will demonstrate in our running example, is analogous to what is customarily done in ASP. **Definition 6.10** (_Candidate World View_): Given an epistemic splitting \(\langle B_{U}(\Pi),T_{U}(\Pi)\rangle\) for a program \(\Pi\), let \(W_{T}\) be a world view of \(T_{U}(\Pi)\) and let \(W_{B}\) be a subset of a world view of \(B_{U}^{W_{T}}(\Pi)\) that fulfills \(EC_{T_{U}(\Pi)}(W_{T})\) (where, if \(EC\) is empty, \(W_{B}\) is the entire world view of the bottom) such that \(W_{B}\) fulfills \(RQ_{T_{U}(\Pi)}(W_{T})\). Then, \[W=W_{B}\sqcup W_{T}=\{I_{b}\cup I_{t}|I_{b}\in W_{B}\wedge I_{t}\in W_{T}\}\] is a _candidate world view_ for \(\Pi\) (obtained from \(W_{T}\) and \(W_{B}\)). Note that, candidate world views are now computed after applying top-down influence. It is possible that no subset of any world view of the bottom complies with the conditions posed by world views of the top. In such case, \(\Pi\) has no candidate world views. We can now state another property concerning top-down epistemic splitting that a semantics might obey: **Definition 6.11** (_Top-down Epistemic Splitting Property (TDESP)_): A semantics \(\mathcal{S}\) satisfies _top-down epistemic splitting_ if any candidate world view of \(\Pi\) according to Definition 6.10 is indeed a world view of \(\Pi\) under \(\mathcal{S}\). We can state the relationship among TDESP and ESP/TDESPB (that, as seen, are equivalent). **Theorem 6.2**: Given a semantics \(\mathcal{S}\) satisfies both foundedness and ESP/TDESPB, then \(\mathcal{S}\) satisfies TBDESP. If \(\mathcal{S}\) satisfies TDESPB, this means that for every world view of given program \(\Pi\) obtained via the WBT operation, and thus composed of a world view \(W_{T}\) of the top and a world view \(W_{B}\) of the bottom, every \(\mathbf{K}L\in EC_{T_{U}(\Pi)}(W)\) is entailed by \(W_{B}\) and, if \(\mathcal{S}\) satisfies foundedness, this equates to say that \(L\) is entailed by the bottom part of the program. Thus, the application of Top-down Influence is irrelevant. We then notice that, according to Definitions 6.10 and 6.3 a candidate world view for TDESP can be obtained from an entire world view of the bottom, as done for TDESPB. This concludes the proof, showing that for this class of semantics TDESP and TDESPB are indeed equivalent. The above theorem is immediately applicable to the FAAEL semantics. For semantics which do not enjoy foundedness things are different, as seen in the examples below. We will now, in fact, experiment with our methodology on some relevant examples proposed in recent literature. Consider program \(\Pi_{1}\), taken from [22]: \[\begin{array}{ll}p\ |\ q&(r1)\\ \bot\gets not\,\mathbf{K}p&(C)\end{array}\] Here, \(B_{U}(\Pi_{1})\) consists of rule (r1), and \(T_{U}(\Pi_{1})\) consists of constraint (C). 
So, \(T^{\prime}_{U}(\Pi_{1})\) is (where \(kp\) and \(nkp\) are fresh atoms): \[kp\ |\ nkp\ \ \ \ \ (r1)\] \[\perp\gets nkp\] whose unique world view is \(\{\{kp\}\}\). After canceling \(kp\), we obtain \(W_{T}=\{\emptyset\}\) for \(T_{U}(\Pi_{1})\), with \(ES_{T_{U}(\Pi_{1})}(W_{T})=EC_{T_{U}(\Pi_{1})}(W_{T})=\{\mathbf{K}p\}\) and \(RQ_{T_{U}(\Pi_{1})}(W_{T})=\emptyset\). Regardless of the epistemic semantics \(\mathcal{S}\), as no subjective literals occur therein, the unique world view of \(B_{U}(\Pi_{1})\) is \(\hat{W}=\{\{p\},\{q\}\}\). Since \(W_{B}=\{\{p\}\}\) is the only subset of \(\hat{W}\) fulfilling \(EC_{T_{U}(\Pi_{1})}(W_{T})\) (cf. Definition 6.10), then it is the one selected. It is also a world view for \(\Pi_{1}\), as the unique world view of the top part is empty. This world view violates subjective constraint monotonicity, still, it is the one delivered by the semantics proposed in [20] and, as noticed in [20], by those proposed in [19, 21]. In our opinion the world view \(\{\{p\}\}\) captures the "intended meaning" of the program \(\Pi_{1}\), where the top layer "asks" the bottom layer to support, if possible, \(\mathbf{K}p\) (in order not to make the overall program inconsistent). Let us, in fact, introduce a simple variation, by adding a fact, say \(c\), to the program, where \(c\) also occurs in the constraint, obtaining: \[p\ |\ q (r1)\] \[c. (f1)\] \[\perp\gets c,not\,\mathbf{K}p\ \ \ (C)\] We would obtain, in this case, the world view \(\{\{c,p\}\}\). Let us now reinterpret this program within the work of the COST Action DigForASP, i.e., in the realm of investigations. A rephrasing could be the following: \[at\_crime\_scene\ |\ not\_at\_crime\_scene (r1)\] \[reliable\_witness\_recognizes. (f1)\] \[\perp\gets reliable\_witness\_recognizes,not\,\mathbf{K}\,at\_crime\_ scene (C)\] The meaning underlying the schematic formulation is that it is uncertain whether a suspect was or not at the crime scene. However, if a reliable witness recognized the suspect, then investigators can be certain that the suspect was indeed at the crime scene. The constraint could in fact be rephrased (although this is not legal syntax) into: \[\mathbf{K}\,at\_crime\_scene\gets reliable\_witness\_recognizes.\] The use of the \(\mathbf{K}\) is crucial here because one wants to distinguish between facts collected by the investigators and reliable conclusions derived by these facts. Thus, the world view \(\{\{reliable\_witness\_recognizes,at\_crime\_scene\}\}\) makes perfect sense here. In addition, one might consider the very similar ASP program: \[p\ |\ q (r1)\] \[c. (f1)\] \[\perp\gets c,not\,p (C)\] with unique answer set \(\{c,p\}\). The "bottom" program fragment consisting of (r1)+(f1) would also have answer set \(\{c,q\}\), which is discarded since it would lead to violating the constraint. We may consider this program as an ELP, with unique world view obtained from a subset of the world view \(\{\{c,p\},\{c,q\}\}\) of the bottom (union the empty world view of the top), exactly as specified in Definition 6.10. Consider now the following program \(\Pi_{2}\). \[\begin{array}{ll}p\ |\ q&(r1)\\ \bot\gets not\,\mathbf{K}p&(C)\\ p\leftarrow\mathbf{K}q&(r2)\\ q\leftarrow\mathbf{K}p&(r3)\end{array}\] Here, \(B_{U}(\Pi_{2})\) consists of rules (r1-r3), and \(T_{U}(\Pi_{2})\) consists of constraint (C). So, \(T^{\prime}_{U}(\Pi_{2})\) is (where \(kp\) and \(nkp\) are fresh atoms): \[\begin{array}{ll}kp\ |\ nkp\\ \bot\gets nkp\end{array}\] whose unique world view is \(\{\{kp\}\}\). 
After canceling \(kp\), we obtain world view \(W_{T}=\{\emptyset\}\) for \(T_{U}(\Pi_{2})\) where \(ES_{T_{U}(\Pi_{2})}(W_{T})=EC_{T_{U}(\Pi_{2})}(W_{T})=\{\mathbf{K}p\}\) and set \(RQ\) is empty. Regardless of the semantics \(\mathcal{S}\), the potential world views of \(B_{U}(\Pi_{2})\) are \(W_{1}=\{\{p\}\}\), \(W_{2}=\{\{q\}\}\), \(W_{3}=\{\{p\},\{q\}\}\), \(W_{4}=\{\{p,q\}\}\). Actually, \(W_{4}\) is the only one fulfilling \(ES_{T_{U}(\Pi_{2})}(W_{T})\); \(W_{1}\) has the problem that, having \(p\) and fulfilling \(\mathbf{K}p\), (r3) might be applied thus getting \(q\). Note that \(W_{4}\) is in fact the world view returned by semantics proposed, for instance, in (Kahl et al., 2015) and (Shen and Eiter, 2016). It is easy to see that \(W_{4}\) violates foundedness. However, in our approach \(q\) is not derived via the positive cycle (extended to subjective literals), but from the \(\mathbf{K}p\) "forced" by the upper layer via top-down influence, which substitutes \(\mathbf{K}p\) with \(p\) in rule (r3) of \(B_{U}(\Pi_{2})\). This in a sense guarantees a form of foundedness, though not the formal one introduced in (Cabalar et al., 2021, Def. 15). Since the unique world view for the top is empty, then the unique world view of the overall program is, indeed, according to our method, \(W=W_{4}=\{\{p,q\}\}\). Let us now consider \(\Pi_{3}\) to be the seminal example introduced in (Gelfond and Przymusinska, 1991), which is discussed in virtually every paper on ELP. \(\Pi_{3}\) is epistemically stratified (see Definition 4.4 and (Cabalar et al., 2021, Def. 6)). This formulation (variations have appeared over time) is from (Cabalar et al., 2021). \[\begin{array}{ll}\mathit{eligible}(X)\ \leftarrow\ \mathit{high}(X)&(r1)\\ \mathit{eligible}(X)\ \leftarrow\ \mathit{minority}(X),\mathit{fair}(X)&(r2)\\ \mathit{noeligible}(X)\ \leftarrow\ not\,\mathit{fair}(X),\mathit{not\,high}(X)&(r3)\\ \mathit{fair}(\mathit{mike})\ |\ \mathit{high}(\mathit{mike})&(f1)\\ \mathit{interview}(X)\ \leftarrow\ not\,\mathbf{K}\,\mathit{eligible}(X),\mathit{ not\,\mathbf{K}\,\mathit{noeligible}(X)}&(r4)\\ \mathit{appointment}(X)\ \leftarrow\,\mathbf{K}\,\mathit{interview}(X)&(r5)\end{array}\] Since in this version of the program we have only _mike_ as an individual, we may obtain the following ground abbreviated version: \[\begin{array}{ll}e\gets h&(r1)\\ e\gets m,f&(r2)\\ ne\gets not\,f,not\,h&(r3)\\ f\ |\ h&(f1)\\ in\gets not\,{\bf K}e,\ not\,{\bf K}ne&(r4)\\ a\leftarrow{\bf K}in&(r5)\end{array}\] Here, we consider (r5) as the top \(T_{U}(\Pi_{3})\), and (r1-r4) plus (f1) as the bottom, which can be however in turn divided into the top \(T1_{U}(\Pi_{3})\) including (r4), and the bottom \(B_{U}(\Pi_{3})\), made of (r1-r3) and (f1). So, \(T^{\prime}_{U}(\Pi_{3})\) is (with fresh atoms \(kin\), \(nkin\)): \[\begin{array}{ll}a\gets kin&(r5^{\prime})\\ kin\ |\ nkin\end{array}\] with two answer sets: \(\{a,kin\},\{nkin\}\). As explained in Section 6.1, \(kin\ |\ nkin\) stands for a disjunction between the epistemic literal \({\bf K}in\) and its negation \(not\,{\bf K}in\). This determines the existence of two world views, each entailing only one of these atoms, i.e. epistemic literals, where atom \(a\) can, however, be derived only from the former. Thus, we have \(W_{11}=\{\{a\}\}\) with \(ES_{T_{U}(\Pi_{3})}(W_{11})=\{{\bf K}in\}\), and \(W_{12}=\{\emptyset\}\) with \(ES_{T_{U}(\Pi_{3})}(W_{12})=\{not\,{\bf K}in\}\). \(EC_{T1_{U}(\Pi_{3})}\) is empty for all world views, as no constraint is present in \(\Pi_{3}\). 
Then, \(T1^{\prime}_{U}(\Pi_{3})\) is (with \(ke,nke,kne,nkne\) fresh atoms): \[\begin{array}{ll}in\gets nke,nkne&(r4^{\prime})\\ ke\ |\ nke&\\ knee\ |\ nkne.\end{array}\] By the same reasoning as above, since there are two disjunctions among fresh atoms representing epistemic literals, four world views can be found. After canceling the fresh atoms, in fact we have \(W_{21}=\{\{in\}\}\), with \(ES_{T1^{\prime}_{U}(\Pi_{3})}(W_{21})=\{not\,{\bf K}e,not\,{\bf K}ne\}\), and three empty world views \(W_{22}=W_{23}=W_{24}=\{\emptyset\}\), with requisite sets \(\{{\bf K}e,{\bf K}ne\}\), \(\{{\bf K}ne,not\,{\bf K}e\}\), and \(\{not\,{\bf K}ne,{\bf K}e\}\), respectively. Clearly, also \(EC_{T1^{\prime}_{U}(\Pi_{3})}\) is empty. Finally, \(B_{U}(\Pi_{3})\), which is made of the rules (r1-r3) and (f1), has the world view \(W_{3}=\{\{h,e\},\{f\}\}\). Since the requirement set relative to world view \(W_{21}\) for the immediately upper level is satisfied in both answer sets of \(W_{3}\), we can obtain an intermediate world view \(W_{213}=\{\{h,e,in\},\{f,in\}\}\) for the part of the program including (r1-r4). Considering also the top, it is easily seen that \(W_{213}\) is compliant with the requirement set of \(W_{11}=\{a\}\). So, we can obtain for the overall program the unique candidate world view \(W=\{\{h,e,in,a\},\{f,in,a\}\}\), which is indeed a world view. Notice that, in fact, the world views that are part of the union, corresponding to the various sub-programs, would be the same under all known semantics for ELPs. Assume now that, instead of \(f\ |\ h\), the program contains the bare fact \(h\). Then, the world view of the bottom becomes \(W_{3}=\{\{h,e\}\}\). This world view implies \({\bf K}e\), so it can be combined with a world view \(\{\emptyset\}\) of the middle layer, and since it also implies \(not\,{\bf K}in\), the further combination is with world view \(W_{12}=\{\emptyset\}\) of the top. So, \(W_{3}=\{\{h,e\}\) is in this case the unique world view of the overall program. ## 7 Main Result It is at this point interesting to try to assess formally which semantics (if any) satisfy the top-down epistemic splitting property TDESP. We examine now the case of the semantics introduced in (Kahl et al. 2015), that we call for short K15. The reason for choosing K15 is that in (Cabalar et al. 2021) it is noticed that K15 slightly generalizes the semantics proposed in (Gelfond 2011) (called G11 for short) and can be seen as a basis for the semantics proposed in (Shen and Eiter 2016) (called S16 for short). In particular, S16 (which considers instead of \(\mathbf{K}\) the operator **not**\(A\) which means \(not\,\mathbf{K}A\)) treats K15 world views as candidate solutions, to be pruned in a second step, where some unwanted world views are removed by maximizing what is not known. Thus, should K15 satisfy the top-down epistemic splitting property, S16 would do as well, and so would G11, the latter however only for the (wide) class of programs where its world views coincide with those of K15. **Definition 7.1** (K15-world views): The K15-reduct of \(\Pi\) with respect to a non-empty set of interpretations \(W\) is obtained by: 1. replacing by \(\bot\) every subjective literal \(L\in\mathit{Body}_{subj}(r)\) such that \(W\not\models L\), and 2. replacing all other occurrences of subjective literals of the form \(\mathbf{K}L\) by \(L\). A non-empty set of interpretations \(W\) is a K15-world view of \(\Pi\) iff \(W\) is the set of all stable models of the K15-reduct of \(\Pi\) with respect to \(W\). 
We are able to prove the following: **Theorem 7.1** (K15 Tdesp): The K15 semantics satisfies the Top-down Epistemic Splitting Property. I.e., given an ELP \(\Pi\), and set of sets \(W\), where each set is composed of atoms occurring in \(\Pi\), \(W\) is a K15 world view for \(\Pi\) if and only if it is a candidate world view for \(\Pi\) according to Definition 6.10. **Proof** Assume an Epistemic Splitting of given program \(\Pi\) into two layers, top \(T_{U}(\Pi)\) and bottom \(B_{U}(\Pi)\) (where the reasoning below can, however, be iterated over a subdivision into an arbitrary number of levels). Notice that, given a K15 world view \(W\), since each atom \(A\) that occurs in the sets composing \(W\) is derived in the part of the program including rules with head \(A\), then \(W\) can be divided into two parts, \(W_{T}\), and \(W_{B}\) which are world views of \(T_{U}(\Pi)\) and \(B_{U}(\Pi)\), resp., each one composed of stable models of the K15-reduct of that part of the program. _If part._ Given a K15 world view \(W\), let \(Sl^{T}\) be the subjective literals occurring in \(T_{U}(\Pi)\) which are entailed by the bottom, i.e., either of the form \(\mathbf{K}A\), for which \(W_{B}\models A\), or of the form \(not\,\mathbf{K}A\), for which \(W_{B}\not\models A\). Let such a set of literals form the set \(ES_{T_{U}(\Pi)}(W_{T})\). (As mentioned, the subset of \(Sl^{T}\) that consists of literals involved in constraints in \(T_{U}(\Pi)\) will form set \(EC_{T_{U}(\Pi)}(W_{T})\), and the remaining ones will form set \(RQ_{T_{U}(\Pi)}(W_{T})\).) Therefore, we can conclude that \(W\), which is a K15 world view, is indeed a candidate world view according to Definition 6.10. _Only if part._ Consider a candidate world view \(W\) w.r.t. the K15 semantics, obtained by combining a subset \(W_{B}\) of a K15 world view of \(B_{U}(\Pi)\) with a K15 world view \(W_{T}\) of \(T_{U}(\Pi)\) after top-down influence. According to Definition 6.10, the combination is possible only if for each epistemic literal \(\mathbf{K}A\in ES_{T_{U}(\Pi)}(W_{T})\), \(W_{B}\models A\), and for each epistemic literal \(not\,\mathbf{K}A\in ES_{T_{U}(\Pi)}(W_{T})\), \(W_{B}\not\models A\). If any such literal belongs to \(EC_{T_{U}(\Pi)}(W_{T})\), if this is not the case then there would be a constraint violation in \(T_{U}(\Pi)\), so there would be no world views for \(T_{U}(\Pi)\), and for the overall program \(\Pi\). Considering a subjective literal in \(RQ_{T_{U}(\Pi)}(W_{T})\), if it were not the case that \(W_{B}\) entails such literal, then by definition of K15 it would have been substituted by \(\bot\), so \(W_{T}\) would have been a different set. The top-down influence step can be disregarded since it performs in advance on elements of \(ES_{T_{U}(\Pi)}(W_{T})\), that are required to be entailed by \(W_{B}\) anyway, the same transformation performed by K15, step (ii). Then, a candidate world view \(W\) obtained according to Definition 6.10 is indeed a K15 world view. In [10, Th. 2] it is proved that, for any semantics obeying epistemic splitting, an epistemically stratified program has a unique world view. Actually, it can be seen that epistemically stratified programs admit one (and the same) world view under any existing semantics, and in particular under those considered here: as it is well-known (see, e.g. [11, 12, 13]), multiple world views arise in consequence of negative cycles involving epistemic literals, clearly not present in such programs. 
So, the unique world view of an epistemically stratified program is, in particular, a K15 world view. Thus, we have the following. **Corollary 7.1**: Epistemically Stratified Programs satisfy both the Top-down and Bottom-up Epistemic Splitting Properties under any semantics. ## 8 Conclusions In this paper, we have provided a way of exploiting the splitting of Epistemic Logic Programs in a top-down fashion, adequate for those situations where the top part of a program is well-established as it represents a problem formulation, where the bottom part (representing a problem instance) may vary and is in general not known in advance. We defined formal conditions for the combination of world views of the top with world views of the bottom into world views of the overall program. In addition, potential world views of the top can be pre-computed, thus simplifying the combination with the world views of each problem instance. We provide a version that is the top-down declination of the well-established approach by Cabalar et al., and a more general version that is applicable to a wider range of semantic approaches. A question that may arise concerns the efficiency of the top-down approach, even though in many cases it will be an almost inevitable choice. If the subjective literals "connecting" adjacent layers are in small numbers (as it seems reasonable), then efficiency might not be a concern. It remains to be seen in more depth for which kinds of applications the different approaches to splitting (top-down and bottom-up) might be most profitably exploited. As an example, we can go back to the suggestion proposed in [14] to encode the problem of finding a conformant plan as the task of obtaining a world view. As emphasized in [10], splitting allows one to separate the planner definition (the "top") from the generation of alternative plans (an intermediate layer, we might say "the top of the bottom") from, in turn, the domain description (the "bottom"). The top-down perspective would allow one to analyze the top part independently from the other layers, so as to identify in advance the prerequisites it poses to them. An investigation of which other semantics might satisfy the Top-down Epistemic Splitting Property is also a subject of future work.
2306.17807
Surrogate Modeling of Urban Boundary-Layer Flow
Surrogate modeling is a viable solution for applications involving repetitive evaluations of expensive computational fluid dynamics models, such as uncertainty quantification and inverse problems. This study proposes a multi-layer perceptron (MLP) based machine-learning surrogate for canopy flow statistics accommodating any approaching mean-wind angle. The training and testing of the surrogate model is based on results from large-eddy simulations of open-channel flow over and within surface-mounted cubes under neutral ambient stratification. The training dataset comprises flow statistics from various approaching mean-wind angles, and the surrogate is asked to "connect between the dots," i.e., to predict flow statistics for unseen values of the approaching mean-wind angle. The MLP performance is compared against a more traditional spline-based interpolation approach for a range of training data. In terms of relative mean absolute errors on individual flow statistics, the proposed MLP surrogate consistently outperforms the spline interpolation, especially when the number of training samples is reduced. The MLP model accurately captures mean profiles and three-dimensional flow variability, offering robust predictions, even when trained with as few as four approaching wind angles. The model is $10^4 \times$ faster than large-eddy simulations, thus proving effective for multi-query tasks in the context of urban canopy flow modeling.
Gurpreet S. Hora, Marco G. Giometto
2023-06-30T17:08:44Z
http://arxiv.org/abs/2306.17807v4
# Surrogate Modeling of Urban Boundary-Layer Flow ###### Abstract Surrogate modeling is a viable solution for applications involving repetitive evaluations of expensive computational fluid dynamics (CFD) models, such as uncertainty quantification and inverse problems. This study proposes two machine-learning surrogates for canopy flow statistics accommodating any approaching mean-wind angle. The first model is based on a K-nearest neighbors (KNN) approach, while the second utilizes a more advanced multi-layer perceptron (MLP) technique. The training and testing of these models are based on results from large-eddy simulation of open-channel flow over and within an array of surface-mounted cuboids under neutral ambient stratification. Training datasets comprise flow statistics from various approaching wind angles, and the surrogates are asked to "connect between the dots", i.e., to predict flow statistics for unseen values of the approaching wind angle. The KNN- and MLP-based surrogates are orders of magnitude faster than the LES algorithm and use only a fraction of the computational resources. KNN and MLP can reconstruct time-averaged three-dimensional flow statistics with a coefficient of determination \(R^{2}>0.96\) for combined flow statistics when trained using many training samples (big-data regime). As the number of training samples is reduced, the accuracy of the MLP model deteriorates more gradually, featuring a superior performance overall. Examination of model performance in capturing individual flow statistics and their spatial variability highlights that ML-specific metrics (e.g., \(R^{2}\)) may lead to misplaced confidence in model performance from a physical standpoint. U rban Canopy, Machine Learning, Turbulence, Urban Climate [email protected] ## 1 Introduction The physical structure of cities controls and modifies the exchange of momentum, heat, water, and air pollutants between the land surface and the atmosphere (Belcher, 2005). Accurately predicting these exchanges is crucial across a wide range of applications, including local weather forecasting (Skamarock _et al._, 2008), climate projections (Murakami _et al._, 1999; Mochida & Lun, 2008; Toparlar _et al._, 2015), air-quality monitoring (Lee _et al._, 1997; Vardoulakis _et al._, 2003; Li _et al._, 2006; Boppana _et al._, 2010), and urban climate studies (Chen _et al._, 2012; Krayenhoff _et al._, 2020), to name but a few. The intricate interplay between turbulent airflow and urban geometry governs these complex processes. This interaction has been the subject of extensive research in recent decades, encompassing computational (Bouezid _et al._ 2004, 2009; Li _et al._ 2016\(b\); Li & Bou-Zeid 2019; Omidvar _et al._ 2020; Auvinen _et al._ 2020; Cheng _et al._ 2021), experimental (Raupach _et al._ 1980; Rotach 1994; Brown _et al._ 2001; Castro _et al._ 2006; Gromke _et al._ 2008; Pascheke _et al._ 2008), and observational (Rotach 1993, 1999; Kastner-Klein & Rotach 2004; Rotach _et al._ 2005; Christen _et al._ 2009; Gubler _et al._ 2021) approaches. With the ever-increasing availability of spatially-distributed data (Kumar _et al._ 2015; Middel _et al._ 2022) and modeling capabilities (Maronga _et al._ 2015), researchers are presented with a unique opportunity to address open challenges in the field of sustainable and resilient urban development. 
Data assimilation and uncertainty quantification techniques will become increasingly needed to fully leverage synergies between computational fluid dynamic (CFD) models and observations in microscale climate studies (Fletcher 2022). For example, in the context of wind engineering, it has been shown that spatially-distributed wind field observations at the neighborhood scale can enable a meaningful calibration of model parameters and quantification of parameter uncertainty (Sousa _et al._ 2018). These multi-query techniques typically require a large number of simulations (Smith 2013), thus making CFD approaches prohibitively expensive from a computational perspective. This is mainly due to the high operational complexity of these algorithms and inherent limitations in their ability to scale to high core counts (Choi & Moin 2012). To overcome this challenge, cost-effective approximations of the CFD solution are necessary. Surrogate models are designed specifically for this specific purpose. Surrogate models are simplified representations of complex physical systems that can serve as computationally efficient alternatives to CFD. According to Razavi _et al._ (2012), there are two types of surrogates: response surface modeling and lower-fidelity modeling. Response surface surrogates, known as metamodels (Blanning 1975; Kleijnen 2009), model emulation (O'Hagan 2006), and proxy models (Bieker _et al._ 2007) utilize data-driven techniques to approximate either the entire computational model or just a part of it. Examples of these models include techniques such as kriging (Webster & Oliver 2007), polynomial chaos expansion (Wiener 1938), and neural networks (Goodfellow _et al._ 2016). These approaches are typically non-intrusive and aim to efficiently approximate the input-output relation of a complex physical system without modifying the model formulation (Razavi _et al._ 2012; Garzon _et al._ 2022). Successful studies involved response surface surrogates include Bau & Mayer (2006); Zhu _et al._ (2019_a_); Enderle _et al._ (2020); Maulik _et al._ (2021); Yang _et al._ (2022). An alternative to using a non-intrusive surrogate model is to use lower-fidelity surrogates. These approaches rely on order reduction techniques that are grounded in physics and mathematics to enable an efficient evaluation of the system response (Razavi _et al._ 2012; Garzon _et al._ 2022). Lower-fidelity surrogates are intrusive and typically require in-depth modifications of the CFD solver, but typically retain reasonable physical characteristics and may be better suited for extrapolation tasks, such as emulating unseen regions of the parameter space (Razavi _et al._ 2012; Garzon _et al._ 2022). Based on this definition, approaches such as large-eddy simulation (LES), Reynolds-averaged Navier-Stokes, proper orthogonal decomposition, and dynamic mode decomposition techniques can be understood as low-fidelity surrogates of direct numerical simulations (Sagaut 2006; Wilcox _et al._ 1998; Schmid 2010; Berkooz _et al._ 1993). Recently, machine learning (ML), a subset of artificial intelligence, has emerged as a promising approach for solving forward and inverse problems in CFD. 
For instance, ML algorithms have been used to accelerate direct numerical simulations (Bar-Sinai _et al._ 2019; Kochkov _et al._ 2021; Jeon _et al._ 2022), improve turbulence modeling (Ling _et al._ 2016; Wang _et al._ 2017; Duraisamy _et al._ 2019; Beck _et al._ 2019), enrich turbulence fields (Liu _et al._ 2020; Fukami _et al._ 2021; Kim _et al._ 2021), and reconstructing three-dimensional flow fields from two-dimensional images (Hora _et al._, 2022; Yousif _et al._, 2023). In the context of surrogate modeling, ML-based response-surface models are an appealing approach for solving multi-output regression problems, given their ability to capture non-linear relationships between inputs and outputs. These models are typically supervised using a discrete subset of CFD output data, and once trained and validated, they can be used to efficiently evaluate the system response. Successful examples of ML-based surrogate models in the field of CFD include studies such as Moonen & Allegrini (2015); Zhu _et al._ (2019, 2020); Ganti & Khare (2020); Maulik _et al._ (2020); Tang _et al._ (2020); Palar _et al._ (2021); Maulik _et al._ (2021); Nikolopoulos _et al._ (2022); Yang _et al._ (2022). For instance, Maulik _et al._ (2021) developed a non-intrusive surrogate model based on a multi-layer perceptron (MLP) architecture to predict the eddy viscosity field in the context of Reynolds-averaged Navier-Stokes equations. They showed that surrogate models offer a viable approach to exploring vast parameter spaces efficiently. Zhu _et al._ (2019) proposed physics-constrained Convolutional Neural Networks for surrogate modeling of partial differential equation systems. They demonstrated that the proposed framework could be trained without labeled or training data by integrating fundamental physics laws and domain knowledge through constraint learning (Stewart & Ermon, 2017). In general, fusing physics and data enables models to make reliable predictions under both interpolative (training and test inputs are from the same distribution) and extrapolative (test input is out-of-distribution) conditions (Karniadakis _et al._, 2021). Given the vast range of available ML approaches, when designing ML-based surrogates, one should weigh various factors in deciding between traditional ML and more advanced algorithms, such as those based on deep learning (DL). These include the size of the dataset (big data or small data?), the mapping complexity between input and output pairs, and the computational cost. DL algorithms, especially MLP, are known as universal approximators and are particularly suitable for learning complex, nonlinear mappings, primarily when relationships between inputs and outputs are poorly understood (Hornik _et al._, 1989). However, they generally require a large amount of training data to achieve good performance, as they have many parameters and require more data to prevent overfitting (Goodfellow _et al._, 2016). In contrast, traditional ML algorithms are often more appropriate for straightforward, structured mappings and require relatively less training data to perform well (Hastie _et al._, 2009). Depending on the problem, these algorithms can also provide more interpretable results, making them suitable for applications where conceptual understanding is critical. 
Moreover, due to their complex architecture, DL algorithms typically require advanced software and hardware resources to learn effectively, including specialized software platforms (Nickolls _et al._, 2008; Abadi _et al._, 2016) and powerful graphical processing units. In contrast, traditional ML algorithms can be trained and deployed with modest computational resources, making them more accessible and easier to implement. This study explores the potential of ML and DL algorithms for developing surrogate models of turbulent flow in urban environments. Specifically, we focus on designing, developing, and contrasting the predictive abilities of a traditional-ML and a DL surrogate for three-dimensional flow statistics in urban canopies. Due to the broad parameter space of flow in urban canopies, this study is necessarily limited in scope, focusing specifically on characterizing the ability of these approaches to capture variabilities in flow statistics resulting from variations in the approaching wind angle under neutral ambient stratification. In real-world environments, the approaching wind angle is continuously changing, and these variations significantly impact spatially-distributed flow statistics. As a result, the approaching wind angle presents a suitable candidate for testing the predictive abilities of these models. For this task, we consider a K-nearest neighbor (KNN) and an MLP model. Based on our literature review, these approaches were deemed suitable for the problem under consideration. These surrogates are trained and evaluated using an extensive high-fidelity LES dataset of flow over and within idealized urban canopies featuring a range of approaching wind angles. Three training datasets are considered for the training of the models, each comprising an increasing number of data (small-, moderate-, and big-data regimes). The performance of the models is then evaluated against unseen data (test dataset) using ML-specific metrics, such as relative mean squared and mean absolute error, as well as turbulent flow diagnostics. This paper is structured as follows. The numerical algorithm and dataset are described in SS2.1 and 2.2, respectively. SS2.3 introduces the details of the KNN and MLP surrogate. Model predictions are examined in SS3 and further discussed in SS4. Concluding remarks are drawn in SS5. 
## 2 Methodology ### Numerical Setup The filtered Navier-Stokes equations for incompressible and Newtonian fluids are solved in their rotational form (Orszag & Pao 1975) to ensure the conservation of energy in the inviscid limit, i.e., \[\begin{cases}\frac{\partial u_{i}}{\partial t}+u_{j}(\frac{\partial u_{i}}{ \partial x_{j}}-\frac{\partial u_{j}}{\partial x_{i}})=-\frac{\partial\pi}{ \partial x_{i}}-\frac{\partial\pi^{SGS}_{ij}}{\partial x_{j}}-\Pi_{i}+f^{ \Gamma_{\rm b}}_{i}&\mbox{in}\;\Omega\times[0,T]\;,\\ \frac{\partial u_{i}}{\partial x_{i}}=0&\mbox{in}\;\Omega\times[0,T]\;,\\ \frac{\partial u_{i}}{\partial z}=\frac{\partial v}{\partial z}=w=0&\mbox{in} \;\Gamma_{\rm t}\times[0,T]\;,\\ u_{i}=0&\mbox{in}\;\Gamma_{\rm b}\times[0,T]\;,\end{cases} \tag{1}\] where \(t\) is time, \(x_{i}\) (\(i=1,2,3\)) denotes the \(i^{th}\) coordinate direction, \(x=x_{1}\), \(y=x_{2}\), and \(z=x_{3}\) denote the streamwise, cross-stream, and vertical coordinate directions, respectively, \(u_{i}\) is the \({\rm i}^{th}\) filtered velocity component, \(\pi\) is a modified filtered pressure field, namely \(\pi=\frac{\rho}{\rho}+\frac{1}{3}\overline{\dot{t}}_{ii}^{SGS}+\frac{1}{2}u_{ i}u_{i}\), \(\rho\) is a reference constant density, \(\tau^{SGS}_{ij}\) is the subgrid-scale (SGS) tensor, \(\Pi_{i}=\frac{1}{\rho}\frac{\partial p_{\infty}}{\partial x_{1}}\delta_{i1}+ \frac{1}{\rho}\frac{\partial p_{\infty}}{\partial x_{2}}\delta_{i2}\) is a constant pressure gradient introduced to drive the flow, and \(\widetilde{f}^{\Gamma_{\rm b}}_{i}\) is a forcing term that is used to impose the desired boundary condition at the surface location. \(\widetilde{f}^{\Gamma_{\rm b}}_{i}\) has a finite value at the buildings interface (\(\Gamma_{\rm b}\)) and is zero elsewhere. The LES algorithm was initially developed in Albertson & Parlange (1999\(a\),_b_). Equations are solved in strong form on a regular domain \(\Omega\), a pseudo-spectral collocation approach (Orszag 1969, 1970) based on truncated Fourier expansions is used in the \(x,y\) coordinate directions. In contrast, a second-order accurate centered finite differences scheme is adopted in the vertical direction, requiring a staggered grid approach for the \(\vec{u},\vec{v}\), \(\vec{p}\) variables (these are stored at \((j+1/2)\delta_{z}\), with \(j=1,nz\)). Time integration is performed via a fully explicit second-order accurate Adams-Bashforth scheme, and a fractional step method is adopted to compute the pressure field (Chorin 1968; Kim & Moin 1985). In addition, nonlinear terms are fully dealiased via the \(3/2\) rule, to avoid piling up energy in the high wavenumber range (Kravchenko & Moin 1997; Canuto _et al._ 2006). The computational boundary is partitioned as \(\partial\Omega=\Gamma_{\rm b}\cup\Gamma_{\rm t}\cup\Gamma_{I}\), where \(\Gamma_{\rm t}\) and \(\Gamma_{I}\) denote the top and lateral boundaries respectively. A free-lid boundary condition applies at \(\Gamma_{\rm t}\), and a no-slip boundary condition is prescribed at \(\Gamma_{\rm b}\) (see equation 1). An algebraic wall-layer model based on the equilibrium logarithmic law assumption is also applied at \(\Gamma_{\rm b}\) to evaluate tangential SGS stresses at the solid-fluid interface (Chester _et al._ 2007_a_). Periodic boundary conditions apply at \(\Gamma_{I}\) due to the Fourier spatial representation. SGS stresses in the bulk of the flow are parameterized using the Bou-Zeid _et al._ (2005) scale-dependent Lagrangian dynamic Smagorinsky model. 
To model the urban canopy, a discrete forcing approach immersed boundary method (IBM) is adopted (Mohd-Yusof 1997; Mittal & Iaccarino 2005; Chester _et al._ 2007_b_). Over the past two decades, this solver has been used to develop a series of algebraic SGS closure models for the bulk of turbulent flows (Meneveau _et al._ 1996; Porte-Agel _et al._ 2000; Porte-Agel 2004; Bou-Zeid _et al._ 2005; Lu & Porte-Agel 2010), wall-layer models (Hultmark _et al._ 2013), and immersed-boundary methods to accurately represent solid-gas interfaces (Tseng _et al._ 2006; Chester _et al._ 2007\(b\); Fang _et al._ 2011; Li _et al._ 2016_a_). It has also been extensively validated against field and laboratory measurements and used to gain insight into a range of applications involving different flow phenomena, spanning from atmospheric boundary layer flow over flat surfaces to flow over urban areas and forests (Tseng _et al._ 2006; Bou-Zeid _et al._ 2005, 2009; Fang _et al._ 2011; Shah & Bou-Zeid 2014; Pan _et al._ 2014; Fang & Porte-Agel 2015; Anderson _et al._ 2015; Giometto _et al._ 2016; Pan _et al._ 2016; Giometto _et al._ 2017\(a\),_b_). ### High-Fidelity Dataset To generate the training data, a series of LESs of open channel flow over two surface-mounted cubes with constant height \(h\) are performed, as illustrated in figure 1. The computational domain has a size of \(\Omega=[0,6h]\times[0,3h]\times[0,3h]\). This is a relatively modest domain dimension but will suffice based on the scope of this study. The Reynolds number based on friction velocity \(u_{\star}\), cube height \(h\), and air kinematic viscosity \(\nu\) is \(Re_{\tau}=10^{6}\). Under these conditions, the flow is in a fully rough aerodynamic regime, and viscous stresses can be safely neglected. The aerodynamic roughness length in the equilibrium wall-layer model at the solid-fluid interface is set to \(z_{0}^{IBM}=10^{-5}h\). The domain is discretized using \(96\times 48\times 48\) collocation nodes in the streamwise, cross-stream, and vertical directions. This results in 16 collocation nodes per cube edge, which is sufficient to yield resolution-independent results (Tseng _et al._ 2006; Yang & Anderson 2018). Simulations are integrated in time for over \(200T_{e}=200(3h)/u_{\star}\), where \(T_{e}\) is the eddy turnover time based on the height of the computational domain and \(u_{\star}\). Instantaneous velocity snapshots are collected every \(T_{e}/20\) during the last \(100T_{e}\) to evaluate statistically steady-state flow statistics. Each simulation took \(\approx 6.5\) hours on 48 cores of Intel Xeon Platinum 8160 "Skylake" compute node; LES model performance will be compared against those from surrogate models in SS3. In this study, we investigate the range of approaching wind angles \(\alpha\in[0^{\circ},90^{\circ}]\), which encompasses the entire range of flow variability for the considered domain. The value of \(\alpha\) is controlled via the individual components of the pressure gradient \(\Pi_{i}\). The parametric space is uniformly discretized using a step size \(\delta\alpha=5^{\circ}\) to generate training and test Figure 1: Side (a) and planar view (b) of the computational domain. datasets for our models. We then consider three training datasets: \(\alpha_{10}\), \(\alpha_{15}\), and \(\alpha_{30}\), and a test dataset \(\alpha_{test}\). 
The \(\alpha_{10}\) dataset contains statistics corresponding to \(\delta\alpha=10^{\circ}\), i.e., \(\alpha=\{0^{\circ},10^{\circ},20^{\circ},30^{\circ},40^{\circ},50^{\circ},60^{ \circ},70^{\circ},80^{\circ},90^{\circ}\}\), \(\alpha_{15}\) includes statistics corresponding to \(\delta\alpha=15^{\circ}\), i.e., \(\alpha=\{0^{\circ},15^{\circ},30^{\circ},45^{\circ},60^{\circ},75^{\circ},90^{ \circ}\}\), and \(\alpha_{30}\) denotes \(\alpha=\{0^{\circ},30^{\circ},60^{\circ},90^{\circ}\}\). The number of training samples, i.e., input-output pairs in \(\alpha_{10}\), \(\alpha_{15}\), and \(\alpha_{30}\) are 2,211,840, 1,548,288, and 884,736, respectively. Moving forward, we will refer to the three training datasets as the big-data (\(\alpha_{10}\)), moderate-data (\(\alpha_{15}\)), and small-data (\(\alpha_{30}\)) regimes. There is no specific rule of thumb for the number of training samples required to train an MLP network, as the success of this task depends on various factors such as the complexity of the problem, the size of the network, the quality of the data, and the desired level of accuracy. In practice, it is recommended to start with a small number of training samples and gradually increase the dataset's size while monitoring the model's performance on a validation/test set until the desired level of accuracy is achieved (Goodfellow _et al._, 2016). The three training datasets allow us to explore the optimal number of training samples required to train our surrogate models under different data regimes. To evaluate the performance of the surrogate model on unseen data (test cases), we utilize \(\alpha_{test}\). The \(\alpha_{test}\) comprises of statistics corresponding to \(\alpha=\{5^{\circ},25^{\circ},35^{\circ},55^{\circ},65^{\circ},85^{\circ}\}\) and it has 1,327,104 test samples. In order to accelerate the training process, it is recommended to scale data using preprocessing techniques such as min-max normalization or standardization, typically within the range \([-1,1]\) or \([0,1]\)(Goodfellow _et al._, 2016). In this study, the considered datasets are scaled using standardization techniques that adjust the datasets to have a zero mean and a unit standard deviation. Throughout the study, \(\overline{(\cdot)}\) denotes the time-averaging operation, and \(\langle\cdot\rangle\) denotes superficial volumetric averaging over horizontal slabs of thickness \(\delta_{z}\)(Schmid _et al._, 2019). ### Surrogate Modeling The objective of the work is to learn a mapping from the parametric space \(\mathbf{X}\in\mathcal{R}^{4}\) of three-dimensional geometric locations and wind angle to the solution space, i.e., the time-averaged turbulent flow statistics \(\mathbf{y}\in\mathcal{R}^{9}\). In particular, we aim to develop a mapping \(\mathcal{M}\) such that. \[\mathcal{M}:(x_{i},\alpha)\rightarrow(\overline{u_{i}},\overline{u^{\prime}_ {i}u^{\prime}_{j}})\, \tag{2}\] where \(x_{i}\) is the spatial location of a point in space, \(\overline{u_{i}}\) is the time-averaged mean velocity field in the \(i^{\text{th}}\) direction and \(\overline{u^{\prime}_{i}u^{\prime}_{j}}\) denotes the resolved Reynolds stress tensor. Two surrogate models are introduced in the next sections to achieve this objective. The first model is based on the KNN technique, a relatively traditional ML approach. The second is a more advanced model based on an MLP approach. The predictive abilities of these models will be intercompared in SS3. 
#### 2.3.1 K-Nearest Neighbors Regression Multi-output regression tasks involve predicting multiple output variables as a function of input quantities. Traditional ML algorithms such as linear regression, KNN, decision trees, and random forests are often used for this purpose (Murphy, 2012; Fix, 1985; Breiman, 2017, 2001). The \(\mathcal{M}\) mapping in this task is rooted in turbulent dynamics, which are highly nonlinear (Pope, 2000). As a result, linear regression models may struggle to accurately capture the \(\mathcal{M}\) mapping. The KNN model is a step up in ML model complexity and may be more suitable for such a task. The KNN algorithm is a supervised and non-parametric ML model that predicts the value of a query point by identifying the predefined number of training samples closest to such a point. This approach resembles standard interpolation techniques commonly used in the scientific community to estimate the value of a new data point based on a discrete set of known data points. The KNN model's simplicity and ease of implementation make it a desirable candidate for the multi-output regression task of this study. Moreover, the KNN algorithm requires only two hyperparameters - the number of neighbors \(k\) and the distance metric \(\mathcal{D}\) - making it relatively straightforward to fine-tune and optimize. In contrast, decision tree and random forest algorithms have more complex structures and require more tuning hyperparameters, making them less intuitive and more challenging to implement. For these reasons, our study used the KNN algorithm as a reference traditional ML approach to approximate the \(\mathcal{M}\) mapping. To obtain an optimal representation of \(\mathcal{M}\) using the KNN algorithm, we constructed five models with \(k\in\{1,2,3,4,5\}\) and evaluated their performance on \(\alpha_{test}\). We choose a statistical metric, the coefficient of determination (\(R^{2}\)), to evaluate the model performance. \(R^{2}\) determines how well a model predicts its outcome, and mathematically, it is defined as \[R^{2}=1-\frac{\sum_{i}^{N}(Y_{i}-\widehat{Y}_{i})^{2}}{\sum_{i}^{N}(Y_{i}- \widehat{Y})^{2}}\, \tag{3}\] where, \(Y_{i}\) and \(\widehat{Y}_{i}\) are the actual and predicted output, and \(N\) being the number of points in the dataset. Results from this evaluation procedure are reported in table 1. The behavior of \(R^{2}\) increasing and suddenly decreasing with increasing values of \(k\) for all three training datasets can be explained by the bias-variance tradeoff, which is a fundamental concept in ML and statistical modeling (Briscoe & Feldman, 2011). Maulik _et al._ (2021) also observed a similar behavior while using polynomial model regression to develop a mapping between initial conditions, i.e., two-dimensional spatial location and velocity field generated using a low-fidelity numerical algorithm such as a potential solver for Reynolds-averaged Navier-Stokes simulations. The bias-variance tradeoff refers to the tradeoff between the model's ability to fit the training data (bias) and generalize to new, unseen data (variance). A model with high bias is typically simplistic and cannot capture the complexity of the data, resulting in underfitting and poor predictive performance on both the training and test data. In contrast, a model with high variance is overly complex and typically overfits the training data, resulting in poor predictive performance on the test data. 
To achieve the right balance between bias and variance, ML methods typically involve selecting a model that is neither too simple nor too complex. This can be achieved through regularization, model selection, and ensemble methods (Briscoe & Feldman, 2011; Goodfellow _et al._, 2016). In the context of KNN algorithms, the value of \(k\) controls the complexity of the model. In general, larger values of \(k\) lead to simpler models with higher bias and lower variance, while smaller values of \(k\) lead to more complex models with lower bias and higher variance. To prevent underfitting and overfitting, we adopted the KNN algorithm with \(k=2\) and \begin{table} \begin{tabular}{c c c c c c} \(k\) & 1 & 2 & 3 & 4 & 5 \\ \(\alpha_{10}\) & 0.84 & 0.96 & 0.93 & 0.92 & 0.94 \\ \(\alpha_{15}\) & 0.90 & 0.88 & 0.89 & 0.87 & 0.87 \\ \(\alpha_{30}\) & 0.90 & 0.89 & 0.89 & 0.88 & 0.87 \\ \end{tabular} \end{table} Table 1: Test coefficient of determination (\(R^{2}\)) values for different numbers of neighbors (\(k\)) in a K-nearest neighbor (KNN) algorithm fits for the proposed experiments on test data (\(\alpha_{test}\)). The \(\alpha_{10}\), \(\alpha_{15}\), and \(\alpha_{30}\) are the big-, moderate-, and small-data regime training datasets. Euclidean distance as the \(\mathcal{D}\) and implemented it using the scikit-learn ML library in Python [20]. #### 2.3.2 Deep Neural Networks MLPs are a popular type of Deep Neural Networks (DNNs) and are widely utilized for performing supervised learning tasks such as classification or regression. They are primarily composed of dense layers. Each dense layer comprises several perceptrons, also called neurons, which are densely connected with perceptrons from the preceding layer, i.e., each perceptron in the layer is connected to every perceptron in the previous layer [1]. The input to a dense layer is a vector of size \(R^{d}\) representing the previous layer's output. The output is a vector of size \(R^{l}\) representing the activation of the current layers' perceptrons. The transformation performed by a dense layer can be described mathematically as follows: given an input vector \(\mathbf{x}\in\mathbf{R}^{d}\) and a weight matrix \(\mathbf{W}\in\mathbf{R}^{l\times d}\), the output \(\mathbf{y}\in\mathbf{R}^{l}\) is computed as: \[\mathbf{y}=\sigma(\mathbf{W}\mathbf{x}+\mathbf{b}) \tag{4}\] where \(\sigma\) is an activation function, \(\mathbf{b}\in\mathbf{R}^{l}\) is a bias vector, and the operator (+) represents element-wise addition. The weight matrix \(\mathbf{W}\) contains the learnable parameters of the dense layer. Each row of \(\mathbf{W}\) represents the weights connecting a neuron in the current layer to the perceptrons in the previous layer, and the bias vector \(\mathbf{b}\) represents the bias term added to the weighted sum of the inputs. During the forward pass, the input is first transformed linearly through the dot product of \(\mathbf{W}\) and \(\mathbf{x}\), and then non-linearly through the activation function \(\sigma\), producing the output \(\mathbf{y}\)[1]. The weights and biases of a dense layer determine the strength of the connections and the threshold at which the neuron activates. During training, the neural network iteratively learns the optimal values of \(\mathbf{W}\) by minimizing the discrepancy between predicted and actual values [1]. The choice of \(\sigma\) is critical when designing a DNN as it can significantly impact the network's training process and performance on a given task or objective [1]. 
\(\sigma\) introduces nonlinearity into the network, allowing it to learn and represent complex and nonlinear relationships between input and output data. The sigmoid and hyperbolic tangent (tanh) are commonly used \(\sigma\)'s but can suffer from the vanishing gradient problem [1]. This issue arises when the gradient of the loss function with respect to the network's weights approaches zero, resulting in slow weight updates. As a result, the network may be unable to learn effectively and may converge to suboptimal solutions, negatively impacting its performance in learning input-output pair relations. To address the limitations of the sigmoid and tanh \(\sigma\)'s, researchers have proposed alternative \(\sigma\)'s like the rectified linear units (ReLU) [15] and its variants, including leaky ReLU [14] and parametric ReLU (PReLU) [1]. While these \(\sigma\)'s have shown enhanced performance in DNNs, they are not exempt from issues. One common problem is the "dying ReLU" phenomenon, where a large number of perceptrons become inactive during training, negatively impacting the model's performance [15]. The Swish \(\sigma\) has recently gained attention as a promising alternative to widely-used \(\sigma\)'s like ReLU and its variants due to its ability to efficiently train DL models while balancing nonlinearity and smoothness effectively [17]. The Swish function is mathematically defined as \(f(x)=x\cdot\text{sigmoid}(\beta x)\), where \(\text{sigmoid}(\beta x)=[1+\exp(-\beta x)]^{-1}\). To evaluate the performance of the Swish function, Ramachandran _et al._ [1] conducted extensive experiments using standard neural network architectures such as ResNet, DenseNet, and Inception, [16, 17, 18] and datasets including CIFAR-10 and CIFAR-100 [15] and ImageNet [14]. The results show that the Swish function performs similarly or better than ReLU regarding accuracy and training efficiency. Additionally, Swish's non-monotonicity enables greater model expressiveness than ReLU or its variants, allowing it to represent a broader range of functions and patterns, making it an invaluable tool for modeling complex phenomena [15]. It is also found that Swish is less prone to the "dying ReLU" phenomenon. Given these properties, Swish is an appropriate choice for modeling complex phenomena in CFD, where efficient training and improved generalization performance are critical. To extract meaningful, complex, and hierarchical representations from the data, we stack multiple dense layers together and refer to them as a DNN [16, 17]. The dense layers between the input and output layers are called hidden layers because they transform the input to extract relevant features for the final output predictions. MLPs are trained by a backpropagation algorithm proposed in [14]. Backpropagation computes the gradient of the loss function with respect to trainable parameters using the automatic differentiation technique described in [15] and iteratively updates weights to minimize the loss. The most common choices of the loss function (\(\mathcal{L}\)) are the mean squared error (MSE) and mean absolute error (MAE). These errors can be expressed as \[\mathcal{L}=\frac{1}{N}\sum_{i}^{N}||Y_{i}-\widehat{Y}_{i}||_{n}\, \tag{5}\] with \(||\cdot||_{n}\) denoting the \(L^{n}-\) norm, \(n=1\) denoting the MAE, \(n=2\) the MSE, \(Y_{i}\) and \(\widehat{Y}_{i}\) the actual and predicted output, and \(N\) the number of points in the dataset. 
In the context of the \(\mathcal{M}\) mapping, we create an MLP-based network of 5 hidden layers, each with the same number of perceptrons. We use Swish non-linearity in all the hidden layers, while the output layer uses a linear activation function. To find the optimal architecture, we adopt grid search hyperparameter optimizations scheme [16] and only vary the number of perceptrons, considering the set of values \(\{32,64,128,256,512\}\). We choose \(R^{2}\), a statistical metric described in SS2.3.1, to evaluate the model performance. Further, the performance of the networks is evaluated on \(\alpha_{test}\) and reported in table 2. The analysis reveals that a shallower network, i.e., a network with five hidden layers and each layer with 32 perceptrons, has the least \(R^{2}\) value. We can also observe a significant improvement in the \(R^{2}\) as we increase the number of perceptrons in each layer, with the performance saturating beyond a certain point. This phenomenon can again be explained using the bias-variance tradeoff property [15]. The number of perceptrons determines the model's complexity, and increasing its complexity helps it learn complex patterns in the \begin{table} \begin{tabular}{l c c c c c} \# Perceptrons & 32 & 64 & 128 & 256 & 512 \\ \(\alpha_{10}\) & 0.89 & 0.95 & 0.97 & 0.97 & 0.97 \\ \(\alpha_{15}\) & 0.85 & 0.92 & 0.95 & 0.96 & 0.96 \\ \(\alpha_{30}\) & 0.76 & 0.87 & 0.90 & 0.91 & 0.91 \\ \end{tabular} \end{table} Table 2: Test \(R^{2}\) values for a different number of perceptrons in a five hidden layer multi-layer perceptron (MLP) model fits for the proposed experiments on \(\alpha_{test}\). The \(\alpha_{10}\), \(\alpha_{15}\), and \(\alpha_{30}\) are the considered training datasets. data. However, increasing the model's complexity also increases the risk of overfitting the training data, which results in a model that does not generalize well to new, unseen data. Beyond a certain number of perceptron, further increasing the model's complexity does not lead to any significant improvements in performance, resulting in saturation or diminishing returns. Based on the hyperparameter optimization analysis and bias-variance tradeoff, we adopt an MLP network with five hidden layers containing 128 perceptrons to represent the \(\mathcal{M}\) mapping. This architecture achieves the best performance on our dataset without evidence of overfitting or underfitting. The optimal MLP surrogate's architecture is detailed in table 3, which has 67,849 trainable and 0 non-trainable parameters. We acknowledge that more sophisticated hyperparameter tuning methods, such as random search, Bayesian optimization, and search with different network architecture, and regularisation schemes recommended in Goodfellow _et al._ (2016) could be explored to further improve the model's performance. As expected, the performance of DNNs depends on the number of training samples used. In our experiments, we observe that MLPs trained on a smaller number of training samples, such as \(\alpha_{30}\), exhibit lower performance compared to MLPs trained on larger datasets, such as \(\alpha_{10}\) and \(\alpha_{15}\), regardless of the specific MLP architecture used. This behavior is consistent with findings from the literature, where it was shown that a larger number of training samples provides a more comprehensive and representative set of information about the underlying distribution of the data (Shalev-Shwartz & Ben-David 2014; Goodfellow _et al._ 2016). 
In particular, a larger dataset reduces the risk of overfitting and improves the out-of-sample generalization. Moreover, a more extensive dataset size helps reduce the model's bias by providing a more representative sample of the parent data distribution. The MLP networks mentioned above are implemented using the TensorFlow machine learning library proposed by Abadi _et al._ (2016). These networks are trained end-to-end using backpropagation with the Adam optimizer (Kingma & Ba 2014), a stochastic gradient descent method. All trainable parameters are initialized randomly using values sampled from a uniform distribution, following the approach of Glorot & Bengio (2010). The learning rate is set to a constant value of \(1\times 10^{-3}\) throughout the training process, and the effective mini-batch size is 1024. The maximum number of epochs is set to 150. To introduce non-linear transformations, we relied on a Swish function (Ramachandran _et al._ 2017). As we worked on a supervised regression task, we adopted MAE as our loss function. ## 3 Results As shown in SS2.3.1 and SS2.3.2, both the KNN and MLP models yield relatively high \(R^{2}\) scores for the considered \(\alpha_{test}\) dataset. Specifically, as shown in tables 1 and 2, the MLP (KNN) model achieved a \(R^{2}=0.97\), \(0.95\), and \(0.90\) (\(R^{2}=0.96\), \(0.88\), and \(0.89\)) in the big-, moderate- and small-data regime, respectively, when evaluated against the \(\alpha_{test}\) dataset. These results imply that the models' ability to capture the \(\mathcal{M}\) mapping improves with an increasing amount of training data. However, while such a metric is frequently used for assessing the performance of ML models, it does not provide a direct interpretation of how well the model reproduces particular flow statistics or their spatial variations. This is particularly relevant from a fluid dynamics and atmospheric boundary layer perspective, where the ability to capture specific features and patterns is of great importance. Therefore, alternative metrics should be considered to fully assess model performance for these applications. The following sections will address this issue by examining ML model predictions in terms of variable-specific error metrics and spatially-distributed flow statistics. In this section, we evaluate the performance of the MLP- and the KNN surrogates by comparing predicted time-averaged flow statistics against corresponding reference LES values from the \(\alpha_{test}\) dataset. We analyze the convergence history of the MLP-based surrogate model in SS3.1 and present a quantitative analysis of ML-specific error metrics in SS3.2. SS3.3 focuses on model performance based on pseudocolor maps of the time-averaged flow field and velocity probability distribution functions, and SS3.4 evaluates the predictive ability of the surrogate models in terms of double-averaged turbulent flow profiles and aerodynamic coefficients of the underlying surface. Finally, in SS3.5, we discuss the computational efficiency of these models when compared to LES. ### Error Convergence for the MLP Model A MLP-based model is trained until convergence is reached, which is defined as the situation when the error between the predicted and actual output values reaches a plateau and the loss function no longer decreases with additional training epochs (Goodfellow _et al._, 2016). 
At this stage, the weights and biases in the network have been optimized to the point where further iterations no longer lead to significant improvement in performance. Figure 2 illustrates the convergence history of the MLP-based surrogate model for a range of pre-defined \(\delta\alpha\). The figure demonstrates that the MAE progressively decreases (figure 2 a), and \(R^{2}\) improves as the training progresses (figure 2 b). The error curve plateaus approximately after 150 epochs, implying that 150 epochs are sufficient to learn the input-output relation. After 150 epochs, the MAE values for each training dataset are \(\approx 5.0\times 10^{-2}\). Moreover, the \(R^{2}\) values for the \(\alpha_{10}\), \(\alpha_{15}\), and \(\alpha_{30}\) sets are 0.985, 0.982, and 0.976, respectively, suggesting that the MLP surrogate should accurately capture the desired input-output relation, regardless of the amount of training data. ### Relative Error Metrics The surrogate models were trained using the MAE error metric, which minimized the discrepancy between the model predictions and corresponding LES results for first- and second-order flow statistics. To evaluate the models' predictive abilities and determine their effectiveness in replicating the error metrics used during the training phase, we examine the RMAE relative error of each model for individual flow statistics based on the \(\alpha_{test}\) set. The RMAE relative error is defined as \[RMAE=\frac{||Y-\widehat{Y}||_{1}}{||Y||_{1}}\,, \tag{3.1}\] where, \(\widehat{Y}\) denotes the predicted value and \(Y\) is the corresponding "ground-truth" LES. Table 4 displays the RMAE values for both first- and second-order flow statistics based on the \(\alpha_{test}\) sets. Additionally, figure 3 provides a comparison of the RMAE trends for predicted time-averaged velocity and velocity variance components against those from the corresponding training datasets. What is apparent from table 4 is that both models feature relatively smaller RMAE values for the \(\overline{u},\overline{v}\) and normal resolved Reynolds stress components when compared to \(\overline{w}\) and resolved shear stress components. In the big-data regime (\(\alpha_{10}\)), the RMAE for first-order statistics from both ML models is \(\leqslant 5\%\) for the \(\overline{u}\) and \(\overline{v}\) components, whereas it increases up to \(28\%\) for \(\overline{w}\). Similarly, the normal resolved Reynolds stresses are predicted with an RMAE \(<9\%\), whereas errors are more significant for shear stresses, reaching up to \(30\%\) for the \(u^{\prime}v^{\prime}\) term. Further, in the moderate- (\(\alpha_{15}\)) and small-data (\(\alpha_{30}\)) regimes, the quality of KNN predictions substantially degrades, with larger percentage variations occurring for the \(\overline{u},\overline{v}\) (\(\text{RMAE}<15\%\)) and normal stresses components (\(\text{RMAE}<15\%\)), whereas that of the MLP surrogate decreases more gradually and features overall better performance. This behavior \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline S.No. 
& \(\overline{u}\) & \(\overline{v}\) & \(\overline{w}\) & \(\overline{u^{\prime}u^{\prime}}\) & \(v^{\prime}v^{\prime}\) & \(\overline{w^{\prime}w^{\prime}}\) & \(\overline{u^{\prime}v^{\prime}}\) & \(\overline{u^{\prime}w^{\prime}}\) & \(\overline{v^{\prime}w^{\prime}}\) \\ \(\delta\alpha=10^{MLP}\) & 0.028 & 0.033 & 0.265 & 0.063 & 0.080 & 0.067 & 0.274 & 0.160 & 0.182 \\ \(\delta\alpha=15^{MLP}\) & 0.039 & 0.053 & 0.335 & 0.078 & 0.096 & 0.082 & 0.342 & 0.192 & 0.214 \\ \(\delta\alpha=30^{MLP}\) & 0.096 & 0.115 & 0.437 & 0.130 & 0.129 & 0.106 & 0.485 & 0.245 & 0.287 \\ \(\delta\alpha=10^{KNN}\) & 0.028 & 0.048 & 0.283 & 0.068 & 0.086 & 0.062 & 0.302 & 0.156 & 0.185 \\ \(\delta\alpha=15^{KNN}\) & 0.116 & 0.145 & 0.479 & 0.140 & 0.145 & 0.103 & 0.460 & 0.261 & 0.296 \\ \(\delta\alpha=30^{KNN}\) & 0.116 & 0.145 & 0.475 & 0.140 & 0.143 & 0.101 & 0.456 & 0.257 & 0.295 \\ \hline \hline \end{tabular} \end{table} Table 4: Relative mean absolute error (RMAE) on \(\overline{u_{i}},\overline{u_{i}^{\prime}u_{j}^{\prime}}\) across different parametric space discretizations (\(\delta\alpha\) sets) using KNN- and MLP-based surrogate models. Figure 2: Convergence of mean absolute error (MAE) loss (a) and coefficient of determination (\(R^{2}\)) (b) for the multi-layer perceptron (MLP)-based surrogate during the training phase. Epochs indicate the number of iterations used in the learning process. The \(\alpha_{10}\), \(\alpha_{15}\), and \(\alpha_{30}\) are the big-, moderate-, and small-data regime training datasets. is especially apparent from profiles in figure 3. These profiles also highlight that the KNN model performs similarly in the moderate- and small-data regimes. The poor performance of the MLP model in the small-data regime can be attributed to the limited number of training samples characterizing this dataset, which is insufficient to fully represent the parent data distribution (Hastie _et al._, 2009; Goodfellow _et al._, 2016). Findings from this section highlight that the high \(R^{2}\) values shown in SS2.3.1 and SS2.3.2 are not sufficient to capture variations in individual flow statistics for the problem at hand. Relying solely on this metric may result in overestimating the model performance for specific applications. ### Time-Averaged Velocity Fields and PDFs Figures 4 and 5 depict the \(\overline{u}\) field over the \(z=0.5h\) plane. These maps compare the solutions obtained using KNN and MLP models, respectively, against the reference LES one. Based on visual inspection, one can easily conclude that predictions of the surrogate models in the big-data regime (figures 4 and 5 b) are in excellent agreement with LES results, especially for the MLP, whose predictions are virtually indistinguishable from the reference LES. In the moderate-data regime, discrepancies between the ML models and LES become more apparent. Specifically, the KNN model overpredicts \(\overline{u}\) in the shear layers forming on the side of the cubes, whereas the MLP model overpredicts \(\overline{u}\) in the wake regions. Visually, the MLP surrogate (figure 5 c) outperforms the KNN one (figure 4 c) by better capturing the observed flow variability in the selected plane. In the small-data regime, the KNN solution exhibits a qualitative similarity to the one in the moderate-data regime. This consistency is congruent with results shown in table 4 and figure 3, which demonstrated that the KNN model features similar relative RMAE errors regarding flow statistics for the moderate- and small-data regimes. 
In this data regime, the MLP solution substantially degrades when compared to the \(\alpha_{15}\) one, with local overpredictions of \(\overline{u}\) in the shear layers and spurious modes of variability throughout the considered plane. Figures 6 and 7 depict the \(\overline{u}\) field from the KNN and MLP surrogates, respectively, over Figure 3: Relative mean absolute error (RMAE) on test dataset (\(\alpha_{test}\)) for first-order flow statistics (\(\overline{u_{i}}\)) (a) and normal resolved Reynolds stresses (\(\overline{u_{i}^{\prime}u_{j}^{\prime}}\)) (b) using MLP and K-nearest neighbors (KNNs) models trained on considered training datasets, namely, \(\alpha_{10}\), \(\alpha_{15}\), and \(\alpha_{30}\). Solid (dashed) lines denote MLP (KNN) model predictions. a vertical cross-section at \(y=1.5h\). In the big-data regime, predictions from both surrogate models (figures 6 and 7 b) again show good agreement with the reference LES results (figures 6 and 7 a). In the moderate-data regime, the KNN model overpredicts the \(\overline{u}\) in the upper portion of the computational domain and yields a more rapid reattachment of the flow in the wake region (figure 6 c). Based again on visual inspection, the MLP surrogate (figure 7 c) outperforms the KNN overall but yields a stronger wake. To gain further insight on the problem, figure 8 shows the estimated probability density functions (PDFs) for \(\overline{u}\), \(\overline{v}\), and \(\overline{w}\) over the computational domain at \(\alpha=35^{\circ}\). The PDFs predicted by the MLP (solid line) and KNN (dashed line) surrogates are compared with that from the LES (black line). These plots help highlight challenges associated with predicting specific velocity values. As apparent from figure 8, predictions from both the KNN and MLP surrogates are in excellent agreement with corresponding LES values for the big-data regime (red). The KNN model features more significant discrepancies and both models tend to assign a lower probability density to extreme values at the right tail of the \(\overline{v}\) distribution (see figure 8 b) and small \(\overline{w}\) magnitudes (see figure 8 c). In the moderate-data regime, the accuracy of the KNN drops significantly for relatively larger values of \(\overline{u}\) and \(\overline{v}\) and errors also increase for small \(\overline{w}\) magnitudes. Note that dashed green lines are not visible in figure 8 because they overlap with the dashed blue ones. Conversely, we observe a modest decrease in the predictive ability of the MLP model for this regime, with larger discrepancies again occurring for relatively larger values of the \(\overline{u}\) and \(\overline{v}\). In the small-data regime, both the KNN and MLP models feature more significant errors throughout the range of values for the \(\overline{u}\) and \(\overline{v}\) velocities, although the PDF trends are qualitatively captured. Based on the above analyses, both KNN and MLP surrogates effectively predict unseen Figure 4: Horizontal slice \((x-y)\) of normalized time-averaged streamwise velocity \(\overline{u}/u_{\star}\) at \(z/h=0.5\) from the reference large-eddy simulation (LES) (a). Corresponding predictions using KNNs engineered for \(\alpha_{10}\) (b), \(\alpha_{15}\) (c), and \(\alpha_{30}\) (d) for an approaching wind angle \(\alpha=35^{\circ}\). \(\alpha\) values with reasonable accuracy in the big-data regime, capturing the nuanced flow field variability in the considered system. 
As the amount of data decreases, the accuracy of the KNN model drops more significantly than the MLP model, and discrepancies relative to the reference LES manifest themselves as an overprediction of \(\overline{u}\) in the shear layers generated by the cubes and towards the upper portion of the computational domain. ### Double-Averaged Flow Profiles When examining exchange processes between urban areas and the atmosphere, it is common to focus on time- and horizontally-averaged quantities, also known as double-averaged quantities. Conceptually, the choice of the horizontal averaging region should enable a sensible interpretation of the resulting vertical profiles, retaining the scales of interest while removing scales that will be described statistically (Schmid _et al._, 2019). In the considered open-channel flow setup, the flow is periodic in the horizontal directions and the interest is on vertical variations of flow statistics; thus, a sensible spatial-averaging region is a thin slab of thickness \(\delta_{z}\). In the past decades, significant attention has been given to developing one-dimensional models for the average wind patterns over urban landscapes that are horizontally homogeneous at the spatial scale of interest (Macdonald, 2000; Di Sabatino _et al._, 2008; Yang _et al._, 2016; Castro, 2017; Li & Katul, 2022). These profiles, along with associated aerodynamic parameters and profiles of higher-order velocity moments, play a vital role in phenomenological surface-flux models for urban climate and weather forecasting research (see, e.g., Skamarock _et al._, 2008; Grimmond _et al._, 2010; Chen _et al._, 2012). It is hence of interest to verify the abilities of the proposed surrogates in reproducing double-averaged profiles of relevant flow statistics. Figure 9 presents such profiles, with a lens on the mean Figure 5: Horizontal slice \((x-y)\) of normalized time-averaged streamwise velocity \(\overline{u}/u_{\star}\) at \(z/h=0.5\) from the reference LES (a). Corresponding predictions using MLPs engineered for \(\alpha_{10}\) (b), \(\alpha_{15}\) (c), and \(\alpha_{30}\) (d) for an approaching wind angle \(\alpha=35^{\circ}\). streamwise velocity (\(\langle\overline{u}\rangle\)) and normal resolved Reynolds stresses (\(\langle\overline{u^{\prime}u^{\prime}}\rangle\), \(\langle\overline{v^{\prime}v^{\prime}}\rangle\),\(\langle\overline{w^{\prime}w^{\prime}}\rangle\)) for a reference approaching wind angle \(\alpha=35^{\circ}\). Focusing on the \(\langle\overline{u}\rangle\) profile, in the big-data regime, the MLP (KNN) surrogate predicts such a quantity with an RMAE = 0.7% (1.8%). In the moderate-data regime, the performance of the MLP surrogate remains high, with an RMAE = 0.8%, whereas that of the KNN model deteriorates, with an RMAE = 13.0%. In the small-data regime, both surrogates overestimate \(\langle\overline{u}\rangle\) throughout the boundary layer, with an RMAE of 11.9% for the MLP and 13.0% for the KNN. Once again, predictions from the KNN for the moderate- and small-data regimes closely match. As shown in table 5, all models are accurately predicting normal resolved Reynolds stress profiles in the above-canopy region (\(z>h\)). The RMAE error for MLP (KNN) surrogate in the big-data regime is \(\leqslant\) 3% (\(\leqslant\) 6%) and it increases to \(\leqslant\) 9% (\(\leqslant\) 15%) in the small-data regime. 
In this region, models are also featuring relatively modest RMAE values for the \(\langle\overline{u^{\prime}w^{\prime}}\rangle\) Reynolds stress across the considered data regimes, with max (\(RMAE\)) = 8.3% for the MLP in the data-moderate regime. Interestingly, large RMAE values are observed (up to 17%) in both ML models across data regimes for \(\langle\overline{u^{\prime}v^{\prime}}\rangle\), with the KNN model outperforming the MLP. For this same quantity, the KNN performs well in the big-data regime and features more important errors in the moderate- and small-data regime. As shown in table 6, in the small- and moderate-data regimes, the differences between model outputs and reference LES are more significant in the urban canopy layer (UCL), i.e., the \(z\leqslant h\) interval. This is especially true for \(\langle\overline{u^{\prime}u^{\prime}}\rangle\) predictions by both MLP and KNN model in the small-data regime, for the \(\langle\overline{v^{\prime}v^{\prime}}\rangle\) (\(\langle\overline{u^{\prime}v^{\prime}}\rangle\)) predicted by the KNN (KNN and MLP) model(s) across data regimes. Despite these errors, the surrogates can predict the salient features of mean profiles, including an inflection in the mean streamwise velocity Figure 6: Vertical slice (\(y-z\)) of normalized time-averaged streamwise velocity \(\overline{u}/u_{\star}\) at \(y/3h=0.5\) from the reference LES (a). Corresponding predictions using KNNs for \(\alpha_{10}\) (b), \(\alpha_{15}\) (c), and \(\alpha_{30}\) (d) for an approaching wind angle \(\alpha=35^{\circ}\). Figure 8: Probability density function (PDF) comparison for time-averaged streamwise velocity \(\overline{u}/u_{\star}\) (a), cross stream velocity \(\overline{v}/u_{\star}\) (b), and the vertical velocity \(\overline{w}/u_{\star}\) (c) for an approaching wind angle \(\alpha=35^{\circ}\). Red lines correspond to the \(\alpha_{10}\) dataset, green lines to the \(\alpha_{15}\) one, and blue lines to the \(\alpha_{30}\). The dashed and solid lines correspond to KNN- and MLP-based surrogate models. The solid black line in (a), (b), and (c) denotes the reference LES results. The upward- and downward-facing triangles represents the KNN and MLP predictions. Figure 7: Vertical slice \((y-z)\) of normalized time-averaged streamwise velocity \(\overline{u}/u_{\star}\) at \(y/3h=0.5\) from the reference LES (a). Corresponding predictions using MLPs for \(\alpha_{10}\) (b), \(\alpha_{15}\) (c), and \(\alpha_{30}\) (d) for an approaching wind angle \(\alpha=35^{\circ}\). profile and local maxima of velocity variances towards the top of the canopy (\(z=h\)) and the ground. Further insight into model predictions can be gained by examining variations in aerodynamic surface parameters such as the displacement height (\(d\)) and the aerodynamic roughness length (\(z_{0}\)) across models, which are shown in table 7. 
As mentioned in the opening paragraph \begin{table} \begin{tabular}{l c c c c c c} \hline \hline RMAE & \(\alpha_{10}^{MLP}\) & \(\alpha_{10}^{KNN}\) & \(\alpha_{15}^{MLP}\) & \(\alpha_{15}^{KNN}\) & \(\alpha_{30}^{MLP}\) & \(\alpha_{30}^{KNN}\) \\ \(\langle\overline{u}\rangle/u\star\) & 0.007 & 0.018 & 0.008 & 0.130 & 0.119 & 0.130 \\ \(\langle\overline{u^{\prime}u^{\prime}}\rangle/u_{\star}^{2}\) & 0.029 & 0.029 & 0.048 & 0.144 & 0.085 & 0.144 \\ \(\langle\overline{v^{\prime}v^{\prime}}\rangle/u_{\star}^{2}\) & 0.020 & 0.056 & 0.050 & 0.051 & 0.051 & 0.050 \\ \(\langle\overline{w^{\prime\prime}w^{\prime}}\rangle/u_{\star}^{2}\) & 0.014 & 0.012 & 0.034 & 0.026 & 0.041 & 0.026 \\ \(\langle\overline{u^{\prime\prime}v^{\prime}}\rangle/u_{\star}^{2}\) & 0.174 & 0.114 & 0.105 & 0.152 & 0.172 & 0.153 \\ \(\langle\overline{u^{\prime\prime}w^{\prime}}\rangle/u_{\star}^{2}\) & 0.024 & 0.017 & 0.083 & 0.044 & 0.057 & 0.038 \\ \(\langle\overline{v^{\prime\prime}w^{\prime}}\rangle/u_{\star}^{2}\) & 0.037 & 0.014 & 0.024 & 0.159 & 0.102 & 0.157 \\ \hline \hline \end{tabular} \end{table} Table 5: RMAE on the double-averaged streamwise velocity and variance profiles with respect to LES simulation profile using KNN- and MLP-based surrogate models for the approaching wind angle \(\alpha=35^{\circ}\) above the urban canopy layer, i.e., \(z/h>1\). For \(\overline{u}/u_{\star}\), results correspond to the entire flow depth. of this section, these quantities are commonly utilized as input parameters in surface-layer parameterizations for weather forecasting (Stensrud and Yussouf, 2007) and earth system models (Hurrell et al., 2013). \(d\) and \(z_{0}\) are evaluated by fitting an equilibrium logarithmic law for aerodynamically rough surfaces to model predictions in the interval \(z\in[1.2h,2h]\)(Raupach et al., 1980), i.e., \[\langle\overline{u}\rangle(z)=\frac{u_{\star}}{\kappa}\ln\left(\frac{z-d}{z_{0 }}\right)\,, \tag{3.2}\] where the friction velocity \(u_{\star}\) is fixed to \(\sqrt{\Pi_{i}/(3h)}\) and \(d\), and \(z_{0}\) are treated as unknown parameters to be estimated. While the small scale separation between the domain height and cube height in our setup may not strictly justify the existence of an inertial sublayer region with a logarithmic velocity profile (Jimenez and Pinelli, 1999), we have observed a logarithmic profile in the fitting interval for each of the considered cases. This behavior can be explained by the cubes' constant height and relatively high packing density, which yields a shallow roughness sublayer and a start of a logarithmic region immediately above the mean canopy height \(h\), as also discussed in Li and Katul (2022). Considering the scope of this study, we hence find it reasonable to assume the existence of an inertial-sublayer region above the canopy and to evaluate model performance based on log-layer metrics. The MLP and KNN surrogates accurately predict \(d\) and \(z_{0}\) in the big- and moderate-data regimes. For the MLP model, the RMAE values for \(d\) and \(z_{0}\) are \(\leqslant 11\%\) and \(\leqslant 3\%\), respectively. Similarly, the KNN model accurately predicts \(d\) and \(z_{0}\) in the big-data regime, with RMAE values of \(\leqslant 9\%\) and \(\leqslant 1\%\), respectively. However, in the small-data regime, the accuracy of both the MLP and KNN models decreases significantly, with RMAE values exceeding \(26\%\) for \(d\) and \(36\%\) for \(z_{0}\). 
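The fit of equation (3.2) used to extract \(d\) and \(z_{0}\) can be reproduced with a few lines of SciPy, as sketched below. The von Kármán constant \(\kappa=0.4\), the parameter bounds, and the synthetic profile are assumptions made for illustration only (the reference \(d\) and \(z_{0}\) values are taken from table 7 purely as a consistency check); \(u_{\star}\) is treated as a known input, as in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

KAPPA = 0.4  # von Karman constant (assumed value; not specified in the text)

def fit_log_law(z, u_mean, u_star, h):
    # Fit <u>(z) = (u_*/kappa) ln((z - d)/z0) over z in [1.2h, 2h], eq. (3.2).
    mask = (z >= 1.2 * h) & (z <= 2.0 * h)

    def log_law(zz, d, z0):
        return (u_star / KAPPA) * np.log((zz - d) / z0)

    (d, z0), _ = curve_fit(log_law, z[mask], u_mean[mask],
                           p0=[0.5 * h, 0.1 * h],
                           bounds=([0.0, 1e-6], [h, h]))
    return d, z0

# synthetic profile built from the LES values of table 7, used only as a check
h, u_star = 1.0, 1.0
z = np.linspace(1.0, 3.0, 60)
u = (u_star / KAPPA) * np.log((z - 0.47) / 0.098)
print(fit_log_law(z, u, u_star, h))  # expected to recover d ~ 0.47, z0 ~ 0.098
```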
The KNN-based model also performs similarly in the moderate- and small-data regimes, once again confirming our previous findings. Revisiting the motivation for this section, it is apparent that the high \(R^{2}\) values shown in SS2.3.1 and SS2.3.2 are not sufficient to characterize surrogate model performance for the considered problem and physics-based errors metrics are instead required to guide the decision-making process. The following section examines gains in terms of computational efficiency provided by the considered surrogate models, and a comprehensive discussion of findings from this study will follow in SS4. ### Computational Efficiency of the Surrogates Models have been trained on a server featuring an NVIDIA RTX A6000 graphic card units (GPUs) and a AMD EPYC 7742 64-Core Processor. We report the wall-clock time required \begin{table} \begin{tabular}{l c c c c c c} \hline \hline RMAE & \(\alpha_{10}^{MLP}\) & \(\alpha_{10}^{KNN}\) & \(\alpha_{15}^{MLP}\) & \(\alpha_{15}^{KNN}\) & \(\alpha_{30}^{MLP}\) & \(\alpha_{30}^{KNN}\) \\ \(\langle\overline{u^{\prime}u^{\prime}}\rangle/u_{\star}^{2}\) & 0.027 & 0.042 & 0.028 & 0.165 & 0.133 & 0.165 \\ \(\langle\overline{v^{\prime}v^{\prime}}\rangle/u_{\star}^{2}\) & 0.045 & 0.043 & 0.048 & 0.077 & 0.059 & 0.077 \\ \(\langle\overline{w^{\prime}w^{\prime}}\rangle/u_{\star}^{2}\) & 0.024 & 0.014 & 0.022 & 0.020 & 0.034 & 0.020 \\ \(\langle\overline{u^{\prime}v^{\prime}}\rangle/u_{\star}^{2}\) & 0.196 & 0.201 & 0.141 & 0.232 & 0.337 & 0.232 \\ \(\langle\overline{u^{\prime}w^{\prime}}\rangle/u_{\star}^{2}\) & 0.062 & 0.051 & 0.042 & 0.094 & 0.067 & 0.094 \\ \(\langle\overline{v^{\prime}w^{\prime}}\rangle/u_{\star}^{\star}\) & 0.061 & 0.064 & 0.084 & 0.140 & 0.123 & 0.140 \\ \hline \hline \end{tabular} \end{table} Table 6: RMAE on the double-averaged streamwise velocity and variance profiles with respect to LES simulation profile using KNN- and MLP-based surrogate models for the approaching wind angle \(\alpha=35^{\circ}\) in the urban canopy layer, i.e., \(z/h\leqslant 1\). to train the MLP-based surrogate models and the inference time of both KNN- and MLP-based surrogates. The MLP surrogates are evaluated on the NVIDIA and AMD processors due to their high computational demands, whereas KNN-based surrogates are evaluated only on AMD processors. The training times of the MLP surrogates is 880 s, 607 s, and 345 s for the \(\alpha_{10}\), \(\alpha_{15}\), and \(\alpha_{30}\) training datasets, respectively. The corresponding training time on the AMD processor is 2903 s, 2166 s, and 1198 s, highlighting the substantial speedup provided by GPU technology. The training time decreases as the size of the training dataset is reduced, as could have been expected since the number of training samples reduces while the network architecture, batch size, and epochs are held fixed. As training samples decrease, the model requires fewer iterations to complete an epoch resulting in a shorter training time. As outlined in SS2.3.1, KNN-based surrogate models do not require a training phase, substantially reducing their deployment cost. Table 8 presents the wall-clock time required to make an inference, i.e., to predict the quantity of interest given the input, on a computational grid of size \(96\times 48\times 48\) for both MLP and KNN models. The MLP surrogates exhibit a constant inference time of 7.3 s due to the fixed network architecture. 
On the other hand, the KNN model shows varying inference times ranging from 0.6 s to 1.8 s depending on the number of training samples used to construct the surrogate model. A smaller training dataset reduces the number of samples used to predict the output for a new input sample, thus reducing the search space for identifying the nearest neighbors and resulting in faster prediction times. Consequently, the inference time for KNN-based surrogates decreases with decreasing size of the training dataset. More importantly, the inference time of both models is significantly shorter than the 6.5 hours required to run the LES. Specifically, the MLP model speedup is 3,200\(\times\), whereas the KNN speedup is 13,000\(\times\) in the big-data regime (39,000\(\times\) in the small-data regime). This analysis reveals that using ML-based surrogates can significantly reduce the computational cost of predicting time-averaged turbulent statistics when compared to LES approaches, thus making these models amenable for use in multi-query approaches such as uncertainty quantification and inverse problems. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{-} & LES & \(\alpha_{10}^{MLP}\) & \(\alpha_{10}^{KNN}\) & \(\alpha_{15}^{MLP}\) & \(\alpha_{15}^{KNN}\) & \(\alpha_{30}^{MLP}\) & \(\alpha_{30}^{KNN}\) \\ & \(d\) & 0.47 & 0.49 & 0.51 & 0.52 & 0.59 & 0.55 & 0.59 \\ & \(z_{0}\) & 0.098 & 0.095 & 0.097 & 0.095 & 0.063 & 0.069 & 0.063 \\ \hline \hline \end{tabular} \end{table} Table 7: Displacement height (\(d\)) and aerodynamic roughness length (\(z_{0}\)) coefficients from KNN- and MLP-based surrogates for \(\alpha=35^{\circ}\). \begin{table} \begin{tabular}{c c c c} \hline \hline & \(\alpha_{10}\) & \(\alpha_{15}\) & \(\alpha_{30}\) \\ Inference\({}^{KNN}\) (s) & 1.8 & 1.4 & 0.6 \\ Inference\({}^{MLP}\) (s) & 7.3 & 7.2 & 7.2 \\ \hline \hline \end{tabular} \end{table} Table 8: Computation time for predicting turbulent statistics for a fixed wind angle on a computational grid of size \(96\times 48\times 48\) using KNN- and MLP-based surrogate model for the training datasets. ## 4 Discussion on Model Performance This section offers a critical perspective on the previous findings and diagnoses the cause of observed discrepancies in model predictions. The results presented in SS3 demonstrate that the proposed ML surrogates qualitatively capture key features of velocity statistics in the considered flow system, particularly in the big-data regime. However, as discussed SS3.2 and shown in table 4, the accuracy of the ML models varies depending on the flow variable in question. Specifically, the \(\overline{u}\) and \(\overline{v}\) velocities, as well as the normal resolved Reynolds stresses, are accurately predicted by the ML models. In contrast, the \(\overline{w}\) velocity and resolved shear stresses exhibit larger errors in their predictions. Results also show that the accuracy of both ML models decreases as the amount of training data is reduced. The KNN model exhibited a steep drop in performance between the big- and moderate-data regimes, and a similar level of performance between the moderate- and small-data regimes. On the other hand, the MLP model showed a more gradual decline in performance, and outperformed the KNN model overall. These findings were further supported by trends observed in the spatially-distributed averaged \(\overline{u}\) field (as discussed in SS3.3) and the profiles of flow statistics (as described in SS3.4). 
This section will provide further insight into these findings and discuss approaches to improve the ML model performance. As mentioned earlier, both ML models show significant errors while predicting \(\overline{w}\) and resolved shear stresses. For the MLP surrogate, this behavior can be attributed to the use of a cumulative loss function during the training phase, which does not impose specific constraints on individual flow statistics. The cumulative loss function, denoted as \(L_{\text{MLP}}\), is defined as the sum of individual loss functions \(L_{\theta_{j}}\) for each flow statistic \(\theta_{j}\), i.e., \[L_{\text{MLP}}=\sum_{j=1}^{N}L_{\theta_{j}}\,. \tag{4.1}\] However, evaluating \(L_{\text{MLP}}\) in this manner leads to loss contributions that vary in magnitude, despite the input-standardization procedure discussed in SS2.2. As a result, variables with more significant loss contributions are typically prioritized during the optimization process, resulting in a lower RMAE compared to variables with a lesser loss contribution. This issue is exemplified in figure 10, where we show the trend of individual loss contributions (MAE) and corresponding RMAE errors during the training phase for the \(\alpha_{10}\) dataset. From figure 10 (a), it is apparent that quantities such as \(\overline{u^{\prime}u^{\prime}}\) and \(\overline{v^{\prime}v^{\prime}}\) (green and orange lines, respectively) feature a larger MAE when compared to, e.g., \(\overline{w}\) and \(\overline{u^{\prime}w^{\prime}}\) (blue and brown lines, respectively). Consequently, \(\overline{u^{\prime}u^{\prime}}\) and \(\overline{v^{\prime}v^{\prime}}\) have been prioritized during the training process, and are ultimately characterized by smaller RMAE values when compared to \(\overline{w}\) and \(\overline{u^{\prime}w^{\prime}}\). This explains the observed behavior. This finding suggests that individually constraining flow statistics in the loss function, by e.g. relying on weighted averaged loss contributions, may be a viable pathway to homogenize RMAE errors. The same objective may also be achieved by considering alternative loss functions, such as percentage-error-based approaches. Moreover, the analysis emphasizes the importance of carefully designing the loss function to ensure that the models can effectively capture the relevant information, which can be an extension of this work. Let's now focus on the KNN model. Unlike the MLP surrogate, the KNN model does not require any training. Therefore, discrepancies in the predicted flow statistics arise due to a different reason. As discussed in SS2.3.1, KNN model predictions for a given flow statistic depend on the distance between the test sample and training data points for that quantity. For the mapping \(\mathcal{M}\), the distance to the nearest neighbor is a function of the spatial coordinates \((x,y,z)\) and the approaching wind angle parameter \((\alpha)\). Hence, both quantities are crucial in determining the model performance. To gain insight into the observed KNN model behavior, figure 11 shows the spatial variability (mean, \(\pm\) one and two standard deviations) over the \(z=h/2\) plane of \(\overline{u}\) and \(\overline{w}\) as a function of \(\alpha\) (green lines), along with the \(\alpha\)-variability of these exact quantities at varying spatial locations in the plane (black dashed lines). 
As shown in the figure, the time and horizontally-averaged \(\overline{u}\) (\(\langle\overline{u}\rangle\), dark green line) displays significant variations along the \(\alpha\) axis, whereas the mean \(\overline{w}\) is zero, which is a constraint imposed by the double-periodic domain. Additionally, for a fixed \(\alpha\), both \(\overline{u}\) and \(\overline{w}\) exhibit significant spatial variability, as evidenced by the standard deviations shown in the figure, particularly for \(\alpha<45\). However, what is more relevant to our problem are the \(\alpha\)-variations at fixed spatial locations (black dashed lines). At the selected spatial locations, \(\overline{u}\) is characterized by modes of variability with wavelengths larger than 30 deg in the \(\alpha\) plane, and the \(\overline{w}\) is a function of \(\alpha\). The \(\alpha\)-variability of the considered quantities at varying spatial locations is shown in figure 12. Figure 11: Spatial variability of \(\overline{u}\) (a) and \(\overline{w}\) (b) as a function of the wind approaching wind angle \(\alpha\) at \(z=h/2\). Solid green lines depict the spatial mean \(\pm\) one (dark green) and two (light green) standard deviations of the considered variable over the horizontal plane, while dashed black lines represents \(\alpha\)-variability of the considered quantities at varying spatial locations. Figure 10: Convergence of MAE (a) and relative mean absolute error (RMAE) (b) for time-averaged velocities (\(\overline{u_{i}}\)), normal stresses (\(\overline{u_{i}u_{i}}\)) and shear stresses (\(\overline{u_{i}u_{j}}\)) for an MLP surrogate trained on the big-data regime (i.e., \(\alpha_{10}\)). axis, whereas \(\overline{w}\) displays high-frequency modes of variability and relatively modest average magnitudes. The smooth \(\overline{u}\) variability enables the KNN model to produce relatively accurate predictions in a relative-error sense. In contrast, the high-frequency variability of \(\overline{w}\) is the root cause of the observed significant discrepancies. Additionally, the fact that \(\overline{w}\) variations occur around a nominally negligible mean value increases the relative-error metric, exacerbating the observed discrepancies. Our analysis also revealed that the KNN surrogate performs similarly in moderate- and small-data regimes for most quantities. This problem can also be diagnosed. For the \(\alpha_{10}\) dataset, the KNN features an overall good performance and is able to capture most of the considered flow statistics correctly. This shows that the big-data regime enables the KNN to meaningfully select the nearest neighbor accounting for variability in both space and approaching wind angle (\(\alpha\)). On the contrary, for the \(\alpha_{15}\) and \(\alpha_{30}\) datasets, the KNN algorithm selects the nearest neighbor based solely on spatial distance (\(x\), \(y\), \(z\)), rather than incorporating wind approaching angle (\(\alpha\)) information. This behavior arises because of the large spacing between data points in the parametric space compared to the standardized physical domain. In other words, for the moderate- and small=data regimes, the KNN algorithm will select the closest \(\alpha\) training value to the desired point, which for the considered \(\alpha=35^{\circ}\) case is \(\alpha=30^{\circ}\), and produce an average between two identified neighboring points in space. 
To address these KNN model limitations, more advanced pre-processing techniques or physics-based distance function approaches could, in principle, be engineered (Bahmani & Sun 2021). Alternatively, the \(\mathcal{M}\) mapping could also be modeled via a KNN algorithm at each spatial location. However, this would increase the overall model cost and potentially yield spatially discontinuous flow fields. ## 5 Conclusion This study examined the accuracy and computational efficiency of two ML-based surrogate models for the prediction of flow with arbitrary approaching wind angle over an array of cuboids. The first model is based on the KNN approach, a traditional ML technique, whereas the second is a more advanced MLP model. Both models aim at approximating the mapping \[\mathcal{M}:(x_{i},\alpha)\rightarrow(\overline{u}_{i},\overline{u^{\prime}_ {i}u^{\prime}_{j}})\;, \tag{5.1}\] A uniformly spaced grid was adopted to discretize the parameter space of wind angles (\(\alpha\)) and three distinct training sets were designed: \(\alpha_{10}\), \(\alpha_{15}\), and \(\alpha_{30}\). Flow statistics (\(\overline{u}_{i}\) and \(\overline{u^{\prime}_{i}u^{\prime}_{j}}\)) were generated for each approaching wind angle via LES for training and evaluating the surrogate models. The MLP (KNN) model achieved a \(R^{2}=0.97\), \(0.95\), and \(0.90\) (\(R^{2}=0.96\), \(0.88\), and \(0.89\)) in the big-, moderate- and small-data regime, respectively, when evaluated against the \(\alpha_{test}\) dataset. This suggests that these models can capture the \(\mathcal{M}\) mapping to an increasing degree of fidelity as the amount of training data increases. In the big-data regime, corresponding to the \(\alpha_{10}\) training dataset, both the KNN- and MLP-based surrogates produced flow statistics that closely match corresponding LES values for most quantities. Error analysis for this regime showed that these models could reconstruct first-order statistics with an RMAE \(\leqslant\) 5% for \(\overline{u}\) and \(\overline{v}\) components and up to 28% for \(\overline{w}\). The normal components of resolved Reynolds stresses are predicted with an RMAE \(\leqslant\) 9%, while shear stresses show a more significant error up to 30%. For a chosen \(\alpha=35^{\circ}\) case (unseen data), predictions from these models in terms of \(\overline{u}\) and \(\langle\overline{u}\rangle\) are visually indistinguishable from the reference LES solution. Moreover, profiles of horizontally-averaged resolved Reynolds stress tensor components within and above UCL can be accurately predicted by both models with RMAE \(\leqslant\) 7%, except for \(\overline{u^{\prime}v^{\prime}}\) which features RMAE = 20%. Last, models predict aerodynamic coefficients such as \(z_{0}\) and \(d\) with RMAE \(\leqslant\) 11% and \(\leqslant\) 3%, respectively. In the moderate- and small-data regimes, corresponding to the \(\alpha_{15}\) and \(\alpha_{30}\) training datasets, respectively, the MLP model outperforms the KNN. The predictive accuracy of the MLP model decreases gradually as the training data is reduced, whereas that of the KNN model features a stark drop when compared to that in the \(\alpha_{10}\) dataset, and small variability is observed between the \(\alpha_{15}\) and \(\alpha_{30}\) predicted flow statistics. Specifically, in the moderate- and small-data regimes, the RMAE error for the \(\overline{u}\) and \(\overline{v}\) 3-D fields using MLP (KNN) surrogate model is up to 6% (15%) and 12% (15%), respectively. 
In the moderate-data regime, the MLP (KNN) surrogate model predicts the resolved 3-D field of normal Reynolds stresses with an RMAE of \(\leqslant\) 10% (\(\leqslant\) 15%), whereas corresponding RMAE values for the resolved shear stress components is 35%(46%). Conversely, in the small-data regime, the RMAE for normal and shear stress components using MLP (KNN) can increase up to 13% (15%) and 49% (46%), respectively. For the reference \(\alpha=35^{\circ}\) case, the \(\overline{u}\) 3-D field is captured relatively accurately by the MLP model in the data-moderated regime, whereas large discrepancies can be observed in both models in the small-data regime. Model discrepancies emerge predominantly in the wake region and in shear layers forming around the cubes. In terms of horizontally-averaged profiles, in the moderate- (small-)data regimes, RMAE for the resolved normal Reynolds stress components using the MLP model is 5%(9%) above the UCL and 5%(14%) within the UCL. On the other hand, KNN exhibits a relatively larger RMAE for the resolved normal Reynolds stress components 15%(15%) above the UCL and 17%(17%) within the UCL. Conversely, the RMAE for resolved shear stress profiles in these data regimes can go up to 34% for both ML models. Finally, the accuracy of both models in the moderate- and small-data regimes for aerodynamic coefficients decreases significantly, featuring an RMAE value \(\leqslant\) 26% for \(d\) and \(\leqslant\) 36% for \(z_{0}\). In terms of computational efficiency, both models yield a significant speedup, 10\({}^{3}\times\) for the MLP model and 10\({}^{4}\times\) for the KNN, making them attractive approaches for multi-query applications. Overall, the MLP approach proved capable of effectively learning the considered complex and nonlinear mapping but required specialized computing infrastructure and resources. The KNN performed relatively worse in the moderate- and small-data regimes but was relatively straightforward to implement and deploy with limited computational resources. Findings suggest that the selection between the MLP and KNN approaches for a given application should depend on specific requirements, available data, and computational resources. When an application has abundant data and advanced computational resources, the MLP surrogate may be preferable due to its superior performance. Conversely, if an application has limited computational resources, the KNN approach may be more suitable, providing faster computation and reasonable accuracy at a fraction of the cost. Findings also highlighted that the \(R^{2}\) metric fails to capture the nuanced variability of individual flow statistics for the problem under consideration and may lead to misplaced confidence in model performance. Our study demonstrates that ML-based surrogates are effective in creating approximations of flow statistics over urban areas, as shown in our specific case of neutrally stratified equilibrium flow over an idealized urban environment and arbitrary approaching wind angle. This approach can be readily extended to more complex urban environments, including those with more realistic surface morphologies and atmospheric stability. However, developing a generalized version of the proposed models hinges on the availability of extensive CFD datasets and represents a major challenge due to the high computational cost of CFD simulations. 
Nevertheless, the benefits of this approach are significant, as it has the potential to drastically reduce the computational cost of CFD and enable faster design iterations required in uncertainty quantification and inverse problems.
2309.16422
Cyber Sentinel: Exploring Conversational Agents in Streamlining Security Tasks with GPT-4
In an era where cyberspace is both a battleground and a backbone of modern society, the urgency of safeguarding digital assets against ever-evolving threats is paramount. This paper introduces Cyber Sentinel, an innovative task-oriented cybersecurity dialogue system that is effectively capable of managing two core functions: explaining potential cyber threats within an organization to the user, and taking proactive/reactive security actions when instructed by the user. Cyber Sentinel embodies the fusion of artificial intelligence, cybersecurity domain expertise, and real-time data analysis to combat the multifaceted challenges posed by cyber adversaries. This article delves into the process of creating such a system and how it can interact with other components typically found in cybersecurity organizations. Our work is a novel approach to task-oriented dialogue systems, leveraging the power of chaining GPT-4 models combined with prompt engineering across all sub-tasks. We also highlight its pivotal role in enhancing cybersecurity communication and interaction, concluding that not only does this framework enhance the system's transparency (Explainable AI) but also streamlines the decision-making process and responding to threats (Actionable AI), therefore marking a significant advancement in the realm of cybersecurity communication.
Mehrdad Kaheh, Danial Khosh Kholgh, Panos Kostakos
2023-09-28T13:18:33Z
http://arxiv.org/abs/2309.16422v1
# Cyber Sentinel: Exploring Conversational Agents' Role in Streamlining Security Tasks with GPT-4 ###### Abstract In an era where cyberspace is both a battleground and a backbone of modern society, the urgency of safeguarding digital assets against ever-evolving threats is paramount. This paper introduces _Cyber Sentinel_, an innovative task-oriented cybersecurity dialogue system that is effectively capable of managing two core functions: explaining potential cyber threats within an organization to the user, and taking proactive/reactive security actions when instructed by the user. Cyber Sentinel embodies the fusion of artificial intelligence, cybersecurity domain expertise, and real-time data analysis to combat the multifaceted challenges posed by cyber adversaries. This article delves into the process of creating such a system and how it can interact with other components typically found in cybersecurity organizations. Our work is a novel approach to task-oriented dialogue systems, leveraging the power of chaining GPT-4 models combined with prompt engineering across all sub-tasks. We also highlight its pivotal role in enhancing cybersecurity communication and interaction, concluding that not only does this framework enhance the system's transparency (Explainable AI) but also streamlines the decision-making process and responding to threats (Actionable AI), therefore marking a significant advancement in the realm of cybersecurity communication. LLMs, Cybersecurity, GPT-4, Chatbots, Language Model Chaining, Explainable AI, Actionable AI ## I Introduction In the modern digital landscape, cybersecurity has emerged as a pivotal concern, as organizations and individuals alike are becoming increasingly reliant on interconnected technologies. The rapid proliferation of digital assets, cloud services, and the Internet of Things has brought about unprecedented conveniences but has also introduced intricate security challenges. The omnipresence of cyber threats, ranging from malware and phishing attacks to sophisticated hacking attempts, has underscored the need for robust defensive mechanisms that can adapt and respond to evolving tactics. In a digital world that is expanding each day in size, it is not reasonable to expect human experts to sift through millions of data files and logs, hoping to find possible threats on their own. Over the years, artificial intelligence tools have provided significant contributions to various cybersecurity tasks, ranging from predictive use cases such as vulnerability analysis [1, 2], threat hunting [3, 4], intrusion prediction [5, 6] and intrusion detection [7, 8, 9] to responsive measures like automated mitigation [10] and remediation [11]. Generative AI (GenAI) [12] has especially shined throughout the recent years with the emergence of Transformer-based Large Language Models (LLMs) such as BERT [13] and GPT [14, 15]. The application of LLMs in cybersecurity remains relatively underexplored compared to other fields, especially considering their demonstrated efficacy in diverse domains. In this paper, we aim to investigate the applicability of LLMs in a number of cybersecurity-related tasks. Specifically, we introduce a conversational agent called _"Cyber Sentinel"_, backed by the OpenAI GPT-4 language model, and explore its usability in streamlining common security tasks. 
Cyber Sentinel's goal is to assist in querying and analyzing data from a Cyber Threat Intelligence (CTI) feed, such as those available in common Open Source Intelligence (OSINT) frameworks and feeds, and present this intelligence in a user-friendly format to a security operator for further processing. Additionally, Cyber Sentinel is designed to handle a number of security actions in response to a threat, such as updating firewall rules and updating SIEM configurations. The objectives are contextualized within the realms of explainable AI [16] and actionable AI [17], both of which have garnered significant attention in the recent AI literature [18]. In short, we aim to answer the following research questions: * **RQ1:** Can LLMs be utilized to understand cybersecurity logs, events, and threat feeds and explain them adequately to a human operator? _(Explainable AI)_ * **RQ2:** Can LLMs take security actions based on instructions from a human operator? _(Actionable AI)_ Our experiments show that LLMs can indeed be used for these tasks with minimal effort to some extent, but gauging their precise effectiveness in an operational environment is challenging. In this work, we will implement some basic functionalities regarding each task and discuss the results. Our main emphasis is on showcasing the potential of GenAI in cybersecurity (as shown in Fig. 1) rather than proposing a novel, end-to-end tool for facilitating cybersecurity pipelines. The structure of this paper is as follows: Section II covers related work and prior research in GenAI and Cybersecurity. Section III delves into Cyber Sentinel, an approach to leverage conversational agents in cybersecurity verticals. Section IV presents findings from the experiments, their potential implications, and inherent limitations. The paper wraps up in section V, summarizing the main points and suggesting directions for future research. ## II Related Work This section will present a literature review, focusing on Generative AI, Network Security, and the intersection of the two. ### _Generative AI_ Generative Artificial Intelligence has emerged as a transformative technology that holds significant promise across various domains, including cybersecurity. Generative AI encompasses a range of techniques that enable machines to create novel content in different formats, including but not limited to sound, text, or images, and is often indistinguishable from human-generated outputs [19]. This technology is underpinned by deep learning architectures, such as _Generative Adversarial Networks (GANs)_[20] and _Variational Autoencoders (VAEs)_[21], which have demonstrated remarkable proficiency in generating images, text, and even complex data sequences. The field is not solely focused on image generation though, with another class of generative models being transformers [22] which target text generation. One of the front-runners of these transformer-based generators is a Large Language Model released by OpenAI called _GPT_[15], with the latest version being GPT-4 [23]. GPT-4 can perform a plethora of language-related tasks, such as language translation, sentiment analysis, and question answering, with minimal task-specific fine-tuning [24]. This remarkable versatility has led to widespread exploration of GPT's applications in fields beyond language, including data augmentation [25], medicine [26, 27] and even law [28]. 
Although these models are capable of understanding most questions and situations within their context, they sometimes face difficulties in comprehending ambiguous or highly complex queries due to biases in their understanding. Moreover, in specialized expert fields, the abundance of unique abbreviations exacerbates the models' challenges in understanding context, resulting in imprecise and unproductive responses [29]. One of the approaches to overcome the mentioned limitation of LLMs is called Prompt Engineering, which is the process of crafting effective and precise instructions (i.e., prompts) when working with language models like GPT. It involves formulating input text in a way that guides the model to generate the desired output. Proper prompt engineering is important to obtain accurate and relevant responses from the model, as it helps to influence the way the model interprets and generates text [30]. There has been a great effort from the community towards this direction, resulting in prompt engineering almost becoming its own sub-field within language modeling [31, 32, 33]. Another important aspect is tracking the current user state in a task-oriented dialog context. _Dialog State Tracking_ (DST) [34] provides control over a set of slots that are supplied to the conversational agent and also facilitates its interaction with other components. A _"slot"_ refers to a designated and structured data field within a conversation or input. These slots are utilized to extract specific pieces of information, such as user interests, preferences, or parameters, to facilitate effective communication and task execution. The most important slot in a conversation is _"intent"_, which is the general theme of the conversation that the user's request mainly revolves around. DST should accumulate adequate information during the conversation with the AI and is responsible for keeping the dialog state and slot values updated at each turn of the conversation. Among the approaches that address DST, there are some that take slots into account independently. An approach to capture such intricate relationships involves the application of self-attention mechanisms [35]. Rather than relying solely on automatic relationship learning, an alternative research avenue leverages the existing knowledge inherent in domain ontologies. One compelling strategy involves harnessing the hierarchical structure present in these ontologies, as demonstrated by [36]. ### _Network Security_ Network security stands as a cornerstone of modern cybersecurity strategies, aiming to protect critical data and systems from unauthorized access, attacks, and data breaches. With the evolving threat landscape, traditional security measures have been supplemented by advanced technologies and methodologies. In recent literature, various approaches have been explored to enhance network security. Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) have been widely studied as means to detect and prevent unauthorized access and malicious activities [37, 38]. Machine learning algorithms, such as anomaly detection and behavioral analysis, are also gaining traction in network security due to their ability to identify patterns indicative of suspicious behavior [39, 40, 41]. Security Information and Event Management (SIEM) systems have emerged as essential tools for monitoring and managing security-related events within an organization's IT infrastructure. These systems collect and analyze vast amounts of security data from different sources, including network devices, servers, applications, and user activity logs. SIEMs offer real-time monitoring, threat detection, incident response, and compliance reporting, thereby enabling security teams to identify and respond to potential threats promptly [42]. Recent research focuses on enhancing SIEM capabilities through integration with machine learning and AI techniques. This fusion enables SIEMs to more accurately identify abnormal behavior and potential security incidents, reducing false positives and enhancing the efficiency of incident response [43, 44, 45]. 
Cyber Threat Intelligence (CTI) plays a crucial role in proactive cybersecurity measures by providing valuable insights into emerging threats, threat actors, and their tactics, techniques, and procedures (TTPs). CTI involves collecting, analyzing, and sharing information about cyber threats to empower organizations with the knowledge needed to anticipate and counteract potential attacks [46]. Modern applications of CTI include leveraging big data analytics and AI-driven techniques to process and analyze vast amounts of threat data in real-time [47, 48]. Automated threat intelligence platforms assist security teams in rapidly identifying and assessing new threats, facilitating quick decision-making and proactive defense measures. ### _LLMs and Cybersecurity_ AI's ability to generalize has effectively replaced traditional rule-based methods with smarter technology [49]. Nevertheless, the evolving digital environment is not only enhancing technology but also increasing the complexity of cyber threat actors. In the past, cyberspace dealt with relatively basic intrusion attempts, albeit in large numbers; however, the advent of AI-empowered cyberattacks initiated a completely new era, introducing both familiar and unfamiliar changes to cyberattack methods [49, 50]. Thus, one could argue that the advancement of GenAI tools in cybersecurity serves as a double-edged sword, aiding both the defenders and the adversaries [51]. On the offensive front, LLMs and chatbots (especially ChatGPT) have been used on a range of tasks such as malware development [52], phishing attacks [53], misinformation [54] and even false data injection targeting networks [55] and industrial systems [56]. An astute security expert might point out that most of these LLM-based approaches are neither novel nor efficient in achieving their goal. An important point worth mentioning here would be that information regarding cyber offenses involving malicious actions is generally prohibited in many jurisdictions due to legal and ethical considerations, which restricts its accessibility. The availability of large language models like ChatGPT can potentially ease the scarcity of resources for individuals with limited knowledge or skills seeking to bypass ethical constraints when engaging in cyber offenses. Since these LLMs offer a vast amount of information in one location, they can easily provide the comprehensive data necessary to execute various cyber offenses. On the other hand, LLMs have been just as effective in helping security experts defend against this new wave of cyber threats. Ransomware mitigation [57], synthetic CTI generation [58] and intrusion detection [59] are all examples of defensive use cases LLMs have had so far. Chatbots are particularly of interest lately, with some works focusing on promoting security awareness [60] or even finding system vulnerabilities [61]. Perhaps the closest use case to our work is _ChatIDS_[62], which is a chat-based AI designed to explain IDS alerts to non-experts by using large language models. ## III Methodology In this section, we will discuss our proposed novel framework. Subsection III-A will introduce Cyber Sentinel and give an overview of the other components it interacts with. Subsection III-B goes into finer details of the conversational agent itself and discusses how it is prompted by the user and how it communicates with other components. ### _Components_ Our proposed framework has four components, all of which are outlined in Fig. 
2 and discussed in detail below: #### Iii-A1 IoC Signature Database Our CTI module is comprised of an Elasticsearch cluster which is continuously updated by a stream of Indicators of Compromise (IoCs) coming from OSINT data sources. These OSINT feeds are widely used by both academic and industrial cybersecurity communities. Table I includes all of the OSINT sources that were used in our work. Elasticsearch offers an extensive API for integration with other components, which is one of the reasons it is commonly used by the cybersecurity community. Whenever a user inquires about any CTI-related resources (e.g. "IoCs reported in the last week"), Cyber Sentinel constructs the appropriate query and then executes that query on the Elasticsearch database through its API. #### Iii-A2 Conversational Agent The conversational agent is the core part of the framework, integrating every other component together, and also acts as a proxy between system components and the user. It embodies a search engine architecture distinguished by a state-of-the-art, AI-driven interactive response mechanism backed by an LLM. This innovative design empowers users to interact with the software using natural language, formulating their queries in a conversational manner. For instance, a user might inquire, _"Can you display the recent updates from the IP addresses reported on the last day?"_. In reply, the system adeptly compiles and presents relevant data to the user in an intuitive and user-friendly format. #### Iii-A3 Siem The SIEM module, empowered by Wazuh, assumes a pivotal role in enhancing the overall security of the system. SIEM delivers a comprehensive array of functionalities, empowering systems to not only detect and monitor potential cybersecurity threats but also to effectively respond to them. Wazuh stands as an open-source security platform renowned for its all-encompassing security monitoring, threat detection, and incident response capabilities. Crafted for profound extensibility and tolerability, Wazuh seamlessly melds a robust SIEM system with an advanced Intrusion Detection System (IDS). The platform centrally accumulates and integrates security-related data originating from various sources including logs, events, and network traffic, thoroughly looking out for potential cybersecurity risks. Wazuh is also capable of fusing real-time threat identification with proactive and reactive response mechanisms, rapidly countering security incidents and eliminating potential threats. Wazuh's design is based on a server-client architecture, with the main components being Wazuh Manager (server) and Wazuh Agent (client). A typical Wazuh setup includes one (or more in distributed setups) Wazuh manager which collects and analyzes security data from various sources, such as logs, events, and alerts, generated by multiple agents installed on monitored systems. Agents are also capable of taking actions such as updating firewalls on the monitored nodes when instructed by the manager. Optionally, the manager can be complemented with _Elasticsearch_ and _Kibana_ to index and visualize the collected data. One of the main use cases of Wazuh is intrusion detection, which is handled through alerts. Alerts are notifications generated by Wazuh when it detects specific security events or suspicious activities on monitored systems. Wazuh uses rules to analyze security-related data, including log files, system events, and configuration changes. These rules define patterns and conditions that, when met, trigger alerts. 
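Returning to the IoC signature database described at the start of this subsection, a typical CTI lookup such as "IoCs reported in the last week" can be sketched as a date-range query sent through the Elasticsearch Python client. The index name `iocs`, the field name `reported_at`, and the endpoint are placeholders assumed for this illustration; the snippet uses the 8.x-style client API.

```python
from datetime import datetime, timedelta, timezone
from elasticsearch import Elasticsearch  # official Python client (8.x-style API)

es = Elasticsearch("http://localhost:9200")   # placeholder endpoint

def iocs_reported_since(days: int = 7, index: str = "iocs"):
    """Return IoC documents whose report timestamp falls in the last `days` days."""
    since = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
    query = {"range": {"reported_at": {"gte": since}}}
    resp = es.search(index=index, query=query, size=100)
    return [hit["_source"] for hit in resp["hits"]["hits"]]

# e.g. recent_iocs = iocs_reported_since(days=7)
```

In the full system, the query body itself is formulated from the slots extracted by the conversational agent rather than hard-coded as above.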
Wazuh also assigns severity levels to alerts, which indicate the potential impact and importance of the detected event; these range from _Low_ threat levels up to _Critical_ levels. In addition to alerting, Wazuh can be configured to trigger response actions when certain alerts occur. These actions can include executing scripts, blocking IP addresses, or other mitigation measures to address security breaches based on threat level. Wazuh comes with a wide set of predefined rules and responses suitable for most use cases, but it also has the option to define customized rules. These custom rules can be set to check all incoming traffic against a blacklist of IoCs stored in Wazuh's Centralized Database (CDB). As shown in Fig. 1, this capability was used in our work to create custom notifications based on user requests. In other words, security operators can update this CDB through conversation with Cyber Sentinel, and the custom rules then check all traffic in a network against the blacklist and generate alerts if any traffic matches the defined rules. Cyber Sentinel can also be instructed to update Wazuh response actions, for instance, to block some of the indicated IPs. \begin{table} \begin{tabular}{|c|c|} \hline **Source Name** & **Website URL** \\ \hline Abuse URL & [https://urlhaus-api.abuse.ch/](https://urlhaus-api.abuse.ch/) \\ \hline Abuse Malware & [https://urlhaus-api.abuse.ch/](https://urlhaus-api.abuse.ch/) \\ \hline Malware Bazaar & [https://bazaar.abuse.ch/](https://bazaar.abuse.ch/) \\ \hline AlienVault & [https://otx.alienvault.com/](https://otx.alienvault.com/) \\ \hline Anomali & [https://www.anomali.com/](https://www.anomali.com/) \\ \hline \end{tabular} \end{table} TABLE I: List of OSINT threat feeds used for Cyber Sentinel in this paper. Fig. 2: An overview of Cyber Sentinel and the components it interacts with. #### Iii-A4 Large Language Model The LLM module serves as the backbone component of the entire system, facilitating the conversational agent in executing its tasks effectively. The LLM that is used in our work is GPT-4 [23], the latest LLM released by OpenAI. Access to GPT-4 is provided through OpenAI APIs [63]. ### _Cyber Sentinel_ This section provides a closer look at Cyber Sentinel, the conversational agent introduced in the previous segment, and its implementation. Example user interactions will also be presented to demonstrate its capabilities. #### Iii-B1 System Design The logical implementation includes two segments: the extraction of user intent and slots, and the subsequent execution of the requested actions. As defined before, _intents_ are the general subject of the user request, and _slots_ are supplementary fields for each intent that provide further details about the request. Table II contains the list of intents we implemented for Cyber Sentinel to handle, with some examples for each intent, and Table III lists the slots extracted along with intents. These lists are by no means exhaustive and are only considered as a proof of concept for our work. We embraced a sequential approach while utilizing LLMs, chaining GPT-4 models to facilitate user interactions and streamline command processing, which is further explained in subsection III-B2. GPT-4 models are fed a series of chat messages annotated with distinct dialogue roles: user, assistant, and system. 
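This role-based chaining can be sketched as two successive chat-completion calls: one to classify the intent, one to extract the related slots. The prompts, slot schema, and JSON output format below are illustrative assumptions, and the snippet uses the pre-1.0 openai Python package that provided GPT-4 access at the time; it assumes `openai.api_key` has already been set.

```python
import json
import openai  # pip install "openai<1.0"; assumes openai.api_key is already set

INTENT_PROMPT = ("You are Cyber Sentinel, a security assistant. Classify the user's "
                 "request as one of: general_question, security_question, query, "
                 "action, hybrid. Reply with the label only.")
SLOT_PROMPT = ("Extract the slots signature_type, signature_value, from_date, "
               "to_date and quantity from the user's request. Reply with a JSON "
               "object and omit slots that are not present.")

def ask_gpt4(system_msg: str, user_msg: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system_msg},
                  {"role": "user", "content": user_msg}],
        temperature=0,
    )
    return resp["choices"][0]["message"]["content"]

def parse_request(user_msg: str) -> dict:
    """First call classifies the intent; the second call extracts the related slots."""
    intent = ask_gpt4(INTENT_PROMPT, user_msg).strip()
    slots = json.loads(ask_gpt4(SLOT_PROMPT, user_msg))
    slots["intent"] = intent
    return slots

# e.g. parse_request("Block all IP addresses reported today.")
```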
Through extensive pre-training and a provided system message, the LLM acquires a profound grasp of its designated role and associated responsibilities, thus enabling it to execute generic question-answering tasks in addition to specific security-related downstream tasks. Fig. 3 depicts Cyber Sentinel's logic flow diagram, and Fig. 4 includes the system template message that is fed to LLM alongside message history upon each interaction. 4 again) with a different prompt, this time asking it to extract slots based on the intent. Thanks to the high inference power of GPT-4, we are also able to discern some slots implicitly from user messages. For example, if the user asks for the last 24 hours of activity, GPT-4 is able to infer To_Date query parameter is the current date and time. Finally, if the result of this step is satisfactory (i.e., all required slots are extracted/inferred and the action can be performed), we proceed to the next step. Otherwise, we prompt the user to provide further details regarding the action they would like to take. 3. Finally, when both intent and related slots are extracted, we perform the requested action on either the Elastic-search database or the Wazuh cluster through their API. An example of this process is depicted in Fig. 5. ## IV Discussion The preceding section presented the framework and elucidated the functionality and integration of its various components. In this segment, the achieved results will be discussed, alongside their implications for the cybersecurity community. Potential applications, inclusive of misuse scenarios for the tool, will also be explored. The discussion will conclude by highlighting some inherent limitations of this research ### _Impacts and use cases_ The integration of conversational agents like Cyber Sentinel into cybersecurity operations has the potential to yield a range of significant impacts, ultimately enhancing the effectiveness and efficiency of security measures. Below, the advantages are explored in detail, highlighting how these agents can streamline operational processes, facilitate rapid detection and response to security events, and contribute to a more robust and adaptive security posture. * **Improved Threat Detection and Response:** One of the primary impacts of Cyber Sentinel is its capacity to significantly improve threat detection and response times. By leveraging its cyber threat intelligence module, the system can continuously monitor and analyze vast datasets of security events and alerts from an SIEM. Through natural language interactions with security analysts, it can swiftly pinpoint anomalies, recognize patterns, and identify potential security breaches. Additionally, it can take actions based on this intelligence with minimal instructions provided by security analysts, thus reducing the window of vulnerability and minimizing potential damages to systems. * **Enhanced Operational Efficiency:** The introduction of a conversational agent like Cyber Sentinel can streamline security operations. Security analysts often face a deluge of alerts, leading to alert fatigue and potentially missing critical threats. Cyber Sentinel's ability to prioritize and present relevant information to analysts in a conversational format alleviates this challenge. It can automate routine tasks such as querying SIEM data or updating firewall rules based on predefined policies. This automa \begin{table} \begin{tabular}{|c|c|} \hline **Category** & **Example** \\ \hline General Question & Where is the capital of Finland? 
\\ & How is the weather \\ \hline Cyber security Question & What is Phishing? \\ & How do banks protect customer data from cyber threats? \\ \hline & Give the latest IP addresses reported in the last 24 hours. \\ & Show the statistics of the latest IoCs in the last 7 days. \\ Query & Is this email address malicious: [email protected]_ \\ & Is this URL _John.Doe.com_ secure? \\ & Show me all attacks targeting TCP port 9000. \\ & How many attacks reported within the last 24 hours targeting TCP port 23? \\ \hline Action & Block the IP addresses within subnet _54.12.0.0/16_ \\ & Block the hash signature _530ac4..._ \\ \hline & Block 130.231.4.98 if it is malicious. \\ Hybrid & Block all IP addresses reported today. \\ \hline \end{tabular} \end{table} TABLE II: List of user intentions including user queries and actions to the Wazuh agent \begin{table} \begin{tabular}{|c|c|c|} \hline **Slot** & **Values** & **Description** \\ \hline Intent & Status, Search, Block, Unblock & Main intention(s) of the request \\ \hline Signature\_Type & IP, Subnet, Email, Hash, URL, Port & Type of requested signature \\ \hline Signature\_Value & String or Number (IPv4,IPv6,email, etc.) & Signature itself \\ \hline From\_Date & Datetime: 2023/01/01 & Filter search to start from a specific date and time \\ \hline To\_Date & Datetime: 2023/01/02 & Filter search to end on a specific date and time \\ \hline Quantity & Number & Search for the quantity requested \\ \hline \end{tabular} \end{table} TABLE III: Intent recognition: List of slots extracted from user message tion not only reduces manual workload but also ensures consistency and accuracy in security responses. * **Real-time Collaboration and Knowledge Sharing:** Cyber Sentinel's conversational capabilities facilitate real-time collaboration among security team members. Analysts can engage with the agent to discuss ongoing incidents, share insights, and collectively make decisions. The agent serves as a central hub for disseminating threat intelligence and best practices, promoting knowledge sharing among team members. This fosters a more collaborative and responsive security environment. The deployment of Cyber Sentinel in cybersecurity operations opens up a spectrum of practical use cases, each showcasing its adaptability and value across diverse scenarios: * **Incident Triage and Investigation:** In the context of incident response, Cyber Sentinel can play a crucial role in triaging and investigating security incidents. It can gather initial information from the SIEM, classify the incident severity, and provide analysts with a starting point for their investigations. This allows security teams to focus their efforts on high-priority incidents while the agent handles routine tasks. * **Threat Intelligence Analysis:** The agent's cyber threat intelligence module empowers organizations to stay proactive in the face of evolving threats. It continuously monitors external threat feeds, aggregating and analyzing data from various sources. It can then provide timely intelligence updates to security analysts, allowing them to adjust their defenses and strategies accordingly. * **Security Policy Management:** Cyber Sentinel can also streamline security policy management. Security policies are often complex, and keeping firewall rules up-to-date can be challenging. The agent can assist in the process by reviewing policy changes, suggesting updates based on threat intelligence, and implementing approved changes. 
This ensures that security policies remain agile and aligned with emerging threats. * **Security Awareness and Training:** Beyond its operational roles, Cyber Sentinel can contribute to improving the overall security posture of an organization. It can serve as an educational tool by providing security awareness training to employees. Through interactive conversations, it can impart knowledge about common threats, safe online practices, and company-specific security policies. While the potential benefits of Cyber Sentinel in cybersecurity are substantial, it is essential to acknowledge the potential for misuse or malicious purposes. Conversational agents like Cyber Sentinel, if compromised or manipulated, could be used to disseminate false information, disrupt in-place security measures, or even automate cyberattacks. This is particularly important as methods to _jailbreak_ LLMs (i.e., exploiting these models to enable them to perform tasks or generate content that goes beyond their intended use cases) keep getting more sophisticated each day [51]. Therefore, strict access controls, authentication mechanisms, and continuous monitoring are necessary to prevent unauthorized access or manipulation of the agent. Additionally, ethical considerations come into play when designing and deploying such agents. Ensuring that the agent respects privacy and complies with relevant regulations is paramount. Furthermore, measures should be in place to prevent the unintended disclosure of sensitive information during interactions with the conversational agent. In conclusion, the integration of conversational agents like Cyber Sentinel in cybersecurity operations brings forth significant impacts and a multitude of practical use cases. These agents, by harnessing the capabilities of artificial intelligence and natural language processing, have the potential to revolutionize how organizations defend against cyber threats, offering a promising path towards more robust and efficient cybersecurity strategies. Fig. 5: A sample of Cyber Sentinel's chaining process. First, the LLM is prompted with a user message and a template dynamically created with the current date and time. This first LLM constructs the necessary steps needed to perform the requested action through Chain-of-Thought, and extracts/formulates related _parameters_ (e.g. from_date, list_ip). Afterward, each of these generated steps is passed to another LLM which extracts given slots (like intent, index_type, from_date, etc.) and fills in other implicit slots (such as the predicted to_date and the retrieved list_ip). ### _Limitations_ Despite the great impacts and numerous use cases that were mentioned in the previous subsection, Cyber Sentinel also has a number of limitations that need to be considered for further work: * **Human Oversight and Decision-Making:** Cybersecurity tasks often require human expertise and judgment, particularly in situations involving complex, nuanced, or novel threats. Relying entirely on the agent without human oversight can be risky, as it may not always make the most appropriate decisions. This is especially true in cases where the existing intelligence is not of good enough quality for the agent to be of use, or when faced with sophisticated threats and zero-day attacks. * **Privacy Concerns and Ethical Issues:** The deployment of a conversational agent within a cybersecurity ecosystem raises security and privacy concerns. 
If not adequately secured, the agent itself could become a target for attackers seeking to manipulate its responses or gather sensitive information. Furthermore, conversational agents are susceptible to biases present in their training data just like other AI systems. This could lead to biased recommendations or responses, potentially causing ethical dilemmas or reinforcing existing biases within security practices. * **Regulatory Compliance and Resource Intensity:** Compliance with cybersecurity regulations and standards is crucial for organizations. Implementing conversational agents must align with regulatory requirements, which can add complexity to the deployment process. Additionally, the deployment and maintenance of a conversational agent like Cyber Sentinel can be resource-intensive as LLMs are notoriously known to require a significant amount of processing power. Organizations must allocate sufficient resources for initial setup, ongoing training, and monitoring to ensure the agent operates effectively. ## V Conclusions In this paper, we introduced a conversational agent called _Cyber Sentinel_, which can be used for streamlining cyber security. Cyber Sentinel is capable of helping security analysts perform a range of security-related tasks, from querying a cyber threat intelligence feed to managing an SIEM's configuration. We went over how Cyber Sentinel works and how it interacts with other components in a typical cybersecurity organization. The potential impacts and limitations of the tool were also briefly considered. It should be mentioned once again that this is a work in progress and only presented now as a proof of concept that conversational agents and LLMs can be immensely powerful in certain cybersecurity tasks. Nevertheless, many steps remain to be explored before AI is ready to step into the role of security analyst itself. Some possible directions for future work may be: * **Adaptive Threat Detection:** Investigate the development of conversational agents with self-learning capabilities to adapt to evolving threats. This research could explore techniques such as reinforcement learning to enable agents to continually improve their threat detection accuracy. * **Explainability and Actionability:** Explore methods for making conversational agents more transparent and interpretable. Further research into explainable AI [16] and actionable AI [17] techniques could help build trust and confidence in AI-driven security decision-making. * **Multi-Agent Systems:** Study the potential of multi-agent systems where conversational agents work collaboratively to enhance security operations. Research could explore how these agents can distribute tasks, share insights, and collectively respond to complex cyber threats. * **Ethical Hacking and Vulnerability Assessment:** Research the application of conversational agents in ethical hacking and vulnerability assessment, including automated penetration testing and vulnerability scanning. Some work [64, 65] have already started down this path but further investigation is still required.
2309.06631
Extended SAID Partial-Wave Analysis of Pion Photoproduction
A unified Chew-Mandelstam description of single-pion photoproduction data, together with pion- and eta-hadroproduction data, has been extended to include measurements carried out over the last decade. We consider photo-decay amplitudes evaluated at the pole with particular emphasis on nγ couplings and the influence of weighting on our fits. Both energy-dependent and single-energy analyses (energy-binned data) are considered.
William J. Briscoe, Axel Schmidt, Igor Strakovsky, Ron L. Workman, Alfred Svarc
2023-09-12T22:48:43Z
http://arxiv.org/abs/2309.06631v1
# Extended SAID Partial-Wave Analysis of Pion Photoproduction ###### Abstract A unified Chew-Mandelstam description of single-pion photoproduction data, together with pion- and eta-hadroproduction data, has been extended to include measurements carried out over the last decade. We consider photo-decay amplitudes evaluated at the pole with particular emphasis on \(n\gamma\) couplings and the influence of weighting on our fits. Both energy-dependent and single-energy analysis (energy-binned data) are considered. ## I Introduction Our knowledge of the baryon spectrum, as determined from analyses of experimental data, has advanced rapidly [24] over the past decade. The progress has been most significant for non-strange baryons, due largely to the wealth of new and more precise measurements made at electron accelerators worldwide. The majority of these new measurements have been performed at Jefferson Lab, USA (using the CLAS and Hall A detectors), with the MAMI accelerator in Mainz, Germany (the Crystal Ball/TAPS detector being particularly well suited for the measurement of neutral final states), and with the Crystal Barrel detector at ELSA in Bonn, Germany. While most of the early progress [1; 2; 3; 4] in baryon spectroscopy was based on the analysis of meson-nucleon scattering data, particularly pion-nucleon scattering (\(\pi N\to\pi N\), \(\pi N\to\pi\pi N\)), photon-nucleon interactions offer the possibility of detecting unstable intermediate states with small branchings to the \(\pi N\) channel. Many groups have performed either single-channel or multi-channel analyses of these photon-induced reactions. In the more recent single-channel analyses, fits have typically used isobar models [5; 6] with unitarity constraints at the lower energies, \(K\)-matrix-based formalisms, having built-in cuts associated with inelastic channels [7], and dispersion-relation constraints [8; 6]. Multi-channel fits have analyzed data (or, in some cases, amplitudes) from hadronic scattering experiments together with the photon-induced channels. These approaches have utilized unitarity more directly. Among others, analyses have been carried out by MAID [5], the Bonn-Gatchina [9], ANL-Osaka [10], Kent State [11], and JPAC [12] groups, SAID [7] (Scattering Analysis Interactive Database) and Julich-Bonn [13]. Here we should also briefly mention the possibility of extracting reaction amplitudes directly from scattering data with minimal model input. Examples of this approach are described in the analyses of kaon photoproduction data by the Jefferson Lab [14] and Bonn-Gatchina [15] groups. The measurements required for an amplitude extraction with minimal model bias differ depending on whether the goal is to obtain helicity amplitudes (the usual _complete experiment_ case [16]) or partial-wave amplitudes [17]. A number of recent studies have shown the limits to model independence [18] and the convergence [19] of independent fits with the availability of more observables measured with high precision. The above studies have also recently been extended to pseudo-scalar-meson electroproduction [20]. An objective of this program is the determination of all relevant characteristics of these resonances, _i.e._, pole positions, widths, principal decay channels, and branching ratios. In order to compare directly with QCD-inspired models and Lattice QCD predictions, there has also been a considerable effort to find "hidden" or "missing" resonances [21], predicted by quark models [22] and LQCD [23] but not yet confirmed. 
In fact, PDG [24] reports only about a third of the states predicted by QMs and LQCD. Knowledge of the \(N\) and \(\Delta\) resonance photodecay amplitudes has largely been restricted to the charged states. Apart from lower-energy inverse reaction \(\pi^{-}p\to\gamma n\) measurements, the extraction of the two-body \(\gamma n\to\pi^{-}p\) and \(\gamma n\to\pi^{0}n\) observables requires the use of a model-dependent nuclear correction, which mainly comes from final state interaction (FSI) effects within the target deuteron [25; 26; 27]. As a result, the observables for proton-target experiments are most thoroughly explored and, among neutron-target (deuteron) measurements, the \(\pi^{0}n\) charge channel is least explored. This problem is less severe if isospin relations are used to express the four charge-channel amplitudes in terms of three isospin amplitudes [28]. Then, in principle, the \(\pi^{0}n\) production channel can be predicted in terms of the \(\pi^{0}p\), \(\pi^{+}n\), and \(\pi^{-}p\) production channel amplitudes. This approach has been tested [29] with the improved availability of \(\pi^{0}n\) data; we will consider this again in the fits to data that follow. The GW SAID pion photoproduction analyses have been updated periodically since 1990 [30; 31], with more frequent updates published through our GW website [32]. We have often presented results together with the CLAS and A2 Collaborations, including determinations of the resonance parameters (see, for instance, Refs. [33; 34; 35; 36; 37]), while our last full analysis was reported 10 years ago [7; 38]. The present work updates our SAID partial-wave analysis (PWA) results and reports a new determination of photodecay amplitudes and pole positions in the complex energy plane. The high activity of electromagnetic facilities worldwide (JLab, MAMI, CBELSA, MAX-lab, SPring-8, and ELPH) has increased the body of the SAID database by a significant amount (see Table 1); about 60% of these new data are \(\gamma p\to\pi^{0}p\) measurements. A review of the last two decades of photon-beam measurements of meson production, and in particular of the information that can be obtained on the spectrum of light, non-strange baryons, is given in Ref. [59]. A wealth of \(\gamma N\to\pi N\) data, for single- and double-polarization observables, has become available over the past ten years. These data are pivotal in determining the underlying amplitudes in nearly complete experiments, and in discerning between various microscopic models of multichannel reaction theory. The amplitudes from these analyses can be utilized, in particular, in evaluating contributions to the Gerasimov-Drell-Hearn (GDH) sum rule and related integrals, as was reported recently [60]. In the following section (Sec. II), we summarize changes to the SAID database since 2012. The changes reflected in our multipoles are displayed in Section III. A comparison of past and recent photo-decay amplitudes, for resonances giving a significant contribution to pion photoproduction, is made in Section IV. Finally, in Section V, we summarize our results and comment on possible changes due to further measurements and changes in our parametrization form. ## II Extended SAID database At present, the SAID database [32] has 35,898 \(\gamma p\to\pi^{0}p\), 12,494 \(\gamma p\to\pi^{+}n\), 13,473 \(\gamma n\to\pi^{-}p\), and 2,515 \(\gamma n\to\pi^{0}n\) data below \(E_{\gamma}=2700\) MeV. 
Table 1 lists 21,190 \(\gamma p\to\pi^{0}p\), 1,502 \(\gamma p\to\pi^{+}n\), 10,923 \(\gamma n\to\pi^{-}p\), and 1,763 \(\gamma n\to\pi^{0}n\) data published since 2012 [32]. New measurements mostly cover the \(\pi^{0}p\) sector. In addition, many single-polarization (\(\Sigma\), \(\mathbb{P}\), and \(\mathbb{T}\)) and double-polarization (\(\mathbb{E}\), \(\mathbb{G}\), \(\mathbb{F}\), and \(\mathbb{H}\)) data have become available recently. They are essential input for the amplitude reconstruction of pion photoproduction and for the determination of photocouplings. One can see that the "neutron" database is growing rapidly, which is important for the determination of the neutral photocouplings. The full \(\chi^{2}\)/data contribution for each pion photoproduction reaction versus the different PWAs is reported in Table 2, while Table 3 presents the partial \(\chi^{2}\)/data contributions of individual datasets versus the different PWAs. ## III SAID multipole amplitudes The SAID parametrization of the transition amplitude \(T_{\alpha\beta}\) used in the hadronic fits to the \(\pi N\) scattering data is given as \[T_{\alpha\beta}=\sum_{\sigma}[1-\overline{K}C]^{-1}_{\alpha\sigma}\overline{K_{\sigma\beta}}\,, \tag{1}\] where \(\alpha\), \(\beta\), and \(\sigma\) are channel indices for the \(\pi N\), \(\pi\Delta\), \(\rho N\), and \(\eta N\) channels. Here \(\overline{K_{\sigma\beta}}\) are the Chew-Mandelstam \(K\)-matrices, which are parameterized as polynomials in the scattering energy. \(C_{\alpha}\) is the Chew-Mandelstam function, an element of a diagonal matrix \(C\) in channel space, which is expressed as a dispersion integral with an imaginary part equal to the two-body phase space [65]. In Ref. [7], it was shown that this form could be extended to \(T_{\alpha\gamma}\) to include the electromagnetic channel as \[T_{\alpha\gamma}=\sum_{\sigma}[1-\overline{K}C]^{-1}_{\alpha\sigma}\overline{K_{\sigma\gamma}}\,. \tag{2}\] Here, the Chew-Mandelstam K-matrix elements associated with the hadronic channels are kept fixed from the previous SAID solution SP06 [2], and only the electromagnetic elements are varied. The resonance pole and cut structures are also fixed from hadronic scattering. This provides a minimal description of the photoproduction process, where only the \(N^{*}\) and \(\Delta^{*}\) states present in the SAID \(\pi N\) scattering amplitudes are included in this multipole analysis. For each angular distribution, a normalization constant (\(X\)) and its uncertainty (\(\epsilon_{X}\)) were assigned. The quantity \(\epsilon_{X}\) is generally associated with the normalization uncertainty (if known). The modified \(\chi^{2}\) function to be minimized is given by \[\chi^{2}=\sum_{i}\left(\frac{X\theta_{i}-\theta_{i}^{exp}}{\epsilon_{i}}\right)^{2}+\left(\frac{X-1}{\epsilon_{X}}\right)^{2}, \tag{3}\] where the subscript \(i\) labels the data points within the distribution, \(\theta_{i}^{exp}\) is an individual measurement, \(\theta_{i}\) is the corresponding calculated value, and \(\epsilon_{i}\) represents the total angle-dependent uncertainty. The total \(\chi^{2}\) is then found by summing over all measurements. This re-normalization freedom is essential for obtaining the best SAID fit results. For other data analyzed in the fit, such as the total cross sections and excitation data, the statistical and systematic uncertainties were combined in quadrature and no re-normalization was allowed. In the previous fits to differential cross sections, the unrestricted best fit gave re-normalization constants \(X\) significantly different from unity. 
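For concreteness, a minimal numerical sketch of Eq. (3) is given below. Because Eq. (3) is quadratic in X, the normalization constant that minimizes it for a single angular distribution can also be written in closed form; the data arrays used here are placeholders, not SAID data, and the dataset-dependent weight adjustment mentioned later in the text is not included.

```python
import numpy as np

def chi2_distribution(theta_calc, theta_exp, eps, eps_X, X):
    """Modified chi^2 of Eq. (3) for a single angular distribution."""
    terms = ((X * theta_calc - theta_exp) / eps) ** 2
    return terms.sum() + ((X - 1.0) / eps_X) ** 2

def best_X(theta_calc, theta_exp, eps, eps_X):
    """Normalization X minimizing Eq. (3); setting d(chi^2)/dX = 0 gives this ratio."""
    num = (theta_calc * theta_exp / eps ** 2).sum() + 1.0 / eps_X ** 2
    den = (theta_calc ** 2 / eps ** 2).sum() + 1.0 / eps_X ** 2
    return num / den

# Placeholder angular distribution with a 5% normalization uncertainty.
theta_calc = np.array([1.00, 1.20, 1.50, 1.30, 0.90])
theta_exp = np.array([1.08, 1.27, 1.58, 1.40, 0.96])
eps = np.full(5, 0.05)
X_opt = best_X(theta_calc, theta_exp, eps, eps_X=0.05)
print(X_opt, chi2_distribution(theta_calc, theta_exp, eps, 0.05, X_opt))
```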
As can be seen from Eq. (3), if an angular distribution contains many measurements with small statistical uncertainties, a change in the re-normalization may improve the fit with only a modest \(\chi^{2}\) penalty. Here, however, the weight of the second term in Eq. (3) has been adjusted by the fit for each dataset to keep the re-normalization constants approximately within \(X\) of unity. With the new quality datasets (Table 1), a new SAID multipole analysis has been completed. This new global energy-dependent solution has been labeled as SM22. The overall fit quality of the present SM22 and previous SAID CM12 solutions are compared in Tables 3 and 4. There are many cases where the CM12 fit produces a \(\chi^{2}\) per datum, for new measurements, which is significantly than greater than unity. The new best fit, SM22, includes these new measurements, reducing the \(\chi^{2}\)/data to more acceptable values. Both energy-dependent (ED) and single-energy (SE) \begin{table} \begin{tabular}{|c|c|c c c c c|c|c|} \hline Reaction & Observable & Nexp & Nadata & E\({}_{\gamma}\)(min) & E\({}_{\gamma}\)(max) & \(\theta\)(min) & \(\theta\)(max) & Laboratory/ & Ref \\ & & & & (MeV) & (MeV) & (deg) & (deg) & Collaboration & \\ \hline \(\gamma p\rightarrow\pi^{0}p\) & \(d\sigma/d\Omega\) & 30 & 600 & 147 & 218 & 18 & 162 & MAMI/A2 & [39] \\ & & 269 & 7978 & 218 & 1573 & 15 & 165 & MAMI/A2 & [40] \\ & & 41 & 560 & 862 & 2475 & 15 & 165 & CBELSA/CBELSA/TAPS & [41] \\ & & 80 & 2030 & 1275 & 5425 & 27 & 140 & JLab/CLAS & [42] \\ & & 22 & 350 & 1325 & 2375 & 47 & 162 & SPring-8/LEPS2\&BGOegg & [43] \\ & \(\Sigma\) & 26 & 220 & 147 & 206 & 25 & 155 & MAMI/A2 & [39] \\ & & 78 & 1403 & 319 & 649 & 31 & 158 & MAMI/A2 & [44] \\ & & 39 & 700 & 1102 & 1862 & 32 & 148 & JLab/CLAS & [34] \\ & & 16 & 252 & 1325 & 2350 & 57 & 162 & SPring-8/LEPS2\&BGOegg & [43] \\ & \(\mathbb{P}\) & 8 & 152 & 683 & 917 & 51 & 163 & CBELSA/CBELSA/TAPS & [45] \\ & & 11 & 11 & 1845 & 5631 & 79 & 143 & JLab/GEp-III & \\ & & & & & & & & & \& GEp2gamma & [46] \\ & & & & & & & & & MAMI/A2 & [47] \\ & & 34 & 397 & 440 & 1430 & 30 & 162 & MAMI/A2 & [48] \\ & & 29 & 601 & 683 & 2805 & 29 & 163 & CBELSA/CBELSA/TAPS & [45] \\ & & 33 & 456 & 615 & 2250 & 22 & 158 & CBELSA/CBELSA/CBELSA/TAPS & [49] \\ & & 22 & 197 & 632 & 2187 & 37 & 144 & JLab/CLAS & [50] \\ & & 19 & 318 & 633 & 1300 & 23 & 156 & CBELSA/CBELSA/TAPS & [51] \\ & & 34 & 397 & 440 & 1430 & 30 & 162 & MAMI/A2 & [48] \\ & \(\mathbb{H}\) & 8 & 154 & 683 & 917 & 51 & 163 & CBELSA/CBELSA/TAPS & [45] \\ & \(\mathbb{C}_{x^{\prime}}\) & 45 & 45 & 462 & 1337 & 75 & 140 & MAMI/A2 & [52] \\ & & 13 & 13 & 1845 & 5643 & 82 & 143 & JLab/GEp-III & \\ & & & & & & & & & \& GEp2gamma & [46] \\ & & & & & & & & & \(\times\) \\ & \(\mathbb{C}_{z^{\prime}}\) & 13 & 13 & 1845 & 5643 & 80 & 143 & JLab/GEp-III & \\ & & & & & & & & & \(\times\) \\ \hline \(\gamma p\rightarrow\pi^{+}n\) & \(\Sigma\) & 39 & 386 & 1102 & 1862 & 32 & 148 & JLab/CLAS & [34] \\ & \(\mathbb{E}\) & 35 & 900 & 363 & 2181 & 20 & 146 & JLab/CLAS & [53] \\ & & \(\mathbb{G}\) & 22 & 216 & 632 & 2229 & 29 & 142 & MAMI/A2 & [50] \\ \hline \(\gamma n\rightarrow\pi^{-}p\) & \(\sigma_{tot}\) & 6 & 6 & 150 & 162 & & & MAX-lab/PIONS@MAX-lab & [54] \\ & \(d\sigma/d\Omega\) & 14 & 104 & 301 & 455 & 58 & 133 & MAMI/A2 & [55] \\ & & 156 & 8428 & 445 & 2510 & 26 & 128 & JLab/CLAS & [35] \\ & & 68 & 816 & 1050 & 3500 & 32 & 157 & JLab/CLAS & [33] \\ & \(\Sigma\) & 93 & 1293 & 947 & 2498 & 24 & 145 & JLab/CLAS & [56] \\ & \(\mathbb{E}\) & 21 & 266 & 727 & 
2345 & 26 & 154 & JLab/CLAS & [36] \\ \hline \(\gamma n\rightarrow\pi^{0}n\) & \(d\sigma/d\Omega\) & 27 & 492 & 290 & 813 & 32 & 139 & MAMI/A2 & [37] \\ & & 49 & 931 & 446 & 1427 & 32 & 162 & MAMI/A2 & [57] \\ & & \(\Sigma\) & 12 & 189 & 390 & 610 & 49 & 148 & MAMI/A2 & [29] \\ & \(\mathbb{E}\) & 17 & 151 & 446 & 1427 & 46 & 154 & MAMI/A2 & [58] \\ \hline \end{tabular} \end{table} Table 1: Published data for \(\gamma N\rightarrow\pi N\) reactions since 2012 as given in the SAID database [32]: 1st column is the reaction, 2nd column is the the observable, 3rd column is the number of energy bins, 4th column is the number of data points. solutions were obtained from fits to the combined proton and neutron target database, extending from threshold to \(E_{\gamma}=2.7\) GeV for the ED fit and to \(E_{\gamma}=2.2\) GeV for SE fits. Apart from the main ED result (SM22) several supplemental fits were done in order to gauge the importance of including \(\pi^{0}n\) data (which can, in principle, be at least qualitatively predicted from the remaining more fully populated charge channels). Here fits were done with increased weight for the \(\pi^{0}n\) data and conversely the removal of all such data. In addition, a fit was done more heavily weighting all data poorly fitted by SM22. Figures 1 and 2 plot representative comparisons of SAID fits to data. In addition, older MAID and more recent Bonn-Gatchina results are plotted for comparison. Numerical comparisons of the various SAID fits are given in Tables 2 to 4. Comparisons of the present SAID \(I=3/2\) and \(I=1/2\) multipoles amplitudes amplitudes from threshold to \(W=2.5\) GeV (\(E_{\gamma}=2.7\) GeV) shown in Figs. 3 - 8. Also included, for comparison, are the BnGa and MAID multipoles. Comparisons of the present \(I=3/2\) and \(I=1/2\) ED and SE multipole amplitudes from threshold to \(W=2.5\) GeV (\(E_{\gamma}=2.7\) GeV) shown on Figs. 9 - 14. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Solution & Observable & \(\chi^{2}/(\pi^{0}p\) data) & \(\chi^{2}/(\pi^{+}n\) data) & \(\chi^{2}/(\pi^{-}p\) data) & \(\chi^{2}/(\pi^{0}n\) data) \\ \hline SM22 & Total & 30399/15901= 1.92 & 13945/6194= 2.25 & 12267/6662= 1.84 & 4190/1205= 3.48 \\ & UnPol & 9842/5730= 1.72 & 4984/2603= 1.91 & 7497/4706= 1.59 & 1995/649= 3.07 \\ & SinglePol & 16036/8249= 1.94 & 6078/2483= 2.45 & 4014/1684= 2.38 & 1258/405= 3.11 \\ & DoublePol & 4521/1922= 2.35 & 2883/1108= 2.60 & 765/275= 2.78 & 937/151= 6.21 \\ \hline SM44 & Total & 30870/15901= 1.94 & 14293/6194= 2.31 & 12358/6662= 1.86 & 3361/1205= 2.79 \\ & UnPol & 9880/5730= 1.72 & 5154/2603= 1.98 & 7832/4706= 1.66 & 1648/649= 2.54 \\ & SinglePol & 16405/8249= 1.99 & 6229/2483= 2.51 & 3830/1684= 2.27 & 823/405= 2.03 \\ & DoublePol & 4585/1922= 2.39 & 2910/1108= 2.63 & 696/275= 2.53 & 890/151= 5.89 \\ \hline NM22 & Total & 29998/15901= 1.89 & 13592/6194= 2.19 & 11992/6662= 1.80 & 8531/1205= 7.08 \\ & UnPol & 9887/5730= 1.73 & 4757/2603= 1.83 & 7262/4706= 1.54 & 2322/649= 3.58 \\ & SinglePol & 15662/8240= 1.90 & 5915/2483= 2.38 & 3746/1684= 2.22 & 4570/405= 11.28 \\ & DoublePol & 4449/1922= 2.31 & 2920/1108= 2.64 & 984/275= 3.58 & 1639/151= 10.85 \\ \hline WM22 & Total & 31315/15901= 1.97 & 14038/6194= 2.27 & 12819/6662= 1.92 & 3853/1205= 3.20 \\ & UnPol & 9816/5730= 1.71 & 4659/2603= 1.79 & 7735/4706= 1.64 & 2113/649= 3.26 \\ & SinglePol & 16922/8249= 2.05 & 6537/2483= 2.63 & 4258/1684= 2.53 & 885/405= 2.19 \\ & DoublePol & 4577/1922= 2.38 & 2.842/1108= 2.57 & 826/275= 3.00 & 855/151= 5.66 \\ \hline \hline CM12 & Total & 78254/15901= 4.92 & 27933/6194= 4.51 & 222454/6662=-33.39 & 7024/1205= 5.89 \\ (current & UnPol & 18074/5730= 3.15 & 4565/2603= 1.75 & 65514/4706= 13.92 & 4063/649= 6.26 \\ DB & SinglePol & 50016/8249= 6.06 & 12221/2483= 4.92 & 154303/1684= 91.62 & 976/405= 2.41 \\ & DoublePol & 10164/1922= 5.26 & 11147/1108= 10.06 & 2637/275= 9.59 & 1985/151= 13.15 \\ \hline CM12 & Total & 10544/4507= 2.34 & 10444/4916= 2.12 & 2486/1509= 1.65 & 987/373= 2.65 \\ (old DB) & UnPol & 2682/1094= 2.45 & 4247/2459= 1.73 & 1769/1118= 1.58 & 475/157= 3.03 \\ & SinglePol & 5846/2723= 2.15 & 3312/1523= 2.18 & 564/304= 1.86 & 512/216= 2.37 \\ & DoublePol & 2016/690= 2.92 & 2885/934= 3.09 & 153/87= 0.82 & \\ \hline MAID2007 & Total & 170832/14454=11.82 & 128063/5396=23.73 & 102968/5520= 18.65 & 29390/1205=24.39 \\ (current & UnPol & 74153/5188= 14.29 & 24533/2210=11.10 & 40840/4166= 9.80 & 2812/649= 4.33 \\ DB & SinglePol & 84286/7578= 11.12 & 96337/2168=44.44 & 59097/1182=50.00 & 22087/405= 54.54 \\ & DoublePol & 12393/1688= 7.34 & 7193/1018= 7.07 & 3031/172= 17.62 & 4494/151= 29.76 \\ \hline \end{tabular} \end{table} Table 2: Comparison of \(\chi^{2}\) per datum values for all charged and neutral channels covering fit energy range. The previous SAID fit, CM12, was published in Ref. [7] (and is valid up to E\({}_{\gamma}=2700\) MeV). CM12 is compared to both the current database and data before 2012. All data are available in the SAID database (DB) [32]. For the SM44 fit, \(\pi^{0}n\) data were weighted by an arbitrary factor of 4. For the WM22 fit, all data with large \(\chi^{2}/\)data for the SM22 solution (data are listed in Table 3) were weighted by an arbitrary factor of 4. The NM22 solution represents a fit without the inclusion of \(\pi^{0}n\) data. The previous MAID2007 solution is valid up to E\({}_{\gamma}=1680\) MeV (\(W=2\) GeV) [5]. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Reaction & Obs & E\({}_{\gamma}\) & Data & MAID2007 & CM12 & SM22 & SM44 & WM22 & NM22 & Ref. \\ & & (MeV) & & \(\chi^{2}\)/data & \(\chi^{2}\)/data & \(\chi^{2}\)/data & \(\chi^{2}\)/data & \(\chi^{2}\)/data & \(\chi^{2}\)/data & \(\chi^{2}\)/data & \\ \hline \(\gamma p\rightarrow\pi^{0}p\) & \(d\sigma/d\Omega\) & 675\(-\)2875 & 620 & 40.56 & 2.38 & 3.28 & 3.09 & 2.18 & 3.34 & [61] \\ & \(\mathbb{P}\) & 1845\(-\)2776 & 3 & & 242. & 107. & 83.1 & 26.13 & 89.01 & [46] \\ & & 773\(-\)2472 & 29 & 8.47 & 5.45 & 12.83 & 12.93 & 8.69 & 13.10 & [62] \\ & \(\mathbb{G}\) & 632\(-\)2187 & 197 & 11.45 & 46.34 & 4.23 & 4.43 & 4.02 & 3.87 & [50] \\ & \(\mathbb{C}_{x^{\prime}}\) & 1845\(-\)2776 & 3 & & 985. & 8.75 & 5.18 & 9.39 & 7.53 & [46] \\ & & 773\(-\)2472 & 28 & 28.25 & 9.96 & 7.64 & 7.82 & 4.89 & 8.39 & [62] \\ & \(\mathbb{C}_{x^{\prime}}\) & 1845\(-\)2776 & 3 & & 1370. & 8.68 & 14.40 & 2.46 & 7.87 & [46] \\ & & 773\(-\)2472 & 25 & 35.44 & 12.80 & 12.00 & 8.44 & 9.16 & 13.28 & [62] \\ \hline \(\gamma p\rightarrow\pi^{+}n\) & \(d\sigma/d\Omega\) & 725\(-\)2875 & 618 & 65.71 & 2.08 & 2.75 & 2.83 & 1.82 & 2.44 & [63] \\ & \(\mathbb{G}\) & 632\(-\)2229 & 216 & 21.09 & 25.33 & 4.42 & 4.66 & 3.57 & 4.49 & [50] \\ \hline \(\gamma n\rightarrow\pi^{0}n\) & \(\Sigma\) & 703\(-\)1475 & 216 & 100.1 & 2.37 & 4.72 & 2.81 & 2.93 & 19.26 & [64] \\ & \(\mathbb{E}\) & 446\(-\)1427 & 151 & 29.75 & 13.14 & 6.21 & 5.89 & 5.66 & 10.85 & [58] \\ \hline \end{tabular} \end{table} Table 3: List of data with large \(\chi^{2}\)/data for the SM22 and associated fits. Notation for solutions is given in the caption of Table 2. Figure 1: Samples of pion photoproduction off the proton. Data for \(\gamma p\to\pi^{0}p\) are from Refs. [42; 46; 50; 51; 52; 61; 62; 66] and for \(\gamma p\to\pi^{+}n\) are from Ref. [50]. Notation for solutions is given in the caption of Table 2. The SAID SM22 (WM22) fit is shown as a red solid (yellow dashed) curve. SAID CM12 [7] (MAID2007 [5]) predictions shown as blue dash-dotted (green dashed) curves. BG2019 [67] predictions are shown as magenta short dash-dotted curves. Figure 2: Samples of pion photoproduction off the neutron. Data for \(\gamma n\to\pi^{-}p\) are from Refs. [56; 68] and for \(\gamma n\to\pi^{0}n\) are from Refs. [58; 64; 69]. Notation for solutions is given in the caption of Table 2. The SAID SM22 (NM22) fit is shown as a red solid (black dotted) curve. SAID CM12 [7] (MAID2007 [5]) predictions are shown as blue dash-dotted (green dashed) curves. BG2019 [67] predictions are shown as magenta short dash-dotted curves. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Reaction & Obs & MAID2007 & CM12 & SM22 & SM44 & WM22 & NM22 & Ref. 
\\ & & \(\chi^{2}\)/data & \(\chi^{2}\)/data & \(\chi^{2}\)/data & \(\chi^{2}\)/data & \(\chi^{2}\)/data & \(\chi^{2}\)/data & \\ \hline \(\gamma p\rightarrow\pi^{0}p\) & \(d\sigma/d\Omega\) & 10.44 & 7.08 & 1.32 & 1.36 & 1.32 & 1.33 & [39]\({}^{\ddagger}\) \\ & & 12.50 & 3.01 & 1.40 & 1.44 & 1.51 & 1.40 & [40]\({}^{\ddagger}\) \\ & & 4.44 & 2.33 & 3.46 & 3.41 & 3.22 & 3.49 & [41]\({}^{\dagger}\) \\ & & 18.28 & 2.34 & 2.69 & 2.50 & 2.37 & 2.77 & [42]\({}^{\dagger}\) \\ & & 16.15 & 3.63 & 2.39 & 2.31 & 2.74 & 2.45 & [43] \\ & \(\Sigma\) & 41.69 & 0.99 & 1.40 & 1.39 & 1.33 & 1.39 & [39] \\ & & 2.25 & 1.42 & 1.16 & 1.12 & 1.22 & 1.17 & [44]\({}^{\ddagger}\) \\ & & 72.13 & 43.81 & 3.62 & 3.87 & 4.04 & 3.47 & [34] \\ & & 4.93 & 11.21 & 1.95 & 1.96 & 2.46 & 1.81 & [43] \\ & \(\mathbb{P}\) & 2.13 & 1.50 & 1.04 & 1.09 & 1.17 & 1.05 & [45] \\ & & 241.0 & 6.47 & 82.62 & 26.1 & 89.01 & [46] \\ & \(\mathbb{T}\) & 1.30 & 1.41 & 1.06 & 1.07 & 1.09 & 1.04 & [47]\({}^{\ddagger}\) \\ & & 9.15 & 5.80 & 3.09 & 3.25 & 3.28 & 2.94 & [48] \\ & & 12.25 & 4.14 & 2.17 & 2.24 & 2.43 & 2.05 & [45] \\ & \(\mathbb{E}\) & 15.14 & 4.22 & 2.11 & 2.20 & 2.62 & 2.03 & [49] \\ & \(\mathbb{G}\) & 11.45 & 6.38 & 4.23 & 4.43 & 4.02 & 4.20 & [50] \\ & & 3.42 & 3.90 & 1.26 & 1.26 & 1.21 & 1.20 & [51] \\ & \(\mathbb{F}\) & 3.48 & 3.34 & 2.33 & 2.34 & 2.26 & 2.28 & [48] \\ & \(\mathbb{H}\) & 4.38 & 6.25 & 1.70 & 1.96 & 1.89 & 1.44 & [45] \\ & \(\mathbb{C}_{z^{\prime}}\) & 2.07 & 2.36 & 1.71 & 1.71 & 1.76 & 1.73 & [52] \\ & & 984.0 & 8.90 & 5.28 & 9.53 & 7.53 & [46] \\ & \(\mathbb{C}_{z^{\prime}}\) & & 1370. & 8.74 & 14.49 & 2.48 & 7.87 & [46] \\ \hline \(\gamma p\rightarrow\pi^{+}n\) & \(\Sigma\) & 285.1 & 18.37 & 3.00 & 3.14 & 3.81 & 2.97 & [34] \\ & \(\mathbb{E}\) & 5.09 & 9.82 & 1.96 & 1.86 & 2.21 & 2.03 & [53]\({}^{\ddagger}\) \\ & \(\mathbb{G}\) & 21.09 & 25.33 & 4.42 & 6.64 & 3.57 & 4.49 & [50] \\ \hline \(\gamma n\rightarrow\pi^{-}p\) & \(\sigma_{tot}\) & 0.33 & 0.05 & 0.06 & 0.20 & 0.10 & 0.90 & [54] \\ & \(d\sigma/d\Omega\) & 5.99 & 4.61 & 3.27 & 3.96 & 2.78 & 3.22 & [55] \\ & & 14.88 & 20.39 & 1.28 & 1.30 & 1.33 & 1.25 & [35]\({}^{\ddagger}\) \\ & & 30.39 & 76.83 & 3.97 & 3.97 & 3.77 & 4.17 & [33]\({}^{\dagger}\) \\ & \(\Sigma\) & 7.21 & 118.8 & 2.38 & 2.27 & 2.57 & 2.24 & [56] \\ & \(\mathbb{E}\) & 18.25 & 17.43 & 2.84 & 2.62 & 3.11 & 3.68 & [36] \\ \hline \(\gamma n\rightarrow\pi^{0}n\) & \(d\sigma/d\Omega\) & 3.77 & 7.29 & 2.88 & 2.43 & 3.14 & 3.89 & [37] \\ & & 20.32 & 18.72 & 11.22 & 9.52 & 9.97 & 15.73 & [57]\({}^{\dagger}\) \\ & \(\Sigma\) & 2.44 & 2.46 & 1.25 & 1.15 & 1.33 & 2.17 & [29] \\ & \(\mathbb{E}\) & 29.75 & 13.11 & 6.21 & 5.89 & 5.66 & 10.85 & [58] \\ \hline \end{tabular} \end{table} Table 4: Comparison \(\chi^{2}\)/data for published data since 2012 as given in Table 1 and available in the SAID database [32]. Notation for solutions is given in the caption of Table 2. Data, which are partially (completely) excluded in the SAID fits, denoted by \({}^{\ddagger}\) (\({}^{\dagger}\)). Figure 3: Comparison \(I=3/2\) multipole amplitudes (orbital momentum \(l=0,1\)) from threshold to \(W=2.5\) GeV (\(E_{\gamma}=2.7\) GeV). For the amplitudes, the subscript \(l\pm\) gives the value of \(j=l\pm 1/2\), and the superscript gives the isospin index. Notation for solutions is given in the caption of Table 2. New SAID SM22 fit is shown by red solid curves. Previous SAID CM12 [7] (MAID2007 [5], terminates at \(W=2\) GeV) predictions show by blue dash-dotted (green dashed) curves. 
BG2019 [67] predictions are shown by magenta short dash-dotted curves. Figure 4: Comparison of \(I=3/2\) multipole amplitudes (orbital momentum \(l=2\)) from threshold to \(W=2.5\) GeV. Notation of the solutions is the same as in Fig. 3. Additionally, the WM22 fit is shown by yellow dashed curves. Figure 5: Comparison of proton \(I=1/2\) multipole amplitudes (orbital momentum \(l=0,1\)) from threshold to \(W=2.5\) GeV (\(E_{\gamma}=2.7\) GeV). Notation of the solutions is the same as in Fig. 3. Additionally, the WM22 fit is shown by yellow dashed curves. Figure 6: Comparison of proton \(I=1/2\) multipole amplitudes (orbital momentum \(l=2\)) from threshold to \(W=2.5\) GeV. Notation of the solutions is the same as in Fig. 3. For the amplitudes, the subscript \(p\) denotes a proton target. Additionally, the WM22 fit is shown by yellow dashed curves. Figure 7: Comparison of neutron \(I=1/2\) multipole amplitudes (orbital momentum \(l=0,1\)) from threshold to \(W=2.5\) GeV (\(E_{\gamma}=2.7\) GeV). Notation of the solutions is the same as in Fig. 3. Additionally, cyan short-dashed curves are SM44 fits. Figure 8: Comparison of neutron \(I=1/2\) multipole amplitudes (orbital momentum \(l=2\)) from threshold to \(W=2.5\) GeV. Notation of the solutions is the same as in Fig. 7. Figure 9: Comparison of \(I=3/2\) multipole amplitudes (orbital momentum \(l=0,1\)) from threshold to \(W=2.5\) GeV (\(E_{\gamma}=2.7\) GeV). Notation for solutions is given in the caption of Table 2. For the amplitudes, the subscript \(n\) denotes a neutron target. The new SAID SM22 fit is shown by red solid curves. Previous SAID CM12 [7] (MAID2007 [5], terminates at \(W=2\) GeV) predictions are shown by blue dash-dotted (green dashed) curves. BG2019 [67] predictions are shown by magenta short dash-dotted curves. SE results associated with SM22 are shown as blue open circles. Vertical arrows indicate resonance energies, \(W_{R}\), and horizontal bars show full (\(\Gamma\)) and partial (\(\Gamma_{\pi N}\)) widths associated with the SAID \(\pi N\) solution SP06 (Breit-Wigner parameters) [2]. Figure 10: Comparison of \(I=3/2\) multipole amplitudes (orbital momentum \(l=2\)) from threshold to \(W=2.5\) GeV. Notation of the solutions and data is the same as in Fig. 9. Figure 11: Comparison of proton \(I=1/2\) multipole amplitudes (orbital momentum \(l=0,1\)) from threshold to \(W=2.5\) GeV (\(E_{\gamma}=2.7\) GeV). Notation of the solutions is the same as in Fig. 9. The blue vertical arrows for (a) and (b) indicate the \(\eta\) production threshold. Figure 12: Comparison of proton \(I=1/2\) multipole amplitudes (orbital momentum \(l=2\)) from threshold to \(W=2.5\) GeV. Notation of the solutions is the same as in Fig. 9. Figure 13: Comparison of neutron \(I=1/2\) multipole amplitudes (orbital momentum \(l=0,1\)) from threshold to \(W=2.5\) GeV (\(E_{\gamma}=2.7\) GeV). For the amplitudes, the subscript \(n\) denotes a neutron target, the subscript \(l\pm\) gives the value of \(j=l\pm 1/2\), and the superscript gives the isospin index. Notation of the solutions is the same as in Fig. 9. The blue vertical arrows for (a) and (b) indicate the \(\eta\) production threshold. Figure 14: Comparison of neutron \(I=1/2\) multipole amplitudes (orbital momentum \(l=2\)) from threshold to \(W=2.5\) GeV. Notation of the solutions is the same as in Fig. 13. ## IV Resonance Couplings Following the notation of Refs. 
[38; 70], the \((\gamma,\pi)\) T-matrix element for helicity \(h\) is given by \[T^{h}_{\gamma,\pi}=\sqrt{2\,k\,q}\;{\cal A}^{h}_{\alpha}\;C\,, \tag{4}\] where \(\alpha\) denotes the partial wave and \(k,q\) are the center-of-mass (c.m.) momenta of the photon and the pion. The factor \(C\) is \(\sqrt{2/3}\) for isospin \(3/2\) and \(-\sqrt{3}\) for isospin \(1/2\). The helicity multipoles \({\cal A}^{h}_{\alpha}\) are given in terms of electric and magnetic multipoles \[{\cal A}^{1/2}_{\ell+} = -\frac{1}{2}\left[(\ell+2)E_{\ell+}+\ell M_{\ell+}\right]\,, \tag{5}\] \[{\cal A}^{3/2}_{\ell+} = \frac{1}{2}\sqrt{\ell(\ell+2)}\left[E_{\ell+}-M_{\ell+}\right]\,, \tag{6}\] \[{\cal A}^{1/2}_{(\ell+1)-} = -\frac{1}{2}\left[\ell E_{(\ell+1)-}-(\ell+2)M_{(\ell+1)-}\right]\,, \tag{7}\] \[{\cal A}^{3/2}_{(\ell+1)-} = -\frac{1}{2}\sqrt{\ell(\ell+2)}\left[E_{(\ell+1)-}+M_{(\ell+1)-}\right]\,, \tag{8}\] with \(J=\ell+1/2\) for "\(+\)" multipoles and \(J=(\ell+1)-1/2\) for "\(-\)" multipoles, all having the same total spin \(J\). In Tables 5 to 7, we list the pole positions together with the photo-decay amplitudes \[A_{h} = C\,\sqrt{\frac{q_{p}}{k_{p}}\frac{2\pi(2J+1)W_{p}}{m_{N}\,{\rm Res}_{\pi N}}}\,{\rm Res}\,{\cal A}^{h}_{\alpha}\,, \tag{9}\] where the subscript \(p\) denotes quantities evaluated at the pole position and \(m_{N}\) is the nucleon mass. In Ref. [38], the elastic residues, \({\rm Res}_{\pi N}\), and the pole positions, \(W_{p}=M_{p}-i\Gamma_{p}/2\), were taken from the GWU SAID PWA, SP06 [2], and each multipole was fitted separately, using the Laurent plus Pietarinen (L+P) method [38], to determine the corresponding residues. Here, we have made a coupled multipole fit of all partial-wave amplitudes associated with particular resonances, including the pion-nucleon elastic scattering amplitudes. Thus, for example, the L+P fit of Ref. [38] for the \(E^{1/2}_{2-}\) multipole has been expanded to a simultaneous fit of the \(D_{13}\) elastic amplitude, \(E^{1/2}_{2-}\) and \(M^{1/2}_{2-}\) (proton target), plus \(E^{1/2}_{2-}\) and \(M^{1/2}_{2-}\) (neutron target), yielding more self-consistent results. As in Ref. [38], the fitted partial waves are \(S_{11}\), \(P_{11}\), \(D_{13}\), \(F_{15}\), \(P_{33}\), \(D_{33}\), and \(F_{37}\), with pion-nucleon partial waves taken from Ref. [71]. ## V Results and Conclusions The present results update the SAID fit (CM12) which first utilized a Chew-Mandelstam K-matrix approach (as opposed to the Heitler K-matrix formalism used in the original SAID analyses). The L+P method for pole parameter extraction has been extended to simultaneously incorporate all connected \(\pi N\) elastic and photoproduction amplitudes. The amplitude tables give pole positions and helicity amplitudes at the pole where available. Values for the \(n\gamma\) amplitudes were not extracted in the 2014 SAID analysis; comparisons can now be made to multi-channel determinations. Complex amplitudes are given in terms of modulus and phase. In cases where a large phase is found, close to 180 degrees, a minus sign is commonly extracted to ease comparison with the real amplitudes found in older Breit-Wigner fits. The "modulus" then has a sign and a phase closer to zero. Here, however, the modulus remains positive. In cases where the fitted multipoles have a clear canonical resonance variation, with a relatively small non-resonance contribution, comparison to the Bonn-Gatchina multi-channel analysis generally shows good agreement (to the 10% level). 
This includes the \(\Delta(1232)3/2^{+}\), \(N(1520)3/2^{-}\), \(N(1680)5/2^{+}\), and \(\Delta(1905)5/2^{+}\) and applies to both the \(p\gamma\) and \(n\gamma\) helicity amplitudes. Comparisons are more complicated for states associated with the low-angular momentum states \(E^{1/2}_{0+}\) and \(M^{1/2}_{1-}\). The \(N(1535)1/2^{-}\) and \(N(1650)1/2^{-}\) have some overlap and are close to the \(\eta N\) threshold cusp. The \(N(1440)\) is complicated by the close proximity of its pole position to the \(\pi\Delta\) threshold. We note that differences in \(N(1535)1/2^{-}\) \(p\gamma\) amplitudes disappear if one compares instead with the recent Julich-Bonn analysis [75]. For the \(n\gamma\) amplitudes, the agreement is qualitative and no Julich-Bonn values are available. Qualitative agreement is also seen for the \(N(1650)1/2^{-}\). Agreement for the \(\Delta(1700)3/2^{-}\) is good for the moduli and at least qualitative for the phases. For the \(N(1720)3/2^{+}\), within fairly large uncertainties, there is qualitative agreement of the helicity amplitude moduli, with less agreement at the level of phases. Hunt and Manley [11] note that the \(N(1675)5/2^{-}\) decays to \(p\gamma\) violate the Moorhouse selection rule [76]. We see the moduli of the \(p\gamma\) photo-decay amplitudes to be small but non-zero. In Figs. 15 - 18, we display L+P fits for the \(D_{13}\) partial-wave and multipole amplitudes, where resonance behavior is clear and dominant, and the \(S_{11}\) amplitudes, where resonance overlap and a nearby \(\eta N\) cusp complicate this process. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Resonance & \(ReW_{p}\) & \(-2ImW_{p}\) & \(|pA^{1/2}|\) & \(p\phi A^{1/2}\) & \(|nA^{1/2}|\) & \(n\phi A^{1/2}\) \\ & (GeV) & (GeV) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) \\ \hline \(N(1535)1/2^{-}\) & 1.500\(\pm\)0.001 & 0.096\(\pm\)0.006 & 0.079\(\pm\)0.012 & -11.4\(\pm\)1.7 & 0.067\(\pm\)0.009 & -174\(\pm\)22 \\ & 1.501\(\pm\)0.006 & 0.095\(\pm\)0.011 & 0.074\(\pm\)0.010 & -17\(\pm\)11 & & \\ & 1.500\(\pm\)0.004 & 0.128\(\pm\)0.009 & 0.114\(\pm\)0.008 & 10\(\pm\)5 & 0.088\(\pm\)0.004 & -175\(\pm\)4 \\ \hline \hline \(N(1650)1/2^{-}\) & 1.650\(\pm\)0.001 & 0.110\(\pm\)0.008 & 0.042\(\pm\)0.001 & -12.5\(\pm\)0.4 & 0.026\(\pm\)0.005 & -72\(\pm\)13 \\ & 1.655\(\pm\)0.011 & 0.127\(\pm\)0.017 & 0.041\(\pm\)0.006 & 16\(\pm\)27 & & \\ & 1.652\(\pm\)0.007 & 0.102\(\pm\)0.008 & 0.032\(\pm\)0.006 & -2\(\pm\)11 & 0.016\(\pm\)0.004 & -28\(\pm\)10 \\ \hline \end{tabular} \end{table} Table 5: Photon-decay helicity amplitudes at the pole for \(p\gamma\) and \(n\gamma\) decays. Fit to pion-nucleon elastic amplitude \(S_{11}\) and multipole \(E_{0+}^{1/2}\). Complex quantities given as modulus and phase. Results from present study (first row), PR2014 [38] (second row), and BnGa [72] (for proton couplings) and [73] (for neutron couplings) (third row).
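As a practical aside (this is not part of the SAID analysis code, and the function names are purely illustrative), the helicity combinations of Eqs. (5)-(8) and the modulus/phase convention used in Tables 5-13 are easy to evaluate; the following minimal Python/NumPy sketch does both, with the example values taken from the first row of Table 5.

```python
import numpy as np

def helicity_from_EM(ell, E_plus, M_plus, E_minus, M_minus):
    """Helicity multipoles of Eqs. (5)-(8) from electric/magnetic multipoles.

    E_plus, M_plus refer to the l+ multipoles (J = l + 1/2);
    E_minus, M_minus to the (l+1)- multipoles of the same J.
    Inputs may be complex numbers or NumPy arrays.
    """
    A12_plus = -0.5 * ((ell + 2) * E_plus + ell * M_plus)               # Eq. (5)
    A32_plus = 0.5 * np.sqrt(ell * (ell + 2)) * (E_plus - M_plus)       # Eq. (6)
    A12_minus = -0.5 * (ell * E_minus - (ell + 2) * M_minus)            # Eq. (7)
    A32_minus = -0.5 * np.sqrt(ell * (ell + 2)) * (E_minus + M_minus)   # Eq. (8)
    return A12_plus, A32_plus, A12_minus, A32_minus

def complex_from_modulus_phase(modulus, phase_deg):
    """Convert the (modulus, phase) entries of the tables into Re/Im parts."""
    return modulus * np.exp(1j * np.deg2rad(phase_deg))

# Example: N(1535)1/2^- proton coupling, first row of Table 5.
A12_p = complex_from_modulus_phase(0.079, -11.4)   # GeV^{-1/2}
print(f"pA^1/2 = {A12_p.real:.3f} {A12_p.imag:+.3f}i GeV^-1/2")
```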
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Resonance & \(ReW_{p}\) & \(-2ImW_{p}\) & \(|pA^{1/2}|\) & \(p\phi A^{1/2}\) & \(|pA^{3/2}|\) & \(|nA_{1/2}|\) & \(n\phi A^{1/2}\) \\ & (GeV) & (GeV) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) \\ \hline \(N(1675)5/2^{-}\) & 1.658\(\pm\)0.003 & 0.141\(\pm\)0.005 & 0.020\(\pm\)0.006 & 165\(\pm\)43 & 0.020\(\pm\)0.005 & 23\(\pm\)6 & 0.123\(\pm\)0.027 & -19\(\pm\)4 & 0.084\(\pm\)0.018 & -170\(\pm\)38 \\ & 1.657\(\pm\)0.005 & 0.141\(\pm\)0.011 & 0.015\(\pm\)0.002 & 25\(\pm\)12 & 0.019\(\pm\)0.002 & -40\(\pm\)8 & & \\ & 1.655\(\pm\)0.004 & 0.147\(\pm\)0.005 & 0.022\(\pm\)0.003 & -12\(\pm\)7 & 0.028\(\pm\)0.006 & -17\(\pm\)6 & 0.053\(\pm\)0.004 & 177\(\pm\)5 & 0.073\(\pm\)0.005 & 168\(\pm\)5 \\ \hline \end{tabular} \end{table} Table 6: Photon-decay helicity amplitudes at the pole for \(p\gamma\) and \(n\gamma\) decays. Fit to pion-nucleon elastic amplitude \(P_{11}\) and multipole \(M_{1}^{1/2}\). Complex quantities given as modulus and phase. Results from present study (first row), PR2014 [38] (second row), and BnGa [72] (for proton couplings) and [73] (for neutron couplings) (third row). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Resonance & \(ReW_{p}\) & \(-2ImW_{p}\) & \(|pA^{1/2}|\) & \(p\phi A^{1/2}\) & \(|pA^{3/2}|\) & \(p\phi A^{3/2}\) & \(|nA_{1/2}|\) & \(n\phi A^{1/2}\) & \(|nA^{3/2}|\) & \(nA\phi^{3/2}\) \\ & (GeV) & (GeV) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) \\ \hline \(N(1720)3/2^{+}\) & 1.670\(\pm\)0.001 & 0.280\(\pm\)0.002 & 0.057\(\pm\)0.027 & -42\(\pm\)19 & 0.071\(\pm\)0.033 & -8\(\pm\)4 & 0.056\(\pm\)0.021 & -21\(\pm\)8 & 0.065\(\pm\)0.024 & 169\(\pm\)64 \\ & 1.651\(\pm\)0.009 & 0.311\(\pm\)0.045 & 0.059\(\pm\)0.002 & -14\(\pm\)8 & 0.045\(\pm\)0.005 & -151\(\pm\)11 & & & \\ & 1.670\(\pm\)0.025 & 0.430\(\pm\)0.100 & 0.115\(\pm\)0.045 & 0\(\pm\)35 & 0.140\(\pm\)0.040 & 65\(\pm\)35 & 0.025\({}^{+0.040}_{-0.015}\) & 105\(\pm\)35 & 0.100\(\pm\)0.035 & -80\(\pm\)35 \\ \hline \end{tabular} \end{table} Table 7: Photon-decay helicity amplitudes at the pole for \(p\gamma\) and \(n\gamma\) decays. Fit to pion-nucleon elastic amplitude \(P_{13}\) and multipoles \(E_{1+}^{1/2}\) and \(M_{1+}^{1/2}\). Complex quantities given as modulus and phase. Results from present study (first row), PR2014 [38] (second row), and BnGa [72] (for proton couplings) and [73] (for neutron couplings) (third row). 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Resonance & \(ReW_{p}\) & \(-2ImW_{p}\) & \(|pA^{1/2}|\) & \(p\phi A^{1/2}\) & \(|pA^{3/2}|\) & \(p\phi A^{3/2}\) & \(|nA_{1/2}|\) & \(n\phi A^{1/2}\) & \(|nA^{3/2}|\) & \(nA\phi^{3/2}\) \\ & (GeV) & (GeV) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) \\ \hline \(N(1520)3/2^{-}\) & 1.511\(\pm\)0.001 & 0.116\(\pm\)0.002 & 0.029\(\pm\)0.001 & 156\(\pm\)8 & 0.144\(\pm\)0.007 & 4.0\(\pm\)0.2 & 0.044\(\pm\)0.004 & -175\(\pm\)15 & 0.121\(\pm\)0.010 & -170\(\pm\)14 \\ & 1.514\(\pm\)0.001 & 0.109\(\pm\)0.005 & 0.028\(\pm\)0.001 & 154\(\pm\)7 & 0.133\(\pm\)0.006 & 13\(\pm\)2 & & \\ & 1.507\(\pm\)0.002 & 0.111\(\pm\)0.003 & 0.023\(\pm\)0.004 & 174\(\pm\)5 & 0.131\(\pm\)0.006 & 4\(\pm\)4 & 0.045\(\pm\)0.005 & 175\(\pm\)4 & 0.119\(\pm\)0.005 & -175\(\pm\)4 \\ \ \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Resonance & \(ReW_{p}\) & \(-2ImW_{p}\) & \(|pA^{1/2}|\) & \(p\phi A^{1/2}|\) & \(|pA^{3/2}|\) & \(p\phi A^{1/2}\) & \(|pA^{3/2}|\) & \(p\phi A^{3/2}\) \\ & (GeV) & (GeV) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) \\ \hline \(N(1680)5/2^{+}\) & 1.672\(\pm\)0.017 & 0.113\(\pm\)0.004 & 0.020\(\pm\)0.002 & 141\(\pm\)25 & 0.126\(\pm\)0.011 & -1.1\(\pm\)0.1 & 0.037\(\pm\)0.006 & -15\(\pm\)3 & 0.040\(\pm\)0.007 & -176\(\pm\)29 \\ & 1.674\(\pm\)0.003 & 0.113\(\pm\)0.005 & 0.014\(\pm\)0.005 & 130\(\pm\)20 & 0.123\(\pm\)0.004 & -6\(\pm\)3 & & & \\ & 1.678\(\pm\)0.005 & 0.113\(\pm\)0.004 & 0.013\(\pm\)0.003 & 160\(\pm\)17 & 0.135\(\pm\)0.005 & 1\(\pm\)3 & 0.032\(\pm\)0.003 & -7\(\pm\)5 & 0.063\(\pm\)0.004 & 170\(\pm\)5 \\ \hline \end{tabular} \end{table} Table 10: Photon-decay helicity amplitudes at the pole for \(p\gamma\) decay. Fit to pion-nucleon elastic amplitude \(F_{37}\) and multipoles \(E_{3+}^{3/2}\) and \(M_{3+}^{3/2}\). Complex quantities given as modulus and phase. Results from present study (first row), PR2014 [38] (second row), and BnGa [72] (for \(\Delta(1232)3/2^{+}\)) and [72] (for \(\Delta(1620)3/2^{+}\)) (third row). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Resonance & \(ReW_{p}\) & \(-2ImW_{p}\) & \(|pA^{1/2}|\) & \(p\phi A^{1/2}\) & \(|pA^{3/2}|\) & \(p\phi A^{3/2}\) \\ & (GeV) & (GeV) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) \\ \hline \(\Delta(1905)5/2^{+}\) & 1.799\(\pm\)0.006 & 0.227\(\pm\)0.012 & 0.051\(\pm\)0.006 & 166\(\pm\)21 & 0.009\(\pm\)0.001 & -171\(\pm\)22 \\ & 1.817\(\pm\)0.007 & 0.257\(\pm\)0.015 & 0.015\(\pm\)0.002 & -29\(\pm\)9 & 0.038\(\pm\)0.001 & -174\(\pm\)2 \\ & 1.800\(\pm\)0.006 & 0.290\(\pm\)0.015 & 0.025\(\pm\)0.005 & -28\(\pm\)12 & 0.050\(\pm\)0.004 & -175\(\pm\)10 \\ \hline \end{tabular} \end{table} Table 11: Photon-decay spectra for \(p\gamma\) decay. Fit to pion-nucleon elastic amplitude \(F_{37}\) and multipoles \(E_{3+}^{3/2}\) and \(M_{3+}^{3/2}\). Complex quantities given as modulus and phase. Results from present study (first row), PR2014 [38] (second row), and BnGa [72] (third row). 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Resonance & \(ReW_{p}\) & \(-2ImW_{p}\) & \(|pA^{1/2}|\) & \(p\phi A^{1/2}\) & \(|pA^{3/2}|\) & \(p\phi A^{3/2}\) \\ & (GeV) & (GeV) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) \\ \hline \(N(1680)5/2^{+}\) & 1.672\(\pm\)0.017 & 0.113\(\pm\)0.004 & 0.020\(\pm\)0.002 & 141\(\pm\)25 & 0.126\(\pm\)0.011 & -1.1\(\pm\)0.037\(\pm\)0.006 & -15\(\pm\)3 & 0.040\(\pm\)0.007 & -176\(\pm\)29 \\ & 1.674\(\pm\)0.003 & 0.113\(\pm\)0.005 & 0.014\(\pm\)0.005 & 130\(\pm\)20 & 0.123\(\pm\)0.004 & -6\(\pm\)3 & & & \\ & 1.678\(\pm\)0.005 & 0.113\(\pm\)0.004 & 0.013\(\pm\)0.003 & 160\(\pm\)17 & 0.135\(\pm\)0.005 & 1\(\pm\)3 & 0.032\(\pm\)0.003 & -7\(\pm\)5 & 0.063\(\pm\)0.004 & 170\(\pm\)5 \\ \hline \end{tabular} \end{table} Table 12: Photon-decay helicity amplitudes at the pole for \(p\gamma\) decay. Fit to pion-nucleon elastic amplitude \(F_{33}\) and multipoles \(E_{3+}^{3/2}\) and \(M_{3+}^{3/2}\). Complex quantities given as modulus and phase. Results from present study (first row), PR2014 [38] (second row), and BnGa [72] (third row). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Resonance & \(ReW_{p}\) & \(-2ImW_{p}\) & \(|pA^{1/2}|\) & \(p\phi A^{1/2}\) & \(|pA^{3/2}|\) & \(p\phi A^{3/2}\) \\ & (GeV) & (GeV) & (\(GeV^{-1/2}\)) & (deg) & (\(GeV^{-1/2}\)) & (deg) \\ \hline \(\Delta(1905)/2^{+}\) & 1.799\(\pm\)0.006 & 0.227\(\pm\)0.012 & 0.051\(\pm\)0.006 & 166\(\pm\)21 & 0.009\(\pm\)0.001 & -171\(\pm\)22 \\ & 1.817\(\pm\)0.007 & 0.257\(\pm\)0.015 & 0.015\(\pm\)0.002 & -29\(\pm\)9 & 0.038\(\pm\)0.001 & -174\(\pm\)2 \\ & 1.800\(\pm\)0.006 & 0.290\(\pm\)0.015 & 0.025\(\pm\)0.005 & -28\(\pm\)12 & 0.050\(\pm\)0.004 & -175\(\pm\)10 \\ \hline \end{tabular} \end{table} Table 13: Photon-decay helicity amplitudes at the pole for \(p\gamma\) decay. Fit to pion-nucleon elastic amplitude \(F_{35}\) and multipoles \(E_{3-}^{3/2}\) and \(M_{3-}^{3/2}\). Complex quantities given as modulus and phase. Results from present study (first row), PR2014 [38] (second row), and BnGa [72] (third row). Figure 15: Samples of Laurent+Pietarinen (L+P) coupled fit of the \(S_{11}\)\(\pi N\) partial wave of the GWU-SAID fit WI08 [71] and the SM22 ED GWU-SAID multipole solutions. Blue symbols are the GWU-SAID solutions, solid black curves are the L+P coupled-multipole fit, and thin red curves are the resonant contribution in the L+P coupled-multipole fit. Figure 16: Samples of Laurent+Pietarinen (L+P) coupled fit of the \(S_{11}\)\(\pi N\) partial wave of the GWU-SAID fit WI08 [71] and SM22 SE4 GWU-SAID multipole solutions. Notation of the solutions is the same as in Fig. 15. Figure 17: Samples of Laurent+Pietarinen (L+P) coupled fit of the \(D_{13}\)\(\pi N\) partial wave of the GWU-SAID fit WI08 [71] and SM22 SE4 GWU-SAID multipole solutions. Notation of the solutions is the same as in Fig. 15. Figure 18: Samples of Laurent+Pietarinen (L+P) coupled fit of the \(F_{37}\)\(\pi N\) partial wave of the GWU-SAID WI08 [71] and SM22-SE4 GWU-SAID multipole solutions. Notation of the solutions is the same as in Fig. 15. ###### Acknowledgements. This work was supported in part by the U. S. Department of Energy, Office of Science, Office of Nuclear Physics, under Awards No. DE-SC0016583 and No. DE-SC0016582, and in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177.
2309.10053
Polarized signatures of orbiting hot spots: special relativity impact and probe of spacetime curvature
[Abridged] Context. The Galactic Center supermassive black hole is well known to exhibit transient peaks of flux density on a daily basis across the spectrum. Recent infrared and millimeter observations have strengthened the case for the association between these flares and circular orbital motion in the vicinity of the event horizon. The strongly polarized synchrotron radiation associated with these events leads to specific observables called QU loops, that is, looping motion in the Stokes QU plane of linear polarization. Aims. We want to deepen the understanding of the QU loops associated with orbiting hot spots. We compute such loops in Minkowski and Schwarzschild spacetimes in order to determine which aspects of the observed patterns are due to special- or general-relativistic phenomena. Results. We show that QU loops in Minkowski spacetime at low or moderate inclination i < 45 deg share all qualitative features of Schwarzschild QU loops: there exist QU loops for all setups considered (including face-on view and vertical magnetic field), there may be one or two QU loops per orbital period for a vertical magnetic field configuration, there are always two QU loops in case of a toroidal magnetic field. We provide analytical formulas in Minkowski spacetime to explain the details of this behavior. Moreover, we analyze the flux variation of the hot spot and show that it is dictated either by the angular dependence of the radiative transfer coefficients, or by relativistic beaming. In the former case, this can lead to extreme flux ratios even at moderate inclination. Finally, we highlight the increasing mirror asymmetry of the Schwarzschild QU track with increasing inclination and show that this behavior is a specific Schwarzschild feature caused by light bending.
F. H. Vincent, M. Wielgus, N. Aimar, T. Paumard, G. Perrin
2023-09-18T18:07:56Z
http://arxiv.org/abs/2309.10053v1
# Polarized signatures of orbiting hot spots: ###### Abstract Context:The Galactic Center supermassive black hole is well known to exhibit transient peaks of flux density on a daily basis across the spectrum. Recent infrared and millimeter observations have strengthened the case for the association between these flares and circular orbital motion in the vicinity of the event horizon. The strongly polarized synchrotron radiation associated with these events leads to specific observables called QU loops, that is, looping motion in the Stokes QU plane of linear polarization. Aims:We want to deepen the understanding of the QU loops associated with orbiting hot spots. We compute such loops in Minkowski and Schwarzschild spacetimes in order to determine which aspects of the observed patterns are due to special- or general-relativistic phenomena. Methods:We consider a parcel of energized plasma in circular motion in Minkowski spacetime, and in Keplerian orbit in the Schwarzschild spacetime. We compute the polarized radiative transfer associated with this orbiting hot spot and derive the evolution of the flux density, astrometry, and Stokes Q and U parameters. Results:We show that QU loops in Minkowski spacetime at low or moderate inclination \(i\lesssim 45^{\circ}\) share all qualitative features of Schwarzschild QU loops: there exist QU loops for all setups considered (including face-on view and vertical magnetic field), there may be one or two QU loops per orbital period for a vertical magnetic field configuration, there are always two QU loops in case of a toroidal magnetic field. We provide analytical formulas in Minkowski spacetime to explain the details of this behavior. Moreover, we analyze the flux variation of the hot spot and show that it is dictated either by the angular dependence of the radiative transfer coefficients, or by relativistic beaming. In the former case, this can lead to extreme flux ratios even at moderate inclination. Finally, we highlight the increasing mirror asymmetry of the Schwarzschild QU track with increasing inclination and show that this behavior is a specific Schwarzschild feature caused by light bending. Conclusions:Although special-relativistic effects have not been extensively discussed in this context, they are a crucial part in generating the observed QU loops. However, general-relativistic light bending leads to a specific observable feature encoded in the asymmetry of the observed loops. This might allow quantifying the spacetime curvature. ## 1 Introduction The emission from the close surroundings of the Galactic supermassive black hole Sagittarius A* (Sgr A*) is variable at all wavelengths, with a degree of variability that depends strongly on frequency (Genzel et al. 2010). The source exhibits local maxima of variable emission, from radio frequencies to X rays, called radiation flares (see e.g. Genzel et al. 2010; Morris 2023). The physical nature of these events remains unclear after 20 years of study since the first detected events (Baganoff et al. 2001; Genzel et al. 2003). Many models have been proposed, and we refer to Vincent et al. (2014) for a review. Among them, the class of hot-spot models (Broderick & Loeb 2006; Hamaus et al. 2009, and references therein) is of particular interest. 
The underlying assumption of this model is that Sgr A* flares are caused by the radiation emitted by transient, localized at least initially), compact (few gravitational radii), orbiting (in the disk plane or along the jet funnel) parcels of energized plasma in the inner region of the accretion/ejection flow surrounding the black hole. This model is of particular relevance given the detections of orbital motions consistent with circular trajectories, very close to the event horizon, associated with infrared and X-ray flares (GRAVITY Collaboration et al. 2018; Wielgus et al. 2022b; Gravity Collaboration et al. 2023). Such hot spots might be the end product of the acceleration of particles in the inner regions of the flow by magnetic reconnection (see e.g. Ripperda et al. 2022; El Mellah et al. 2023). It has recently been shown that hot spots generated by magnetic reconnection may account for photometric and astrometric infrared observations (Aimar et al. 2023a). The polarization properties of infrared and millimeter flares have been studied since the early 2000s. Eckart et al. (2006) observed swings of the electric vector position angle (EVPA) of up to 40\({}^{\circ}\) in 10 min during an infrared flare observed by the NAOS/CONICA adaptive optics instrument, while Trippe et al. (2007) measured a swing reaching 70\({}^{\circ}\) within 15 min, with the same instrument. The authors note that these swings are consistent with a hot-spot model with an orbital radius of the order of the innermost stable circular orbit (ISCO) associated with the black hole, which corresponds to a Keplerian period of the or der of 30 min for a non-spinning black hole of \(\sim 4\times 10^{6}\,M_{\odot}\). The change of polarization angle has been linked to the variation of the relative orientation between the direction of emission reaching the distant observer and that of the ambient magnetic field, as the spot orbits around the black hole. The hot-spot model was further discussed in the context of these infrared polarized flare observations by Meyer et al. (2006). Compatible infrared flare observations and similar conclusions were obtained by Nishiyama et al. (2009). At radio frequencies, Marrone et al. (2006) reported a 50\({}^{\circ}\) EVPA swing over 2.5 hours during a millimeter flare observed by the Submillimeter Array, and noticed a roughly periodic evolution of the angle with time. When representing the evolution in the QU plane corresponding to the Stokes Q and U linear polarization parameters, the authors obtained a loop pattern exhibiting two full orbits in the QU plane - the first so-called QU loop reported in the literature. The authors argued that this signature might be associated with a hot spot orbiting at a radius larger than the ISCO of the black hole. Two different instruments observed QU loops recently: the Very Large Telescope Interferometer GRAVITY beam combiner, and the Atacama Large Millimeter Array (ALMA). First, GRAVITY Collaboration et al. (2018) (see also Gravity Collaboration et al. 2023) observed a series of polarized infrared flares. The QU pattern traces a single loop during the observed astrometric orbital period. The authors show that this pattern is consistent with a hot spot orbiting at a radius close to the ISCO of a non-spinning black hole. Second, Wielgus et al. (2022b) observed a QU loop with ALMA at millimeter wavelengths, following an X-ray flare reported by Chandra (Wielgus et al. 2022a). 
The authors show that the data are consistent with a hot spot orbiting at a radius about two times the ISCO of a non-spinning black hole, with the QU loop period interpreted as the Keplerian period of the hot spot. The hot spot interpretation is not unique though: the EVPA swings have been interpreted by Yusef-Zadeh et al. (2007) not in terms of an orbiting hot spot, but rather within the framework of an ejected expanding blob of plasma. This alternative model has recently been discussed by Michail et al. (2023). In this article, we investigate the polarized synchrotron radiation emitted by orbiting hot spots. In this context, the orientation of the magnetic field has a crucial impact on the observables. Indeed, the electric vector (the orientation on sky of which is encoded in the QU loop) is oriented along the cross product \(\mathbf{K}\times\mathbf{B}\), where \(\mathbf{K}\) is the photon's direction of emission and \(\mathbf{B}\) is the magnetic field vector, both expressed in the comoving frame of the emitter. There is a growing body of evidence that the magnetic field in the close surroundings of Sgr A* is rather ordered, dynamically important (i.e. the plasma dynamics is sensitive to the magnetic field), with a dominant poloidal component (i.e. in a plane orthogonal to the equatorial plane of the black hole). The hot spot modeling of infrared data performed by GRAVITY Collaboration et al. (2018, 2020c) favors a strong poloidal field. The QU loop observed by Wielgus et al. (2022b) favors a vertical field, while the persistence of the rotation measure, the sign of the circular polarization, and the magnitude of the linear polarization fraction all favor a structured magnetic field of persistent topology (see also Wielgus et al. 2023). The analysis of Michail et al. (2023) favors a magnetic field orientation aligned with the angular momentum vector of the accretion flow, that is, vertical for an accretion flow centered on the equatorial plane of the black hole. The analysis of the spatially resolved event horizon scale images of Sgr A* obtained by the Event Horizon Telescope (EHT; EHTC et al. 2022a,b) further supports the magnetically arrested disk (MAD) accretion flow model interpretation (EHTC et al. 2022c), characterized by dynamically important magnetic fields with a strong vertical component near the event horizon (Narayan et al. 2003). Furthermore, ordered magnetic fields in the compact region around Sgr A* were revealed by pre-EHT very long baseline interferometry polarimetric observations (Johnson et al. 2015). We aim to study the properties of QU loops associated with hot spots around black holes. Such investigations are the subject of recent intense theoretical efforts (GRAVITY Collaboration et al. 2020c; Gelles et al. 2021; Narayan et al. 2021; Vos et al. 2022; Gravity Collaboration et al. 2023; Najafi-Ziyazi et al. 2023). Here we intend to contribute to this emerging topic by mainly focusing on the impact of special relativity on the observables. We develop a thorough analysis of QU loops in Minkowski spacetime and show that these flat spacetime loops share the main features of their general-relativistic counterparts, demonstrating that QU loops are strongly affected by the relativistic velocities of their emitter, and the associated special-relativistic light aberration. We also develop an analytical understanding of the properties of these QU loops. 
We then compute QU loops in the Schwarzschild spacetime, comparing them to their Minkowski counterparts and to the relevant literature. The main aim of this article is to elucidate which aspects of these observable patterns are due to special-relativistic, and which are due to general-relativistic effects. The paper is organized as follows. Section 2 describes our hot-spot model. Section 3 introduces in detail the topic of QU loops and all the necessary concepts. Section 4 is the main section of the article and is dedicated to the properties of Minkowski QU loops. Section 5 describes Schwarzschild QU loops, and section 7 gives our conclusions and perspectives.

## 2 Modeling hot spot observables

In this section we present our model of a rotating hot spot around a compact object. We discuss the spacetime geometry, the shape, physical characteristics and emission of the hot spot, and the radiative transfer integration by means of relativistic ray tracing. We consider physically motivated values of the model parameters. For a more extensive discussion of the impact of the individual parameters on the QU loop patterns see Vos et al. (2022).

### Spacetime geometry

The main aim of this article is to discuss the respective influence of special- and general-relativistic effects on the polarized signatures associated with orbiting hot spots. To that end, we perform calculations in Minkowski and Schwarzschild spacetimes. We consider that the spacetime is described in spherical coordinates \((t,r,\theta,\varphi)\). We assume that the spacetime is static and spherically symmetric, meaning that we will not discuss any impact of the compact object's spin in this article. The metric line element thus reads \[\mathrm{d}s^{2} =g_{tt}\,\mathrm{d}t^{2}+g_{rr}\,\mathrm{d}r^{2}+g_{\theta\theta}\,\mathrm{d}\theta^{2}+g_{\varphi\varphi}\,\mathrm{d}\varphi^{2} \tag{1}\] \[=g_{tt}\,\mathrm{d}t^{2}+g_{rr}\,\mathrm{d}r^{2}+r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\varphi^{2}\right)\] where \(g_{\mu\nu}=\boldsymbol{\partial}_{\mu}\cdot\boldsymbol{\partial}_{\nu}\) are the metric coefficients, which can be expressed as the dot products between the natural basis vectors associated with the spherical coordinates, \(\boldsymbol{\partial}_{\mu}\). The two static spacetimes (Minkowski, Schwarzschild) that we will consider are thus fully defined by their metric coefficients \(g_{tt}\) and \(g_{rr}\).

#### 2.1.1 Minkowski spacetime

As a flat manifold, Minkowski spacetime has little a priori relevance for interpreting data originating from the close environment of a supermassive black hole. However, by studying QU loops in this context we aim at revealing key observable features and, in particular, at being able to tell which aspects are specific to spacetime curvature, and which aspects are already present in a flat spacetime. The Minkowski metric is defined with \[g_{tt}=-1,\quad g_{rr}=1. \tag{2}\]

#### 2.1.2 Schwarzschild spacetime

The Schwarzschild metric in Schwarzschild coordinates is \[g_{tt}=-\left(1-\frac{r_{S}}{r}\right),\quad g_{rr}=\left(1-\frac{r_{S}}{r}\right)^{-1}, \tag{3}\] where \(r_{S}=2M\) is the location of the Schwarzschild event horizon. The Jebsen-Birkhoff1 theorem (Jebsen 1921; Birkhoff & Langer 1923) ensures that the Schwarzschild geometry uniquely describes the spacetime outside of any spherically symmetric object in vacuum.
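For concreteness, the metric coefficients of Eqs. (2) and (3) can be collected in a few lines; the following minimal Python/NumPy sketch (not part of the paper's numerical setup; function names are illustrative) uses geometrized units with G = c = 1.

```python
import numpy as np

def metric_coefficients(r, spacetime="Schwarzschild", M=1.0):
    """g_tt and g_rr of Eqs. (2)-(3); geometrized units (G = c = 1), r_S = 2M."""
    r = np.asarray(r, dtype=float)
    if spacetime == "Minkowski":
        return -np.ones_like(r), np.ones_like(r)        # Eq. (2)
    f = 1.0 - 2.0 * M / r                                # 1 - r_S / r
    return -f, 1.0 / f                                   # Eq. (3)

# The angular parts are identical in both cases: g_thth = r^2, g_phph = r^2 sin^2(theta).
r = np.array([4.0, 8.0, 50.0])                           # radii in units of M
for name in ("Minkowski", "Schwarzschild"):
    g_tt, g_rr = metric_coefficients(r, name)
    print(f"{name:14s} g_tt = {g_tt}  g_rr = {g_rr}")
```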
We note that we use throughout this article a system of units where the gravitational constant \(G\) and light speed \(c\) have unit values, so that the gravitational radius, \(GM/c^{2}\), is simply equal to \(M\). Footnote 1: The famous Birkhoff’s theorem of general relativity, published by Birkhoff in 1923, was first published two years before by the Norwegian physicist Jebsen (see Voje Johansen & Ravndal 2005, for an historical account). ### Hot spot geometry and physical quantities Let us consider two spatial positions \(P_{0}(x_{0},y_{0},z_{0})\) and \(P(x,y,z)\), where the Cartesian coordinates are related to the spherical ones by means of standard Euclidean formulas. We define the coordinate distance between \(P_{0}\) and \(P\) as the Euclidean distance defined by \(d^{2}=(x-x_{0})^{2}+(y-y_{0})^{2}+(z-z_{0})^{2}\). The center of our hot spot is located at a constant radius \(r_{0}\), in the equatorial plane \(\theta_{0}=\pi/2\), with a varying azimuthal angle \(\varphi_{0}\). The hot spot is described by specifying the profiles of the electron number density \(n_{\rm e}\), temperature of the electrons \(T_{\rm e}\), as well as the magnitude and direction of the ambient magnetic field \(B\). All these quantities are measured in the rest frame of the orbiting hot spot, that we will hereafter refer to as the emitter's frame. We assume the following profiles for the physical quantities \[n_{\rm e}=n_{\rm e0}\exp\left(-\frac{d^{2}}{2\sigma_{r}^{2}} \right), \tag{4}\] \[T_{\rm e}=T_{\rm e0}\exp\left(-\frac{d^{2}}{2\sigma_{r}^{2}} \right),\] \[\frac{B^{2}}{4\pi}=\eta\,m_{p}c^{2}\,n_{\rm e},\] where \(d^{2}\) is the squared coordinate distance (as defined at the beginning of this section) to the center of the hot spot (\(x_{0}=r_{0}\cos\varphi_{0},y_{0}=r_{0}\sin\varphi_{0},z_{0}=0\)), \(n_{\rm e0}\) and \(T_{\rm e0}\) are the density and temperature at the center of the hot spot, \(\sigma_{r}\) is the Gaussian standard deviation, which is related to the full width at half maximum, that is, to the effective diameter \(D_{\rm hs}\) of the hot spot, by \(D_{\rm hs}\approx 2.35\,\sigma_{r}\), \(\eta\) is the magnetization parameter, \(m_{p}\) is the proton rest mass, and we assume a constant ratio between the particle rest-mass and magnetic energy densities. Furthermore, we assume that the hot spot is described by a spatial Gaussian profile around its center, while its properties remain constant in time. The values assumed for the parameters introduced so far are listed in Table 1. We note that the magnetic field maximum magnitude (\(B_{0}=140\) G, a consequence of the simple prescription given by the third line of Eq. 4) is rather high as compared to the typical value that can be derived from the synchrotron cooling time (see e.g. Aimar et al. 2023a), but the precise value anyway does not impact the results of this article. We also note that we fix the magnetization to \(\eta=1\) corresponding to a strongly magnetized flow, in agreement with hints that Sgr A* is likely a magnetically arrested flow (e.g. GRAVITY Collaboration et al. 2018; EHTC et al. 2022c; Wielgus et al. 2022b). The central density and temperature are chosen to ensure a near infrared maximum derreddened flux (that, is corrected from the strong extinction towards the Galactic center) of \(\approx 10\) mJy at low inclination for a vertical magnetic field. This value corresponds to a rather bright infrared flare, see the percentiles of Sgr A* dereddened flux distribution provided in Table 1 of GRAVITY Collaboration et al. 
(2020a), and can be compared first to the dereddened flux density of S2 that reaches \(\approx 16\) mJy (GRAVITY Collaboration et al. 2020a), and second to the brightest infrared flare ever observed, which reached \(\approx 60\) mJy (Do et al. 2019). We note that other configurations, with different magnetic field geometry, can lead to much higher fluxes that are not in agreement with observations. Nonetheless, we keep the central density and temperature fixed in order to ease the interpretation of the impact of the magnetic field geometry on the observables. \begin{table} \begin{tabular}{c c c} \hline Symbol & Value & Property \\ \hline \(M\) & \(4.3\times 10^{6}\) M\({}_{\odot}\) & compact object mass \\ \(D\) & \(8.28\) kpc & compact object distance \\ \(a\) & \(0\) & BH spin parameter \\ \(r_{0}\) & \(8\,r_{g}\) & hot spot orbital radius \\ \(\sigma_{r}\) & \(r_{g}\) & hot spot Gaussian extension \\ \(n_{\rm e0}\) & \(2\times 10^{6}\) cm\({}^{-3}\) & max number density of electrons \\ \(T_{\rm e0}\) & \(10^{11}\) K & max electron temperature \\ \(B_{0}\) & \(140\) G & max magnetic field \\ \(\eta\) & \(1\) & magnetization \\ \(\kappa\) & \(4\) & index of \(\kappa\) electron distribution \\ \(i\) & \([90^{\circ}-180^{\circ}]\) & inclination angle \\ \(\lambda_{\rm obs}\) & \(2.2\,\mu\)m & observing wavelength \\ \(f\) & \(200\,\mu\)as & field of view \\ \(N\times N\) & \(128\times 128\) & image resolution \\ \hline \end{tabular} \end{table} Table 1: Parameters of our model. The mass and distance to Sgr A* are taken from GRAVITY Collaboration et al. (2020b, 2021). The orbital radius is close to that found by GRAVITY Collaboration et al. (2018); Wielgus et al. (2022b). The density and temperature are chosen to ensure a \(2.2\,\mu\)m dereddened flux of the order of \(10\) mJy. The magnetic field is linked to the density through the assumption of Eq. 4. It is still listed here for completeness. We recall that the inclination angle \(i\) corresponds to the Boyer-Lindquist \(\theta\) angle (illustrated in Fig. 3) of the observer. In the text, the complementary angle \(\iota=\pi-i\) is often used.

### Hot spot motion

The hot spot center located at \(r_{0}\) is assumed to follow a circular timelike geodesic, that is, a Keplerian orbit of the spacetime considered. Its 4-velocity thus reads \[\mathbf{u}=u^{t}\left(\boldsymbol{\partial}_{t}+\Omega\,\boldsymbol{\partial}_{\varphi}\right),\quad\Omega=\frac{u^{\varphi}}{u^{t}}. \tag{5}\] The expressions of \(u^{t}\) and \(\Omega\) depend on the spacetime metric. That of \(\Omega\) is well known for the Schwarzschild spacetime expressed in Schwarzschild coordinates, \(\Omega_{\mathrm{Schwarzschild}}=M^{1/2}\,r^{-3/2}\) (e.g. Bardeen et al. 1972). Given that this expression coincides with the Newtonian result, we use the same expression in Minkowski, even though there is no reason for the hot spot to follow orbital motion in a flat spacetime (in the absence of a central massive object). Hence, we only consider Minkowski spacetime to determine what features of the observables are specific to a curved spacetime, and what features are already present in a flat geometry. With an expression for \(\Omega\), it is straightforward to derive that of \(u^{t}\) by using the normalization of the 4-velocity, \(\mathbf{u}\cdot\mathbf{u}=-1\).
We finally obtain \[u^{t} =\sqrt{\frac{r}{r-M}},\quad\Omega=M^{1/2}\,r^{-3/2},\quad\text{(Minkowski)} \tag{6}\] \[u^{t} =\sqrt{\frac{r}{r-3M}},\quad\Omega=M^{1/2}\,r^{-3/2}.\quad\text{(Schwarzschild)}\]

### Magnetic field configuration

We have so far only defined the magnitude of the magnetic field through Eq. 4. We proceed to specify its direction, hence we need to define a unit spacelike vector, normal to the hot spot 4-velocity, given that the magnetic field vector lies in the rest space of the emitter. We will consider only two different configurations: either vertical, or toroidal. These two configurations are inspired by two plausible magnetic configurations that could exist around Sgr A*. Either the environment is weakly magnetized and the magnetic field lines follow the motion of the matter swirling towards the black hole, in which case the magnetic field will be mostly toroidal (this would correspond to a SANE situation, where SANE stands for standard and normal evolution), or the environment is strongly magnetized and the magnetic field does not follow the motion of the matter, in which case it would have a strong vertical component like in MAD states. Thus, we define \[\mathbf{\tilde{B}}=(0,B^{r},B^{\theta},0),\quad\text{(Vertical)} \tag{7}\] \[\mathbf{\tilde{B}}=(B^{t},0,0,B^{\varphi}),\quad\text{(Toroidal)}\] where the upper bar means that the vector is a unit vector, with the constraints that \[\mathbf{\tilde{B}}\cdot\mathbf{\tilde{B}}=1,\quad\mathbf{\tilde{B}}\cdot\mathbf{u}=0. \tag{8}\] The second condition implies that the magnetic field \(\mathbf{\tilde{B}}\) lies in the local rest space of the emitter, that is, the space orthogonal to its 4-velocity. We are thus defining the magnetic field as measured by the emitter. These conditions immediately lead to \[\mathbf{\tilde{B}}=\cos\theta\,\boldsymbol{e}_{r}-\sin\theta\,\boldsymbol{e}_{\theta},\quad\text{(Vertical)} \tag{9}\] \[\mathbf{\tilde{B}}=\frac{1}{\sqrt{-\left(g_{tt}+\Omega^{2}g_{\varphi\varphi}\right)}}\left(\Omega\,\sqrt{-\frac{g_{\varphi\varphi}}{g_{tt}}}\,\boldsymbol{\partial}_{t}+\sqrt{-\frac{g_{tt}}{g_{\varphi\varphi}}}\,\boldsymbol{\partial}_{\varphi}\right),\quad\text{(Toroidal)}\] where \(\Omega\) is the Keplerian rotation velocity defined in Eq. 6, and we use the orthonormal basis associated to the natural coordinate basis \(\boldsymbol{\partial}_{\mu}\) \[\boldsymbol{e}_{t}=\frac{\boldsymbol{\partial}_{t}}{\sqrt{-g_{tt}}},\,\boldsymbol{e}_{r}=\frac{\boldsymbol{\partial}_{r}}{\sqrt{g_{rr}}},\,\boldsymbol{e}_{\theta}=\frac{\boldsymbol{\partial}_{\theta}}{\sqrt{g_{\theta\theta}}},\,\boldsymbol{e}_{\varphi}=\frac{\boldsymbol{\partial}_{\varphi}}{\sqrt{g_{\varphi\varphi}}}. \tag{10}\] This basis coincides with the locally non-rotating frame (Bardeen et al. 1972) of the Schwarzschild spacetime. Note that although the hot spot's center \(r_{0}\) orbits in the equatorial plane, the full hot spot is a 3-dimensional structure in space and is not restricted to the equatorial plane. This is why the magnetic field is defined for all \(\theta\) and not only for \(\theta=\pi/2\).
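The following minimal Python/NumPy sketch (illustrative names, geometrized units with G = c = 1; not part of the paper's pipeline) evaluates the Keplerian quantities of Eq. (6) and checks numerically that the toroidal unit field of Eq. (9) satisfies the constraints of Eq. (8) at the hot-spot radius \(r_{0}=8\,M\) of Table 1.

```python
import numpy as np

M = 1.0  # geometrized units, G = c = 1

def keplerian_circular(r, spacetime="Schwarzschild"):
    """u^t and Omega of Eq. (6) for a circular equatorial Keplerian orbit."""
    Omega = np.sqrt(M) * r ** -1.5
    ut = np.sqrt(r / (r - M)) if spacetime == "Minkowski" else np.sqrt(r / (r - 3.0 * M))
    return ut, Omega

def toroidal_unit_B(r, spacetime="Schwarzschild"):
    """Coordinate components (B^t, B^phi) of the unit toroidal field of Eq. (9),
    evaluated in the equatorial plane (theta = pi/2)."""
    gtt = -1.0 if spacetime == "Minkowski" else -(1.0 - 2.0 * M / r)
    gphph = r ** 2
    _, Omega = keplerian_circular(r, spacetime)
    norm = np.sqrt(-(gtt + Omega ** 2 * gphph))
    return Omega * np.sqrt(-gphph / gtt) / norm, np.sqrt(-gtt / gphph) / norm

# Check Eq. (8) at the hot-spot radius of Table 1, r0 = 8 M (Schwarzschild case).
r0 = 8.0
ut, Omega = keplerian_circular(r0)
Bt, Bphi = toroidal_unit_B(r0)
gtt, gphph = -(1.0 - 2.0 * M / r0), r0 ** 2
print("B.B =", gtt * Bt ** 2 + gphph * Bphi ** 2)           # expect +1
print("B.u =", gtt * Bt * ut + gphph * Bphi * ut * Omega)   # expect  0
print("u^t, Omega =", ut, Omega)
```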
### Radiative transfer

The hot spot is assumed to emit synchrotron radiation, and the emitting electrons are considered to follow a \(\kappa\) distribution, that is, a mix between a thermal core and a power-law tail. This distribution is well adapted to simulate the state of electrons locally accelerated (for instance through magnetic reconnection) that radiate during Sgr A* flares. The \(\kappa\) distribution is thus a more physical assumption, particularly for the infrared emission during a flare, than the thermal spectrum considered by Wielgus et al. (2022b) and Vos et al. (2022). This distribution reads \[n_{\mathrm{e}}(\gamma)=N\,\gamma(\gamma^{2}-1)^{1/2}\left(1+\frac{\gamma-1}{\kappa\theta_{\mathrm{e}}}\right)^{-(\kappa+1)} \tag{11}\] where \(\gamma\) is the Lorentz factor of the electrons, \(N\) is a normalizing coefficient chosen such that the integral of \(n_{\mathrm{e}}(\gamma)\) over all \(\gamma\) is equal to the total number density of the hot spot, and \(\theta_{\mathrm{e}}=kT_{\mathrm{e}}/m_{\mathrm{e}}c^{2}\) is the dimensionless electron temperature, with \(k\) and \(m_{\mathrm{e}}\) being the Boltzmann constant and electron rest mass. We chose a parameter \(\kappa=4\). This translates to an infrared spectral index \(\alpha=0\) where \(\nu F_{\nu}\propto\nu^{\alpha}\), which is reasonable for bright flares (Gillessen et al. 2006). We utilize the emission, absorption, and Faraday rotation/conversion coefficients for \(\kappa\)-synchrotron as derived by Marszewski et al. (2021). These coefficients have rather complicated and lengthy expressions that we do not fully repeat here. However, it will be useful for the forthcoming discussion to indicate that the emission coefficients for the various Stokes parameters are expressed as \[j_{\nu}\propto\frac{n_{\mathrm{e}}e^{2}\nu_{\mathrm{c}}}{c}X_{\kappa}^{-(\kappa-2)/2}\,\sin\theta_{B}\;\left\{\begin{array}{l}\propto\sin^{2}\theta_{B},\\ \propto\nu^{-1},\end{array}\right.\quad\text{for }\kappa=4, \tag{12}\] where \(X_{\kappa}=\nu[\nu_{\mathrm{c}}(\theta_{\mathrm{e}}\kappa)^{2}\sin\theta_{B}]^{-1}\), \(\nu_{c}\) is the cyclotron frequency, and \(\theta_{B}\) is the angle between the magnetic field direction and the direction of emission. The proportionality factor in the above expressions depends on \(\kappa\) and on the particular Stokes parameter that is considered. This expression coincides with the so-called high-frequency emission coefficient reported in Eq. 44 of Marszewski et al. (2021), which applies for our typical conditions. The strong directional dependence of this expression, evident in the \(\sin\theta_{B}\) term, will be crucial for the forthcoming discussion. We note that for \(\kappa=4\), the expression behaves as \(\sin^{2}\theta_{B}\), so that it cancels in the direction of emission along the magnetic field lines, and reaches its maximum in the direction normal to the magnetic field. We also note that the frequency dependence of the emission coefficient follows \(\nu^{-1}\). While the Faraday effects are generally negligible for modeling infrared flares, they become important at millimeter wavelengths, for which significant Faraday rotation is most likely associated with the compact emission region, contributing non-trivially to the observed complex linear polarization (Wielgus et al. 2023).

### Polarized ray tracing

We compute the polarized flux emanating from the orbiting hot spot by using the Gyoto code (Vincent et al. 2011; Aimar et al. 2023b).
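Before specifying the observer setup, the \(\kappa\) distribution of Eq. (11) introduced above is easy to evaluate numerically; the sketch below (Python/NumPy, SI constants, illustrative names; not the radiative-transfer implementation used in the paper) normalizes it for the \(\kappa=4\) and \(T_{\rm e0}=10^{11}\) K values of Table 1.

```python
import numpy as np

def kappa_shape(gamma, theta_e, kappa=4.0):
    """Unnormalized kappa distribution of Eq. (11)."""
    return (gamma * np.sqrt(gamma ** 2 - 1.0)
            * (1.0 + (gamma - 1.0) / (kappa * theta_e)) ** (-(kappa + 1.0)))

# Dimensionless temperature theta_e = k T_e / (m_e c^2) for T_e0 = 1e11 K (Table 1).
k_B, m_e, c = 1.380649e-23, 9.1093837e-31, 2.99792458e8   # SI units
theta_e = k_B * 1.0e11 / (m_e * c ** 2)

gamma = np.linspace(1.0, 1.0e4, 200_000)
shape = kappa_shape(gamma, theta_e)
N = 1.0 / np.trapz(shape, gamma)               # normalization (here to unit density)
mean_gamma = np.trapz(gamma * N * shape, gamma)
print(f"theta_e = {theta_e:.1f}, <gamma> = {mean_gamma:.1f}")
```

For \(\kappa=4\) the high-energy tail falls off as \(\gamma^{-3}\), so both the normalization and the mean Lorentz factor converge; the finite \(\gamma\) grid above is only a numerical convenience.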
We consider an observer located at a distance \(D=8.28\) kpc (GRAVITY Collaboration et al. 2021). The compact object's mass is fixed to \(M=4.3\times 10^{6}\,M_{\odot}\) (GRAVITY Collaboration et al. 2020b). The inclination (corresponding to the spherical coordinate \(\theta\)) is varied in [90\({}^{\circ}\), 180\({}^{\circ}\)], with 90\({}^{\circ}\) corresponding to an edge-on view, and 180\({}^{\circ}\) to a face-on view. This range encompasses the best-fit inclination for Sgr A\({}^{\circ}\) of \(\simeq 160^{\circ}\) derived by GRAVITY Collaboration et al. (2018); Wielgus et al. (2022b). Inclinations higher than 90\({}^{\circ}\) recover a clockwise motion on sky of the hot spot, consistent with observations. Null geodesics are traced backwards from the observer's screen towards the hot spot, and the full polarized radiative transfer is solved. We account for the finite velocity of light (so-called "slow-light" paradigm). The final product of the computation is a set of maps of the specific Stokes parameters (\(I_{\nu},Q_{\nu},U_{\nu}\)), introduced in section 3.1, for the various orbital phases of the hot spot. We discard Stokes V in this article, although it is computed. We always consider a resolution of \(N\times N=128\times 128\) pixels, and a field of view of \(f=200\,\mu\)as. The observing wavelength is set to \(\lambda_{\rm obs}=2.2\,\mu\)m, coinciding with that of the GRAVITY instrument. All parameters discussed in this section are listed in Table 1. ## 3 Polarization signature of hot spots Before turning to the detailed properties of QU loops that will be discussed in the context of Minkowski spacetime in the next section, in this section we introduce all relevant material for the following discussions. We will define the Stokes Q and U parameters, the electric vector position angle, and intuitively introduce the concept of QU loops associated with orbiting hot spots. ### Stokes Q and U parameters, observed EVPA We consider a linearly polarized wave incident on the observer's screen. This is a simplification in the sense that synchrotron radiation is mostly linearly polarized but has non-zero circular polarization. Given that in this article we will never discuss circular polarization, we only introduce here the linearly polarized part of the radiation, encoded in the Stokes Q and U parameters. Note that our ray-tracing calculations consider the full synchrotron radiative transfer, with also non-zero Stokes V. The electric field describing the incident wave on the observer's screen is \[{\bf E}=E\left(\cos\chi_{\rm o}\@vec{e}_{\delta}+\sin\chi_{\rm o}\@vec{e}_{ \rm o}\right) \tag{13}\] where \((\@vec{e}_{\rm o},\@vec{e}_{\rm\delta})\) are the unit vectors in the plane of the screen of the observer, pointing towards the East and North directions respectively, see Fig. 1 for an illustration. The angle \(\chi_{\rm o}\), called the observed electric vector position angle (EVPA) lies East of North from the North direction. The index \(o\) is there to remind that this angle is defined in the observer's frame, hence the name of observed EVPA. We will introduce below an emission EVPA, defined in the emitter's frame. The linear polarization information is encoded in the observed EVPA, but this angle is not directly observable. 
It is useful to introduce the following Stokes parameters \[Q =E_{\delta}^{2}-E_{\rm o}^{2}, \tag{14}\] \[U =E_{d}^{2}-E_{\rm o}^{2},\] where the various \(E_{i}\) represent the coordinate of the electric vector along the corresponding directions illustrated in Fig. 1. These are observable quantities, equal to differences of intensities along specific directions on sky. Equations 13 and 14 immediately lead to \[Q=E^{2}\left(\cos^{2}\!\chi_{\rm o}-\sin^{2}\!\chi_{\rm o}\right)=I\,\cos 2 \chi_{\rm o}, \tag{15}\] where \(I=E^{2}\) is the total intensity, or Stokes I parameter. Expressing the electric vector in the basis \((\@vec{e}_{\rm o},\@vec{e}_{\rm d})\) associated to the directions \((a,d)\) rotated by 45\({}^{\circ}\) with respect to \((\alpha,\delta)\), see Fig. 1, it is straightforward to obtain \[U=I\,\sin 2\chi_{\rm o}, \tag{16}\] so that the observed EVPA is simply obtained by \[\chi_{\rm o}=\frac{1}{2}\,{\rm atan2}\,(Q,U)\,. \tag{17}\] This angle lies in the range \[\chi_{\rm o}\in[-\pi/2,\pi/2], \tag{18}\] and is defined modulo \(\pi\), given that it only encodes the direction of oscillation of the electric field. Figure 1: Electric field, observed EVPA and Stokes Q and U. All quantities are defined in the observer’s frame, as measured by the distant observer. The observed electric field associated to the wave received at the observer’s screen is the black arrow, with a position angle East of North corresponding to the observed Electric Vector Position Angle, or observed EVPA. For a fully linearly polarized wave, there is a bijection (up to a sign ambiguity) between providing the electric vector magnitude and direction on screen, and the pair of Stokes parameters \((Q,U)\). The electric vector magnitude is given by \(\sqrt{Q^{2}+U^{2}}\), while its orientation follows \(\chi_{\rm o}=1/2\,{\rm atan2}(Q,U)\), see Eq. 17. It is easy to check from the definitions of Eqs. 14 that the North-South and East-West directions coincide with positive and negative Stokes Q (and zero Stokes U), respectively, while the diagonals correspond to positive and negative Stokes U (and zero Stokes Q), respectively. ### Emitter's and observer's bases, emission EVPA The natural basis for expressing synchrotron emission in the emitter's frame is the orthogonal triad made of the following three vectors, all defined in the emitter's frame: * the direction of photon emission \(\mathbf{K}\) measured by the emitter, * the magnetic field vector \(\mathbf{B}_{\perp}\) projected orthogonally to \(\mathbf{K}\), * and the emitter's frame polarization vector \(\mathbf{F}\), which reads \[\mathbf{F}=\mathbf{K}\times\mathbf{B}. \tag{19}\] We call these vectors \((\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}})=(\mathbf{F},-\mathbf{B}_{\perp },\mathbf{K})\), and refer to them as the emitter's polarization basis. They are illustrated by the black vectors in Fig. 2. It is in this emitter's basis that the polarized synchrotron radiative transfer coefficient are given. However, the observable Stokes parameters are defined in the observer's polarization basis, \((\mathbf{e_{a}},\mathbf{e_{\delta}})\), corresponding to the unit vectors in the East and North directions on the observer's sky. We thus need to integrate the polarized radiative transfer equation in this observer-related basis, and thus transform from the emitter's basis to the observer's basis. For doing this, we need to parallel-transport the \((\mathbf{e_{a}},\mathbf{e_{\delta}})\) basis from the observer to the emitter along the photon's geodesic. 
The resulting vectors, parallel-transported to the emitter's frame, are illustrated by the green vectors in Fig. 2. The angle \(\chi_{\epsilon}\) between the parallel-transported North direction and the polarization vector \(\mathbf{F}\) allows one to rotate between the synchrotron-adapted emitter's basis and the observer's basis. We call this angle the emission EVPA, hence the index \(\epsilon\) in our notation. This wording is a reminder that this angle is expressed in the emitter's basis, and makes an explicit difference with the observed EVPA, \(\chi_{\rm o}\), introduced above. There is in general no equality between \(\chi_{\epsilon}\) and \(\chi_{\rm o}\), for the simple reason that the EVPA evolves along the geodesic as the radiative transfer equations are integrated in the region containing plasma. However, for our setup consisting of a very compact emission region with nearly homogeneous conditions of motion and magnetic field, the emission and observed EVPA are very nearly equal. The distinction that we introduced between \(\chi_{\epsilon}\) and \(\chi_{\rm o}\) is thus not important for our results (and we will often simply refer to the EVPA, without precision), but we consider that it is still important to make the distinction. The emission EVPA can be easily computed in the emitter's frame, from the projections of the vector \(\mathbf{B}_{\perp}\) on the parallel-transported observer's polarization basis axes: \[\chi_{\epsilon}=\frac{\pi}{2}-\mathrm{atan2}\left(\mathbf{B}_{\perp}\cdot\mathbf{e_{w}},\mathbf{B}_{\perp}\cdot\mathbf{e_{\delta}}\right), \tag{20}\] where \(\mathbf{e_{w}}=-\mathbf{e_{a}}\) is the unit vector in the West direction, parallel transported to the emitter. We note that \(\mathbf{B}_{\perp}\) is not a unit vector in general, contrary to \(\mathbf{e_{w}}\) and \(\mathbf{e_{\delta}}\), but this does not change the result of the atan2 function in Eq. 20. The emission EVPA is a crucial quantity for integrating the polarized radiative transfer. We refer to Aimar et al. (2023b) for details.

### Newtonian QU loops

Let us consider a hot spot orbiting around a black hole, with a toroidal ambient magnetic field, observed face-on by an infinitely distant observer, as illustrated in Fig. 3. Let us for the time being not consider any (special or general) relativistic effect (that is, no lensing, no aberration, no relativistic Doppler or beaming effects). The radiation is emitted in the vertical direction along the vector \(\mathbf{K}\). It is easy to visualize that one complete rotation of the hot spot will lead to a complete rotation of the polarization vector \(\mathbf{F}\) in the plane of the sky, as illustrated in Fig. 3. The bottom-right panel of this figure shows that this leads to a double loop in the QU plane. Hence, at the most basic level, QU loops are a non-relativistic feature, simply a manifestation of the axisymmetric structure of the observed system. If we consider the same setup as described above, but now take a vertical magnetic field, our non-relativistic point of view leads to the conclusion that the polarization vector would be consistently zero (\(\mathbf{K}\) and \(\mathbf{B}\) being parallel) as the hot spot rotates, leading to no QU loop. As we will see in the next section, adding only special relativistic effects (that is, still no light bending) allows one to recover QU loops in all cases, including for a face-on observer with an ambient vertical magnetic field.
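The non-relativistic double loop described above can be reproduced in a few lines; the following Python/NumPy sketch (conventions simplified, illustrative names; not the Gyoto computation used later) implements the face-on, toroidal-field geometry of Fig. 3 with no aberration and counts the windings of the QU track.

```python
import numpy as np

# Face-on, non-relativistic toy model of Sect. 3.3: hot spot on a circular orbit
# in the XY plane, toroidal field along e_phi, observer towards the -Z axis.
phi = np.linspace(0.0, 2.0 * np.pi, 400)                               # orbital phase
K = np.array([0.0, 0.0, -1.0])                                          # emission direction, no aberration
B = np.stack([-np.sin(phi), np.cos(phi), np.zeros_like(phi)], axis=1)   # toroidal unit field e_phi
F = np.cross(K, B)                                                      # polarization vector F = K x B

# Position angle of F in the sky plane (the absolute North/East convention is
# glossed over here; only the phase evolution matters for counting loops).
chi = np.arctan2(F[:, 1], F[:, 0])
Q, U = np.cos(2.0 * chi), np.sin(2.0 * chi)   # (Q, U) traces the track of Fig. 3, bottom right

winding = np.sum(np.diff(np.unwrap(2.0 * chi))) / (2.0 * np.pi)
print(f"QU loops per orbital period (toroidal field, face-on): {winding:+.2f}")
```

Running the same snippet with a vertical field (B along +Z) gives a vanishing F at all phases, which is the no-loop conclusion reached above; recovering loops in that configuration requires the special-relativistic aberration discussed in the next section.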
## 4 QU loops in Minkowski spacetime In this section we derive an analytical understanding of QU loops in Minkowski spacetime, and in particular we clarify in what cases the rotating hot spot generates one or two loops in the QU plane. Using Minkowski spacetime is helpful in order to gain insight in a simplified framework, without accounting for the light bending occurring in a curved spacetime. A non-intuitive conclusion of this section is that all features of QU loops discussed in the literature in the Schwarzschild or Kerr contexts are actually already present in Minkowski. The crucial advantage of the flat geometry is that exact analytical formulas can be derived to explain the QU loops. The next three subsections are devoted to deriving an analytical expression of the evolution of the emission EVPA depending on whether the magnetic field is vertical or toroidal. This analytical model is then compared to numerical simulations, which additionally constitutes a test of our polarized ray-tracing code. ### Direction of emission and aberration Let us consider a hot spot orbiting in Minkowski spacetime. For the time being we do not specify the magnetic field orientation and only focus on the direction of emission in the emitter's frame. Figure 2: Emitter’s and observer’s polarization bases. All vectors discussed here are expressed in the emitter’s frame. The direction of emission is \(\mathbf{K}\), while \(\mathbf{B}_{\perp}\) is the ambient magnetic field projected normal to \(\mathbf{K}\). The emitter’s frame polarization vector reads \(\mathbf{F}=\mathbf{K}\times\mathbf{B}=\mathbf{K}\times\mathbf{B}_{\perp}\). The vectors \((\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}})=(\mathbf{F},-\mathbf{B}_{\perp },\mathbf{K})\) form the emitter’s orthogonal basis, naturally adapted for expressing synchrotron radiative transfer. The polarization basis of the observer \((\mathbf{e_{a}},\mathbf{e_{\delta}})\), corresponding to unit vectors in the East and North directions, has been parallel transported to the emitter’s frame. The vector \(\mathbf{e_{w}}=-\mathbf{e_{a}}\) is along the West direction, such that \((\mathbf{e_{w}},\mathbf{e_{\delta}},\mathbf{K})\) forms the observer’s orthogonal triad. The emission EVPA is the angle \(\chi_{\epsilon}=(\mathbf{e_{a}},\mathbf{F})\) evaluated East of North, lying in between the observer’s and emitter’s bases. It is expressed by Eq. 20. The Minkowski 4-velocity of the emitter (that is, of the hot spot), defined in Eq. 6, reads \[{\bf u}=A\left(\@vec{e}_{t}+r_{0}^{-1/2}\@vec{e}_{\varphi}\right),\quad A=\sqrt{ \frac{r_{0}}{r_{0}-M}}, \tag{21}\] where we replaced the natural basis vectors \(\@vec{\partial}_{\@vec{\mu}}\) by the orthonormal basis vectors, using Eq. 10. Let us consider an observer with an inclination \(90^{\circ}\leq i\leq 180^{\circ}\). We call \(\iota=\pi-i\), which thus lies between 0 and 90\({}^{\circ}\). The observer is assumed to be located at \(\varphi=-\pi/2\), that is, in the \(YZ\) plane (see Fig. 3). The 4-vector tangent to the photon geodesic at emission reads \[{\bf k} ={\bf e}_{\rm t}+\cos\iota\@vec{e}_{\theta}-\sin\iota\,{\bf e}_{ \rm y} \tag{22}\] \[={\bf e}_{\rm t}-\sin\iota\,\sin\varphi\,{\bf e}_{\rm r}+\cos \iota\@vec{e}_{\theta}-\sin\iota\,\cos\varphi\@vec{e}_{\varphi},\] where \({\bf e}_{\rm y}=\sin\varphi\,{\bf e}_{\rm r}+\cos\varphi\,{\bf e}_{\varphi}\) is the unit vector along the \(Y\) axis illustrated in Fig. 3. The vector \({\bf k}\) is clearly a null vector of the Minkowski spacetime. 
In the particular case of an exactly face-on view, we have \[{\bf k}={\bf e}_{\rm t}+\@vec{e}_{\theta},\quad({\rm face}-{\rm on}) \tag{23}\] such that the spatial component of the 4-vector points towards the negative Z axis, that is, towards the face-on observer. Our final goal is to compute the emission EVPA, so we do not need this null 4-vector, but rather its spacelike projection orthogonal to the 4-velocity of the emitter, that is, in the rest frame of the emitter. This reads \[{\bf K}={\bf k}+({\bf k}\cdot{\bf u})\ {\bf u}. \tag{24}\] This simple relation is very crucial and virtually contains all the results presented below. Even for a face-on observer, the actual Figure 3: QU loop illustration in a non-relativistic context. **Top panel:** The black hole is represented by the black disk. The hot spot (red disk) orbits in the equatorial \(XY\) plane around the black hole (black disk). The \(Z\) axis is normal to the equatorial plane. We consider an observer looking face-on at the black hole, located towards the negative Z axis. The North direction of the observer’s screen is assumed to lie along the \(-Y\) axis. The \(\theta\) and \(\varphi\) angles of the spherical coordinates are represented. The hot spot rotates in the positive \(\varphi\) direction. The green vector \({\bf K}\) represents the direction of emission of the photon (we discard any relativistic effect here), the blue vector \({\bf B}\) is the magnetic field, assumed toroidal. The polarization vector \({\bf F}={\bf K}\times{\bf B}\) is shown in pink. Successive positions of the hot spot are labeled from 1 to 5. **Bottom-left panel:** Rotation of the polarization vector on the sky plane of the observer, with the Stokes directions of Fig. 1 overlaid. **Bottom-right panel:** The associated QU plane and QU loops. The polar coordinates in this plane are (\(\rho=F_{\rm LP}=\sqrt{Q^{2}+U^{2}}\), \(\phi=2\chi_{\rm e}\)), where \(F_{\rm LP}\) is the linearly polarized flux, and \(\chi_{\rm e}\) is the observed EVPA. direction of photon emission does not lie along the vertical direction, contrary to what is illustrated in the non-relativistic Fig. 3. It acquires a toroidal component by means of the projection written above, stemming from the toroidal component of \(\mathbf{u}\). This is simply the standard special relativistic aberration effect. We can express \[\mathbf{k}\cdot\mathbf{u}=-A\left(1+\frac{\sin{\iota}\cos{\varphi}}{\sqrt{r_{0} }}\right)\equiv-\omega, \tag{25}\] where it is easy to check that \(\omega\) coincides with the norm of \(\mathbf{K}\), that is, with the pulsation of the photon as measured by the emitter. ### Vertical magnetic field Let us now restrict the discussion to an ambient vertical magnetic field. We want to derive an analytic expression of the evolution of the emission EVPA with the orbital phase \(\varphi\). For simplicity, we will consider here a pointlike hot spot in the equatorial plane (so \(\theta=\pi/2\) in all this section). The unit vector along the magnetic field direction reads \[\mathbf{\tilde{B}}=-\boldsymbol{e_{\theta}}. \tag{26}\] Our goal is to express the emission EVPA, from Eq. 20. 
Let us start by writing \[\mathbf{e_{w}} =\mathbf{e_{X}}=\cos{\varphi}\,\boldsymbol{e_{r}}-\sin{\varphi} \,\boldsymbol{e_{\varphi}}, \tag{27}\] \[\boldsymbol{e_{\theta}} =-\cos{\iota}\,\mathbf{e_{Y}}+\sin{\iota}\,\mathbf{e_{Z}}\] \[=-\cos{\iota}\sin{\varphi}\,\mathbf{e_{r}}-\sin{\iota}\, \boldsymbol{e_{\theta}}-\cos{\iota}\cos{\varphi}\,\boldsymbol{e_{\varphi}},\] where we note that we are working in the flat Minkowski spacetime, so the observer polarization basis is simply conserved along the geodesic. We now need only the expression of the projection of the magnetic field orthogonal to the direction of emission \[\mathbf{B}_{\perp}=\mathbf{\tilde{B}}-\left(\mathbf{\tilde{B}}\cdot\mathbf{ \tilde{K}}\right)\,\mathbf{\tilde{K}}, \tag{28}\] where \(\mathbf{\tilde{K}}=\mathbf{K}/\omega\) is the unit vector along \(\mathbf{K}\). At this point, we have expressed all the quantities of interest and can write the emission EVPA expression. The details of the computation are not particularly illuminating, so we provide them in Appendix A. The final expression for the emission EVPA reads \[\chi_{e}(\varphi)=\frac{\pi}{2}-\mathrm{atan2}\left(\cos{\iota}\sin{\varphi} \,\frac{A}{\omega\sqrt{r_{0}}},\sin{\iota}+\cos^{2}{\iota}\cos{\varphi}\, \frac{A}{\omega\sqrt{r_{0}}}\right). \tag{29}\] Let us first check what happens for an exactly face-on observer, \(\iota=0\). In this case the expression simplifies considerably to \(\chi_{e}(\varphi)=\pi/2-\varphi\). It is clear from this expression that, as the hot spot rotates with \(\varphi\) varying on a \(2\pi\) interval, so will the emission EVPA. The emission EVPA will thus cover two times its domain of definition. And so will the observed EVPA, because the two quantities are nearly equal for our setup (see above). So this will lead to a double QU loop seen by the distant observer. This is the first non-intuitive conclusion of our analysis: already in Minkowski, a face-on observer considering a hot spot immersed in a vertical magnetic field will detect a double QU loop signal. Note that the crucial difference between the analysis developed in this section and the non-relativistic analysis of section 3.3 is the aberration affecting the apparent direction of light propagation. The vector \(\mathbf{K}\) is not purely vertical, as is represented in Fig. 3, it acquires a component in the equatorial plane when projecting orthogonal to the relativistic 4-velocity \(\mathbf{u}\) of the emitter. Figure 4 illustrates this. We now turn to the discussion of a few important properties of Minkowski QU loops in a vertical magnetic field, before discussing simulation results. #### 4.2.1 Emission EVPA symmetry Our emission EVPA expression has the following property \[\chi_{e}(\varphi)=\pi-\chi_{e}(2\pi-\varphi)=-\chi_{e}(2\pi-\varphi) \tag{30}\] where the second equality comes from the fact that the EVPA is defined modulo \(\pi\). This relation means that the first half of the orbit \(\varphi\in[0,\pi]\) and the second half \(\varphi\in[\pi,2\pi]\) have the same EVPA evolution, up to a sign difference. Equivalently, the EVPA orbital evolution is symmetric with respect to \(\varphi=\pi\), up to a sign. #### 4.2.2 QU loop mirror symmetry EVPA is not the only quantity that shows a symmetry in the orbital evolution of the hot spot. The same goes for the photon's emitted energy \(\omega\). It is indeed obvious from Eq. 25 that \[\omega(2\pi-\varphi)=\omega(\varphi). \tag{31}\] Figure 4: Effect of the spacetime geometry on the emission direction \(\mathbf{K}\). 
A hot spot (red disk) is orbiting around a black hole (black disk). The observer is located face-on towards the negative \(Z\) axis. In a Newtonian spacetime, the direction of emission (i.e. the unit vector \(\mathbf{K}\) along the projection of the null 4-vector \(\mathbf{k}\) normal to the emitter’s 4-velocity) is exactly vertical towards the negative \(Z\) axis (dashed pale blue arrow). This is the case illustrated in the non-relativistic figure 3. Special relativistic light aberration leads to an additional azimuthal component (solid dark blue arrow). Note that the direction of emission in the Schwarzschild spacetime is along the sum of the two solid arrows, given that the special relativistic aberration is of course also included in the Schwarzschild geometry. The various vectors are approximately to scale for a Keplerian hot spot at a few gravitational radii: the aberration and light bending effects are not small corrections to an approximately vertical direction, they lead to strong distorsions of the apparent emission direction (of order tens of percents). The same also goes for the angle \[\theta_{B}=\mathrm{acos}\left(\vec{\mathbf{K}}\cdot\vec{\mathbf{B}}\right) \tag{32}\] between the magnetic field and the photon's direction of emission. Indeed, Appendix A shows that, for a vertical magnetic field, \[\vec{\mathbf{K}}\cdot\vec{\mathbf{B}}=-\frac{\cos\iota}{\omega(\varphi)}, \tag{33}\] where the \(\varphi\) dependence is made explicit. We thus have \[\theta_{B}(2\pi-\varphi)=\theta_{B}(\varphi). \tag{34}\] The emitted flux only depends on the photon's emitted energy as well as on the direction of emission relative to the magnetic field direction. Indeed, for our circular orbit, all other physical quantities (density, magnetic field magnitude, temperature) are constant. As a consequence, Eqs 31 and 34 mean that the emitted linearly polarized flux satisfies \[F_{\mathrm{LP}}(2\pi-\varphi)=F_{\mathrm{LP}}(\varphi). \tag{35}\] Together with Eq. 30, and keeping in mind that for our setup the emission and observed EVPA are nearly equal, this relation leads to the conclusion that the QU track in the Minkowski spacetime is symmetric with respect to the horizontal axis. Indeed, Fig. 3 shows that the linearly polarized flux and the double of the observed EVPA (compare to Eq. 17) are the polar coordinates of the QU track. For the rest of this article we will refer to this symmetry with respect to the horizontal Q axis as the QU loop mirror symmetry. #### 4.2.3 Number of loops The emission EVPA orbital evolution \(\chi_{e}(\varphi)\) is dictated by Eq. 29, and is symmetric with respect to \(\varphi=\pi\) up to a sign. Thus, if the full allowed range of EVPA, \([-\pi/2,\pi/2]\), is covered in the first half of the orbit, then it will be covered again in the second half, leading to two QU loops. This can happen provided that the EVPA visits all possible values in \([-\pi/2,\pi/2]\) during the first orbit, so if its tangent reaches infinity. There will thus be two QU loops provided that \[\frac{\mathbf{B}_{\perp}\cdot\mathbf{e}_{\mathrm{w}}}{\mathbf{B}_{\perp} \cdot\mathbf{e}_{\mathrm{g}}}=\frac{\cos\iota\sin\varphi\,\frac{A}{\omega \sqrt{r_{0}}}}{\sin\iota+\cos^{2}\iota\cos\varphi\,\frac{A}{\omega\sqrt{r_{0}}}} \tag{36}\] varies between \(-\infty\) and \(+\infty\) when \(\varphi\) varies between \(0\) and \(\pi\). 
This quantity will reach infinity provided that the denominator \[\sin\iota+\cos^{2}\iota\cos\varphi\,\frac{A}{\omega\sqrt{r_{0}}}=0, \tag{37}\] considered as an equation for the variable \(\varphi\) with a given inclination \(\iota\), has a root for some value of \(\varphi\). Note that this is not such a trivial equation as it might seem, because \(\omega\) depends on \(\iota\), see Eq. 25. By examining this function numerically it is easy to show that it has a root only when \[\iota<\iota_{0}(r_{0}) \tag{38}\] which is the condition for obtaining two loops in a vertical magnetic field, in Minkowski spacetime. The limiting angle \(\iota_{0}\) depends on the orbital radius \(r_{0}\), the dependence being illustrated in Fig. 6. This is illustrated in the left panel of Fig. 5. We note that the existence of such a limit angle behavior for the existence of one or two loops has already been discussed in the Schwarzschild context by Gelles et al. (2021), see their Fig. 9, but without the analytical treatment that we provide here building on the simplicity of the Minkowski geometry. This is the second conclusion of this section: QU loops of the Minkowski spacetime in a vertical magnetic field share the exact same property as already discussed in the Kerr context by several authors (GRAVITY Collaboration et al., 2020; Gelles et al., 2021; Vos et al., 2022), that is, the existence of either one or two loops depending on the inclination and on the orbital radius. To our knowledge, the relation between this behavior and the special relativistic aberration effect has not been discussed in the literature to date. #### 4.2.4 Simulated QU loops Figure 7 illustrates these findings by showing the results of a polarized ray-tracing calculation in Minkowski spacetime for a hot spot seen under an inclination smaller and bigger than the critical angle \(\iota_{0}\approx 20^{\circ}\) for \(r_{0}=8M\). As predicted, we obtain respectively two and one QU loops in these cases. The EVPA evolution follows very precisely the analytical prediction of Eq. 29 at low inclination, which validates our calculation, and is at the same time a non-trivial consistency test of our polarized radiative transfer. We note that this is a clear demonstration of the near equality between the emitted and observed EVPA, because the colored dots and the red profile of the EVPA panel in Fig. 7 respectively represent an observed and emitted EVPA. It is also interesting to note that, although the analytical and numerical EVPA profiles remain similar, they are clearly more different at higher inclination, \(\iota=30^{\circ}\). This is not due to a limitation of the precision of the numerical integration. Instead, the differences are related to the Roemer effect, due to the finite velocity of light, that is not taken into account in the analytical profile. As a consequence, the numerical data lead the analytical profile in the first half orbit (where the hot spot is further away from the observer), while it lags behind the analytical profile in the second half orbit (where the hot spot is closer to the observer). As expected, the exact same behavior occurs for a toroidal magnetic field. Moreover, the QU track is mirror-symmetric, as predicted above. It is interesting to note that the evolution of the observed flux might seem counter-intuitive. Indeed, the source is approaching the observer on the left part of the trajectory (East side). But the flux evolution (upper-right panel of Fig. 
7) shows that contrary to what relativistic beaming intuition would suggest, the flux is actually at minimum on the approaching side. This is a consequence of the \(\sin\theta_{B}\) dependence of the synchrotron radiative transfer coefficients, see Eq. 12. This angle is close to \(0\,[\pi]\) on the left side of the sky plane (which corresponds to an orbital phase \(\varphi=\pi\)), as demonstrated by the analytical profiles of the left panel of Fig. 8. These profiles represent the orbital phase evolution of \(\theta_{B}=\mathrm{acos}\left(\vec{\mathbf{K}}\cdot\vec{\mathbf{B}}\right)\), the expression of which is known analytically from the formulas provided in Appendix A. We note that around \(\iota=\iota_{0}\approx 20^{\circ}\) (for \(r_{0}=8M\)), the influence of the \(\theta_{B}\) dependence of the emission not only mitigates the relativistic beaming, but inverses the tendency by leading to a light curve that peaks on the receding side. We have checked that if one averages over \(\sin\theta_{B}\) (that is, if one considers an isotropized emission), the usual flux profile, peaking on the approaching side, is recovered. The Doppler effect cannot be responsible for this strong flux depletion at the orbital phase \(\varphi=\pi\), because the emitted frequency is at minimum at the orbital phase \(\varphi=\pi\) (see the top-left panel of Fig. 14), so the emitted Doppler-shifted flux is actually maximised there (see Eq. 12). Figure 8 shows that this behavior is specific to the low inclination. Higher inclination progressively leads to the more intuitive situation dominated by relativistic beaming. This is very natural: a vertical magnetic field seen at low inclination leads to \(\theta_{B}\) angles around \(0[\pi]\), where the \(\sin\theta_{B}\) dependence of the radiative transfer coefficient has a strong impact, while at high inclination, \(\theta_{B}\) varies around \(\pi/2\), where this dependence is weaker. This is the third conclusion of this section: in the Minkowski spacetime and for a vertical magnetic field, the flux variation is driven by the angular dependence of the synchrotron radiative transfer coefficients at low inclination, and by relativistic beaming at high inclination. The linear polarization of our hot spot is always very high, of order 75%, which is twice as high as the typically observed near infrared values (e.g. GRAVITY Collaboration et al. 2018). This is due to the very simple setup that we consider, with a small isolated emitting body. A more realistic scenario (see e.g. App. B of Gravity Collaboration et al. 2023), with a more extended or distorted structure, and the addition of larger-scale quiescent emission, would recover a more realistic level of linear polarization. ### Toroidal magnetic field The exact same computation that we presented in the last section for a vertical magnetic field can be performed for a toroidal magnetic field. Starting from Eq. 9, and specializing to the Minkowski spacetime in the equatorial plane, we obtain \[\mathbf{\tilde{B}}=A\left(\frac{\mathbf{e}_{\mathrm{t}}}{\sqrt{\nu_{0}}}+ \mathbf{e}_{\psi}\right), \tag{39}\] which is a unit spacelike vector normal to \(\mathbf{u}\). 
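These stated properties are straightforward to verify numerically. The short sketch below (our own illustration, with the same orthonormal-basis conventions and geometrized units \(G=c=M=1\) as in the previous blocks, and an illustrative radius \(r_0=8M\)) checks that this toroidal field vector is indeed a unit spacelike vector orthogonal to the 4-velocity \(\mathbf{u}\) of Eq. 21.

```python
import numpy as np

ETA = np.diag([-1.0, 1.0, 1.0, 1.0])
dot = lambda a, b: a @ ETA @ b

r0 = 8.0                                   # illustrative orbital radius, units of M
A = np.sqrt(r0 / (r0 - 1.0))
u = A * np.array([1.0, 0.0, 0.0, r0 ** -0.5])           # Eq. 21

# Toroidal unit magnetic-field direction: B = A (e_t / sqrt(r0) + e_phi).
B_tor = A * np.array([1.0 / np.sqrt(r0), 0.0, 0.0, 1.0])

print("B.B =", dot(B_tor, B_tor))   # expected: +1 (unit spacelike)
print("B.u =", dot(B_tor, u))       # expected:  0 (orthogonal to the 4-velocity)
```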
We refer the reader to Appendix A for the details of the computation and simply give here the final result \[\chi_{\mathrm{e}}(\varphi)=\frac{\pi}{2}-\mathrm{atan}2\left(\sin\varphi\left[ C\;\frac{A\omega}{\sqrt{\nu_{0}}}-1\right],\cos\varphi\cos\iota\left[C\;\frac{A \omega}{\sqrt{\nu_{0}}}-1\right]\right), \tag{40}\] where \(C=1/\omega^{2}\left(1/\sqrt{\nu_{0}}+\sin\iota\cos\varphi\right)\). Similar properties as in the vertical case can be derived in the exact same way as presented in the previous section. In particular, the relation \[\chi_{\mathrm{e}}(\varphi)=-\chi_{\mathrm{e}}(2\pi-\varphi) \tag{41}\] Figure 5: Denominator of the expression on the rhs of Eq. 36 (left panel, vertical field) and 43 (right panel, toroidal field). These expressions are strongly dependent on the orbital radius, which is set to \(r_{0}=8M\) here. The various colors encode various values of \(\iota\) in \([0,30^{\circ}]\) (left panel) or \([0,90^{\circ}]\) (right panel). In the vertical case, the denominator has a root only for \(\iota<\iota_{0}(r_{0})\), and this critical angle verifies \(\iota_{0}\approx 20^{\circ}\) for \(r_{0}=8M\). The condition \(\iota<\iota_{0}(r_{0})\) will lead to two QU loops, while higher inclinations will lead to a single QU loop. In the toroidal case, all values of \(\iota\) lead to the existence of two roots, so there will always be two loops, whatever the inclination. Figure 6: Evolution of the limit inclination angle \(\iota_{0}\) (see Eq. 38), which separates double (\(\iota<\iota_{0}\)) and single (\(\iota>\iota_{0}\)) QU loops in Minkowski spacetime for a vertical magnetic field, with the hot spot orbital radius \(\iota_{0}\). This angle converges towards 0 as \(r_{0}\) increases. Figure 7: Minkowski QU loops in vertical magnetic field. The top six panels are computed for \(\iota=10^{\circ}<t_{0}\), where \(t_{0}\) is defined in Eq. 38 and defines the highest angle for which there should be two QU loops. The bottom six panels are computed for \(\iota=10^{\circ}>t_{0}\). The six panels represent the following quantities. Top-left: the summed images of the hot spot in normalized intensity; we note that the color coding is inverted to improve the readability, darker color means more intense emission. Top-middle: the astrometric track on sky; in this panel and the next ones, color codes for time, from violet to red, clockwise motion on sky. Top-right: the total flux (colored dots), linearly polarized flux (\(F_{\rm LP}=\sqrt{Q^{2}+U^{2}}\), red curve), and linear polarization fraction (LP, in percent, blue curve) evolution; the flux ratio \(F_{\rm ratio}\) (maximum over minimum fluxes) is provided in the bottom-right corner of the panel, together with the linearly polarized flux ratio (written in red), and the flux ratio \(F_{\rm ratio}\)(avg) obtained after averaging over the angular dependence of the radiative transfer coefficients (the \(\sin\theta_{\rm B}\) dependence). We note that the density and temperature of the hot spot have been chosen such that the low-inclination, vertical magnetic field near infrared flux peaks around 10 mJy. Bottom-left: the (Q/L/U) plane. Bottom-middle: the (Q,U) plane, to which we refer when discussing the QU loops. Bottom-right: the observed EVPA evolution; the red profile shows the emission EVPA evolution as predicted by the analytic model derived in Eq. 29. As predicted, the upper case shows two QU loops, while the bottom one shows only one loop. 
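The two closed-form EVPA profiles, Eq. 29 for a vertical field and Eq. 40 for a toroidal field, can be evaluated directly to recover the loop counting discussed above. The sketch below is our own illustration: it assumes geometrized units with \(M=1\), the radius \(r_0=8M\) used in the figures, and that the observed EVPA equals the emitted one, as noted in the text. It unwraps \(\chi_e\) over one orbit to distinguish one from two QU loops, and locates the critical inclination \(\iota_0\) of Eq. 38 by scanning for the disappearance of the sign change of the denominator in Eq. 29.

```python
import numpy as np

def evpa_vertical(phi, iota, r0):
    """Emission EVPA of Eq. 29 (vertical field), Minkowski spacetime, M = 1."""
    A = np.sqrt(r0 / (r0 - 1.0))
    omega = A * (1.0 + np.sin(iota) * np.cos(phi) / np.sqrt(r0))   # Eq. 25
    num = np.cos(iota) * np.sin(phi) * A / (omega * np.sqrt(r0))
    den = np.sin(iota) + np.cos(iota) ** 2 * np.cos(phi) * A / (omega * np.sqrt(r0))
    return 0.5 * np.pi - np.arctan2(num, den), den

def evpa_toroidal(phi, iota, r0):
    """Emission EVPA of Eq. 40 (toroidal field), Minkowski spacetime, M = 1."""
    A = np.sqrt(r0 / (r0 - 1.0))
    omega = A * (1.0 + np.sin(iota) * np.cos(phi) / np.sqrt(r0))
    C = (1.0 / np.sqrt(r0) + np.sin(iota) * np.cos(phi)) / omega ** 2
    common = C * A * omega / np.sqrt(r0) - 1.0
    num = np.sin(phi) * common
    den = np.cos(phi) * np.cos(iota) * common
    return 0.5 * np.pi - np.arctan2(num, den), den

def winding(evpa_func, iota, r0, n=20000):
    """Net change of the unwrapped EVPA over one orbit, in units of pi.

    A value close to 2 corresponds to a double QU loop, a value close to 0
    to a single loop (the EVPA then never covers its full range)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    chi, _ = evpa_func(phi, iota, r0)
    chi = np.unwrap(2.0 * chi) / 2.0          # EVPA is defined modulo pi
    return abs(chi[-1] - chi[0]) / np.pi

r0 = 8.0
for deg in (10.0, 30.0):
    iota = np.radians(deg)
    print(f"iota = {deg:4.1f} deg : vertical winding ~ {winding(evpa_vertical, iota, r0):.2f} pi,"
          f"  toroidal winding ~ {winding(evpa_toroidal, iota, r0):.2f} pi")

# Critical inclination iota_0 (Eq. 38): smallest iota at which the denominator
# of Eq. 29 stops changing sign along the orbit.
phi = np.linspace(0.0, 2.0 * np.pi, 20000)
for deg in np.arange(15.0, 26.0, 1.0):
    _, den = evpa_vertical(phi, np.radians(deg), r0)
    print(f"iota = {deg:4.1f} deg : min(denominator) = {den.min():+.3f}")
# The sign change disappears near iota ~ 20-21 deg for r0 = 8M, consistent
# with the iota_0 ~ 20 deg quoted in the text.
```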
still holds, and the QU loop mirror symmetry as well, which is due to the symmetry of the expression of the emission angle for a toroidal magnetic field, derived in App. A, \[\mathbf{\tilde{K}}\cdot\mathbf{\tilde{B}}=-\frac{A}{\omega(\varphi)}\left(\frac {1}{\sqrt{\nu_{0}}+\sin t\cos\varphi}\right), \tag{42}\] leading to the same property as in Eq. 34. The number of QU loops can be studied following the same reasoning as in the previous section. This leads to studying the range of variation of the simple expression \[\frac{\mathbf{B}_{\perp}\cdot\mathbf{e}_{\mathbf{\pi}}}{\mathbf{B}_{\perp} \cdot\mathbf{e}_{\mathbf{\delta}}}=\frac{\sin\varphi}{\cos\varphi\,\cos t}, \tag{43}\] and in particular the roots of the denominator \[\cos\varphi\cos t=0 \tag{44}\] as \(\varphi\) varies in \([0,2\pi]\). Obviously here, there are always two roots at \(\varphi=\pi/2,3\pi/2\) whatever the inclination (see the illustration in the right panel of Fig. 5), leading to the existence of two QU loops for all inclinations. Figure 9 shows simulations of Minkowski QU loops in a toroidal magnetic field, and confirms the existence of a double loop for the same two values of inclinations that lead to either one or two loops in the vertical field case, in perfect agreement with the results above. The numerical profile of the EVPA exactly matches the analytics at low inclination, and is slightly offset with respect to the analytics at higher inclination, because of the Roemer effect, as discussed for the vertical case. We note that contrary to the vertical case, the flux evolution appears to follow here the standard relativistic beaming intuition, with the flux peaking at the approaching side and the flux ratio increasing with the inclination. This is because the \(\theta_{B}\) dependence is very weak at low inclination \(\iota\lesssim 45^{\circ}\), as demonstrated by the right panel of Fig. 8. At high inclination on the contrary, the \(\theta_{B}\) dependence becomes very strong and would counteract the beaming effect. This dependence is very natural: at low inclination, a toroidal magnetic field leads to \(\theta_{B}\) angles much closer to \(\pi/2\) than to \(0[\pi]\), while at high inclination, the contrary is true. The dependence is reversed compared to the vertical magnetic field case. ## 5 QU loops in Schwarzschild spacetime This section presents QU loops computations considering the same setups as illustrated in the previous section, but taking into account the spacetime curvature associated with the Schwarzschild geometry. We stress that we consider only the primary image and do not include the secondary or higher-order images formed by the extremely lensed photons executing at least half an orbit around the black hole (e.g., Johnson et al. 2020). These higher-order images do not change the main qualitative features of the QU loops but have an impact at a finer level, at the low and moderate inclinations that we consider here, see for instance Gelles et al. (2021); Wielgus et al. (2022b). This simplification allows to reduce the needed imaging resolution. Figures 10 and 11 show these QU loops, in the case of a vertical or toroidal magnetic field, respectively. Interestingly, for most cases, there is no pronounced difference between these Schwarzschild QU loops and their Minkowski counterparts computed in the previous section. The main features of the loops are already present in flat spacetime and while the light bending changes detailed values of the observables, it has little impact on the general picture. 
Naturally, we would expect a more significant impact in case of an orbital radius smaller than the observationally motivated \(r_{0}=8r_{g}\) that we have assumed. Regarding the flux variation, we note the same behavior of the Schwarzschild/vertical cases as discussed for their Minkowski counterparts. The flux goes through its minimum at the hot spot approaching side, due to the \(\theta_{B}\) dependence of the radiative transfer coefficients. We note that the Schwarzschild/vertical case, seen at \(\iota=10^{\circ}\), shows a smaller flux variation than its Minkowski counterpart (factor of \(\approx 2\) ver Figure 8: Minkowski evolution of the angle \(\theta_{B}\), the angle between the magnetic field and the emission direction in the emitter’s frame, for \(r_{0}=8M\). The various colors encode various inclinations \(\iota\) between \(0\) (face-on, dark blue) and \(90^{\circ}\) (edge-on, light blue), with a \(10^{\circ}\) step. The synchrotron emission is suppressed at \(\theta_{B}=0[\pi]\), so we conclude that the orbital phase \(\varphi=\pi\) (corresponding to the left part of the image, towards the East direction) is strongly suppressed around \(\iota=\iota_{0}\approx 20^{\circ}\). We remind that this angle \(\iota_{0}\) depends on the orbital radius, here \(r_{0}=8M\). sus a factor of \(\approx 4\) peak-to-peak ratio). This is because the value of \(\theta_{B}\) never goes as close to \(\pi\) in the Schwarzschild case as in the Minkowski case. As a consequence, the flux minimum is higher in the Schwarzschild case. On the contrary, for \(\iota=30^{\circ}\), the value of \(\theta_{B}\) in the Schwarzschild case goes through nearly exactly \(\pi\), leading to a flux minimum approaching zero, contrary to the Minkowski case that keeps \(\theta_{B}\) further from \(\pi\). This explains the extreme flux ratio (factor of 50!) for the Schwarzschild/vertical case at \(\iota=30^{\circ}\). The Schwarzschild/toroidal case is also similar to the corresponding Minkowski setup in the sense that the flux variation is dominated by relativistic beaming with the usual flux maximum at the approaching side of the orbit. The flux ratios are similar for Minkowski and for Schwarzschild, showing that the special-relativistic beaming effect is the dominant flux-driving mechanism. Figure 9: Same as Fig. 7 for a toroidal magnetic field. Two QU loops are present for both inclinations, contrary to the vertical case of Fig. 7, in agreement with our analytical derivation. ## 6 Comparing Schwarzschild and Minkowski QU loops Figure 12 shows a comparison of the QU loops computed in the Schwarzschild and Minkowski spacetimes, that were presented in Figs. 7- 11, as well as two higher inclination cases, \(\iota=45^{\circ},80^{\circ}\). This figure again shows that flat-space and curved-space QU loops are very similar for most cases. However, there is one important property that we demonstrated in the Minkowski case (see Section 4.2.2), the QU loop mirror symme Figure 10: Same as Fig. 7 but in the Schwarzschild spacetime, for a vertical magnetic field. It might seem surprising that there is a small kick on the astrometric path towards the South-East. This is due to the dependence of the radiative transfer coefficients on \(\sin\theta_{B}\), where \(\theta_{B}\) is the angle between the direction of the magnetic field and the direction of emission, see Marszewski et al. (2021). The upper-left panel clearly shows a flux depletion towards the South-East, due to this effect. 
At this orbital phase, the direction of emission in the emitter’s frame, \(\mathbf{K}\), becomes vertical and parallel to the magnetic field. Due to the combination of special-relativistic aberration and general-relativistic lensing effects, the direction of \(\mathbf{K}\) varies with orbital phase. The QU loops of this figure should be compared to that of Fig. 7: the similarity is striking. try, which is lost in Schwarzschild as inclination increases. This is a direct consequence of light bending. We note that the QU loop fitted to the high-sensitivity ALMA observations appears strongly asymmetric (Wielgus et al. 2022b). Let us consider a hot spot in Schwarzschild spacetime and the wavevector connecting this hot spot to the distant observer. The direction of this wavevector differs from the Minkowski case due to the existence of light bending. Let us write \[\mathbf{k}^{\mathbf{S}}\approx\mathbf{k}^{\mathbf{M}}+\mathbf{\delta k}^{ \mathrm{lensing}} \tag{45}\] where \(\mathbf{k}^{\mathbf{S}}\) is the Schwarzschild wavevector, \(\mathbf{k}^{\mathbf{M}}\) is the Minkowski wavevector, and \(\mathbf{\delta k}^{\mathrm{lensing}}\) is the shift due to light bending. We note that this equation is not rigorous in the sense that we compare vectors that belong to tangent spaces to different manifolds, but it is still useful to get an intuition of the effect of light bending. The situation is illustrated in Fig. 13, for face-on and edge-on, and edge-on, respectively. Figure 11: Same as Fig. 7 but in the Schwarzschild spacetime, for a toroidal magnetic field. The QU loops of this figure should be compared to that of Fig. 9: the similarity is striking. on inclinations. The lensing shift vector is a radial vector constant with orbital phase at zero inclination. This means that light bending does not break the QU loop mirror symmetry at zero inclination. Indeed, there are only three quantities that impact the Stokes parameters, namely * the photon's energy in the emitter's frame, \(\omega=-\mathbf{k}\cdot\mathbf{u}\), * the cosine of the direction of emission in the emitter's frame, \(\cos\theta_{\mathbf{B}}=\mathbf{k}\cdot\mathbf{B}/\omega\), 2 Footnote 2: It is clear from Eq. 24 that \(\mathbf{K}\cdot\mathbf{B}=\mathbf{k}\cdot\mathbf{B}\). * the EVPA. These quantities are independent of orbital phase at zero inclination, because of the constancy of the lensing shift vector with orbital phase, illustrated in the left panel of Fig. 13. However, at edge-on inclination, the situation is completely changed and the lensing shift vector becomes very dependent on the orbital phase (see the right panel of Fig. 13). This will lead to a strong dependence with orbital phase of the three quantities discussed above, and to the breaking of the QU loop mirror symmetry. This is in perfect agreement with the results of Fig. 12 which shows that the loop mirror symmetry still holds at low inclination and becomes less and less conserved with increasing inclination. One point remains to be discussed, which is why the QU loop mirror symmetry is broken much quicker with increasing inclination for a vertical magnetic field (in this case, the symmetry is lost already at \(t\approx 30^{\circ}\)) rather than for a toroidal field (in this case, the symmetry approximately holds until \(t\approx 80^{\circ}\)). This is related to the orbital phase evolution of the three quantities listed above. 
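These three quantities are simple to tabulate in the flat-spacetime limit, where the closed-form expressions of Section 4 apply. The sketch below (illustrative only; vertical-field case, geometrized units with \(M=1\), \(r_0=8M\)) evaluates \(\omega(\varphi)\), \(\cos\theta_B(\varphi)=-\cos\iota/\omega(\varphi)\) from Eq. 33 and \(\chi_e(\varphi)\) from Eq. 29, and confirms numerically the symmetries of Eqs. 30, 31 and 34 that underlie the Minkowski QU-loop mirror symmetry.

```python
import numpy as np

def minkowski_quantities(phi, iota, r0=8.0):
    """omega, cos(theta_B) and emission EVPA for a vertical field (Eqs. 25, 33, 29)."""
    A = np.sqrt(r0 / (r0 - 1.0))
    omega = A * (1.0 + np.sin(iota) * np.cos(phi) / np.sqrt(r0))
    cos_thetaB = -np.cos(iota) / omega
    num = np.cos(iota) * np.sin(phi) * A / (omega * np.sqrt(r0))
    den = np.sin(iota) + np.cos(iota) ** 2 * np.cos(phi) * A / (omega * np.sqrt(r0))
    chi = 0.5 * np.pi - np.arctan2(num, den)
    return omega, cos_thetaB, chi

iota = np.radians(30.0)
phi = np.linspace(0.0, np.pi, 1000)                            # first half of the orbit

w1, c1, chi1 = minkowski_quantities(phi, iota)
w2, c2, chi2 = minkowski_quantities(2.0 * np.pi - phi, iota)   # mirrored orbital phase

print("max |omega(phi) - omega(2pi-phi)|     :", np.max(np.abs(w1 - w2)))   # ~0, Eq. 31
print("max |cos theta_B(phi) - mirrored|     :", np.max(np.abs(c1 - c2)))   # ~0, Eq. 34
print("max |chi(phi) + chi(2pi-phi)| mod pi  :",
      np.max(np.abs((chi1 + chi2 + 0.5 * np.pi) % np.pi - 0.5 * np.pi)))    # ~0, Eq. 30
```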
Figure 14 shows the orbital phase evolution of these quantities at \(t=30^{\circ}\) in Minkowski and Schwarzschild, and for a vertical or toroidal field. This figure demonstrates that the EVPA and the emission direction are much more asymmetric for a vertical field than for a toroidal field, for this moderate inclination. In the toroidal case, the evolution of these quantities, although shifted in phase compared to the Minkowski case, remains rather similar to the flat-spacetime setup. We have checked that computing the QU track of a Schwarzschild/vertical setup at \(\epsilon=30^{\circ}\), but imposing by hand some ad-hoc symmetric evolution of the EVPA and of the emission direction, leads to a mirror symmetric QU loop. ## 7 Conclusion This article has two main goals: (i) highlighting the role of special-relativistic aberration in generating the observed QU loops; (ii) elucidating an observable feature directly produced by spacetime curvature. First, we highlight the crucial importance of special-relativistic effects in generating the observable QU loops associated with the polarized synchrotron flares of Sgr A*. We have shown that most features discussed so far in the literature regarding QU loops (existence of the loops, number of loops, dependence with inclination and orbital radius) are already present in the Minkowski spacetime and are thus independent of light bending. The simplicity of Minkowski spacetime is a great asset allowing to develop a complete understanding of these features. Second, we indicate a specific property that is due to light bending. Minkowski QU loops are always mirror symmetric in the sense that the two half orbits lead to the same QU track. The axis of symmetry corresponds to the horizontal Q axis in our configuration with the angular momentum of the hot spot projected onto the observer's screen aligning with the vertical direction. In general the argument pertains to existence of any line of mirror symmetry in the QU plane, following the uncertain orientation of the observed system. On the contrary, and due to light bending, Schwarzschild QU loops are not symmetric in general. Schwarzschild QU loops in a toroidal magnetic field remain approximately (meaning to a better accuracy than current observations could tell) symmetric up to very high inclination (within \(\approx 10^{\circ}\) of edge-on view). Nonetheless, Schwarzschild QU loops in a vertical magnetic field, which is the favored configuration for the likely MAD Sgr A* flow, quickly lose their mirror symmetry with increasing inclination, and are already clearly asymmetric at a moderate inclination of about \(30^{\circ}\). Thus, the asymmetry of the QU loops might constitute a compelling probe enabling quantification of the spacetime curvature in the close environment of Sgr A*. The detailed future studies of the QU loops could also constitute a path to confirming the existence of secondary images around black holes, which is another way to characterize curved spacetimes. It is important to keep in mind the simplicity of our modeling and that astrophysical complexity might obscure the spacetime curvature effect on the asymmetry of the observed loop. A non-axisymmetric profile of the physical quantities (density, magnetic field, temperature) along the hot spot orbit might break the QU loop mirror symmetry even in the absence of curvature. Internal physics of the hot spot (e.g., cooling) may have a similar effect by introducing time-dependence to the emission coefficient. 
Non-circular motion, such as an ejection along a jet sheath, might also impact the conclusion. These possible limitations should be addressed in future works.

Figure 12: Comparison of the QU loops computed in the Schwarzschild (solid blue) and Minkowski (dashed green) spacetimes. The magnetic field is vertical for the top row and toroidal for the bottom row. The inclination increases from left to right and is specified in the top-right corner of each panel. Mind the different scalings of the various panels.

Figure 13: Lensing and asymmetry of Schwarzschild QU loops. The left panel is depicted at zero inclination, the right one is edge-on. The green arrows show the wavevectors \(\mathbf{k}\) in Minkowski spacetime that connect the hot spot to the observer. The blue arrows show the corresponding wavevectors for the Schwarzschild case. They differ from Minkowski due to light bending, which adds a shift to the wavevector, depicted in pink. This shift vector is constant with orbital phase and along the positive radial direction at zero inclination. It varies strongly with orbital phase for an edge-on view, from being zero at the closest point to the observer, to purely vertical at the furthest point ("on the other side of the black hole"). This inclination-dependent behavior of the shift vector has a considerable impact on the Schwarzschild QU loop asymmetry, see text for details.

## Appendix A Analytical expressions in Minkowski spacetime

Let us reiterate the expressions of the emitter's 4-velocity (Eq. 21)
\[\mathbf{u}=A\left(\mathbf{e}_{\rm t}+r_{0}^{-1/2}\,\mathbf{e}_{\varphi}\right),\quad A=\sqrt{\frac{r_{0}}{r_{0}-M}}, \tag{A.1}\]
that of the wavevector (Eq. 22)
\[\mathbf{k}=\mathbf{e}_{\rm t}-\sin\iota\,\sin\varphi\,\mathbf{e}_{\rm r}+\cos\iota\,\mathbf{e}_{\theta}-\sin\iota\,\cos\varphi\,\mathbf{e}_{\varphi}, \tag{A.2}\]
that of the photon's emitted energy (Eq. 25)
\[\omega=-\mathbf{k}\cdot\mathbf{u}=A\left(1+\frac{\sin\iota\cos\varphi}{\sqrt{r_{0}}}\right), \tag{A.3}\]
that of the projection of \(\mathbf{k}\) orthogonal to \(\mathbf{u}\),
\[\mathbf{K}=\mathbf{k}+\left(\mathbf{k}\cdot\mathbf{u}\right)\mathbf{u}=\left(1-\omega A\right)\mathbf{e}_{\rm t}-\sin\iota\,\sin\varphi\,\mathbf{e}_{\rm r}+\cos\iota\,\mathbf{e}_{\theta}-\left(\sin\iota\,\cos\varphi+\omega A\,r_{0}^{-1/2}\right)\mathbf{e}_{\varphi}, \tag{A.4}\]
and that of the observer's basis vectors (Eq.
27)
\[\mathbf{e}_{\rm w}=\cos\varphi\,\mathbf{e}_{\rm r}-\sin\varphi\,\mathbf{e}_{\varphi},\qquad\mathbf{e}_{\delta}=-\cos\iota\,\sin\varphi\,\mathbf{e}_{\rm r}-\sin\iota\,\mathbf{e}_{\theta}-\cos\iota\,\cos\varphi\,\mathbf{e}_{\varphi}. \tag{A.5}\]

### Vertical magnetic field

Considering a unit vertical magnetic field
\[\mathbf{\tilde{B}}=-\mathbf{e}_{\theta}, \tag{A.6}\]
we have
\[\mathbf{\tilde{B}}\cdot\mathbf{K}=-\cos\iota, \tag{A.7}\]
and the projection of \(\mathbf{\tilde{B}}\) orthogonal to the unit vector \(\mathbf{\tilde{K}}=\mathbf{K}/\omega\) along \(\mathbf{K}\) reads
\[\mathbf{B}_{\perp}=\mathbf{\tilde{B}}-\frac{\mathbf{\tilde{B}}\cdot\mathbf{K}}{\omega^{2}}\,\mathbf{K}=\frac{\cos\iota}{\omega^{2}}\left[\left(1-\omega A\right)\mathbf{e}_{\rm t}-\sin\iota\,\sin\varphi\,\mathbf{e}_{\rm r}+\left(\cos\iota-\frac{\omega^{2}}{\cos\iota}\right)\mathbf{e}_{\theta}-\left(\sin\iota\,\cos\varphi+\omega A\,r_{0}^{-1/2}\right)\mathbf{e}_{\varphi}\right]. \tag{A.8}\]
The projections of this vector along the observer's basis vectors then read
\[\mathbf{B}_{\perp}\cdot\mathbf{e}_{\rm w}=\cos\iota\,\sin\varphi\,\frac{A}{\omega\sqrt{r_{0}}},\qquad\mathbf{B}_{\perp}\cdot\mathbf{e}_{\delta}=\sin\iota+\cos^{2}\iota\,\cos\varphi\,\frac{A}{\omega\sqrt{r_{0}}}, \tag{A.9}\]
from which the EVPA expression of Eq. 29 follows. We also have
\[\mathbf{\tilde{K}}\cdot\mathbf{\tilde{B}}=\frac{\mathbf{K}}{\omega}\cdot\mathbf{\tilde{B}}=-\frac{\cos\iota}{\omega}, \tag{A.10}\]
where \(\mathbf{\tilde{K}}\) is the unit vector along \(\mathbf{K}\). We thus find the result of Eq. 33.

### Toroidal magnetic field

Considering now a toroidal magnetic field
\[\mathbf{\tilde{B}}=A\left(\frac{\mathbf{e}_{\rm t}}{\sqrt{r_{0}}}+\mathbf{e}_{\varphi}\right), \tag{A.11}\]
we have
\[\frac{\mathbf{\tilde{B}}\cdot\mathbf{K}}{\omega^{2}}=-\frac{A}{\omega^{2}}\left(\frac{1}{\sqrt{r_{0}}}+\sin\iota\cos\varphi\right)\equiv-CA, \tag{A.12}\]
where we introduce
\[C\equiv\frac{1}{\omega^{2}}\left(\frac{1}{\sqrt{r_{0}}}+\sin\iota\cos\varphi\right). \tag{A.13}\]
So we get
\[\mathbf{B}_{\perp}=\mathbf{\tilde{B}}-\frac{\mathbf{\tilde{B}}\cdot\mathbf{K}}{\omega^{2}}\,\mathbf{K}=A\left[\frac{1}{\sqrt{r_{0}}}+C\left(1-\omega A\right)\right]\mathbf{e}_{\rm t}-AC\,\sin\iota\,\sin\varphi\,\mathbf{e}_{\rm r}+AC\,\cos\iota\,\mathbf{e}_{\theta}+A\left[1-C\left(\sin\iota\,\cos\varphi+\frac{\omega A}{\sqrt{r_{0}}}\right)\right]\mathbf{e}_{\varphi}. \tag{A.14}\]
The projections onto the observer's basis vectors then read
\[\mathbf{B}_{\perp}\cdot\mathbf{e}_{\rm w}=A\sin\varphi\left(C\,\frac{A\omega}{\sqrt{r_{0}}}-1\right),\qquad\mathbf{B}_{\perp}\cdot\mathbf{e}_{\delta}=A\cos\varphi\,\cos\iota\left(C\,\frac{A\omega}{\sqrt{r_{0}}}-1\right), \tag{A.15}\]
from which the EVPA expression of Eq. 40 follows. We also have
\[\mathbf{\tilde{K}}\cdot\mathbf{\tilde{B}}=-C\omega A, \tag{A.16}\]
which is the result of Eq. 42.

## Acknowledgements

We thank Frank Eisenhauer, Jack Livingston, and Diogo Ribeiro for helpful comments on the draft. This research is supported by the European Research Council advanced grant "M2FINDERS - Mapping Magnetic Fields with INterferometry Down to Event hoRizon Scales" (Grant No. 101018682).
2308.00169
Conditional lower bounds on the distribution of central values in families of $L$-functions
We establish a general principle that any lower bound on the non-vanishing of central $L$-values obtained through studying the one-level density of low-lying zeros can be refined to show that most such $L$-values have the typical size conjectured by Keating and Snaith. We illustrate this technique in the case of quadratic twists of a given elliptic curve, and similar results would hold for the many examples studied by Iwaniec, Luo, and Sarnak in their pioneering work on $1$-level densities.
Maksym Radziwiłł, Kannan Soundararajan
2023-07-31T22:02:06Z
http://arxiv.org/abs/2308.00169v1
# Conditional lower bounds on the distribution of central values in families of \(L\)-functions ###### Abstract. We establish a general principle that any lower bound on the non-vanishing of central \(L\)-values obtained through studying the one-level density of low-lying zeros can be refined to show that most such \(L\)-values have the typical size conjectured by Keating and Snaith. We illustrate this technique in the case of quadratic twists of a given elliptic curve, and similar results would hold for the many examples studied by Iwaniec, Luo, and Sarnak in their pioneering work on 1-level densities [5]. The first author was partially supported by DMS-1902063. The second author is partially supported by an NSF grant, and a Simons Investigator award from the Simons Foundation. ## 1. Introduction In this paper we consider the \(L\)-function \(\Gamma(s,E)\) of a bounded domain \(\mathbb{Q}\) with boundary \(\partial\mathbb{Q}\). The \(L\)-function \(\Gamma(s,E)\) is defined by \[\Gamma(s,E)=\sum_{n=1}^{\infty}a(n)\Gamma(s,E).\] The \(L\)-function \(\Gamma(s,E)\) is defined by \[\Gamma(s,E)=\sum_{n=1}^{\infty}a(n)\Gamma(s,E).\] The Keating-Snaith conjectures predict that for \(d\in\mathcal{E}\), the quantity \(\log L(\frac{1}{2},E_{d})\) has an approximately normal distribution with mean \(-\frac{1}{2}\log\log|d|\) and variance \(\log\log|d|\). To state this precisely, let \(\alpha<\beta\) be real numbers, and for any \(X\geq 20\), let us define \[\mathcal{N}(X;\alpha,\beta)=\Big{|}\Big{\{}d\in\mathcal{E},X<|d|\leq 2X:\ \frac{\log L(\frac{1}{2},E_{d})+\frac{1}{2}\log\log|d|}{\sqrt{\log\log|d|}}\in( \alpha,\beta)\Big{\}}\Big{|}. \tag{1}\] Then the Keating-Snaith conjecture states that, for fixed intervals \((\alpha,\beta)\) and as \(X\to\infty\), \[\mathcal{N}(X;\alpha,\beta)=|\{d\in\mathcal{E},X\leq|d|\leq 2X\}|\Big{(}\frac{ 1}{\sqrt{2\pi}}\int_{\alpha}^{\beta}e^{-\frac{x^{2}}{2}}dx+o(1)\Big{)}. \tag{2}\] Here we interpret \(\log L(\frac{1}{2},E_{d})\) to be negative infinity if \(L(\frac{1}{2},E_{d})=0\), and the conjecture implies in particular that \(L(\frac{1}{2},E_{d})\neq 0\) for almost all \(d\in\mathcal{E}\). Towards this conjecture, we established in [7] that \(\mathcal{N}(X;\alpha,\infty)\) is bounded above by the right hand side of the conjectured relation (2). Complementing this, we now establish a conditional lower bound for \(\mathcal{N}(X;\alpha,\beta)\). **Theorem 1**.: _Assume the Generalized Riemann Hypothesis for the family of twisted \(L\)-functions \(L(s,E\times\chi)\) for all Dirichlet characters \(\chi\). Then for fixed intervals \((\alpha,\beta)\) and as \(X\to\infty\) we have_ \[\mathcal{N}(X;\alpha,\beta)\geq|\{d\in\mathcal{E},X\leq|d|\leq 2X\}|\Big{(} \frac{1}{4}\frac{1}{\sqrt{2\pi}}\int_{\alpha}^{\beta}e^{-\frac{x^{2}}{2}}dx+o( 1)\Big{)}.\] Above we have assumed GRH for all character twists of \(L(s,E)\); this is largely for convenience, and would allow us to restrict \(d\) in progressions. With more effort one could relax the assumption to GRH for the family of quadratic twists \(L(s,E_{d})\). Note that the factor \(\frac{1}{4}\) in our theorem matches the proportion of quadratic twists with non-zero \(L\)-value obtained in Heath-Brown's work [3]. While we have described results for the family of quadratic twists of an elliptic curve, the method is very general and applies to many situations where 1-level densities of low lying zeros in families have been analyzed and yield a positive proportion of non-vanishing for the central values. 
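For a sense of the sizes involved, the following small numerical sketch (purely illustrative, not part of the proof) assumes the conjectured limiting Gaussian law, samples \(\log L(\frac{1}{2},E_{d})\) from a normal distribution with mean \(-\frac{1}{2}\log\log|d|\) and variance \(\log\log|d|\), estimates the proportion of twists falling in a window \((\alpha,\beta)\), and compares it with the Gaussian integral in (2) and with the factor \(\frac{1}{4}\) of Theorem 1. The choices of \(X\), \((\alpha,\beta)\) and the sample size are arbitrary.

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(0)

X = 1e12                      # illustrative size of the discriminants |d| ~ X
LL = log(log(X))              # log log |d|, the conjectured mean/variance scale
alpha, beta = -1.0, 1.0       # a fixed window, as in (1)

# Sample the conjectured limiting law for log L(1/2, E_d).
samples = rng.normal(loc=-0.5 * LL, scale=sqrt(LL), size=10**6)
normalized = (samples + 0.5 * LL) / sqrt(LL)
empirical = np.mean((normalized > alpha) & (normalized < beta))

# Gaussian probability of the window, i.e. the limit appearing in (2).
gauss = 0.5 * (erf(beta / sqrt(2.0)) - erf(alpha / sqrt(2.0)))

print("empirical proportion in (alpha, beta)  :", empirical)
print("Gaussian integral (Keating-Snaith)     :", gauss)
print("conditional lower bound from Theorem 1 :", 0.25 * gauss)
```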
The work of Iwaniec, Luo, and Sarnak [5] gives many such examples, and the technique described here refines their non-vanishing corollaries, showing that the non-zero \(L\)-values that are produced have the typical size conjectured by Keating and Snaith. For instance, consider the family of symmetric square \(L\)-functions \(L(s,\mathrm{sym}^{2}f)\) where \(f\) ranges over Hecke eigenforms of weight \(k\) for the full modular group (denote the set of such eigenforms by \(H_{k}\)), with \(k\leq K\) (thus there are about \(K^{2}/48\) such \(L\)-values). Assuming GRH in this family, Iwaniec, Luo, and Sarnak (see Corollary 1.8 of [5]), showed that at least a proportion \(\frac{8}{9}\) of these \(L\)-values are non-zero. We may refine this to say that for any fixed interval \((\alpha,\beta)\) and as \(K\to\infty\) \[\Big{|}\{f\in H_{k},\ k\leq K:\frac{\log L(\frac{1}{2},\mathrm{sym}^{2}f)- \frac{1}{2}\log\log k}{\sqrt{\log\log k}}\in(\alpha,\beta)\}\Big{|}\geq\Big{(} \frac{8}{9}\frac{1}{\sqrt{2\pi}}\int_{\alpha}^{\beta}e^{-x^{2}/2}dx+o(1) \Big{)}\frac{K^{2}}{48}.\] We end the introduction by mentioning the recent work of Bui, Evans, Lester, and Pratt [1] who establish "weighted" (where the weight is a mollified central value) analogues of the Keating-Snaith conjecture. This amounts to a form of conditioning on non-zero value since central values that are zero are assigned a weight equal to zero. The use of such a weighted measure allows [1] to establish a full asymptotic, however as a side effect they have little control over the nature of the weight. **Acknowledgments.** We are grateful to Emmanuel Kowalski for a careful reading of the paper, and helpful comments. The first author was partially supported by DMS-1902063. The second author is partially supported by an NSF grant, and a Simons Investigator award from the Simons Foundation. The paper was completed while KS was a Senior Fellow at the Institute for Theoretical Studies, ETH Zurich, whom he thanks for their excellent working conditions, and warm hospitality. ## 2. Notation and statements of the key propositions We begin by introducing some notation, as in our paper [7], and then describing three key propositions which underlie the proof of the main theorem. Let \(N_{0}\) denote the lcm of \(8\) and \(N\). Let \(\kappa\) be \(\pm 1\), and let \(a\bmod N_{0}\) denote a residue class with \(a\equiv 1\) or \(5\bmod 8\). We assume that \(\kappa\) and \(a\) are such that for any fundamental discriminant \(d\) with sign \(\kappa\) and with \(d\equiv a\bmod N_{0}\), the root number \(\epsilon_{E}(d)=\epsilon_{E}\chi_{d}(-N)\) equals \(1\). Define \[\mathcal{E}(\kappa,a)=\{d\in\mathcal{E}:\ \ \kappa d>0,\ \ d\equiv a\bmod N_{0}\},\] so that \(\mathcal{E}\) is the union of all such sets \(\mathcal{E}(\kappa,a)\). We write below \[-\frac{L^{\prime}}{L}(s,E)=\sum_{n=1}^{\infty}\frac{\Lambda_{E}(n)}{n^{s}},\] where \(|\Lambda_{E}(n)|\leq 2\Lambda(n)\) so that \(\Lambda_{E}(n)=0\) unless \(n=p^{k}\) is a prime power. 
If \(p\nmid N_{0}\), we may write \(a(p)=\alpha_{p}+\overline{\alpha_{p}}\) for a complex number \(\alpha_{p}\) of magnitude \(1\) (unique up to complex conjugation), and then \[\Lambda_{E}(p^{k})=(\alpha_{p}^{k}+\overline{\alpha_{p}}^{k})\log p.\] Note that \[-\frac{L^{\prime}}{L}(s,E_{d})=\sum_{n=1}^{\infty}\frac{\Lambda_{E}(n)}{n^{s} }\chi_{d}(n).\] For fundamental discriminants \(d\in\mathcal{E}\) with \(|d|\leq 3X\), and a parameter \(3\leq x\) define \[\mathcal{P}(d;x)=\sum_{\begin{subarray}{c}p\leq x\\ p\nmid N_{0}\end{subarray}}\frac{a(p)}{\sqrt{p}}\chi_{d}(p). \tag{3}\] Let \(h\) denote a smooth function with compactly supported Fourier transform \[\widehat{h}(\xi)=\int_{-\infty}^{\infty}h(t)e^{-2\pi i\xi t}dt,\] and such that \(|h(x)|\ll(1+x^{2})^{-1}\) for all \(x\in\mathbb{R}\). For concreteness, one could simply consider \(h\) to be the Fejer kernel given by \[h(x)=\Big{(}\frac{\sin(\pi x)}{\pi x}\Big{)}^{2},\qquad\widehat{h}(t)=\max(1-|t|,0). \tag{4}\] Lastly, let \(\Phi\) denote a smooth, non-negative function compactly supported in \([\frac{1}{2},\frac{5}{2}]\) with \(\Phi(x)=1\) for \(x\in[1,2]\), and we put \(\check{\Phi}(s)=\int_{0}^{\infty}\Phi(x)x^{s}dx\). Below all implied constants will be allowed to depend on \(N\), \(h\), and \(\Phi\), which are considered fixed. Our first proposition connects \(\log L(\frac{1}{2},E_{d})\) with the sum over primes \(\mathcal{P}(d;x)\) (for suitable \(x\)) with an error term given in terms of the zeros of \(L(s,E_{d})\). Such formulae have a long history, going back to Selberg, and the work here complements an upper bound version that played a key role in [14]. **Proposition 1**.: _Let \(d\) be a fundamental discriminant in \(\mathcal{E}\), and let \(3\leq x\leq|d|\). Assume GRH for \(L(s,E_{d})\), and suppose that \(L(\frac{1}{2},E_{d})\) is not zero. Let \(\gamma_{d}\) run over the ordinates of the non-trivial zeros of \(L(s,E_{d})\). Then_ \[\log L(\tfrac{1}{2},E_{d})=\mathcal{P}(d;x)-\tfrac{1}{2}\log\log x+O\Big{(} \frac{\log|d|}{\log x}+\sum_{\gamma_{d}}\log\Big{(}1+\frac{1}{(\gamma_{d}\log x )^{2}}\Big{)}\Big{)}.\] To analyze sums over the zeros we shall use the following proposition, whose proof is based on the explicit formula. The ideas behind this proposition are also familiar, and in this setting (and in the case \(\ell=1\) below) may be traced back to the work of Heath-Brown [3]. **Proposition 2**.: _Let \(h\) be a smooth function with \(h(x)\ll(1+x^{2})^{-1}\) and whose Fourier transform is compactly supported in \([-1,1]\). Let \(L\geq 1\) be a real number, and \(\ell\) be a positive integer coprime to \(N_{0}\), and assume that \(e^{L}\ell^{2}\leq X^{2}\). If \(\ell\) is neither a square, nor a prime times a square, then_ \[\sum_{d\in\mathcal{E}(\kappa,a)}\Big{(}\sum_{\gamma_{d}}h\Big{(}\frac{\gamma_ {d}L}{2\pi}\Big{)}\Big{)}\chi_{d}(\ell)\Phi\Big{(}\frac{\kappa d}{X}\Big{)} \ll X^{\frac{1}{2}+\epsilon}\ell^{\frac{1}{2}}e^{\frac{L}{4}}. 
\tag{5}\] _If \(\ell\) is a square then_ \[\sum_{d\in\mathcal{E}(\kappa,a)}\Big{(}\sum_{\gamma_{d}}h\Big{(} \frac{\gamma_{d}L}{2\pi}\Big{)}\Big{)}\chi_{d}(\ell)\Phi\Big{(}\frac{\kappa d }{X}\Big{)} =O(X^{\frac{1}{2}+\epsilon}\ell^{\frac{1}{2}}e^{\frac{L}{4}}) \tag{6}\] _Finally if \(\ell\) is \(q\) times a square, for a prime number \(q\), then_ \[\sum_{d\in\mathcal{E}(\kappa,a)}\Big{(}\sum_{\gamma_{d}}h\Big{(}\frac{\gamma_ {d}L}{2\pi}\Big{)}\Big{)}\chi_{d}(\ell)\Phi\Big{(}\frac{\kappa d}{X}\Big{)} \ll\frac{X}{LN_{0}}\frac{\log q}{\sqrt{q}}\prod_{p|\ell}\Big{(}1+\frac{1}{p} \Big{)}^{-1}+X^{\frac{1}{2}+\epsilon}\ell^{\frac{1}{2}}e^{L/4}. \tag{7}\] Finally, to understand the distribution of \(\mathcal{P}(d;x)\) both when \(d\) is chosen uniformly over discriminants \(d\in\mathcal{E}\), and when \(d\in\mathcal{E}\) is weighted by contributions from low-lying zeros, we shall use the method of moments, drawing upon the following proposition. **Proposition 3**.: _Let \(k\) be any fixed non-negative integer. Let \(X\) be large, and put \(x=X^{1/\log\log\log X}\). Then_ \[\sum_{d\in\mathcal{E}(\kappa,a)}\mathcal{P}(d;x)^{k}\Phi\Big{(}\frac{\kappa d} {X}\Big{)}=\Big{(}\sum_{d\in\mathcal{E}(\kappa,a)}\Phi\Big{(}\frac{\kappa d}{X }\Big{)}\Big{)}(\log\log X)^{\frac{k}{2}}(M_{k}+o(1)), \tag{8}\] _where \(M_{k}\) denotes the \(k\)-th Gaussian moment:_ \[M_{k}=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}x^{k}e^{-\frac{x^{2}}{2}}dx= \begin{cases}\frac{k!}{2^{k/2}(k/2)!}&\text{ if $k$ is even}\\ 0&\text{ if $k$ is odd}.\end{cases}\] _Further, for any parameter \(L\geq 1\) with \(e^{L}\leq X^{2}\) we have,_ \[\sum_{d\in\mathcal{E}}\mathcal{P}(d;x)^{k} \Big{(}\sum_{\gamma_{d}}h\Big{(}\frac{\gamma_{d}L}{2\pi}\Big{)} \Big{)}\Phi\Big{(}\frac{\kappa d}{X}\Big{)}=O(X^{\frac{1}{2}+\epsilon}e^{\frac {L}{4}})\] \[+\frac{X}{N_{0}}\prod_{p\mid N_{0}}\Big{(}1-\frac{1}{p^{2}}\Big{)} \widehat{\Phi}(0)\Big{(}\frac{2\log X}{L}\widehat{h}(0)+\frac{h(0)}{2}+O\Big{(} \frac{1}{L}\Big{)}\Big{)}(M_{k}+o(1))(\log\log X)^{\frac{k}{2}}. \tag{9}\] ## 3. Deducing the Theorem from the main propositions We keep the notations introduced in Section 2. Let \(X\) be large, and put \(x=X^{1/\log\log\log X}\). **Lemma 1**.: _Let \(\alpha<\beta\) be real numbers. Let \(\mathcal{G}_{X}(\alpha,\beta)\) denote the set of discriminants \(d\in\mathcal{E}\) with \(X\leq|d|\leq 2X\) such that_ \[\frac{\mathcal{P}(d;x)}{\sqrt{\log\log X}}\in(\alpha,\beta),\] _and such that there are no zeros \(\rho_{d}=\frac{1}{2}+i\gamma_{d}\) of \(L(s,E_{d})\) with \(|\gamma_{d}|\leq(\log X\log\log X)^{-1}\). Then, for any \(\delta>0\),_ \[|\mathcal{G}_{X}(\alpha,\beta)|\geq\Big{(}\frac{1}{4}-\delta\Big{)}\Big{(} \frac{1}{\sqrt{2\pi}}\int_{\alpha}^{\beta}e^{-t^{2}/2}dt+o(1)\Big{)}|\{d\in \mathcal{E}:X\leq|d|\leq 2X\}.\] Proof.: Take \(\Phi\) to be a smooth approximation to the indicator function of the interval \([1,2]\), and let \(\kappa\) and \(a\bmod N_{0}\) be as in Section 2. The first part of Proposition 3 (namely (8)) together with the method of moments shows that \[\sum_{\begin{subarray}{c}d\in\mathcal{E}(\kappa,a)\\ \mathcal{P}(d;x)/\sqrt{\log\log X}\in(\alpha,\beta)\end{subarray}}\Phi\Big{(} \frac{\kappa d}{X}\Big{)}=\Big{(}\frac{1}{\sqrt{2\pi}}\int_{\alpha}^{\beta}e ^{-t^{2}/2}dt+o(1)\Big{)}\Big{(}\sum_{d\in\mathcal{E}(\kappa,a)}\Phi\Big{(} \frac{\kappa d}{X}\Big{)}\Big{)}. \tag{10}\] Next, take \(h\) to be the Fejer kernel given in (4), and \(L=(2-\delta/2)\log X\). 
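As a quick numerical sanity check of this choice (an illustration only, not needed for the argument), the sketch below verifies the Fourier pair in (4), in particular \(\widehat{h}(0)=1\) and \(h(0)=1\) which enter (9), and the Gaussian moments \(M_{k}\) of Proposition 3 that drive the method of moments; the integration ranges are truncated, so agreement is only to a few decimal places.

```python
import numpy as np
from math import factorial, sqrt, pi

# Fejer kernel h(x) = (sin(pi x)/(pi x))^2; numpy's sinc(x) is sin(pi x)/(pi x).
x, dx = np.linspace(-400.0, 400.0, 2**21, retstep=True)
h = np.sinc(x) ** 2

for xi in (0.0, 0.3, 0.7, 1.0, 1.5):
    # h is even, so its Fourier transform is real; truncating the 1/x^2 tail
    # limits the accuracy to roughly 1e-3.
    hhat = np.sum(h * np.cos(2.0 * np.pi * xi * x)) * dx
    print(f"hhat({xi}) = {hhat:+.4f}   vs  max(1 - |xi|, 0) = {max(1.0 - abs(xi), 0.0):.4f}")

# Gaussian moments M_k = k!/(2^(k/2) (k/2)!) for even k, and 0 for odd k.
t, dt = np.linspace(-12.0, 12.0, 200001, retstep=True)
gauss = np.exp(-t ** 2 / 2.0) / sqrt(2.0 * pi)
for k in range(7):
    numerical = np.sum(t ** k * gauss) * dt
    closed = factorial(k) / (2 ** (k // 2) * factorial(k // 2)) if k % 2 == 0 else 0.0
    print(f"M_{k}: numerical = {numerical:+.4f},  closed form = {closed:.4f}")
```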
Then the second part of Proposition 3 together with the method of moments shows that \[\sum_{\begin{subarray}{c}d\in\mathcal{E}(\kappa,a)\\ \mathcal{P}(d;x)/\sqrt{\log\log X}\in(\alpha,\beta)\end{subarray}} \sum_{\gamma_{d}}h\Big{(}\frac{\gamma_{d}L}{2\pi}\Big{)}\Phi \Big{(}\frac{\kappa d}{X}\Big{)}=\Big{(}\frac{1}{\sqrt{2\pi}}\int_{\alpha}^{ \beta}e^{-t^{2}/2}dt+o(1)\Big{)}\sum_{d\in\mathcal{E}(\kappa,a)}\sum_{\gamma_{ d}}h\Big{(}\frac{\gamma_{d}L}{2\pi}\Big{)}\Phi\Big{(}\frac{\kappa d}{X}\Big{)}\] \[=\Big{(}\frac{1}{\sqrt{2\pi}}\int_{\alpha}^{\beta}e^{-t^{2}/2}dt+ o(1)\Big{)}\Big{(}\frac{1}{1-\delta/4}+\frac{1}{2}+o(1)\Big{)}\sum_{d\in \mathcal{E}(\kappa,a)}\Phi\Big{(}\frac{\kappa d}{X}\Big{)}.\] Note that the weights \(\sum_{\gamma_{d}}h(\gamma_{d}L/(2\pi))\) are always non-negative, and if \(L(s,E_{d})\) has a zero with \(|\gamma_{d}|\leq(\log X\log\log X)^{-1}\) then the weight is \(\geq 2+o(1)\) (since there would be a complex conjugate pair of such zeros, or a double zero at \(\frac{1}{2}\)). Combining this with (10), and summing over all the possibilities for \(\kappa\) and \(a\), we obtain the lemma. **Lemma 2**.: _The number of discriminants \(d\in\mathcal{E}\) with \(X\leq|d|\leq 2X\) such that_ \[\sum_{(\log X\log\log X)^{-1}\leq|\gamma_{d}|}\log\Big{(}1+\frac{1}{(\gamma_{d }\log x)^{2}}\Big{)}\geq(\log\log\log X)^{3}\] _is \(\ll X/\log\log\log X\)._ Proof.: Applying Proposition 2 with \(\ell=1\), \(h\) given as in (4), and \(1\leq L\leq(2-\delta)\log X\), we obtain (after summing over the possibilities for \(\kappa\) and \(a\)) \[\sum_{\begin{subarray}{c}d\in\mathcal{E}\\ X\leq|d|\leq 2X\end{subarray}}\sum_{\gamma_{d}}\Big{(}\frac{\sin(\gamma_{d}L/2)}{ \gamma_{d}L/2}\Big{)}^{2}\ll X\frac{\log X}{L}.\] Integrate both sides of this estimate over \(L\) in the range \(\log x\leq L\leq 2\log x\). Since, for any \(y>0\) and \(t\neq 0\), \[\frac{1}{y}\int_{y}^{2y}\Big{(}\frac{\sin(\pi tu)}{\pi tu}\Big{)}^{2}du\gg\min \Big{(}1,\frac{1}{(ty)^{2}}\Big{)},\] we obtain \[\sum_{\begin{subarray}{c}d\in\mathcal{E}\\ X\leq|d|\leq 2X\end{subarray}}\sum_{\gamma_{d}}\min\Big{(}1,\frac{1}{(\gamma_{d} \log x)^{2}}\Big{)}\ll X\frac{\log X}{\log x}=X\log\log\log X.\] Now if \(|\gamma_{d}|\geq(\log X\log\log X)^{-1}\) then \[\log\Big{(}1+\frac{1}{(\gamma_{d}\log x)^{2}}\Big{)}\ll(\log\log\log X)\min \Big{(}1,\frac{1}{(\gamma_{d}\log x)^{2}}\Big{)},\] and therefore we may conclude that \[\sum_{\begin{subarray}{c}d\in\mathcal{E}\\ X\leq|d|\leq 2X\end{subarray}}\sum_{(\log X\log\log X)^{-1}\leq|\gamma_{d}|}\log \Big{(}1+\frac{1}{(\gamma_{d}\log x)^{2}}\Big{)}\ll X(\log\log\log X)^{2}.\] The lemma follows at once. With these results in place, it is now a simple matter to deduce the main theorem. By Proposition 1 1 we know that for \(d\in\mathcal{E}\) with \(X\leq|d|\leq 2X\) \[\log L(\tfrac{1}{2},E_{d})=\mathcal{P}(d;x)-\tfrac{1}{2}\log\log X+O(\log\log \log X)+O\Big{(}\sum_{\gamma_{d}}\log\Big{(}1+\frac{1}{(\gamma_{d}\log x)^{2}} \Big{)}\Big{)}.\] Lemma 1 tells us that for \(d\in\mathcal{G}_{X}(\alpha,\beta)\) we may arrange for \(\mathcal{P}(d;x)/\sqrt{\log\log X}\) to lie in the interval \((\alpha,\beta)\) and for there to be no zeros with \(|\gamma_{d}|\leq(\log X\log\log X)^{-1}\). Lemma 2 now allows us to discard \(\ll X/\log\log\log X\) elements of \(\mathcal{G}_{X}(\alpha,\beta)\) so as to ensure that the contribution of zeros with \(|\gamma_{d}|\geq(\log X\log\log X)^{-1}\) is \(O((\log\log\log X)^{3})\). 
Thus there are \[\geq\Big{(}\frac{1}{4}-\delta\Big{)}\Big{(}\frac{1}{\sqrt{2\pi}}\int_{\alpha} ^{\beta}e^{-t^{2}/2}dt+o(1)\Big{)}|\{d\in\mathcal{E}:X\leq|d|\leq 2X\},\] fundamental discriminants \(d\in\mathcal{E}\) with \(X\leq|d|\leq 2X\) for which \[\frac{\log L(\tfrac{1}{2},E_{d})+\tfrac{1}{2}\log\log X}{\sqrt{\log\log X}}+O \Big{(}\frac{(\log\log\log X)^{3}}{\sqrt{\log\log X}}\Big{)}\in(\alpha,\beta),\] which completes the proof. ## 4. Proof of Proposition 1 A straight-forward adaptation of Lemma 1 from [14] (itself based on an identity of Selberg) shows that for any \(\sigma\geq\tfrac{1}{2}\) with \(L(\sigma,E_{d})\neq 0\), and any \(x\geq 3\) one has \[-\frac{L^{\prime}}{L}(\sigma,E_{d})=\sum_{n\leq x}\frac{\Lambda_{E}(n)}{n^{ \sigma}}\chi_{d}(n)\frac{\log(x/n)}{\log x}+\frac{1}{\log x}\Big{(}\frac{L^{ \prime}}{L}\Big{)}^{\prime}(\sigma,E_{d})+\frac{1}{\log x}\sum_{\rho_{d}} \frac{x^{\rho_{d}-\sigma}}{(\rho_{d}-\sigma)^{2}}+O\Big{(}\frac{1}{x^{\sigma }\log x}\Big{)}. \tag{11}\] Here \(\rho_{d}\) runs over the non-trivial zeros of \(L(s,E_{d})\), and this identity in fact holds unconditionally. Now assume GRH for \(L(s,E_{d})\) and write \(\rho_{d}=\tfrac{1}{2}+i\gamma_{d}\). If \(L(\tfrac{1}{2},E_{d})\neq 0\), then integrating both sides of (11) from \(\tfrac{1}{2}\) to \(\infty\) yields \[\log L(\tfrac{1}{2},E_{d})=\sum_{n\leq x}\frac{\Lambda_{E}(n)}{ \sqrt{n}\log n}\chi_{d}(n)\frac{\log(x/n)}{\log x}-\frac{1}{\log x}\frac{L^{ \prime}}{L}(\tfrac{1}{2},E_{d})\\ +\frac{1}{\log x}\sum_{\gamma_{d}}\operatorname{Re}\int_{\tfrac{1 }{2}}^{\infty}\frac{x^{\rho_{d}-\sigma}}{(\rho_{d}-\sigma)^{2}}d\sigma+O \Big{(}\frac{1}{\sqrt{x}(\log x)^{2}}\Big{)}. \tag{12}\] We may restrict attention to the real part of the integral above since all the other terms involved are real, or noting that the zeros \(\rho_{d}\) appear in conjugate pairs. Consider first the sum over \(n\) in (12). The contribution from prime powers \(n=p^{k}\) with \(k\geq 3\) is plainly \(O(1)\). The contribution of the terms \(n=p\) is \(\mathcal{P}(d;x)+O(1)\), where the error term \(O(1)\) arises from the primes dividing \(N_{0}\). Finally, by Rankin-Selberg theory (see for instance [4]) it follows that \[\sum_{\begin{subarray}{c}p\leq y\\ p\nmid N_{0}\end{subarray}}\frac{(\alpha_{p}^{2}+\overline{\alpha_{p}}^{2})\log p }{p}=\sum_{\begin{subarray}{c}p\leq y\\ p\nmid N_{0}\end{subarray}}\frac{(a(p)^{2}-2)\log p}{p}=-\log y+O(1), \tag{13}\] so that, by partial summation, the contribution of the terms \(n=p^{2}\) equals \[\sum_{\begin{subarray}{c}p\leq\sqrt{x}\\ p\nmid N_{0}\end{subarray}}\frac{(\alpha_{p}^{2}+\overline{\alpha_{p}}^{2})}{2p }\frac{\log(x/p^{2})}{\log x}+O(1)=\sum_{\begin{subarray}{c}p\leq\sqrt{x}\\ p\nmid N_{0}\end{subarray}}\frac{a(p)^{2}-2}{2p}\frac{\log(x/p^{2})}{\log x}+O(1 )=-\frac{1}{2}\log\log x+O(1).\] Thus the contribution of the sum over \(n\) in (12) is \[\mathcal{P}(d;x)-\tfrac{1}{2}\log\log x+O(1). \tag{14}\] Next we turn to the sum over zeros in (12). If \(|\gamma_{d}\log x|\geq 1\), then note that \[\int_{\frac{1}{2}}^{\infty}\frac{x^{\rho_{d}-\sigma}}{(\rho_{d}-\sigma)^{2}}d \sigma=O\Big{(}\frac{1}{\gamma_{d}^{2}}\int_{\frac{1}{2}}^{\infty}x^{\frac{1}{ 2}-\sigma}d\sigma\Big{)}=O\Big{(}\frac{1}{\gamma_{d}^{2}\log x}\Big{)}=O \Big{(}\log x\log\Big{(}1+\frac{1}{\gamma_{d}^{2}(\log x)^{2}}\Big{)}\Big{)}.\] If \(|\gamma_{d}\log x|\leq 1\), then we split into the ranges \(\frac{1}{2}\leq\sigma\leq\frac{1}{2}+\frac{1}{\log x}\) and larger values of \(\sigma\). 
The first range contributes \[\int_{\frac{1}{2}}^{\frac{1}{2}+\frac{1}{\log x}}\text{Re}\frac{ x^{\rho_{d}-\sigma}}{(\rho_{d}-\sigma)^{2}}d\sigma =\int_{\frac{1}{2}}^{\frac{1}{2}+\frac{1}{\log x}}\text{Re}\Big{(} \frac{1}{(\rho_{d}-\sigma)^{2}}+\frac{\log x}{(\rho_{d}-\sigma)}+O((\log x)^{ 2})\Big{)}d\sigma\] \[=\text{Re}\Big{(}-\frac{1}{i\gamma_{d}}-\frac{1}{1/\log x-i \gamma_{d}}+\log x\log\frac{-i\gamma_{d}}{1/\log x-i\gamma_{d}}+O(\log x) \Big{)}\] \[=O\Big{(}\log x\log\Big{(}1+\frac{1}{\gamma_{d}^{2}(\log x)^{2}} \Big{)}\Big{)},\] while the second range contributes \[\ll\int_{\frac{1}{2}+\frac{1}{\log x}}^{\infty}\frac{x^{\frac{1}{2}-\sigma}}{( \frac{1}{2}-\sigma)^{2}}d\sigma\ll\log x=O\Big{(}\log x\log\Big{(}1+\frac{1}{ \gamma_{d}^{2}(\log x)^{2}}\Big{)}\Big{)}.\] Thus in all cases the sum over zeros in (12) is \[O\Big{(}\log x\log\Big{(}1+\frac{1}{\gamma_{d}^{2}(\log x)^{2}}\Big{)}\Big{)}. \tag{15}\] Finally, taking logarithmic derivatives in the functional equation we find that \[\frac{L^{\prime}}{L}(\tfrac{1}{2},E_{d})=-\log(\sqrt{N}|d|)+O(1).\] The proposition follows upon combining this with (12), (14), and (15). ## 5. Proof of Proposition 2 The proof of Proposition 2 is based on the explicit formula, which we first recall in our context. **Lemma 3**.: _Let \(h\) be a function with \(h(x)\ll(1+x^{2})^{-1}\) and with compactly supported Fourier transform \(\widehat{h}(\xi)=\int_{-\infty}^{\infty}h(t)e^{-2\pi i\xi t}dt\). Then, for any fundamental discriminant \(d\in\mathcal{E}\)_ \[\sum_{\gamma_{d}}h\Big{(}\frac{\gamma_{d}}{2\pi}\Big{)}=\frac{1}{ 2\pi}\int_{-\infty}^{\infty}h\Big{(}\frac{t}{2\pi}\Big{)}\Big{(}\log\frac{Nd^{ 2}}{(2\pi)^{2}}+2\text{Re}\frac{\Gamma^{\prime}}{\Gamma}(1+it)\Big{)}dt\] \[-\sum_{n}\frac{\Lambda_{E}(n)}{\sqrt{n}}\chi_{d}(n)\Big{(} \widehat{h}(\log n)+\widehat{h}(-\log n)\Big{)},\] _where the sum is over all ordinates of non-trivial zeros \(1/2+i\gamma_{d}\) of \(L(s,E_{d})\)._ Applying the explicit formula to the dilated function \(h_{L}(x)=h(xL)\) whose Fourier transform is \(\frac{1}{L}\widehat{h}(x/L)\), we obtain \[\sum_{\gamma_{d}}h\Big{(}\frac{\gamma_{d}L}{2\pi}\Big{)}=\frac{1 }{2\pi}\int_{-\infty}^{\infty}h\Big{(}\frac{tL}{2\pi}\Big{)}\Big{(}\log\frac{ Nd^{2}}{(2\pi)^{2}}+2\text{Re}\frac{\Gamma^{\prime}}{\Gamma}(1+it)\Big{)}dt\] \[-\frac{1}{L}\sum_{n}\frac{\Lambda_{E}(n)}{\sqrt{n}}\chi_{d}(n) \Big{(}\widehat{h}\Big{(}\frac{\log n}{L}\Big{)}+\widehat{h}\Big{(}-\frac{ \log n}{L}\Big{)}\Big{)}. \tag{16}\] We multiply this expression by \(\chi_{d}(\ell)\) and sum over \(d\) with suitable weights. Thus we find \[\sum_{d\in\mathcal{E}(\kappa,a)}\sum_{\gamma_{d}}h\Big{(}\frac{\gamma_{d}L}{2 \pi}\Big{)}\chi_{d}(\ell)\Phi\Big{(}\frac{\kappa d}{X}\Big{)}=S_{1}-S_{2}, \tag{17}\] where \[S_{1}=\frac{1}{2\pi}\int_{-\infty}^{\infty}h\Big{(}\frac{tL}{2\pi}\Big{)}\sum _{d\in\mathcal{E}(\kappa,a)}\chi_{d}(\ell)\Big{(}\log\frac{Nd^{2}}{(2\pi)^{2}} +2\text{Re}\ \frac{\Gamma^{\prime}}{\Gamma}(1+it)\Big{)}\Phi\Big{(}\frac{ \kappa d}{X}\Big{)}dt, \tag{18}\] and \[S_{2}=\frac{1}{L}\sum_{n}\frac{\Lambda_{E}(n)}{\sqrt{n}}\Big{(}\widehat{h} \Big{(}\frac{\log n}{L}\Big{)}+\widehat{h}\Big{(}-\frac{\log n}{L}\Big{)} \Big{)}\sum_{d\in\mathcal{E}(\kappa,a)}\chi_{d}(\ell n)\Phi\Big{(}\frac{ \kappa d}{X}\Big{)}. \tag{19}\] The term \(S_{1}\) is relatively easy to handle. If \(\ell\) is a square, it amounts to counting square-free integers \(d\) lying in a suitable progression \(\bmod N_{0}\) and coprime to \(\ell\). 
While if \(\ell\) is not a square, the resulting sum is a non-trivial character sum, which exhibits substantial cancellation. A more general term of this type is handled in Proposition 1 of [7], which we refer to for a detailed proof. Thus when \(\ell\) is not a square we find \[S_{1}=O(X^{\frac{1}{2}+\epsilon}\sqrt{\ell}), \tag{20}\] while if \(\ell\) is a square \[S_{1}=\frac{X}{N_{0}}\prod_{p|\ell}\Big{(}1+\frac{1}{p}\Big{)}^{-1}\prod_{p|N_{0} }\Big{(}1-\frac{1}{p^{2}}\Big{)}\tilde{\Phi}(0)(2\log X+O(1))\frac{\widehat{h}(0 )}{L}+O(X^{\frac{1}{2}+\epsilon}\sqrt{\ell}). \tag{21}\] We now turn to the more difficult term \(S_{2}\). First we dispose of terms \(n\) (which we may suppose is a prime power) that have a common factor with \(N_{0}\). Note that since \(d\) is fixed in a residue class \(\,\mathrm{mod}\,N_{0}\), if \(n\) is the power of a prime dividing \(N_{0}\) then \(\chi_{d}(n)\) is determined by the congruence condition on \(d\). Thus the contribution of these terms is \[\ll\frac{1}{L}\sum_{(n,N_{0})>1}\frac{\Lambda(n)}{\sqrt{n}}\Big{|}\sum_{d\in \mathcal{E}(\kappa,a)}\chi_{d}(\ell)\Phi\Big{(}\frac{\kappa d}{X}\Big{)} \Big{|}\ll\delta(\ell=\square)\frac{X}{L}+X^{\frac{1}{2}+\epsilon}\sqrt{\ell}, \tag{22}\] where \(\delta(\ell=\square)\) denotes \(1\) when \(\ell\) is a square, and \(0\) otherwise. Henceforth we restrict attention to the terms in \(S_{2}\) where \((n,N_{0})=1\). Note that if \(d\equiv a\,\mathrm{mod}\,\,N_{0}\) then \(d\) is automatically \(1\,\mathrm{mod}\,\,4\), and the condition that \(d\) is a fundamental discriminant amounts to \(d\) being square-free. We express the square-free condition by Mobius inversion \(\sum_{\alpha^{2}|d}\mu(\alpha)\), and then split the sum into the cases where \(\alpha>A\) is large, and when \(\alpha\leq A\) is small, for a suitable parameter \(A\leq X\). We first handle the case when \(\alpha>A\) is large. These terms give \[\sum_{\alpha>A}\mu(\alpha)\sum_{\begin{subarray}{c}d\equiv a\, \mathrm{mod}\,\,N_{0}\\ \alpha^{2}|d\end{subarray}}\Phi\Big{(}\frac{\kappa d}{X}\Big{)}\frac{1}{L} \sum_{(n,N_{0})=1}\frac{\Lambda_{E}(n)}{\sqrt{n}}\Big{(}\widehat{h}\Big{(} \frac{\log n}{L}\Big{)}+\widehat{h}\Big{(}-\frac{\log n}{L}\Big{)}\Big{)} \chi_{d}(\ell n)\] \[\ll\sum_{\alpha>A}\sum_{\begin{subarray}{c}d\equiv a\,\mathrm{ mod}\,\,N_{0}\\ \alpha^{2}|d\end{subarray}}\Phi\Big{(}\frac{\kappa d}{X}\Big{)}(\log X)\ll\frac{ X}{N_{0}A}\log X, \tag{23}\] upon using GRH to estimate the sum over \(n\) and then estimating the sum over \(d\) trivially. We are left with the terms with \(\alpha\leq A\), and writing \(d=k\alpha^{2}\) we may express these terms as \[\frac{1}{L}\sum_{(n,N_{0})=1}\frac{\Lambda_{E}(n)}{\sqrt{n}}\Big{(}\widehat{h }\Big{(}\frac{\log n}{L}\Big{)}+\widehat{h}\Big{(}-\frac{\log n}{L}\Big{)} \Big{)}\sum_{\begin{subarray}{c}\alpha\leq A\\ (\alpha,n\ell N_{0})=1\end{subarray}}\mu(\alpha)\sum_{k\equiv a\alpha^{2} \,\mathrm{mod}\,\,N_{0}}\Big{(}\frac{k}{\ell n}\Big{)}\Phi\Big{(}\frac{ \kappa k\alpha^{2}}{X}\Big{)}. \tag{24}\] We now apply the Poisson summation formula to the sum over \(k\) above, as in Lemma 7 of [7]. 
This transforms the sum over \(k\) above to \[\frac{X}{n\ell N_{0}\alpha^{2}}\Big{(}\frac{N_{0}}{n\ell}\Big{)}\sum_{v}e \Big{(}\frac{va\overline{\alpha^{2}n\ell}}{N_{0}}\Big{)}\tau_{v}(n\ell)\widehat {\Phi}\Big{(}\frac{Xv}{n\ell\alpha^{2}N_{0}}\Big{)}, \tag{25}\] where \(\tau_{v}(n\ell)\) is a Gauss sum given by \[\tau_{v}(n\ell)=\sum_{b\,\mathrm{mod}\,\,n\ell}\Big{(}\frac{b}{n\ell}\Big{)}e \Big{(}\frac{vb}{n\ell}\Big{)}.\] The Gauss sum \(\tau_{v}(n\ell)\) can be described explicitly, see Lemma 6 of [7] which gives an evaluation of \[G_{v}(n\ell)=\Big{(}\frac{1-i}{2}+\Big{(}\frac{-1}{n\ell}\Big{)}\frac{1+i}{2} \Big{)}\tau_{v}(n\ell),\] from which \(\tau_{v}(n\ell)\) may be obtained via \[\tau_{v}(n\ell)=\Big{(}\frac{1+i}{2}+\Big{(}\frac{-1}{n\ell}\Big{)}\frac{1-i}{ 2}\Big{)}G_{v}(n\ell). \tag{26}\] The term \(v=0\) in (25) leads to a main term; we postpone its treatment, and first consider the contribution of terms \(v\neq 0\). Since \(\widehat{h}\) is supported in \([-1,1]\), we may suppose that \(n\leq e^{L}\). The rapid decay of the Fourier transform \(\widehat{\Phi}(\xi)\) allows us to restrict attention to the range \(|v|\leq\ell e^{L}A^{2}X^{-1+\epsilon}\), with the total contribution to \(S_{2}\) of terms with larger \(|v|\) being estimated by \(O(1)\). For the smaller values of \(v\), we interchange the sums over \(v\), performing first the sum over \(n\) using GRH. Thus these terms contribute \[\frac{X}{\ell LN_{0}}\sum_{0<|v|\leq\ell e^{L}A^{2}X^{-1+\epsilon}}\sum_{ \begin{subarray}{c}\alpha\leq A\\ (\alpha,\ell N_{0})=1\end{subarray}}\frac{\mu(\alpha)}{\alpha^{2}}\] \[\sum_{(n,\alpha N_{0})=1}\frac{\Lambda_{E}(n)}{n\sqrt{n}}\Big{(}\frac{N_{0}}{ n\ell}\Big{)}e\Big{(}\frac{va\overline{\alpha^{2}n\ell}}{N_{0}}\Big{)}\tau_{v}(n \ell)\Big{(}\widehat{h}\Big{(}\frac{\log n}{L}\Big{)}+\widehat{h}\Big{(}- \frac{\log n}{L}\Big{)}\Big{)}\widehat{\Phi}\Big{(}\frac{Xv}{n\ell\alpha^{2}N _{0}}\Big{)}.\] We now claim that (on GRH) the sum over \(n\) above is \[\ll\frac{\alpha\ell^{\frac{3}{2}}}{\sqrt{X|v|}}X^{\epsilon}, \tag{27}\] so that the contribution of the terms with \(v\neq 0\) is \[\ll X^{\frac{1}{2}+\epsilon}\ell^{\frac{1}{2}}\sum_{1\leq|v|\leq\ell e^{L}A^{ 2}X^{-1+\epsilon}}|v|^{-\frac{1}{2}}\log A\ll\ell e^{L/2}AX^{\epsilon}. \tag{28}\] To minimize the combined contributions of the error terms in (28) and (23), we shall choose \(A=(X/\ell)^{\frac{1}{2}}e^{-\frac{L}{4}}\), so that the effect of both these error terms is \[\ll X^{\frac{1}{2}+\epsilon}\ell^{\frac{1}{2}}e^{\frac{L}{4}}. \tag{29}\] To justify the claim (27) we first use (26) to replace \(\tau_{v}(n\ell)\) by \(G_{v}(n\ell)\) so that we must bound (for both choices of \(\pm\)) \[\sum_{(n,\alpha N_{0})=1}\frac{\Lambda_{E}(n)}{n\sqrt{n}}\Big{(}\frac{\pm N_{ 0}}{n\ell}\Big{)}e\Big{(}\frac{va\overline{\alpha^{2}n\ell}}{N_{0}}\Big{)}G_{ v}(n\ell)\Big{(}\widehat{h}\Big{(}\frac{\log n}{L}\Big{)}+\widehat{h}\Big{(}- \frac{\log n}{L}\Big{)}\Big{)}\widehat{\Phi}\Big{(}\frac{Xv}{n\ell\alpha^{2} N_{0}}\Big{)}.\] First consider the generic case when \(n\) is a prime power with \((n,v)=1\). Here (using Lemma 6 of [7]) \(G_{v}(n\ell)=0\) unless \(n\) is a prime \(p\) not dividing \(\ell\) in which case \(G_{v}(p\ell)=(\frac{v}{p})\sqrt{p}G_{v}(\ell)\). 
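As a quick numerical sanity check of this last relation (an aside, not needed for the argument), one can test it directly from the definition of \(\tau_{v}\) and the normalization \(G_{v}\) above. The snippet below is only an illustration: the values \(p=7\), \(\ell=15\), \(v=2\) are arbitrary small odd choices with \((p,v\ell)=1\), and the sympy library is assumed only for the Jacobi symbol.

```python
# Illustrative check of  G_v(p*l) = (v/p) * sqrt(p) * G_v(l)  for small odd p, l, v
# with gcd(p, v*l) = 1, using tau_v(m) = sum_b (b/m) e(v b / m) and the
# normalization G_v(m) = ((1-i)/2 + (-1/m)(1+i)/2) * tau_v(m) quoted above.
from cmath import exp, pi
from math import sqrt
from sympy import jacobi_symbol

def tau(v, m):
    return sum(int(jacobi_symbol(b, m)) * exp(2j * pi * v * b / m) for b in range(m))

def G(v, m):
    chi_minus1 = 1 if m % 4 == 1 else -1      # value of (-1/m) for odd m
    eps = (1 - 1j) / 2 + chi_minus1 * (1 + 1j) / 2
    return eps * tau(v, m)

p, l, v = 7, 15, 2
assert abs(G(v, p * l) - int(jacobi_symbol(v, p)) * sqrt(p) * G(v, l)) < 1e-8
```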
Thus such terms contribute to the above \[\Big{(}\frac{\pm N_{0}}{\ell}\Big{)}G_{v}(\ell)\sum_{p\mid\alpha v\ell N_{0}}\frac {\Lambda_{E}(p)}{p}\Big{(}\frac{\pm vN_{0}}{p}\Big{)}e\Big{(}\frac{va\overline{ \alpha^{2}p\ell}}{N_{0}}\Big{)}\Big{(}\widehat{h}\Big{(}\frac{\log p}{L}\Big{)} +\widehat{h}\Big{(}-\frac{\log p}{L}\Big{)}\Big{)}\widehat{\Phi}\Big{(}\frac{Xv }{p\ell\alpha^{2}N_{0}}\Big{)}\] The rapid decay of \(\widehat{\Phi}(\xi)\) implies that we may restrict attention above to the range \(p>X^{1-\epsilon}|v|/(\ell\alpha^{2}N_{0})\). Then splitting \(p\) into progressions \(\operatorname{mod}N_{0}\) and using GRH (it is here that we need GRH for twists of \(L(s,E)\) by quadratic characters, as well as all Dirichlet characters modulo \(N_{0}\)) we obtain the bound \[\ll|G_{v}(\ell)|\frac{X^{\epsilon}\ell^{\frac{1}{2}}\alpha N_{0}}{\sqrt{X|v|} }\ll\frac{\ell^{\frac{3}{2}}\alpha X^{\epsilon}}{\sqrt{X|v|}},\] which is in keeping with (27). Now consider the non-generic case when \(n\) is the power of some prime dividing \(v\). We may assume that \(n|v^{2}\) (else \(G_{v}(n\ell)=0\) by Lemma 6 of [7]) and also that \(n\geq X^{1-\epsilon}|v|/(\ell\alpha^{2}N_{0})\) else the Fourier transform \(\widehat{\Phi}\) is negligible. Using that \(|G_{v}(n\ell)|\leq(v,n\ell)^{\frac{1}{2}}(n\ell)^{\frac{1}{2}}\leq(|v|n\ell)^ {\frac{1}{2}}\) (which again follows from Lemma 6 of [7]) we may bound the contribution of these terms by \[\ll\sum_{n|v^{2}}\Lambda(n)\frac{(|v|\ell)^{\frac{1}{2}}}{(X^{1-\epsilon}v/( \ell\alpha^{2}N_{0}))}\ll(\log v)X^{\epsilon}\frac{\ell^{\frac{3}{2}}\alpha^{ 2}}{X\sqrt{|v|}}\ll\frac{\ell^{\frac{3}{2}}\alpha X^{\epsilon}}{\sqrt{X|v|}},\] since \(\log v\ll\log X\ll X^{\epsilon}\) and \(\alpha\leq A\leq\sqrt{X}\). Thus these terms also satisfy the claimed bound (27). Now we handle the main term contribution from \(v=0\), noting that \(\tau_{0}(n\ell)=0\) unless \(n\ell\) is a square, in which case it equals \(\phi(n\ell)\). Thus the main term contribution from \(v=0\) is \[\frac{X}{LN_{0}}\sum_{\begin{subarray}{c}(n,N_{0})=1\\ n\ell=\square\end{subarray}}\frac{\Lambda_{E}(n)}{\sqrt{n}}\frac{\phi(n\ell)} {n\ell}\Big{(}\sum_{\begin{subarray}{c}\alpha\leq A\\ (\alpha,\ell N_{0})=1\end{subarray}}\frac{\mu(\alpha)}{\alpha^{2}}\Big{)} \widehat{\Phi}(0)\Big{(}\widehat{h}\Big{(}\frac{\log n}{L}\Big{)}+\widehat{h} \Big{(}-\frac{\log n}{L}\Big{)}\Big{)}.\] Thus this main term only exists if \(\ell\) is a square (so that \(n\) is a square), or if \(\ell\) is \(q\) times a square for a unique prime \(q\) (so that \(n\) is an odd power of \(q\)). In the case \(\ell\) is a square, writing \(n=m^{2}\) and performing the sum over \(\alpha\), we obtain that the main term is \[\frac{X}{LN_{0}}\widehat{\Phi}(0)\sum_{(m,N_{0})=1}\frac{\Lambda_{E}(m^{2})}{m }\Big{(}\prod_{p\mid m\ell}\Big{(}1+\frac{1}{p}\Big{)}^{-1}\prod_{p\mid N_{0} }\Big{(}1-\frac{1}{p^{2}}\Big{)}+O\Big{(}\frac{1}{A}\Big{)}\Big{)}\Big{(} \widehat{h}\Big{(}\frac{2\log m}{L}\Big{)}+\widehat{h}\Big{(}-\frac{2\log m}{ L}\Big{)}\Big{)}.\] Using (13) and partial summation we conclude that the main term when \(\ell\) is a square is \[= -\frac{X}{N_{0}}\widehat{\Phi}(0)\frac{h(0)}{2}\prod_{p\mid\ell} \Big{(}1+\frac{1}{p}\Big{)}^{-1}\prod_{p\mid N_{0}}\Big{(}1-\frac{1}{p^{2}} \Big{)}+O\Big{(}\frac{X}{A}+\frac{X}{L}\Big{)}. \tag{30}\] Suppose now that \(\ell\) is \(q\) times a square, for a (unique) prime \(q\). 
Here the main term may be bounded by \[\ll\frac{X}{LN_{0}}\frac{\log q}{\sqrt{q}}\prod_{p\mid\ell}\Big{(}1+\frac{1}{p} \Big{)}^{-1}\prod_{p\mid N_{0}}\Big{(}1-\frac{1}{p^{2}}\Big{)}; \tag{31}\] naturally we can be more precise here, but this bound suffices. ## 6. Proof of Proposition 3 The \(k\)-th moment in (8) is treated in Proposition 6 of [7]. Briefly, expanding out \(\mathcal{P}(d;x)^{k}\) we must handle \[\sum_{\begin{subarray}{c}p_{1},\ldots,p_{k}\leq x\\ p_{i}\nmid N_{0}\end{subarray}}\frac{a(p_{1})\cdots a(p_{k})}{\sqrt{p_{1} \cdots p_{k}}}\sum_{d\in\mathcal{E}(\kappa,a)}\chi_{d}(p_{1}\cdots p_{k})\Phi \Big{(}\frac{\kappa d}{X}\Big{)}.\] When \(p_{1}\cdots p_{k}\) is not a perfect square, the sum over \(d\) exhibits substantial cancellation (as mentioned earlier in (21)). The main term arises from terms where \(p_{1}\cdots p_{k}\) is a perfect square, which cannot happen when \(k\) is odd. When \(k\) is even, the contribution to the main term comes essentially from the case when there are \(k/2\) distinct primes among \(p_{1}\), \(\ldots\), \(p_{k}\) with each distinct prime appearing twice. The number of such pairings leads to the coefficient \(M_{k}\), and Rankin-Selberg theory is used to obtain \(\sum_{p\leq x}a(p)^{2}/p=\log\log x+O(1)\sim\log\log X\). To establish (9), once again we expand \(\mathcal{P}(d;x)^{k}\) and are faced with evaluating \[\sum_{\begin{subarray}{c}p_{1},\ldots,p_{k}\leq x\\ p_{i}\nmid N_{0}\end{subarray}}\frac{a(p_{1})\cdots a(p_{k})}{\sqrt{p_{1} \cdots p_{k}}}\sum_{d\in\mathcal{E}(\kappa,a)}\chi_{d}(p_{1}\cdots p_{k}) \Big{(}\sum_{\gamma_{d}}h\Big{(}\frac{\gamma_{d}L}{2\pi}\Big{)}\Big{)}.\] We now appeal to Proposition 2. The terms where \(p_{1}\cdots p_{k}\) is neither a square nor a prime times a square contribute, using (5), \[\ll X^{\frac{1}{2}+\epsilon}e^{\frac{L}{4}}\sum_{p_{1},\ldots,p_{k}\leq x}1 \ll X^{\frac{1}{2}+\epsilon}e^{\frac{L}{4}}.\] It remains to consider the cases when this product is a square (which can only happen when \(k\) is even) and when it is a prime times a square (which can only happen for odd \(k\)). In the first case, we obtain (by (6)) a main term \[\frac{X}{N_{0}}\prod_{p\mid N_{0}}\Big{(}1-\frac{1}{p^{2}}\Big{)}\widehat{ \Phi}(0)\Big{(}\frac{2\log X}{L}\widehat{h}(0)+\frac{h(0)}{2}+O\Big{(}\frac{1 }{L}\Big{)}\Big{)}\sum_{\begin{subarray}{c}p_{1},\ldots,p_{k}\leq x\\ p_{i}\nmid N_{0}\\ p_{1}\cdots p_{k}=\square\end{subarray}}\frac{a(p_{1})\cdots a(p_{k})}{\sqrt{p_ {1}\cdots p_{k}}}\prod_{p\mid p_{1}\cdots p_{k}}\Big{(}1+\frac{1}{p}\Big{)}^{ -1}.\] As before, this main term is dominated by the contribution of terms where there are \(k/2\) distinct primes among \(p_{1}\), \(\ldots\), \(p_{k}\) each appearing twice, and thus we obtain \[\frac{X}{N_{0}}\prod_{p\mid N_{0}}\Big{(}1-\frac{1}{p^{2}}\Big{)}\widehat{ \Phi}(0)\Big{(}\frac{2\log X}{L}\widehat{h}(0)+\frac{h(0)}{2}+O\Big{(}\frac{1 }{L}\Big{)}\Big{)}(M_{k}+o(1))(\log\log X)^{\frac{k}{2}}.\] This establishes the result (9) for the case \(k\) even. When \(k\) is odd, the contribution of the terms when \(p_{1}\cdots p_{k}\) is a prime times a square may be bounded by (using (7) of Proposition 2) \[\ll\frac{X}{LN_{0}}\sum_{q\leq x}\frac{\log q}{q}\Big{(}\sum_{ \begin{subarray}{c}p\leq x\\ p\nmid N_{0}\end{subarray}}\frac{a(p)^{2}}{p}\Big{)}^{\frac{k-1}{2}}\ll\frac{X }{N_{0}}\frac{\log x}{L}(\log\log X)^{\frac{k-1}{2}}\ll\frac{X}{N_{0}}(\log \log X)^{\frac{k-1}{2}},\] which establishes (9) since \(M_{k}=0\) here.
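The combinatorial source of the Gaussian here is that the pairing counts coincide with the moments of a standard normal variable. The snippet below is only an illustrative sanity check of this standard fact (assuming, as described above, that \(M_{k}\) is the number of ways of grouping \(k\) objects into pairs for even \(k\), and \(M_{k}=0\) for odd \(k\)); the sympy library is assumed.

```python
# Illustration: the pairing counts M_k equal the moments of a standard Gaussian,
# which is what the method of moments uses to identify the limiting distribution.
from math import factorial
import sympy as sp

t = sp.symbols('t')

def M(k):
    # number of ways to group k objects into unordered pairs (0 when k is odd)
    return factorial(k) // (2**(k // 2) * factorial(k // 2)) if k % 2 == 0 else 0

for k in range(1, 9):
    moment = sp.integrate(t**k * sp.exp(-t**2 / 2), (t, -sp.oo, sp.oo)) / sp.sqrt(2 * sp.pi)
    assert sp.simplify(moment - M(k)) == 0
```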
2310.00323
Relative Weyl Character formula, Relative Pieri formulas and Branching rules for Classical groups
We give alternate proofs of the classical branching rules for highest weight representations of a complex reductive group $G$ restricted to a closed regular reductive subgroup $H$, where $(G,H)$ consist of the pairs $(GL(n+1),GL(n))$, $ (Spin(2n+1), Spin(2n)) $ and $(Sp(2n),Sp(2)\times Sp(2n-2))$. Our proof is essentially a long division. The starting point is a relative Weyl character formula and our method is an inductive application of a relative Pieri formula. We also give a proof of the branching rule for the case of $ (Spin(2n), Spin(2n-1))$, by a reduction to the case of $(GL(n),GL(n-1))$.
C. S. Rajan, Sagar Shrivastava
2023-09-30T09:43:56Z
http://arxiv.org/abs/2310.00323v1
# Relative Weyl character formula, relative Pieri formulas and branching rules for classical groups ###### Abstract. We give alternate proofs of the classical branching rules for highest weight representations of a complex reductive group \(G\) restricted to a closed regular reductive subgroup \(H\), where \((G,H)\) consist of the pairs \((GL(n+1),GL(n))\), \((Spin(2n+1),Spin(2n))\) and \((Sp(2n),Sp(2)\times Sp(2n-2))\). Our proof is essentially a long division. The starting point is a relative Weyl character formula and our method is an inductive application of a relative Pieri formula. We also give a proof of the branching rule for the case of \((Spin(2n),Spin(2n-1))\), by a reduction to the case of \((GL(n),GL(n-1))\). ## 1. Introduction Let \(G\) be a connected reductive algebraic group over \(\mathbb{C}\) and \(H\) be a closed, connected reductive subgroup of \(G\). In this article, we consider the branching problem, that of understanding the restriction of a finite dimensional rational representation \(\pi\) of \(G\) to \(H\). For a dominant weight \(\lambda\) of \(G\), let \(\pi_{\lambda}\) be a corresponding irreducible representation of \(G\) with highest weight \(\lambda\) and \(\chi_{\lambda}\) be its character. By complete reducibility of representations of \(H\), \[\chi_{\lambda}|_{H}=\sum_{\mu}m(\lambda,\mu)\chi_{\mu}, \tag{1}\] where \(m(\lambda,\mu)\) is the multiplicity of the the irreducible highest weight representation \(\pi_{\mu}\) of \(H\) (corresponding the the weight \(\mu\) of \(H\)) in \(\pi_{\lambda}|_{H}\). The branching problem is to understand \(m(\lambda,\mu)\) as a function of \(\lambda\) and \(\mu\). These have a rich and classical history, going back to the work of Herman Weyl [20], where he considers \(G=GL(n+1)\) and \(H=GL(n)\). The starting point for a study of branching rules is the Weyl character formula, which gives a formula for the multiplicity of the weights with respect to a maximal torus \(T\) of \(G\), occurring in an irreducible representation \(\pi\) of \(G\). A systematic approach to the branching rules is to make use of Kostant's partition function, which effectively corresponds to inverting the denominator appearing in the Weyl character formula. We refer to the books of [10] and [17], for a more detailed exposition of the classical branching rules. In this paper, we modify this approach. As an example, let us consider the branching from \(GL(n+1)\) to \(GL(n)\). A highest weight \(\lambda\) for \(GL(n+1)\) is given by an \((n+1)\)-tuple \(\lambda=(\lambda_{1},\lambda_{2}\ldots,\lambda_{n+1})\in\mathbb{Z}^{n+1}\) with \(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{n+1}\). Weyl's branching formula is the following theorem: **Theorem 1**.: _Let \(\lambda\) be a dominant weight of \(G=GL(n+1)\) and \(\mu\) a dominant weight of \(H=GL(n)\). Then \(m(\lambda,\mu)=1\) if \(\mu\) interlaces \(\lambda\) (\(\mu\preceq\lambda\)) i.e. \(\lambda_{1}\geq\mu_{1}\geq\lambda_{2}\geq\ldots\geq\mu_{n}\geq\lambda_{n+1}\), and zero otherwise._ Our starting point for the proof of the foregoing theorem is to consider a relative Weyl character formula. We take for \(T\), the maximal torus consisting of diagonal matrices \(g\) with diagonal entries \(x_{1},x_{2},\ldots,x_{n+1}\). The Schur-Weyl character formula corresponding to the highest weight \(\lambda\), gives the formula for the character \(\chi_{\lambda}\) restricted to \(T\): \[\chi_{\lambda}(g)=\frac{\det|x_{j}^{\lambda_{i}+n+1-i}|}{\det|x_{j}^{n+1-i}|}. 
\tag{2}\] The relative Weyl character formula we consider for the branching rules for \((G,H)=(GL(n+1),GL(n)\times GL(1))\) is the co-factor expansion of the above determinantal formula for the character, leading to a rational expression in one variable with coefficients characters of \(GL(n)\). Such a co-factor expansion of the Schur-Weyl character formula was used in [14, 4.1]. For \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{n+1})\in\mathbb{Z}^{n+1}\), where \(\mu_{1}>\mu_{2}>\cdots>\mu_{n+1}\), let \[S(\mu)=\det|x_{j}^{\mu_{i}}|.\] With this notation, \(\chi_{\lambda}=S(\lambda+\rho_{n+1})/S(\rho_{n+1})\), where \(\rho_{n+1}=(n,n-1,\ldots,0)\). The co-factor expansion for the determinant \(S(\mu)\) is given by: \[(-1)^{n+1}S(\mu)=x_{n+1}^{\mu_{n+1}}S(\mu^{(n+1)})-x_{n+1}^{\mu_{n}}S(\mu^{(n) })+\ldots+(-1)^{n}x_{n+1}^{\mu_{1}}S(\mu^{(1)}),\] where \(\mu^{(i)}=(\mu_{1}+1,\mu_{2}+1,\ldots,\mu_{i-1}+1,\mu_{i+1},\ldots,\mu_{n+1})\) (by an abuse of notation, we use \(S(\mu)\) in \(n\) variables as well). Upon substituting \(t=x_{n+1}\), the Schur-Weyl character formula becomes \[\chi_{\lambda}=\frac{t^{\lambda_{n+1}}S(\lambda^{(n+1)}+\rho_{n})-t^{\lambda_ {n}+1}S(\lambda^{(n)}+\rho_{n})+\ldots+(-1)^{n}t^{\lambda_{1}+n}S(\lambda^{(1 )}+\rho_{n})}{S(\omega_{n}+\rho_{n})-tS(\omega_{n-1}+\rho_{n})+\ldots+(-1)^{n} t^{n}S(\rho_{n})},\] where \(\omega_{k}=(1,1,1,\ldots,1,0,\ldots,0)\) (\(k\) many 1's) corresponds to the \(k^{th}\) fundamental weight for \(GL(n)\). In the above expression, we can divide the numerator and denominator by the Weyl denominator \(S(\rho_{n})\) of \(GL(n)\), to get \[\chi_{\lambda}=\frac{t^{\lambda_{n+1}}\chi_{\lambda^{(n+1)}}+\ldots+(-1)^{n}t ^{\lambda_{1}+n}\chi_{\lambda^{(1)}}}{\chi_{\omega_{n}}+\ldots+(-1)^{n}t^{n}}. \tag{3}\] The above expression is the relative Weyl character formula for \((GL(n+1),GL(n)\times GL(1))\), where we express the character \(\chi_{\lambda}\), as a rational expression in terms of the irreducible characters of \(GL(n)\times GL(1)\). The denominator (resp. numerator) in the relative Weyl character formula would be called the relative Weyl denominator (resp. numerator) for the pair \((GL(n+1),GL(n)\times GL(1))\). In order to obtain the branching laws, we essentially carry out a long division of the relative Weyl character formula. For this, we need to understand a relative Pieri formula (see Proposition 2), i.e., to understand the tensor product decomposition of an irreducible representation of \(GL(n)\), with the relative Weyl denominator. In this case, this amounts to understanding the tensor product decomposition of an irreducible representation of \(GL(n)\) with all of the fundamental representations, which are exterior powers of the standard representation of \(GL(n)\). In order to carry out the long division, the relative Pieri is used in an inductive manner. This leads to some combinatorial identities which then yield the branching laws in an inductive manner. We refer to Section 2 for the details of the proof. ### Classical branching: regular case In the case of orthogonal groups, the branching rules for \((Spin(n+1),Spin(n))\) were first proved by F.D.Murnaghan [14, Ch-IX] (see also [15]). We recall the branching rule when \(H\) is regular in \(G\), i.e., when the ranks of \(G\) and \(H\) are equal: **Theorem 2**.: _Let \(\lambda\) be a dominant weight of \(G=Spin(2n+1)\) and \(\mu\) is a dominant weight of \(H=Spin(2n)\). Then \(m(\lambda,\mu)=1\) if \(\mu\) interlaces \(\lambda\) (\(\mu\preceq\lambda\)) i.e. 
\(\lambda_{1}\geq\mu_{1}\geq\lambda_{2}\geq\ldots\geq\mu_{n-1}\geq\lambda_{n}\geq|\mu_{n}|\), and zero otherwise._ Branching rules in the symplectic case for \((Sp(2n),Sp(2)\times Sp(2n-2))\) have a fairly long history (see [15], [16], [17] and [18]). A description of the multiplicities in terms of \(SL_{2}\) representations was given by N. R. Wallach and O. Yacobi [19]. **Theorem 3**.: _Let \(G=Sp(2n)\) and \(H=Sp(2)\times Sp(2n-2)\). Let \(\lambda\) be a dominant weight for \(Sp(2n)\) and \(\mu\) be a dominant weight of \(Sp(2n-2)\). Denote by \(S^{(k)}\) the \((k+1)\)-dimensional irreducible representation of \(SL_{2}\) (isomorphic to \(Sp(2)\)). Let_ \[\chi_{\lambda}|_{H}=\sum_{\mu}\sum_{k}m(\lambda,\mu,k)S^{(k)}\chi_{\mu}.\] _Then the representation \(V(\lambda,\mu):=\sum_{k}m(\lambda,\mu,k)S^{(k)}\) of \(SL_{2}\) is nonzero if and only if_ \[\lambda_{j}\geq\mu_{j}\geq\lambda_{j+2},\ \ \text{for}\ 1\leq j\leq n-1\] _(here \(\lambda_{n+1}=0\)). When the inequalities are satisfied, let_ \[x_{1}\geq y_{1}\geq x_{2}\geq y_{2}\geq\cdots\geq x_{n}\geq y_{n},\] _be the non-increasing rearrangement of \(\{\lambda_{1},\ldots,\lambda_{n},\mu_{1},\ldots,\mu_{n-1},0\}\). Then as \(SL_{2}\)-modules,_ \[V(\lambda,\mu)=\bigotimes_{i=1}^{n}S^{(x_{i}-y_{i})}.\] In both the foregoing theorems, the proof proceeds in a manner similar to that of \(GL(n)\). A relative Weyl character formula can be obtained directly from the determinantal versions of the Weyl character formula. More generally, in the case when \(H\) is regular in \(G\), a relative Weyl character formula has been obtained in [11] (see Section 3). When \(H\) is the Levi component of a parabolic, a relative Weyl character formula is given in [12]. To obtain relative Pieri formulas, we follow a method of [1]. In this, rather than working out the individual tensor product decompositions of an irreducible representation of \(H\) with that of a representation 'occurring' in the relative Weyl denominator, we use the determinantal expression for the Weyl denominator to derive a 'weaker' relative Pieri formula, that of decomposing the tensor product of an irreducible representation of \(H\) with the full virtual representation given by the relative Weyl denominator. From the relative Pieri formula, following the process of long division, the branching rules are derived in an inductive manner. We first observe that there is a unique term in the relative Weyl denominator, which is the largest (or the smallest) with respect to a suitable ordering (for example, either the degree in the case of \(GL(n+1)\), or in general the lexicographic ordering). Upon cross multiplying the restriction of the character \(\chi_{\lambda}\) to \(H\) by the relative Weyl denominator and making use of the relative Pieri formula, we get an expression for the multiplicity with which a dominant weight \(\mu\) of \(H\) occurs in \(\chi_{\lambda}\), in terms of multiplicities of dominant weights which are larger (smaller) in the ordering. By an induction hypothesis, these multiplicities are as expected. This leads to a combinatorial expression, which can be solved to proceed inductively with a proof of the branching formula. This is essentially the process of long division. The proof proceeds by first establishing a weak interlacing property. This property limits the dominant weights \(\mu\) of \(H\) to be considered to be amongst the expected weights occurring in the branching formula, together with the weights occurring in the relative Weyl numerator and some boundary cases. (A small computational illustration of the branching rule in the simplest case is given below, after which we return to the argument.)
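The illustration below is only a sanity check and not part of the proof: for the pair \((GL(3),GL(2)\times GL(1))\) and the arbitrarily chosen weight \(\lambda=(2,1,0)\), the restriction of \(\chi_{\lambda}\) equals the sum over interlacing weights predicted by Theorem 1. The sympy library is assumed, and the Schur characters are computed from the bialternant formula (2).

```python
# Sanity check of Theorem 1 for GL(3) -> GL(2) x GL(1) on the sample weight (2, 1, 0).
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')

def schur(lam, xs):
    # Schur polynomial via det|x_j^(lam_i + n - i)| / det|x_j^(n - i)|, cf. formula (2)
    n = len(xs)
    num = sp.Matrix(n, n, lambda i, j: xs[j]**(lam[i] + n - 1 - i))
    den = sp.Matrix(n, n, lambda i, j: xs[j]**(n - 1 - i))
    return sp.cancel(num.det() / den.det())

lam = (2, 1, 0)
lhs = schur(lam, [x1, x2, t])            # chi_lambda on the torus, with t = x_3
# dominant weights mu of GL(2) interlacing lambda:  2 >= mu1 >= 1 >= mu2 >= 0
rhs = sum(schur((m1, m2), [x1, x2]) * t**(sum(lam) - m1 - m2)
          for m1 in (1, 2) for m2 in (0, 1))
assert sp.simplify(lhs - rhs) == 0       # matches the interlacing sum of Theorem 1
```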
A further analysis of these three cases, yields the proof of the branching laws. We would like to reiterate that other than the determinantal formulas for Weyl character formula, everything else in this paper is self contained. We will prove Theorem 2 in Section 4 and Theorem 3 in Section 6. ### Classical branching: non-regular case For the non-regular case of \((Spin(2n),Spin(2n-1))\), we do not have at our disposal a relative Weyl character formula. Instead we express the Weyl character formula as a formal sum of Weyl character type formula for \(GL(n)\), and then apply branching rules for \((GL(n),GL(n-1))\) to get the desired branching rules for \((Spin(2n),Spin(2n-1))\). We give a proof of the branching theorem in Section 5: **Theorem 4**.: _Let \(\lambda\) be a dominant weight of \(G=Spin(2n)\) and \(\mu\) is a dominant weight of \(H=Spin(2n-1)\). Then \(m(\lambda,\mu)=1\) if \(\mu\) interlaces \(\lambda\) (\(\mu\preceq\lambda\)) i.e. \(\lambda_{1}\geq\mu_{1}\geq\lambda_{2}\geq\ldots\geq\mu_{n-1}\geq|\lambda_{n}|\), and zero otherwise._ We remark that the known proofs of the foregoing theorem ([1]) also make use of the corresponding proof of branching for \(GL(n)\). ## 2. Branching rules for \((GL(n+1),GL(n))\) In this Section, we give a proof of Theorem 1, following the schema given in the introduction. The starting point is the relative Weyl character formula for \((GL(n+1),GL(n))\) given by equation (3). **Proposition 1**.: _(Relative Weyl character formula) Let \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n+1})\) be a dominant weight for \(G=GL(n+1)\) and \(H=GL(1)\times GL(n)\). Let \(\chi_{\lambda}\) be the irreducible highest weight representation of \(G\) with highest weight \(\lambda\). We have the following expression for restriction of \(\chi_{\lambda}\):_ \[\chi_{\lambda}|_{H}=\frac{t^{\lambda_{n+1}}\chi_{\lambda^{(n+1)}}+\ldots+(-1)^ {n}t^{\lambda_{1}+n}\chi_{\lambda^{(1)}}}{\chi_{\omega_{n}}+\ldots+(-1)^{n}t^{ n}}. \tag{4}\] _where_ * \(t\) _is the character of_ \(GL(1)\)_,_ * \(\omega_{k}=(1,1,1,\ldots,1,0,\ldots,0)\) _(_\(k\) _many 1's) corresponds to the_ \(k^{th}\) _fundamental weight for_ \(GL(n)\)_,_ * \(\lambda^{(i)}=(\lambda_{1}+1,\lambda_{2}+1,\ldots,\lambda_{i-1}+1,\lambda_{i +1},\ldots,\lambda_{n+1})\) _are dominant weights of_ \(GL(n)\)_._ ### Relative Pieri formula The second step is to consider a relative Pieri formula. Let \(\Delta=\chi_{\omega_{n}}+\ldots+(-1)^{n}t^{n}\) be the relative Weyl denominator as in equation (4). Our aim now is to obtain a relative Pieri formula, giving the decomposition of \(\chi_{\mu}\Delta\), for a dominant weight \(\mu\) of \(GL(n)\). We see that the representations occurring in the relative Weyl denominator are exactly the fundamental representations of \(GL(n)\). Hence the relative Pieri formula is a signed sum of the skew/dual Pieri formula for \(GL(n)\) which gives a decomposition of \(\chi_{\mu}\chi_{\omega_{i}}\). For the proof of relative Pieri, we follow [1], who proves the dual Pieri formula for the symplectic case. Instead of considering the individual dual Pieri, it is convenient to work with the full relative Weyl denominator. The reason for doing so is the product formula for \(\Delta\), given by the following lemma: **Lemma 1**.: _With the above notation, the relative Weyl denominator has a product expansion, \(\Delta=\prod_{i}(x_{i}-t)\)._ Proof.: The denominator in the Schur-Weyl character formula (equation 2) is the Vandermonde determinant, which has a product formula \(\prod_{i<j}(x_{i}-x_{j})\). 
Upon dividing by the Weyl denominator of \(GL(n)\), this gives us that the relative Weyl denominator \(\Delta=\prod_{i}(x_{i}-t)\). We use the following notation for length: for \(\xi\in\mathbb{R}^{n}\), let \(|\xi|=\sum_{i}|\xi_{i}|\) be the length of \(\xi\). If \(\mu\) is not a dominant weight, we take \(\chi_{\mu}=0\). Given \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n+1})\) be a dominant weight of \(GL(n+1)\) and \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{n})\) be a dominant weight of \(GL(n)\). Then \(\mu\preceq\lambda\) is used to say that \(\mu\) interlaces \(\lambda\), i.e. \(\lambda_{1}\geq\mu_{1}\geq\lambda_{2}\geq\ldots\geq\mu_{n}\geq\lambda_{n+1}\). **Proposition 2** (Relative Pieri formula).: _Let \(\nu\) is a dominant weight for \(GL(n)\). With the above notation, we have the following tensor product decomposition_ \[\chi_{\nu}\Delta=\chi_{\nu}\left(\sum_{i=0}^{n}(-t)^{i}\chi_{\omega_{n-i}} \right)=\sum_{\begin{subarray}{c}\varepsilon\in\{0,1\}^{n}\\ \nu+\varepsilon\text{ dominant}\end{subarray}}(-t)^{n-|\varepsilon|}\chi_{\nu+ \varepsilon}. \tag{5}\] Proof.: From the Weyl character formula equation (2), we get, \[\Delta\chi_{\nu} =\left(\prod_{k=1}^{n}(x_{k}-t)\right)\frac{S(\nu+\rho_{n})}{S(\rho _{n})}\] \[=\frac{\prod_{k=1}^{n}(x_{k}-t)\det|x_{j}^{\nu_{i}+n-i}|}{S(\rho_{n })}.\] Using the multilinearity of the determinant, we bring the factor \((x_{i}-t)\) to the \(i^{th}\) column. \[\prod_{k=1}^{n}(x_{k}-t)\det\begin{vmatrix}x_{1}^{\nu_{1}+n-1}&\cdots&x_{n}^{ \nu_{1}+n-1}\\ x_{1}^{\nu_{2}+n-2}&\cdots&x_{n}^{\nu_{2}+n-2}\\ \vdots&\ddots&\vdots\\ x_{1}^{\nu_{n}}&\cdots&x_{n}^{\nu_{n}}\end{vmatrix}=\det\begin{vmatrix}x_{1}^ {\nu_{1}+n-1}(x_{1}-t)&\cdots&x_{n}^{\nu_{1}+n-1}(x_{n}-t)\\ x_{1}^{\nu_{2}+n-2}(x_{1}-t)&\cdots&x_{n}^{\nu_{2}+n-2}(x_{n}-t)\\ \vdots&\ddots&\vdots\\ x_{1}^{\nu_{n}}(x_{1}-t)&\cdots&x_{n}^{\nu_{n}}(x_{n}-t)\end{vmatrix}.\] The individual terms in the determinant can be expanded to give us \[x_{j}^{\nu_{i}+n-i+1}-tx_{j}^{\nu_{i}+n-i}.\] Every row is a sum of two rows, with one having a linear factor of \(t\). We now expand along the rows. In the rows with the factor of \(t\), the exponent of \(x_{j}\) remains invariant, whereas in the row without the factor of \(t\), the exponent increases by \(1\). In a given determinant, if there are \(r\) rows that have a \((-t)\) factor, we know that the other \(n-r\) rows would have the corresponding exponents incremented by \(1\). This gives us \[\Delta\chi_{\nu}=\sum_{\varepsilon\in\{0,1\}^{n}}(-t)^{n-|\varepsilon|}\frac{ \det|x_{j}^{\nu_{i}+n-i+\varepsilon_{i}}|}{S(\rho_{n})}. \tag{6}\] In the above sum, corresponding to \(\varepsilon\), the determinant is in a standard form for the numerator in the Weyl character formula for irreducible representation of \(GL(n)\), with highest weight \(\nu+\varepsilon\). If \(\nu+\varepsilon\) is not dominant, then there is an \(i\) such that \(\nu_{i}+\varepsilon_{i}<\nu_{i+1}+\varepsilon_{i+1}\). This forces \(\nu_{i}=\nu_{i+1}\) and \(\varepsilon_{i}=0,\varepsilon_{i+1}=1\), which tells us that \(S(\nu+\varepsilon+\rho_{n})\) has two rows having the same entries, therefore vanishes. Hence the summands are non zero only if \(\nu+\varepsilon\) is a dominant weight, which gives us \[\Delta\chi_{\nu}=\sum_{\begin{subarray}{c}\varepsilon\in\{0,1\}^{n}\\ \nu+\varepsilon\text{ dominant}\end{subarray}}(-t)^{n-|\varepsilon|}\chi_{\nu+ \varepsilon}. 
\tag{7}\] This completes the proof of the relative Pieri formula for \(GL(n)\) **Corollary 1** (Dual Pieri formula).: _Let \(\nu\) be a dominant weight for \(GL(n)\) and \(\omega_{i}\) be the \(i^{th}\) fundamental weight, corresponding the \(i^{th}\) exterior power of the defining representation. Then_ \[\chi_{\nu}\mathcal{X}_{\omega_{i}}=\sum_{\begin{subarray}{c}\varepsilon\in\{0, 1\}^{n},|\varepsilon|=i\\ \nu+\varepsilon\text{ dominant}\end{subarray}}\chi_{\nu+\varepsilon}.\] Proof.: Follows from comparing the graded components of the sum as polynomial in \(t\) in the relative Pieri formula. ### Proof of \((GL(n+1),GL(n))\) branching rule We now prove the branching rule for \((GL(n+1),GL(1)\times GL(n))\). By equation (4), we see that the character \(\chi_{\lambda}\) is a homogeneous polynomial in \(x_{i}^{\prime}s\) (and \(t=x_{n+1}\)) of total degree \(|\lambda|\). Hence by equation (1), we get : \[\chi_{\lambda}|_{H}=\sum_{\nu}m(\nu)\chi_{\nu}t^{|\lambda|-|\nu|}=\sum_{r=0}^{ \lambda_{1}+n}t^{r}\left(\sum_{r+|\nu|=|\lambda|}m(\nu)\chi_{\nu}\right). \tag{8}\] By tensoring with a suitable power of the determinant, we can assume that \(\lambda_{n+1}=0\). As the numerator and denominator in the relative Weyl character are polynomials in \(t\), and the relative Weyl denominator has a non-zero constant term, it follows \(\chi_{\lambda}|_{H}\) is a polynomial in \(t\). We prove the theorem by increasing induction on the degree of \(t\) (we can also prove it by decreasing induction on the degree of \(t\)). Cross multiplying equation (8) by \(\Delta\) and comparing constant coefficients, we get \[\chi_{\lambda^{(n+1)}}=\sum_{|\nu|=|\lambda|}m(\nu)\chi_{\nu}\chi_{\omega_{n}}.\] This gives us, \[m(\lambda^{(n+1)}-(1,1,\ldots,1))=m(\lambda_{1},\lambda_{2},\ldots,\lambda_{n })=1.\] For our induction hypothesis, we assume that the coefficients of \(t^{r}\) in Equation (8) for all \(r<k\) satisfy the hypothesis in the theorem. We have shown the induction hypothesis to be true for \(k=1\). Note that by homogeneity, the sum of the degree of \(t\) and \(|\nu|\) is equal the length of \(|\lambda|\). By the induction hypothesis, we know that \(m(\nu)=1\) for those highest weights \(\nu\), for which \(|\nu|>|\lambda|-k\), and interlace \(\lambda\). We can rewrite the above equation (8) as \[\chi_{\lambda}|_{H}=\sum_{r=0}^{k-1}t^{r}\left(\sum_{\begin{subarray}{c}r+| \nu^{\prime}|=|\lambda|\\ \nu\preceq\lambda\end{subarray}}\chi_{\nu^{\prime}}\right)+\sum_{r\geq k}t^{ r}\left(\sum_{r+|\nu|=|\lambda|}m(\nu)\chi_{\nu}\right). \tag{9}\] We would like to understand the multiplicity \(m(\nu)\) for a highest weight \(\nu\) with \(|\nu|+k=|\lambda|\). Upon cross multiplying the above equation by \(\Delta\), and using the fact that the constant term of \(\Delta\) is \(\chi_{\omega_{n}}\), we see that the character \(\chi_{\nu+\omega_{n}}\) occurs with multiplicity \(m(\nu)\) in the second part of the above sum. Applying relative Pieri to the first term in the above expression, we get \[\sum_{r=0}^{k-1}t^{r}\left(\sum_{\begin{subarray}{c}r+|\nu^{\prime}|=|\lambda|\\ \nu\preceq\lambda\end{subarray}}\chi_{\nu^{\prime}}\right)\Delta=\sum_{r=0}^{k- 1}t^{r}\left(\sum_{\begin{subarray}{c}\nu\preceq\lambda\\ r+|\nu^{\prime}|=|\lambda|\end{subarray}}\sum_{\varepsilon\in\{0,1\}^{n}}(-t)^ {n-|\varepsilon|}\chi_{\nu^{\prime}+\varepsilon}\right).\] We see by homogeneity that a degree \(k\) term occurs when \(r+n-|\varepsilon|=k\), i.e. \(|\lambda|+n=k+|\nu^{\prime}+\varepsilon|\). 
Note that if \(|\varepsilon|=n\), then it cannot contribute to the coefficient of \(t^{k}\), as \(r<k\). Hence the contribution to the coefficient of \(t^{k}\) is given by \[\sum_{|\varepsilon|<n}(-1)^{n-|\varepsilon|}\left(\sum_{\begin{subarray}{c}|\nu^{\prime}+\varepsilon|+k=|\lambda|+n\\ \nu^{\prime}\preceq\lambda\end{subarray}}\chi_{\nu^{\prime}+\varepsilon}\right). \tag{10}\] To understand the multiplicity \(m(\nu)\), we need to know the multiplicity, say \(n(\nu)\), of \(\chi_{\nu+\omega_{n}}\) in the above expression. By the foregoing equation, this amounts to counting the (signed) number of ways \(\nu+\omega_{n}\) can be written as \(\nu^{\prime}+\varepsilon\) with \(|\varepsilon|<n\) and \(\nu^{\prime}\) interlacing \(\lambda\). Note that this can be seen as \(\nu+\varepsilon^{\prime}=\nu^{\prime}\), where \(\varepsilon^{\prime}=\omega_{n}-\varepsilon\), \(|\varepsilon^{\prime}|=n-|\varepsilon|\) and \(|\varepsilon^{\prime}|>0\). The key step in the proof of branching is to count the number of ways a given \(\nu\) can be modified with \(\varepsilon^{\prime}\in\{0,1\}^{n}\), such that \(\nu+\varepsilon^{\prime}\) interlaces \(\lambda\). To illustrate the proof, suppose \(\nu\) is "generic", i.e. \(\lambda_{i}>\nu_{i}\geq\lambda_{i+1}\) for all \(i\). In this case, we can arbitrarily assign \(\varepsilon^{\prime}_{i}=1\) at any set of \(k\) indices with \(0<k\leq n\) so that \(\nu^{\prime}=\nu+\varepsilon^{\prime}\) interlaces \(\lambda\). The total number of such \(\varepsilon^{\prime}\) is given by the binomial coefficient \(\binom{n}{|\varepsilon^{\prime}|}=\binom{n}{k}\). Thus the multiplicity with which the weight \(\nu+\omega_{n}\) occurs in equation (10) is given by \[n(\nu)=\sum_{k=1}^{n}(-1)^{k}\binom{n}{k}=-1.\] On the other hand, for generic \(\nu\) the weight \(\nu+\omega_{n}\) does not occur in the relative Weyl numerator: for \(j\geq 2\) the first coordinate of \(\lambda^{(j)}\) is \(\lambda_{1}+1>\nu_{1}+1\), while the last coordinate of \(\lambda^{(1)}\) is \(\lambda_{n+1}\leq\nu_{n}<\nu_{n}+1\). Comparing the coefficients of \(\chi_{\nu+\omega_{n}}\) on both sides then gives \(m(\nu)+n(\nu)=0\). Thus for generic \(\nu\), we get \(m(\nu)=1\). We modify the above argument for general \(\nu\). We first show that \(\nu\) _weakly interlaces_ \(\lambda\) in the following sense: **Lemma 2**.: _Let \(\nu\) be such that \(\nu^{\prime}=\nu+\varepsilon^{\prime}\) interlaces \(\lambda\) for some \(\varepsilon^{\prime}\in\{0,1\}^{n},\) with \(|\varepsilon^{\prime}|>0.\) Then_ \[\lambda_{i}\geq\nu_{i}\geq\lambda_{i+1}-1. \tag{11}\] Proof.: Given that \(\nu^{\prime}_{i}=\nu_{i}+\varepsilon^{\prime}_{i}\), and as \(\nu^{\prime}\preceq\lambda\), we get that \(\lambda_{i}\geq\nu^{\prime}_{i}\geq\lambda_{i+1}.\) Putting the two of them together, we get that \(\lambda_{i}\geq\nu_{i}\geq\lambda_{i+1}-1\). Let \(M\) be the set of indices \(i\) such that \(\nu_{i}=\lambda_{i+1}-1\) and \(M^{\prime}\) be the set of indices \(i\) such that \(\nu_{i}=\lambda_{i}\). Let \(m\) and \(m^{\prime}\) be their respective cardinalities. The generic case is when \(m=m^{\prime}=0\). \(\nu\) interlaces \(\lambda\) if and only if \(m=0\). Note that the weights occurring in the relative Weyl numerator correspond to the extremal case \(m+m^{\prime}=n\) and \(m>0\). We first observe: **Lemma 3**.: _Let \(\nu\) satisfy the conclusion of Lemma 2, i.e. \(\lambda_{i}\geq\nu_{i}\geq\lambda_{i+1}-1\). Suppose there exists an \(\varepsilon^{\prime}\in\{0,1\}^{n}\) such that \(\nu^{\prime}=\nu+\varepsilon^{\prime}\) interlaces \(\lambda\).
Then_ \[m\leq|\varepsilon^{\prime}|\leq n-m^{\prime}.\] Proof.: Suppose \(\nu^{\prime}=\nu+\varepsilon^{\prime}\), where \(\nu^{\prime}\preceq\lambda\). For \(i\in M\), \(\nu_{i}+1=\lambda_{i+1}\), which forces \(\nu^{\prime}_{i}=\lambda_{i+1}\) and \(\varepsilon^{\prime}_{i}=1\). Similarly, if \(i\in M^{\prime}\), then \(\nu_{i}=\lambda_{i}\) and hence \(\nu^{\prime}_{i}=\lambda_{i}\) and \(\varepsilon^{\prime}_{i}=0\). Thus out of the \(|\varepsilon^{\prime}|\) many 1's, \(m\) of them are fixed, and out of the \(n-|\varepsilon^{\prime}|\) many 0's, \(m^{\prime}\) of them are fixed. Hence if \(|\varepsilon^{\prime}|<m\) or \(n-|\varepsilon^{\prime}|<m^{\prime}\), then there is no possible \(\varepsilon^{\prime}\) satisfying the requirements of the hypothesis. If \(m\leq|\varepsilon^{\prime}|\leq n-m^{\prime}\), then we can get a \(\varepsilon^{\prime}=\nu^{\prime}-\nu\) satisfying the hypothesis. We now count the number of ways \(\nu\) can be modified by \(\varepsilon^{\prime}\in\{0,1\}^{n}\), such that \(\nu^{\prime}=\nu+\varepsilon^{\prime}\) interlaces \(\lambda\). **Lemma 4**.: _Given a weight \(\nu\) such that \(|\nu|+k=|\lambda|\), then the multiplicity of \(\mathcal{X}_{\nu+\omega_{n}}\) in \(\sum_{\begin{subarray}{c}k+|\nu^{\prime}+\varepsilon|=|\lambda|+n\\ \nu^{\prime}\preceq\lambda\end{subarray}}\mathcal{X}_{\nu^{\prime}+\varepsilon}\) is given by_ \[\binom{n-m^{\prime}-m}{|\varepsilon^{\prime}|-m},\] _if \(\lambda_{i}\geq\nu_{i}\geq\lambda_{i+1}-1\) and zero otherwise._ Proof.: For such a \(\nu\), we want to count \(\varepsilon^{\prime}\in\{0,1\}^{n}\), such that \(\nu^{\prime}=\nu+\varepsilon^{\prime}\) interlaces \(\lambda\). By Lemma 2, we get that multiplicity of \(\mathcal{X}_{\nu}\) is non zero only if \(\lambda_{i}\geq\nu_{i}\geq\lambda_{i+1}-1\). We are now reduced to the case \(\lambda_{i}\geq\nu_{i}\geq\lambda_{i+1}-1\) for all \(i\). By Lemma 3, we get that if \(|\varepsilon^{\prime}|<m\) or \(n-|\varepsilon^{\prime}|<m^{\prime}\), then the multiplicity of \(\mathcal{X}_{\nu}\) is zero. As in the generic case, we can now freely choose \(|\varepsilon^{\prime}|-m\) indices amongst \(\{1,2,\ldots,n\}\backslash(M\cup M^{\prime})\) positions. This gives the multiplicity as \[\binom{n-m-m^{\prime}}{|\varepsilon^{\prime}|-m}=\binom{n-m-m^{\prime}}{n-| \varepsilon^{\prime}|-m^{\prime}}.\] **Corollary 2**.: _Given \(\nu\) with \(|\nu|+k=|\lambda|\), and \(\nu\) satisfies equation 11, the multiplicity \(n(\nu)\) of \(\mathcal{X}_{\nu+\omega_{n}}\) in_ \[\sum_{|\varepsilon|<n}(-1)^{n-|\varepsilon|}\left(\sum_{\begin{subarray}{c}| \nu^{\prime}+\varepsilon|+k=|\lambda|+n\\ \nu^{\prime}\preceq\lambda\end{subarray}}\mathcal{X}_{\nu^{\prime}+\varepsilon} \right)=\begin{cases}-1&\text{if }m=0\\ (-1)^{m}&\text{if }m+m^{\prime}=n,m>0\\ 0&\text{otherwise.}\end{cases}\] Proof.: We want to count \(\varepsilon\in\{0,1\}^{n}\) such that \(\nu+\omega_{n}=\nu^{\prime}+\varepsilon\), which is equivalent to counting \(\varepsilon^{\prime}\in\{0,1\}^{n}\) such that \(\nu^{\prime}=\nu+\varepsilon^{\prime}\). By Lemmas 2, 3 and 4, we get that the multiplicity of \(\chi_{\nu+\omega_{n}}\) is given by \[\sum_{m\leq|\varepsilon^{\prime}|\leq n-m^{\prime}|\atop|\varepsilon^{\prime}| >0}(-1)^{|\varepsilon^{\prime}|}{n-m-m^{\prime}\choose|\varepsilon^{\prime}|-m }.\] When \(m>0\) and \(n=m+m^{\prime}\), the only term in the above sum is when \(|\varepsilon^{\prime}|=m\). Hence the sum is equal to \((-1)^{m}\). 
When \(m>0\) and \(m+m^{\prime}<n\), up to a sign, the above sum is equal to \((1-1)^{n-m-m^{\prime}}\) and hence vanishes. When \(m=0\), the above expression differs from \((1-1)^{n-m^{\prime}}\) by \(1\) (the term \(|\varepsilon^{\prime}|=0\) being excluded), hence the sum is equal to \(-1\). This covers all the cases, hence completes the proof of the corollary. #### 2.2.1. Proof of Theorem 1 Recall the relative Weyl numerator: \[\chi_{\lambda}|_{H}\Delta=t^{\lambda_{n+1}}\chi_{\lambda^{(n+1)}}+\ldots+(-1)^{n-j+1}t^{\lambda_{j}+n-j+1}\chi_{\lambda^{(j)}}+\ldots+(-1)^{n}t^{\lambda_{1}+n}\chi_{\lambda^{(1)}}.\] We see that \[\lambda^{(j)}-\omega_{n}=(\lambda_{1},\ldots,\lambda_{j-1},\lambda_{j+1}-1,\ldots,\lambda_{n+1}-1),\] satisfies equation (11). Since \(n(\nu)\) vanishes if \(\nu\) does not satisfy equation (11), it follows that \(m(\nu)\) vanishes for such weights. Suppose \(\nu\) is of the form \(\lambda^{(j)}-\omega_{n}\) for some \(j\). In this case \[m=n-j+1>0,\,m^{\prime}=j-1\text{ and }n=m+m^{\prime}.\] The multiplicity with which \(\lambda^{(j)}\) occurs in the relative Weyl numerator is \((-1)^{n-j+1}\). As this is equal to \(m(\nu)+(-1)^{m}\), we get that \(m(\nu)=0\). Suppose \(\nu\) satisfies equation (11) and \(\nu+\omega_{n}\neq\lambda^{(j)}\) for any \(j\). Since \(n(\nu)=-1\) precisely when \(\nu\) interlaces \(\lambda\), it follows that \(m(\nu)=1\) if and only if \(\nu\) interlaces \(\lambda\). This completes the proof of Theorem 1. _Remark 1_.: It is possible to do the comparison of coefficients from the opposite side, so the induction would work in the decreasing order of the degree of \(t\). _Remark 2_.: At each stage in the proof, we try to match the least degree coefficient of \(t\) in an inductive order, knowing the values for lesser powers of \(t\). This is essentially a long division of the relative Weyl character formula. _Remark 3_.: Our initial method was to formally invert the relative Weyl denominator. For any natural number \(m\), we have the equation, \[\sum_{i=0}^{n}(-1)^{i}\Lambda^{i}(V)S^{m-i}(V)=0,\] in the category of virtual representations of \(GL(V)\). Here, \(\Lambda^{i}(V)\) (resp. \(S^{i}(V)\)) denotes the \(i\)-th exterior (resp. symmetric) power representation of the standard representation of \(GL(V)\) on \(V\) and \(n\) is the dimension of \(V\). Using the above equation, the formal inverse of the Weyl denominator turns out to be, \[t^{-n}\sum_{j=0}^{\infty}t^{j}S^{j}(V).\] This can also be seen directly from the product form of the relative Weyl denominator. Now we appeal to the (usual) Pieri formula involving tensor products with symmetric power representations and argue inductively to get at the branching. See [11] for a proof along these lines. However, when considering branching from odd to even orthogonal groups of the same rank (following section), we run into the problem of making sense of inverting the relative Weyl denominator. This led us to consider the more direct approach outlined here to proving the branching formula, which has the advantage that it extends to other cases as well. ## 3. Branching rules for \((Spin(2n+1),Spin(2n))\) In this Section, we give a proof of Theorem 2, giving the classical branching rules for orthogonal groups, with the subgroup \(H\) being regular. We follow the schema of proof as given for \((GL(n+1),GL(n))\): give a \((G,H)\)-relative Weyl character formula, a \((G,H)\)-relative Pieri formula, and finally derive the branching rule in an inductive manner following the process of long division. (Before setting up these formulas, a quick numerical check of Theorem 2 in the smallest case is included below.)
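The check below is an illustration only: the weight \(\lambda=(1,0)\) (the vector representation of \(Spin(5)\)) is an arbitrary choice, the sympy library is assumed, and the determinantal character formulas (12) and (13) recalled in the next subsection are used, written in the variables \(y_{i}=x_{i}^{1/2}\) so that all exponents are integral.

```python
# Check of Theorem 2 for Spin(5) -> Spin(4) on the sample weight lambda = (1, 0).
import sympy as sp

y1, y2 = sp.symbols('y1 y2', positive=True)   # y_i = x_i^(1/2)
y = [y1, y2]

def D(two_eta, sign):
    # D^{+/-}(eta) = det | x_j^{eta_i} +/- x_j^{-eta_i} |, with x_j = y_j^2
    return sp.Matrix(2, 2, lambda i, j: y[j]**two_eta[i] + sign * y[j]**(-two_eta[i])).det()

two_rhoG, two_rhoH = (3, 1), (2, 0)           # twice rho for Spin(5) and Spin(4)

def chiG(lam):                                 # Weyl character formula (12) for Spin(5)
    te = tuple(2 * l + r for l, r in zip(lam, two_rhoG))
    return sp.cancel(D(te, -1) / D(two_rhoG, -1))

def chiH(nu):                                  # Weyl character formula (13) for Spin(4)
    te = tuple(2 * m + r for m, r in zip(nu, two_rhoH))
    return sp.cancel((D(te, -1) + D(te, 1)) / D(two_rhoH, 1))

lam = (1, 0)
# weights nu of Spin(4) interlacing lambda:  1 >= nu1 >= 0 >= |nu2|, i.e. (0,0) and (1,0)
assert sp.simplify(chiG(lam) - (chiH((0, 0)) + chiH((1, 0)))) == 0
```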
Although the relative Weyl character formula looks different from that of \(GL(n)\), the combinatorics of the relative Pieri formula is similar to that of the \(GL(n)\) case. Hence, the derivation of the branching rule from that of the relative Pieri is similar to that of the general linear case. ### Weyl character formula We follow the notation and convention as in [13, Chapter 24]. For \(m\geq 4\), let \(Spin(m)\) be the simply connected double cover of \(SO(m,\mathbb{C})\). We consider \(\mathbb{R}^{n}\) with the standard basis \(L_{i}\). Denote by \(x_{i}^{\pm 1}=e^{\pm L_{i}}\) the formal exponents of \(\pm L_{i}\). Given a tuple \(\eta=(\eta_{1},\eta_{2}\ldots,\eta_{n})=\sum_{i}\eta_{i}L_{i}\), define \[D^{+}(\eta)=\det|x_{j}^{\eta_{i}}+x_{j}^{-\eta_{i}}|\quad\text{ and }\quad D^{-}( \eta)=\det|x_{j}^{\eta_{i}}-x_{j}^{-\eta_{i}}|.\] We observe that \(D^{+}(\eta)\) is invariant under sign changes of \(\eta\), whereas \(D^{-}(\eta)\) is alternating with respect to sign changes. #### 3.1.1. \(Spin(2n+1)\) Let \(G=Spin(2n+1)\) and \(H=Spin(2n)\). The set of positive roots of \(Spin(2n+1)\) is given by: \[\Phi^{+}(B_{n})=\{L_{i}\pm L_{j}\mid 1\leq i<j\leq n\}\cup\{L_{i}\mid 1\leq i \leq n\}.\] The highest weights of \(G\) are described by \(n-\)tuples \[\lambda=(\lambda_{1},\lambda_{2}\ldots,\lambda_{n}),\] where \(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{n}\geq 0\) are all integers or all half integers. Let \[\rho_{G}=(n-1+1/2,n-2+1/2,\ldots,1/2),\] be half the sum of positive roots for \(G\). The character for the highest weight representation with highest weight \(\lambda\) is given by, \[\chi_{\lambda}=\frac{D^{-}(\lambda+\rho_{G})}{D^{-}(\rho_{G})}. \tag{12}\] #### 3.1.2. \(Spin(2n)\) The set of positive roots of \(Spin(2n)\) is given by: \[\Phi^{+}(D_{n})=\{L_{i}\pm L_{j}|1\leq i<j\leq n\}.\] The highest weights of \(H\) are given by an \(n-\)tuple \[\mu=(\mu_{1},\mu_{2},\ldots,\mu_{n}),\] where \(\mu_{1}\geq\mu_{2}\geq\ldots\geq|\mu_{n}|\geq 0\) are all integers or all half integers. Let \[\rho_{H}=\sum(n-i)L_{i}=(n-1,n-2,\ldots,0)\] be half the sum of positive roots of \(H\). The Weyl character formula for the irreducible representation of \(H\) with highest weight \(\mu\) is given by, \[\chi_{\mu}=\frac{D^{-}(\mu+\rho_{H})+D^{+}(\mu+\rho_{H})}{D^{+}(\rho_{H})}. \tag{13}\] ### Relative Weyl character formula The numerator and denominator of the Weyl character formula for \(G\) involve a sum of weights of \(G\). Like in the case of \(GL(n)\), we now get a relative Weyl character formula for \(G\) with respect to \(H\), where the numerator and denominator are sums of characters of \(H\). As above, we identify the tori of \(g\) and \(H\). **Proposition 3** (Relative Weyl character formula).: _Given \(\lambda=(\lambda_{1},\lambda_{2}\ldots,\lambda_{n})\) is a highest weight for \(G\), then_ \[\chi_{\lambda}|_{H}=\frac{\chi_{\lambda^{+}}-\chi_{\lambda^{-}}}{\Delta},\] \[\text{where}\;\;\lambda^{+}=\left(\lambda_{1}+\frac{1}{2},\lambda_{2}+\frac{ 1}{2},\ldots,\lambda_{n}+\frac{1}{2}\right),\] \[\lambda^{-}=\left(\lambda_{1}+\frac{1}{2},\lambda_{2}+\frac{1}{2},\ldots,- \lambda_{n}-\frac{1}{2}\right),\] _are highest weights for \(H\). 
Here,_ \[\Delta=\prod_{i=1}^{n}(x_{i}^{1/2}-x_{i}^{-1/2})=S^{+}-S^{-},\] _is the relative Weyl denominator for the pair \((G,H)\), where \(S^{\pm}\) are the half spin representations of \(H\) corresponding to the highest weights \(\lambda^{\pm}\) with \(\lambda=(0,\ldots,0)\)._ Proof.: By the product formula for the Weyl denominator, we observe that \[D^{-}(\rho_{G}) =\prod_{\alpha\in\Phi^{+}(B_{n})}(e^{\alpha/2}-e^{-\alpha/2})\] \[=\prod_{\alpha\in\Phi^{+}(D_{n})}(e^{\alpha/2}-e^{-\alpha/2})\prod_ {i=1}^{n}(x_{i}^{1/2}-x_{i}^{-1/2})\] \[=\frac{D^{+}(\rho_{H})}{2}\prod_{i=1}^{n}(x_{i}^{1/2}-x_{i}^{-1/2}),\] where \(e^{\pm\alpha/2}\) is the formal exponent corresponding to half the root \(\pm\alpha\). Note that \(\lambda^{+}\) and \(\lambda^{-}\) are equal in all but the last entry, where they are negative of each other. As \(D^{-}\) is alternating under sign changes in \(\eta\), we have that \[D^{-}(\lambda^{+}+\rho_{H})=-D^{-}(\lambda^{-}+\rho_{H}),\] whereas \(D^{+}\) is invariant under sign changes, giving us \[D^{+}(\lambda^{+}+\rho_{H})=D^{+}(\lambda^{-}+\rho_{H}).\] We also have that \(\lambda+\rho_{G}=\lambda^{+}+\rho_{H}\), which gives us \[D^{-}(\lambda+\rho_{G})=D^{-}(\lambda^{+}+\rho_{H}).\] Using the above identities, we can express the numerator \(D^{-}(\lambda+\rho_{G})\) of \(\chi_{\lambda}\) as, \[\frac{(D^{-}(\lambda^{+}+\rho_{H})+D^{+}(\lambda^{+}+\rho_{H}))-(D^{-}(\lambda ^{-}+\rho_{H})+D^{+}(\lambda^{-}+\rho_{H}))}{2}.\] Dividing the Weyl numerator with the Weyl denominator gives us the relative Weyl character. The denominator corresponds to taking \(\lambda=(0,\dots,0)\). This gives us the half spin representations. _Remark 1_.: One can arrive at the above Proposition by looking at the expansion of the Weyl character formula with respect to the the cosets of the Weyl group of \(H\) in the Weyl group of \(G\), which has been expounded in [1]. ### Relative Pieri formula We now derive the relative Pieri formula for \((G,H)\), by which we mean the tensor product decomposition of a highest weight representation of \(H\) with that of the relative Weyl denominator \(\Delta\). One can prove the relative Pieri formula by proving the tensor product decompositions with \(S^{\pm}\) individually. However, here we follow the approach of Okada's [1], where we use the determinantal identities for the Weyl character formula along with the product expansion for the relative Weyl denominator. **Proposition 4** (Relative Pieri formula).: _Given \(\mu\) a dominant weight of \(Spin(2n)\), we have the following decomposition:_ \[\chi_{\mu}\Delta=\sum_{\begin{subarray}{c}\varepsilon\in\{\pm 1\}^{n}\\ \mu+\varepsilon/2\text{ dominant}\end{subarray}}(-1)^{\varepsilon}\chi_{\mu+ \varepsilon/2}.\] Proof.: For the dominant weight \(\mu\) of \(H\), by the Weyl character formula for \(H\) (equation (13)) \[\chi_{\mu}\Delta=\frac{\Delta D^{-}(\mu+\rho_{H})+\Delta D^{+}(\mu+\rho_{H})}{D^ {+}(\rho_{H})}.\] We simplify the numerator in the above expression. We first observe that, \[(x_{j}^{l_{i}}-x_{j}^{-l_{i}})(x_{j}^{1/2}-x_{j}^{-1/2}) =(x_{j}^{l_{i}+1/2}+x_{j}^{-(l_{i}+1/2)})-(x_{j}^{l_{i}-1/2}+x_{j} ^{-(l_{i}-1/2)}),\] and \[(x_{j}^{l_{i}}+x_{j}^{-l_{i}})(x_{j}^{1/2}-x_{j}^{-1/2}) =(x_{j}^{l_{i}+1/2}-x_{j}^{-(l_{i}+1/2)})-(x_{j}^{l_{i}-1/2}-x_{j} ^{-(l_{i}-1/2)}),\] where \(l=(l_{1},l_{2},\ldots,l_{n})\) is a tuple of all distinct integers or all half integers. 
Using the multilinearity of the determinant as in the case of \(GL(n)\), along with the above identities, we get: \[\Delta D^{-}(l) =\prod_{k=1}^{n}(x_{k}^{1/2}-x_{k}^{-1/2})\left(\det|x_{i}^{l_{j}}-x_{i}^{-l_{j}}|\right)\] \[=\det|(x_{i}^{l_{j}}-x_{i}^{-l_{j}})(x_{i}^{1/2}-x_{i}^{-1/2})|\] \[=\det|(x_{i}^{l_{j}+1/2}+x_{i}^{-(l_{j}+1/2)})-(x_{i}^{l_{j}-1/2}+x_{i}^{-(l_{j}-1/2)})|\] \[=\sum_{\varepsilon\in\{\pm 1\}^{n}}(-1)^{\varepsilon}\det|x_{i}^{l_{j}+\varepsilon_{j}/2}+x_{i}^{-(l_{j}+\varepsilon_{j}/2)}|\] \[=\sum_{\varepsilon\in\{\pm 1\}^{n}}(-1)^{\varepsilon}D^{+}(l+\varepsilon/2), \tag{14}\] where \((-1)^{\varepsilon}=\prod_{i=1}^{n}\varepsilon_{i}.\) A similar calculation gives \[\Delta D^{+}(l)=\sum_{\varepsilon\in\{\pm 1\}^{n}}(-1)^{\varepsilon}D^{-}(l+\varepsilon/2). \tag{15}\] For \(l=\mu+\rho_{H}\), adding (15) and (14), and dividing by the Weyl denominator, we get \[\chi_{\mu}\Delta =\frac{\Delta D^{-}(\mu+\rho_{H})+\Delta D^{+}(\mu+\rho_{H})}{D^{+}(\rho_{H})}\] \[=\sum_{\varepsilon\in\{\pm 1\}^{n}}(-1)^{\varepsilon}\frac{D^{-}(\mu+\varepsilon/2+\rho_{H})+D^{+}(\mu+\varepsilon/2+\rho_{H})}{D^{+}(\rho_{H})}\] \[=\sum_{\varepsilon\in\{\pm 1\}^{n}}(-1)^{\varepsilon}\chi_{\mu+\varepsilon/2}. \tag{16}\] Note that if there is some \(\varepsilon\) such that \(\mu+\varepsilon/2\) is not dominant, then either there exists a \(j\) such that \(\mu_{j}+n-j+\varepsilon_{j}/2=\mu_{j+1}+n-j-1+\varepsilon_{j+1}/2\), so that the \(j^{th}\) and \((j+1)^{th}\) rows are identical in both the determinants, or the last two entries of \(l+\varepsilon/2\) are negatives of each other, so that the last two rows agree up to sign; in either case the numerator vanishes. Hence the non-zero summands correspond to those \(\varepsilon\) for which \(\mu+\varepsilon/2\) is dominant. This completes the proof. _Remark 2_.: An alternate way of proving the relative Pieri formula is to consider the tensor product with the half spin representations \(S^{\pm}\) individually, using the fact that \(S^{\pm}\) are both minuscule representations of \(H\). Then using the theorem on tensor products with minuscule representations ([12, Corollary 3.5]), we get the desired result. We now give a different derivation using the determinantal expressions. **Corollary 3**.: _Let \(\mu\) be a dominant weight for \(H\) and let \(S^{\pm}\) be the half spin representations of \(H\). Then_ \[\chi_{\mu}S^{+} =\sum_{\begin{subarray}{c}\varepsilon\in\{\pm 1\}^{n},\,(-1)^{\varepsilon}=1\\ \mu+\varepsilon/2\text{ dominant}\end{subarray}}\chi_{\mu+\varepsilon/2}, \tag{17}\] \[\chi_{\mu}S^{-} =\sum_{\begin{subarray}{c}\varepsilon\in\{\pm 1\}^{n},\,(-1)^{\varepsilon}=-1\\ \mu+\varepsilon/2\text{ dominant}\end{subarray}}\chi_{\mu+\varepsilon/2}. \tag{18}\] Proof.: Note that \(S^{+}\) and \(S^{-}\) are irreducible representations of \(H\) corresponding to the weights \((\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2},\frac{1}{2})\) and \((\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2},-\frac{1}{2})\) respectively. Also \(\Delta=S^{+}-S^{-}\) is a virtual character of \(H\). Consider the character \(\Delta^{\prime}=\prod_{i=1}^{n}(x_{i}^{1/2}+x_{i}^{-1/2})\); we can see that \(\Delta^{\prime}=S^{+}+S^{-}\). Using the method as in Proposition 4, one can prove its tensor product decomposition with any highest weight representation. Combining this with the result from the proposition gives a proof of the corollary. ### Branching formula We recall the notion of interlacing for the pair \((Spin(2n+1),Spin(2n))\). Suppose \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) (resp. \(\nu=(\nu_{1},\nu_{2},\ldots,\nu_{n})\)) is a dominant weight of \(Spin(2n+1)\) (resp.
\(Spin(2n)\)). We say that \(\nu\) interlaces \(\lambda\), denoted by \(\nu\preceq\lambda\), if \(\lambda_{1}\geq\nu_{1}\geq\lambda_{2}\geq\ldots\geq\nu_{n-1}\geq\lambda_{n} \geq|\nu_{n}|\). We will now prove the following branching rule: **Theorem 2**.: _Suppose \(\lambda\) is a dominant weight of \(Spin(2n+1)\)._ \[\chi_{\lambda}|_{H}=\sum_{\nu}m(\lambda,\nu)\chi_{\nu}, \tag{19}\] _where the sum is over the dominant weights \(\nu\) of \(Spin(2n)\)._ _Then \(m(\lambda,\nu)=1\) if and only if \(\nu\preceq\lambda\), and zero otherwise._ The proof will proceed along the same lines as for \(GL(n)\). In the following since we are working with a fixed \(\lambda\), we will use \(m(\nu)\) instead of \(m(\lambda,\nu)\). We will prove the theorem via ( decreasing ) induction on the weights with respect to the lexicographic ordering (\(>_{l}\)). Multiplying the relative Weyl character formula (Proposition 3) by the relative Weyl denominator, we get: \[\chi_{\lambda^{+}}-\chi_{\lambda^{-}} =\chi_{\lambda}|_{H}\Delta\] \[=\sum_{\nu}m(\nu)(\chi_{\nu}\Delta)\] \[=\sum_{\nu}m(\nu)\left(\sum_{\varepsilon\in\{\pm 1\}^{n}}(-1)^{ \varepsilon}\chi_{\nu+\varepsilon/2}\right), \tag{20}\] where we use the relative Pieri formula (Proposition 4 ) in the last equality. Suppose \(\mu\) is the largest in the lexicographic ordering such that \(m(\mu)>0\). By the relative Pieri formula, \(\chi_{\mu}\Delta\) has \(\chi_{\mu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})}\) as sub-representation for which the highest weight is largest in lexicographic ordering. Comparing it in equation (20) with the largest in lexicographic ordering gives us \(\mu=\lambda^{+}-(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})=\lambda\) and \(m(\mu)=m(\lambda)=1\). Let \(\nu<_{l}\lambda\) be a dominant weight for \(H\). For our induction hypothesis, we assume that all weights \(\nu^{\prime}\) such that \(\nu^{\prime}>_{l}\nu\) satisfy the hypothesis in the theorem. We can rewrite equation (19) as: \[\chi_{\lambda}|_{H}=\sum_{\begin{subarray}{c}\nu^{\prime}>_{l}\nu\\ \nu\preceq\lambda\end{subarray}}\chi_{\nu^{\prime}}+m(\nu)\chi_{\nu}+\sum_{ \mu<_{l}\nu}m(\mu)\chi_{\mu}. \tag{21}\] We would like to understand the multiplicity \(m(\nu)\) for the highest weight \(\nu\). We now multiply the above equation by \(\Delta\) and use relative Pieri formula. The middle term in the right hand side contributes a term \(m(\nu)\chi_{\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})}\). If \(\mu<_{l}\nu\), then there does not exist any \(\varepsilon\in\{\pm 1\}^{n}\) such that \(\mu+\varepsilon/2=\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})\). This is equivalent to saying that the third sum on the right side does not contribute to \(\chi_{\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})}\). Hence the only possible contribution to \(\chi_{\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})}\) comes from the first two terms. Multiplying the first term on the right side in the above equation by \(\Delta\) and using relative Pieri formula gives us \[\sum_{\begin{subarray}{c}\nu^{\prime}>_{l}\nu\\ \nu\preceq\lambda\end{subarray}}\chi_{\nu^{\prime}}\Delta=\sum_{ \begin{subarray}{c}\nu^{\prime}>_{l}\nu\\ \nu\preceq\lambda\end{subarray}}\left(\sum_{\begin{subarray}{c}\varepsilon\in \{\pm 1\}^{n}\\ \nu^{\prime}+\varepsilon/2\text{ dominant}\end{subarray}}(-1)^{\varepsilon} \chi_{\nu^{\prime}+\varepsilon/2}\right). 
\tag{22}\] To understand the multiplicity \(m(\nu)\), we need to know the multiplicity say \(n(\nu)\) of \(\chi_{\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})}\) in the above expression. To do this, we need to understand when \(\nu^{\prime}+\varepsilon/2=\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})\) in the above expression. Note that this can be seen as \(\nu^{\prime}=\nu+\varepsilon^{\prime}\), where \(\varepsilon^{\prime}=(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})-\varepsilon \in\{0,1\}^{n}\). One can see that \(\varepsilon_{i}^{\prime}=0\) if \(\varepsilon_{i}=1\), and \(\varepsilon_{i}^{\prime}=1\) if \(\varepsilon_{i}=-1\), so we have that \((-1)^{\varepsilon^{\prime}}=(-1)^{\varepsilon}\) and \(|\varepsilon^{\prime}|>0\) ( as \(\nu^{\prime}>_{l}\nu\)). In this section, for any \(\varepsilon\in\{\pm 1\}^{n}\), define \(\varepsilon^{\prime}=(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})-\varepsilon/2\). The rest of the proof is similar to the case of \(GL(n)\). We first show that \(\nu\) weakly interlaces \(\lambda\) in the following sense: **Lemma 5**.: _Let \(\nu\) be such that \(\nu^{\prime}=\nu+\varepsilon^{\prime}\) interlaces \(\lambda\) for some \(\varepsilon^{\prime}\in\{0,1\}^{n},\) with \(|\varepsilon^{\prime}|>0.\) Then_ \[\lambda_{i}\geq\nu_{i}\geq\lambda_{i+1}-1\text{ if }1\leq i\leq n-1\,\text{ and }\lambda_{n}\geq\nu_{n}\geq-\lambda_{n}-1. \tag{23}\] _._ Proof.: Given that \(\nu^{\prime}_{i}=\nu_{i}+\varepsilon^{\prime}_{i},\) and as \(\nu^{\prime}\preceq\lambda,\) we get that \(\lambda_{i}\geq\nu^{\prime}_{i}\geq\lambda_{i+1}\) for \(1\leq i\leq n-1\) and \(\lambda_{n}\geq|\nu^{\prime}_{n}|\). Putting the two of them together, we get that \(\lambda_{i}\geq\nu_{i}\geq\lambda_{i+1}-1\) if \(1\leq i\leq n-1\) and \(\lambda_{n}\geq\nu_{n}\geq-\lambda_{n}-1\). Define \(M=\{i\mid\nu_{i}=\lambda_{i+1}-1\text{ for }1\leq i\leq n-1\}\cup\{n\mid\nu_{n}=- \lambda_{n}-1\}\) and \(M^{\prime}=\{i\mid\nu_{i}=\lambda_{i}\}\). Let \(m=|M|\) and \(m^{\prime}=|M^{\prime}|\). We first observe **Lemma 6**.: _Let \(\nu\) satisfy the conclusion of Lemma 5, i.e. \(\lambda_{i}\geq\nu_{i}\geq\lambda_{i+1}-1\) for \(1\leq i\leq n-1\) and \(\lambda_{n}\geq\nu_{n}\geq-\lambda_{n}-1\). Suppose there exists an \(\varepsilon^{\prime}\in\{0,1\}^{n}\) such that \(\nu^{\prime}=\nu+\varepsilon^{\prime}\) interlaces \(\lambda\). Then_ \[m\leq|\varepsilon^{\prime}|\leq n-m^{\prime}.\] Proof.: Given \(\nu^{\prime}=\nu+\varepsilon^{\prime}\) where \(\nu^{\prime}\preceq\lambda\). For \(i\in M\), \(\nu_{i}+1=\lambda_{i+1}\), which forces \(\nu^{\prime}_{i}=\lambda_{i+1}\) and \(\varepsilon^{\prime}_{i}=1\). Similarly, if \(i\in M^{\prime}\), we have that \(\nu_{i}=\lambda_{i}\), which forces \(\nu^{\prime}_{i}=\lambda_{i}\) and \(\varepsilon^{\prime}_{i}=0\). Hence out of the \(|\varepsilon^{\prime}|\) many 1's, \(m\) of them are fixed, and out of the \(n-|\varepsilon^{\prime}|\) many 0's, \(m^{\prime}\) of them are fixed. Hence if \(|\varepsilon^{\prime}|<m\) or \(n-|\varepsilon^{\prime}|<m^{\prime}\), then there is no possible \(\varepsilon^{\prime}\) satisfying the requirements of the hypothesis. If \(m\leq|\varepsilon^{\prime}|\leq n-m^{\prime}\), then we can get a \(\varepsilon^{\prime}=\nu^{\prime}-\nu\) satisfying the hypothesis. We now count the number of ways \(\nu\) can be modified by \(\varepsilon^{\prime}\in\{0,1\}^{n}\), such that \(\nu^{\prime}=\nu+\varepsilon^{\prime}\) interlaces \(\lambda\). 
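As a concrete illustration of this count, the following brute-force Python sketch (the weights \(\lambda\) and \(\nu\) are arbitrary small test values, not tied to anything in the text) enumerates the admissible \(\varepsilon^{\prime}\) and compares the result with the binomial expression established in the next lemma:

```python
from itertools import product
from math import comb

# Arbitrarily chosen test data: lam is a dominant weight of Spin(2n+1),
# nu satisfies the weak interlacing (23) with respect to lam.
lam = (5, 3, 2)
nu = (4, 2, -3)
n = len(lam)

def interlaces(nup):
    # nu' interlaces lam: lam_1 >= nu'_1 >= lam_2 >= ... >= nu'_{n-1} >= lam_n >= |nu'_n|
    return all(lam[i] >= nup[i] >= lam[i + 1] for i in range(n - 1)) and lam[-1] >= abs(nup[-1])

# The sets M and M' of the text (0-indexed here)
M = {i for i in range(n - 1) if nu[i] == lam[i + 1] - 1}
if nu[-1] == -lam[-1] - 1:
    M.add(n - 1)
Mp = {i for i in range(n) if nu[i] == lam[i]}
m, mp = len(M), len(Mp)

for k in range(n + 1):
    brute = sum(1 for eps in product((0, 1), repeat=n)
                if sum(eps) == k and interlaces(tuple(v + e for v, e in zip(nu, eps))))
    closed = comb(n - m - mp, k - m) if m <= k <= n - mp else 0
    assert brute == closed
    print(f"|eps'| = {k}: {brute} admissible choices (binomial gives {closed})")
```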
**Lemma 7**.: _Given a weight \(\nu\) and a fixed \(k>0\), the multiplicity of \(\mathcal{X}_{\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})}\) in \(\sum_{\begin{subarray}{c}\varepsilon\in\{\pm 1\}^{n},|\varepsilon^{\prime}|=k\\ \nu^{\prime}\preceq\lambda\end{subarray}}\mathcal{X}_{\nu^{\prime}+ \varepsilon/2}\) is given by_ \[\binom{n-m^{\prime}-m}{|\varepsilon^{\prime}|-m},\] _if \(\nu\) weakly interlaces \(\lambda\) (equation(23)) and zero otherwise, where \(\binom{l}{s}\) denotes the standard binomial coefficient._ Proof.: For such a \(\nu\), we want to count how many \(\varepsilon^{\prime}\in\{0,1\}^{n}\) such that \(\nu^{\prime}=\nu+\varepsilon^{\prime}\) interlaces \(\lambda\). By Lemma 5, we get that multiplicity of \(\mathcal{X}_{\nu}\) is non zero only if \(\lambda_{i}\geq\nu_{i}\geq\lambda_{i+1}-1\) for \(1\leq i\leq n-1\) and \(\lambda_{n}\geq\nu_{n}\geq-\lambda_{n}-1\). We are now reduced to the case \(\lambda_{i}\geq\nu_{i}\geq\lambda_{i+1}-1\) for \(1\leq i\leq n-1\) and \(\lambda_{n}\geq\nu_{n}\geq-\lambda_{n}-1\). By Lemma 6, we get that if \(|\varepsilon^{\prime}|<m\) or \(n-|\varepsilon^{\prime}|<m^{\prime}\), then the multiplicity of \(\mathcal{X}_{\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})}\) is zero. As in the generic case, we can now freely choose \(|\varepsilon^{\prime}|-m\) indices amongst \(\{1,2,\ldots,n\}\backslash(M\cup M^{\prime})\) positions. This gives the multiplicity as \[\binom{n-m-m^{\prime}}{|\varepsilon^{\prime}|-m}=\binom{n-m-m^{\prime}}{n-| \varepsilon^{\prime}|-m^{\prime}}.\] **Corollary 4**.: _Given \(\nu\) that weakly interlaces \(\lambda\) (equation (23)), the multiplicity \(n(\nu)\) of \(\mathcal{X}_{\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})}\) in_ \[\sum_{k=1}^{n}(-1)^{|\varepsilon^{\prime}|}\left(\sum_{\begin{subarray}{c} \varepsilon\in\{\pm 1\}^{n},|\varepsilon^{\prime}|=k\\ \nu\preceq\lambda\end{subarray}}\mathcal{X}_{\nu^{\prime}+\varepsilon/2} \right)=\begin{cases}-1&\text{if }m=0\\ (-1)^{m}&\text{if }m+m^{\prime}=n,m>0\\ 0&\text{otherwise.}\end{cases}\] Proof.: We want to count \(\varepsilon\in\{0,1\}^{n}\) such that \(\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})=\nu^{\prime}+\varepsilon/2\), which is equivalent to counting \(\varepsilon^{\prime}\in\{0,1\}^{n}\) such that \(\nu^{\prime}=\nu+\varepsilon^{\prime}\). By Lemmas 5, 6 and 7, we get that the multiplicity of \(\mathcal{X}_{\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})}\) is given by \[\sum_{\begin{subarray}{c}m\leq|\varepsilon^{\prime}|\leq n-m^{\prime}\\ |\varepsilon^{\prime}|>0\end{subarray}}(-1)^{|\varepsilon^{\prime}|}\binom{n-m -m^{\prime}}{|\varepsilon^{\prime}|-m}.\] When \(m>0\) and \(n=m+m^{\prime}\), the only term in the above sum is when \(|\varepsilon^{\prime}|=m\). Hence the sum is equal to \((-1)^{m}\). When \(m>0\), upto a sign, the above sum is equal to \((1-1)^{n-m-m^{\prime}}\) and hence vanishes. When \(m=0\), the above expression differs from \((1-1)^{n-m^{\prime}}\) by \(1\), hence the sum is equal to \(-1\). This covers all the cases, hence proves the corollary. #### 3.4.1. Proof of Theorem 2 We have the relative Weyl numerator. \[\mathcal{X}_{\lambda}|_{H}\Delta=\mathcal{X}_{\lambda^{+}}-\mathcal{X}_{ \lambda^{-}}.\] We see that \[\lambda^{-}-(\tfrac{1}{2},\tfrac{1}{2},\ldots,\tfrac{1}{2})=(\lambda_{1}, \ldots,\lambda_{n-1},-\lambda_{n}-1)\] satisfies equation (23). Since \(n(\nu)\) vanishes if \(\nu\) does not satisfy equation (23), it follows that \(m(\nu)\) vanishes for such weights. 
Suppose \(\nu=\lambda^{-}-(\tfrac{1}{2},\tfrac{1}{2},\ldots,\tfrac{1}{2})\), then we get that \(m=1\) and \(m^{\prime}=n-1\), hence \(n(\nu)=-1\). Also the multiplicity with which \(\mathcal{X}_{\lambda^{-}}\) occurs in the Weyl numerator is \(-1\). As this is equal to \(m(\nu)+(-1)\), we get that \(m(\nu)=0\). Suppose \(\nu\) satisfies equation (23) and \(\nu+(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})\neq\lambda^{-}\). Since \(n(\nu)=-1\) precisely when \(\nu\) interlaces \(\lambda\), it follows that \(m(\nu)=1\) if and only if \(\nu\) interlaces \(\lambda\). This completes the proof of Theorem 2. ## 4. Branching rules for \((Sp(2n),Sp(2)\times Sp(2n-2))\) In this section, we will give a proof of Theorem 3, the 'classical' branching rule for symplectic groups from \(G=Sp(2n)\) to \(H=Sp(2n-2)\times Sp(2)\). Unlike the previous two cases, where the restrictions were multiplicity free, in the case of symplectic the branching multiplicities are more difficult to get at. We follow the approach of Yacobi and Wallach ([11]), describing the branching multiplicities in terms of dimensions of \(SL(2)=Sp(2)\) representations. Our method is similar in outline as in the linear and orthogonal cases. We first get a relative Weyl character formula. We then derive a relative Pieri formula, where we decompose the tensor of a representation of \(H\) with the relative Weyl denominator. However in the proof of branching, we no longer have simple identities like \(t^{k}t^{l}=t^{k+l}\), which we used while carrying out long division in the linear case, where \(t\) is the polynomial parameter coming from \(GL(1)\). In the symplectic case, if we consider the coefficients as representations of \(SL(2)\), we do not have simple equations like \(S^{(k)}S^{(l)}=S^{(k+l)}\), where \(S^{(k)}\) denotes the character of the \(k^{th}\) symmetric power of the standard representation. In place of the simple identities, we prove some cancellative identities involving representations of \(SL(2)\), which allows us to derive the branching multiplicities from the relative Pieri. As in the previous section, our notations follow the convention of [10, Chapter 24]. We would work with \((G,H)=(Sp(2n),Sp(2)\times Sp(2n-2))\), where one can see that \(H\) is a regular subgroup of \(G\). ### Relative Weyl character formula We recall the Weyl character formula for \(Sp(2n)\): The dominant weights for \(G\) are represented by \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\), where \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\geq 0\) and \(\lambda_{i}\) are all integers. The dominant weights for \(H\) are given by a pair \((k,\eta)\), where \(k\) is a non-negative integer and \(\eta\) is a dominant weight for \(Sp(2n-2)\). Denote by \(S^{(k)}\chi_{\eta}\) the character of the irreducible highest weight representation corresponding to the highest weight \((k,\eta)\). This corresponds to the outer tensor product of the symmetric \(k^{th}\) power of the standard representation of \(Sp(2)\) with the highest weight representation of \(Sp(2n-2)\) corresponding to \(\eta\). The character for the irreducible representation corresponding to the highest weight \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) is given by, \[\chi_{\lambda}=\frac{\det|x_{j}^{\lambda_{i}+n-i+1}-x_{j}^{-(\lambda_{i}+n-i+ 1)}|}{\det|x_{j}^{n-i+1}-x_{j}^{-(n-i+1)}|}. 
\tag{24}\] **Proposition 5**.: _(Relative Weyl character formula) Let \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) be a dominant weight for \(G=Sp(2n)\) and \(H=Sp(2)\times Sp(2n-2))\). Let \(\chi_{\lambda}\) be the irreducible highest weight representation of \(G\) with highest weight \(\lambda\). We have the following expression for restriction of \(\chi_{\lambda}\):_ \[\chi_{\lambda}|_{H}=\frac{\sum_{i=0}^{n-1}(-1)^{i}S^{(\lambda_{n-i}+ i)}\chi_{\lambda^{(n-i)}}}{\sum_{i=0}^{n-1}(-1)^{i}S^{(i)}\chi_{\omega_{n-i-1}}}. \tag{25}\] _where_ * \(S^{(k)}\) _is the symmetric_ \(k^{th}\) _power of the standard representation of_ \(Sp(2)\)_,_ * \(\omega_{k}=(1,1,1,\ldots,1,0,\ldots,0)\) _(k many 1's) corresponds to the_ \(k^{th}\) _fundamental weight for_ \(Sp(2n-2)\)_,_ * \(\lambda^{(i)}=(\lambda_{1}+1,\lambda_{2}+1,\ldots,\lambda_{i-1}+1,\lambda_{i+ 1},\ldots,\lambda_{n})\) _are dominant weights of_ \(Sp(2n-2)\)_._ Proof.: We do a co-factor expansion along the first column from the bottom in the numerator and denominator of (24). We consider \(x_{1}\)-variable for \(Sp(2)\) which gives us: \[\chi_{\lambda}|_{H}=\frac{\sum_{i=0}^{n-1}(-1)^{i}(x_{1}^{ \lambda_{n-1}+i+1}-x_{1}^{-(\lambda_{n-1}+i+1)})\chi_{\lambda^{(n-i)}}}{\sum_ {i=0}^{n-1}(-1)^{i}(x_{1}^{i+1}-x_{1}^{-(i+1)})\chi_{\omega_{n-i-1}}}\] \[=\frac{\sum_{i=0}^{n-1}(-1)^{i}S^{(\lambda_{n-i}+i)}\chi_{\lambda ^{(n-i)}}}{\sum_{i=0}^{n-1}(-1)^{i}S^{(i)}\chi_{\omega_{n-i-1}}}.\] We arrive at the second equality upon dividing both the numerator and denominator by the Weyl denominator of \(H=Sp(2)\times Sp(2n-2))\). We note that the relative Weyl denominator is given by \[\Delta=\sum_{i=0}^{n-1}(-1)^{i}S^{(i)}\chi_{\omega_{n-i-1}}=(-x_{1})^{-(n-1)} \prod_{i=2}^{n}\left(x_{1}-x_{i}\right)\left(x_{1}-x_{i}^{-1}\right).\] ### Relative Pieri formula We now derive a relative Pieri formula for \((G,H)\) where \(G=Sp(2n)\) and \(H=Sp(2)\times Sp(2n-2)\). The proof is a modification of the Okada's method ([Oka16]), where he considers the tensor product of a representation of \(Sp(2n)\) with the formal product \(\prod_{i=2}^{n}\left(x_{1}+x_{i}\right)\left(x_{1}+x_{i}^{-1}\right)\). **Theorem 3** (Relative Pieri formula).: _Let \(S^{(k)}\chi_{\eta}\) be the character of \(H\) corresponding to a highest weight \((k,\eta)\), of \(H\). Then,_ \[\Delta\times\left(S^{(k)}\chi_{\eta}\right)=S^{(k)}\sum_{\begin{subarray}{c} \nu\text{ dominant}\\ \nu-\eta\in\{\pm 1,0\}^{n-1}\end{subarray}}(-S^{(1)})^{n-1-|\nu-\eta|}\chi_{ \nu}.\] Proof.: We have the Weyl character formula, \[\chi_{\eta}=\frac{\det|x_{i}^{\eta_{j}+n-j}-x_{i}^{-(\eta_{j}+n-j)}|}{\det|x_{ i}^{n-j}-x_{i}^{-(n-j)}|}.\] Let \(t=x_{1}\). By the product expansion for the Weyl denominator, \[\Delta=(-t)^{-(n-1)}\prod_{i=2}^{n}(t-x_{i})(t-x_{i}^{-1}).\] In order to make the formulas concise, let \([x_{j}]^{a}:=x_{j}^{a}-x_{j}^{-a}\). We have the following identity: \[(t-x_{j})(t-x_{j}^{-1})[x_{j}]^{a}=(t^{2}+1)[x_{j}]^{a}-t([x_{j}]^{a+1}+[x_{j} ]^{a-1}).\] Let \(D_{H}\) denote the Weyl denominator for \(H\). 
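As an aside, the bracket identity displayed above can be confirmed with a throwaway sympy check (the exponent \(a=4\) is an arbitrary test value, and any integer works):

```python
import sympy as sp

t, x = sp.symbols('t x')
a = 4                                  # arbitrary test exponent
br = lambda p: x**p - x**(-p)          # the bracket [x]^p = x^p - x^{-p}

lhs = (t - x) * (t - 1/x) * br(a)
rhs = (t**2 + 1) * br(a) - t * (br(a + 1) + br(a - 1))
assert sp.simplify(sp.expand(lhs - rhs)) == 0
print("bracket identity verified")
```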
Then, \[\Delta\times\left(S^{(k)}\chi_{\eta}\right) =\frac{S^{(k)}}{D_{H}(-t)^{n-1}}\left(\prod_{i=2}^{n}(t-x_{i})(t-x _{i}^{-1})\right)\det|[x_{i}]^{\eta_{j}+n-j+1}|\] \[=\frac{S^{(k)}}{D_{H}(-t)^{n-1}}\det|(t^{2}+1)[x_{i}]^{\eta_{j}+n -j+1}-t([x_{i}]^{\eta_{j}+n-j+2}+[x_{i}]^{\eta_{j}+n-j})|\] \[=\frac{S^{(k)}}{D_{H}}\sum_{\varepsilon,\delta\in\{0,1\}^{n-1}}(- t)^{|\varepsilon|+|\delta|-n+1}\det|[x_{i}]^{\eta_{j}+n-j+1+\varepsilon_{j} -\delta_{j}}|, \tag{26}\] where * \(\varepsilon=(\varepsilon_{2},\varepsilon_{3},\ldots,\varepsilon_{n})\), * \(\delta=(\delta_{2},\ldots,\delta_{n})\), * \(|\varepsilon|=\sum_{i=2}^{n}|\varepsilon_{i}|\) and \(|\delta|=\sum_{i=2}^{n}|\delta_{i}|\). We will show that the summands are non zero only if \(\eta+\varepsilon-\delta\) is dominant (call this collection of \((\varepsilon,\delta)\) as \(\mathcal{A}\)). If \(\eta+\varepsilon\) is dominant, but \(\eta+\varepsilon-\delta\) is not, then there exists a \(j\) such that \(\eta_{j}+\varepsilon_{j}=\eta_{j+1}+\varepsilon_{j+1}\) and \(\delta_{j}=1,\delta_{j+1}=0\), which are the \(j^{th}\) and \((j+1)^{th}\) row in \(\det|[x_{i}]^{\eta_{j}+n-j+1+\varepsilon_{j}-\delta_{j}}|\) are identical, hence equal to \(0\). Let \(\mathcal{C}\) be the collection of \((\varepsilon,\delta)\) such that \(\eta+\varepsilon\) and \(\eta+\varepsilon-\delta\) are both not dominant. As \(\eta+\varepsilon\) is not dominant, there is a smallest \(j\) such that \(\eta_{j}=\eta_{j+1}\) and \(\varepsilon_{j}=0,\varepsilon_{j+1}=1\). We define, \[\mathcal{C}_{j} =\{(\varepsilon,\delta)\in\mathcal{C}:j\text{ is the smallest such that }\eta_{j}=\eta_{j+1},\varepsilon_{j}=0,\varepsilon_{j+1}=1\},\] \[\mathcal{C}_{j,1} =\{(\varepsilon,\delta)\in\mathcal{C}_{j}:\delta_{j}=\delta_{j+1}\},\] \[\mathcal{C}_{j,2} =\{(\varepsilon,\delta)\in\mathcal{C}_{j}:\delta_{j}\neq\delta_{j +1}\}.\] The sets \(\mathcal{C}_{j,1}\) and \(\mathcal{C}_{j,2}\) partition the whole \(\mathcal{C}\). If \((\varepsilon,\delta)\in\mathcal{C}_{j,1}\), then the \(j^{th}\) and \((j+1)^{th}\) row in \(\det|[x_{i}]^{\eta_{j}+n-j+1+\varepsilon_{j}-\delta_{j}}|\) are identical. Hence the determinant vanishes. If \((\varepsilon,\delta)\in\mathcal{C}_{j,2}\), let \[\delta^{\prime}=(\delta_{2},\delta_{3},\ldots,\delta_{j-1},\delta_{j+1}, \delta_{j},\ldots,\delta_{n}).\] If \(\xi=\eta+\varepsilon-\delta\) and \(\xi^{\prime}=\eta+\varepsilon-\delta^{\prime}\), then \[\xi_{j}=\eta_{j}-1,\xi_{j+1}=\eta_{j+1}+1\,\text{ and }\,\xi^{\prime}_{j}= \eta_{j},\xi^{\prime}_{j+1}=\eta_{j+1}.\] The respective numerators in (26) have same value but opposite signs. This gives us \[\sum_{(\varepsilon,\delta)\in\mathcal{C}_{j,2}}(-t)^{|\varepsilon|+|\delta|- n+1}\det|[x_{i}]^{\eta_{j}+n-j+1+\varepsilon_{j}-\delta_{j}}|=0.\] Hence we get that \[\Delta\times\Big{(}S^{(k)}\chi_{\eta}\Big{)}=S^{(k)}\sum_{(\varepsilon, \delta)\in\mathcal{A}}(-t)^{|\varepsilon|+|\delta|-n+1}\chi_{\eta+\varepsilon -\delta}.\] For a given \((\varepsilon,\delta)\in\mathcal{A}\), we count all \((\varepsilon^{\prime},\delta^{\prime})\in\mathcal{A}\) such that \(\varepsilon-\delta=\varepsilon^{\prime}-\delta^{\prime}\). If \(\varepsilon_{i}-\delta_{i}=1\), then \(\varepsilon_{i}=1,\ \delta_{i}=0\). Similarly, if \(\varepsilon_{i}-\delta_{i}=-1\) then \(\varepsilon_{i}=0,\ \delta_{i}=1\). If \(\varepsilon_{i}-\delta_{i}=0\), then \(\varepsilon_{i}=\delta_{i}\) and the number of such \(i^{\prime}s\) to be \(m=n-1-|\varepsilon-\delta|\). 
There are \(2^{m}\) different choices of \((\varepsilon^{\prime},\delta^{\prime})\in\mathcal{A}\) based on the choice of \(0\) or \(1\) in the \(m\) places. Out of the \(m\) places, if \(\varepsilon_{i}=\delta_{i}=1\) at \(k\) many places, then there are \(\binom{m}{k}\) independent possibilities. All of them do occur (for such a choice we get that \(|\varepsilon|+|\delta|-n+1=-m+2k\)). Hence we get that the coefficient of \(\chi_{\eta+\varepsilon-\delta}\) is given by \[(-1)^{m}\left(t^{-m}+\binom{m}{1}t^{-m+2}+\cdots+\binom{m}{m-1}t^{m-2}+t^{m}\right)=(-t-t^{-1})^{m}.\] Putting \(S^{(1)}=(t+t^{-1})\), and taking \(\nu=\eta+\varepsilon-\delta\), we see that \(|\varepsilon-\delta|=|\nu-\eta|\). Hence we get that \[\Delta\times\Big{(}S^{(k)}\chi_{\eta}\Big{)}=S^{(k)}\sum_{\begin{subarray}{c}\nu\text{ dominant}\\ \nu-\eta\in\{\pm 1,0\}^{n-1}\end{subarray}}(-S^{(1)})^{n-1-|\nu-\eta|}\chi_{\nu},\] which completes the proof. ### Branching computation **Theorem 4** (Symplectic Branching rule).: _Let \(\lambda\) be a dominant weight for \(G\). Let_ \[\chi_{\lambda}|_{H}=\sum_{\mu}\sum_{k}m(\lambda,\mu,k)S^{(k)}\chi_{\mu}.\] _Then \(m(\lambda,\mu):=\sum_{k}m(\lambda,\mu,k)S^{(k)}\) is non-zero if and only if_ \[\lambda_{j}\geq\mu_{j}\geq\lambda_{j+2},\ \text{ for }1\leq j\leq n-1,\] _(here \(\lambda_{n+1}=0\)). When the inequalities are satisfied, let_ \[x_{1}\geq y_{1}\geq x_{2}\geq y_{2}\geq\cdots\geq x_{n}\geq y_{n},\] _be the non-decreasing rearrangement of \(\{\lambda_{1},\ldots,\lambda_{n},\mu_{1},\ldots,\mu_{n-1},0\}\). Then_ \[m(\lambda,\mu)=\prod_{i=1}^{n}S^{(x_{i}-y_{i})}.\] We will prove the theorem by using the relative Weyl character formula and strong induction on the weights ordered by the lexicographic ordering. By the relative Weyl character formula, we have \[\chi_{\lambda}|_{H}=\sum_{\mu}m(\lambda,\mu)\chi_{\mu}=\frac{\sum_{i=0}^{n-1}(-1)^{i}S^{(\lambda_{n-i}+i)}\chi_{\lambda^{(n-i)}}}{\sum_{i=0}^{n-1}(-1)^{i}S^{(i)}\chi_{\omega_{n-i-1}}}. \tag{27}\] By cross multiplying by the relative Weyl denominator, we get \[\Delta\times\left(\sum_{\mu}m(\lambda,\mu)\chi_{\mu}\right)=\sum_{i=0}^{n-1}(-1)^{i}S^{(\lambda_{n-i}+i)}\chi_{\lambda^{(n-i)}}. \tag{28}\] Note that \(m(\lambda,\mu)\) is a representation of \(SL_{2}\). By the relative Pieri formula, \[\sum_{i=0}^{n-1}(-1)^{i}S^{(\lambda_{n-i}+i)}\chi_{\lambda^{(n-i)}}=\sum_{\mu}m(\lambda,\mu)\left(\sum_{\begin{subarray}{c}\nu\text{ dominant}\\ \nu-\mu\in\{\pm 1,0\}^{n-1}\end{subarray}}(-S^{(1)})^{n-1-|\nu-\mu|}\chi_{\nu}\right). \tag{29}\] Let \(\mu\) be the largest weight in the lexicographic ordering such that \(m(\lambda,\mu)\neq 0\); then \(\nu=\mu+(1,1,\ldots,1)\) is also dominant, and is the largest in the lexicographic ordering present in the expansion in (29). Equating the largest terms in the lexicographic ordering on the left and right sides of equation (29), we get that \[\mu=\lambda^{(n)}-(1,1,\ldots,1)=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n-1}).\] Hence, \[m(\lambda,\mu)=S^{(\lambda_{n})}.\] **Definition**.: _For a dominant weight \(\lambda\) of \(Sp(2n)\) and \(\xi\) a dominant weight of \(Sp(2n-2)\), we say that \(\xi\) doubly interlaces \(\lambda\) if_ \[\lambda_{j}\geq\xi_{j}\geq\lambda_{j+2}\quad\text{for}\quad 1\leq j\leq n-1.\] We will show that if \(\mu\) does not satisfy the given double interlacing condition, then the corresponding multiplicity is \(0\).
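Before setting up the induction, the multiplicity formula claimed in Theorem 4 can be sanity-checked numerically: taking dimensions on both sides of the branching (with \(\dim S^{(k)}=k+1\)) and using the Weyl dimension formula for \(Sp(2n)\), the sum over doubly interlacing \(\mu\) of \(\dim m(\lambda,\mu)\cdot\dim V_{\mu}\) must equal \(\dim V_{\lambda}\). The following Python sketch (a brute-force check on a few arbitrarily chosen weights for \(Sp(4)\supset Sp(2)\times Sp(2)\) and \(Sp(6)\supset Sp(2)\times Sp(4)\)) does exactly this:

```python
from fractions import Fraction
from itertools import product
from math import prod

def dim_sp(lam):
    """Weyl dimension formula for Sp(2n), lam = (lam_1 >= ... >= lam_n >= 0)."""
    n = len(lam)
    if n == 0:
        return 1
    l = [lam[i] + n - i for i in range(n)]      # entries of lam + rho, rho = (n, n-1, ..., 1)
    r = [n - i for i in range(n)]
    d = Fraction(1)
    for i in range(n):
        d *= Fraction(l[i], r[i])
        for j in range(i + 1, n):
            d *= Fraction(l[i] ** 2 - l[j] ** 2, r[i] ** 2 - r[j] ** 2)
    return int(d)

def mult_dim(lam, mu):
    """dim m(lam, mu) = prod_i (x_i - y_i + 1), pairing the sorted rearrangement as in Theorem 4."""
    z = sorted(list(lam) + list(mu) + [0], reverse=True)
    return prod(z[2 * i] - z[2 * i + 1] + 1 for i in range(len(lam)))

def check(lam):
    n = len(lam)
    padded = list(lam) + [0]                    # lambda_{n+1} = 0
    total = 0
    # mu runs over dominant weights of Sp(2n-2) doubly interlacing lam
    for mu in product(*[range(padded[j + 2], lam[j] + 1) for j in range(n - 1)]):
        if all(mu[j] >= mu[j + 1] for j in range(n - 2)):
            total += mult_dim(lam, mu) * dim_sp(mu)
    assert total == dim_sp(lam)
    return total

for lam in [(1, 0), (2, 1), (3, 3), (2, 1, 0), (3, 2, 1)]:
    print(lam, "->", check(lam), "matches dim of the Sp(2n) irrep")
```

Each assertion compares two independently computed numbers, so a failure would flag either a wrong multiplicity or a wrong interlacing range.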
We set up the induction hypothesis by assuming that if \(\xi\geq\mu\) in the lexicographic ordering and \(\xi\) doubly interlaces \(\lambda\) then \[m(\lambda,\xi)=\prod_{i}S^{(x_{i}-y_{i})}, \tag{30}\] where \(\{x_{1},y_{1},x_{2},y_{2},\ldots,x_{n},y_{n}\}\) be the non-decreasing rearrangement of \(\{\lambda_{1},\ldots,\lambda_{n}\), \(\xi_{1},\ldots,\xi_{n-1},0\}\) as mentioned in the theorem and \(m(\lambda,\xi)=0\) otherwise. We have already shown the first step of the induction hypothesis. We first show a weak double interlacing property for the weights \((\lambda,\mu)\) such that the representations \(m(\lambda,\mu)\) are non-vanishing: **Lemma 8**.: _Suppose \(m(\lambda,\mu)\neq 0\). Then_ \[\lambda_{j}\geq\mu_{j}\geq\lambda_{j+2}-2,\quad\text{for all $j$},\ 1\leq j \leq n-1.\] Proof.: Suppose there is an \(k\) such that \(\mu_{k}>\lambda_{k}\). Let \(\nu=\mu+(1,1,\ldots,1)\), then \(\nu\neq\lambda^{(j)}\) for any \(j\). We look at the coefficients of \(\chi_{\nu}\) in (29). By linear independence of characters, we get that \[0=\sum_{\begin{subarray}{c}\eta\text{ dominant}\\ \nu-\eta\in\{\pm 1,0\}^{n-1}\end{subarray}}(-S^{(1)})^{n-1-|\nu-\mu|}m( \lambda,\eta). \tag{31}\] Note that \(\eta_{i}-\nu_{i}=\eta_{i}-\mu_{i}-1\in\{\pm 1,0\}\). Hence for all \(i\), \[\eta_{i}\in\{\mu_{i},\mu_{i}+1,\mu_{i}+2\}.\] For \(\eta\neq\mu\) occurring in equation (31), we get that \(\mu<\eta\) in lexicographic ordering and \(\eta_{k}\geq\mu_{k}>\lambda_{k}\). This tells us that \(\eta\) does not doubly interlace \(\lambda\). By the induction hypothesis, for \(\eta\neq\mu\), \(m(\lambda,\eta)\) vanishes. Hence equation (31) reduces to \((-S^{(1)})^{n-1}m(\lambda,\mu)=0\), which gives us that \(m(\lambda,\mu)=0\). The proof is similar in the case where \(\mu_{k}<\lambda_{k+2}-2\) for some \(k\). The proof of branching depends on whether the weight \(\mu\) is of the form \(\lambda^{(k)}-(1,1,\ldots,1)\) for some \(k\), where \(\lambda^{(k)}\) are the weights occurring in the numerator of the relative Weyl character formula. From the weak interlacing property given by the foregoing lemma, we break the proof into three cases: 1. (_Generic Case_) For some \(j\) \[\lambda_{j}-2\geq\mu_{j}\geq\lambda_{j+2}-2,\] and \(\mu\neq\lambda^{(k)}-(1,1,\ldots,1)\) for any \(k\). 2. (_Boundary case_) For all \(i\) \[\lambda_{i}\geq\mu_{i}\geq\lambda_{i}-1.\] The weight \((\lambda_{1},\lambda_{2},\ldots,\lambda_{n-1})=\lambda^{(n)}-(1,1,\ldots,1)\) is the base case of the induction hypothesis, which we have already handled before. Hence, we can assume that in the boundary case for some \(i\), \(\mu_{i}<\lambda_{i}\). 3. (_Weyl numerator case_) \(\mu=\lambda^{(k)}-(1,1,\ldots,1)\) for some \(k<n\). We have the following key technical lemma which handles the generic and boundary cases: **Lemma 9**.: _Suppose \(\mu\) is a weight such that for some \(j,\ 1\leq j<n\),_ \[\lambda_{j}-1\geq\mu_{j}\geq\lambda_{j+2}-2,\] \[\text{and}\quad\mu\neq\lambda^{(k)}-(1,1,\ldots,1)\text{ for any }k.\] _Let \(\varepsilon=(\varepsilon_{1},\ldots,\varepsilon_{n-1})\in\{0,1,2\}^{n-1}\) be such that \(\varepsilon\) is non-zero and \(\varepsilon_{j}=0\). 
Then,_ \[m(\lambda,\mu+\varepsilon)+m(\lambda,\mu+\varepsilon+2e_{j})-S^{(1)}m(\lambda,\mu+\varepsilon+e_{j})=0, \tag{32}\] _where \(e_{j}\) is the standard \(j^{th}\) coordinate \((0,\ldots,0,1,0,\ldots,0)\)._ Proof.: As in the statement of the branching law, let \(\{x_{1},y_{1},x_{2},y_{2},\ldots,x_{n},y_{n}\}\) be the non-decreasing rearrangement of \(\{\lambda_{1},\ldots,\lambda_{n},\mu_{1}+\varepsilon_{1},\ldots,\mu_{n-1}+ \varepsilon_{n-1},0\}\). Note that \(\mu+\varepsilon\) is larger that \(\mu\) in lexicographic ordering. By the induction hypothesis given by equation (30), the lemma follows if \(\mu+\varepsilon\) is not dominant or not doubly interlacing \(\lambda\), since \[m(\lambda,\mu+\varepsilon)=m(\lambda,\mu+\varepsilon+2e_{j})=m(\lambda,\mu+ \varepsilon+e_{j})=0.\] Suppose \(\mu+\varepsilon\) is dominant and doubly interlaces \(\lambda\) (so \(\mu_{j}\geq\lambda_{j+2}\)). We will break the proof into 4 cases. Case 1: \(\mu_{j}\leq\lambda_{j}-2\), \(\mu_{j}\neq\lambda_{j+1}-1\) and \(\mu_{j}=y_{k}\) for some \(k\). By the induction hypothesis given by equation (30), \[m(\lambda,\mu+\varepsilon) =\left(\prod_{i\neq k}S^{(x_{i}-y_{i})}\right)S^{(x_{k}-\mu_{j})}\] \[m(\lambda,\mu+\varepsilon+e_{j}) =\left(\prod_{i\neq k}S^{(x_{i}-y_{i})}\right)S^{(x_{k}-\mu_{j}-1)}\] \[m(\lambda,\mu+\varepsilon+2e_{j}) =\left(\prod_{i\neq k}S^{(x_{i}-y_{i})}\right)S^{(x_{k}-\mu_{j}-2)}\] Upon substituting it in the left side of equation (32), we get \[m(\lambda,\mu+\varepsilon)+m(\lambda,\mu+\varepsilon+2e_{j})-S ^{(1)}m(\lambda,\mu+\varepsilon+e_{j}),\] \[= \left(\prod_{i\neq k}S^{(x_{i}-y_{i})}\right)(S^{(x_{k}-\mu_{j})} +S^{(x_{k}-\mu_{j}-2)}-S^{(1)}S^{(x_{k}-\mu_{j}-1)}),\] \[= \left(\prod_{i\neq k}S^{(x_{i}-y_{i})}\right)(S^{(1)}S^{(x_{k}- \mu_{j}-1)}-S^{(1)}S^{(x_{k}-\mu_{j}-1)}).\] \[= \,0.\] Case 2: \(\mu_{j}\leq\lambda_{j}-2\), \(\mu_{j}\neq\lambda_{j+1}-1\) and \(\mu_{j}=x_{k}\) for some \(k\). This is similar to Case 1. Here \(\mu_{j}=x_{k}\) for some \(k\), which can be handled in a similar manner. 1. \(\mu_{j}=\lambda_{j}-1\). When \(\mu_{j}=\lambda_{j}-1\), one sees that \(\mu+2e_{j}\) is not doubly interlacing but larger than \(\mu\) in lexicographic order, giving us \(m(\lambda,\mu+2e_{j})=0\). Since \(x_{j}=\lambda_{j}\), we have \(x_{j}-\mu_{j}=1\). Hence, \[\begin{split}& m(\lambda,\mu+\varepsilon)-S^{(1)}m(\lambda,\mu+ \varepsilon+e_{j})\\ &=\left(\prod_{i\neq j}S^{(x_{i}-y_{i})}\right)(S^{(x_{j}-\mu_{j })}-S^{(1)}S^{(x_{j}-\mu_{j}-1)})\\ &=\left(\prod_{i\neq j}S^{(x_{i}-y_{i})}\right)(S^{(1)}-S^{(1)}S ^{(0)})\\ &=0.\end{split}\] 2. \(\mu_{j}=\lambda_{j+1}-1\). In this case we have \(\lambda_{j+1}-1=\mu_{j}\geq\mu_{j+1}\). By the above cases, we can work with \(\mu_{j+1}\) instead of \(\mu_{j}\) provided \(\mu_{j+1}\neq\lambda_{j+2}-1\). This process can be continued, unless in the extreme case, we have where either \(\mu_{i}=\lambda_{i}\) or \(\mu_{i}=\lambda_{i+1}-1\) for all \(i\). Let \(k\) be the smallest index such that \(\mu_{i}=\lambda_{i}\) for all \(i<k\), and \(\mu_{k}=\lambda_{k+1}-1\). Also \(\mu_{k+1}\leq\mu_{k}=\lambda_{k+1}-1<\lambda_{k+1}\), hence \(\mu_{k+1}=\lambda_{k+2}-1\). Proceeding in a similar manner, we get that for all \(i\geq k\), \(\mu_{i}=\lambda_{i+1}-1\). Hence, \(\mu=\lambda^{(k)}-(1,\ldots,1)\), which contradicts the hypothesis of the lemma (this will be considered in the Weyl numerator case, to be handled later). This completes the proof of the lemma. #### 4.3.1. Branching: Generic case We now proceed to the proof of Theorem 3. 
We first handle the generic case. For computing the multiplicity of a dominant weight \(\mu\) in this case, let \(\nu=\mu+(1,1,\ldots,1)\). We look at the coefficients of \(\mathcal{X}_{\nu}\) in (29). By linear independence of characters, we get that \[\sum_{\begin{subarray}{c}\eta\text{ dominant}\\ \nu-\eta\in\{\pm 1,0\}^{n-1}\end{subarray}}(-S^{(1)})^{n-1-|\nu-\eta|}m( \lambda,\eta)=0. \tag{33}\] We assume that we are in the hypothesis of Lemma 9, i.e., \(\lambda_{j}-2\geq\mu_{j}\geq\lambda_{j+2}-2\), for some \(j\) and \(\mu_{j}\neq\lambda_{j+1}-1\). Let \[R_{j} =\{\varepsilon\in\{0,1,2\}^{n-1}\ |\ \varepsilon_{j}=0\},\] \[r(\varepsilon) =n-|\varepsilon-(1,1,\ldots,1)|-1.\] By equation (33), the following sum vanishes: \[\sum_{\varepsilon\in R_{j}}(-S^{(1)})^{r(\varepsilon)}\left(m(\lambda,\mu+ \varepsilon)+m(\lambda,\mu+\varepsilon+2e_{j})-S^{(1)}m(\lambda,\mu+\varepsilon +e_{j})\right). \tag{34}\] By Lemma 9, for all non-zero \(\varepsilon\in R_{j}\), the corresponding summands in equation (34) vanish. The only summand remaining is the one corresponding to \(\varepsilon=(0,\ldots,0)\), which gives us \[m(\lambda,\mu)=S^{(1)}m(\lambda,\mu+e_{j})-m(\lambda,\mu+2e_{j}). \tag{35}\] We recall by induction hypothesis, that for all \(\eta\) larger than \(\mu\) in the lexicographic ordering, the multiplicity \(m(\lambda,\eta)\) vanishes unless \(\eta\) doubly interlaces \(\lambda\). Suppose there is an \(i\neq j\) such that \(\mu_{i}\in\{\lambda_{i+2}-1,\lambda_{i+2}-2\}\). Then \(\mu+e_{j}\) and \(\mu+2e_{j}\) do not doubly interlace \(\lambda\). By equation (35), we have \[m(\lambda,\mu)=S^{(1)}m(\lambda,\mu+e_{j})-m(\lambda,\mu+2e_{j})=0+0=0.\] Suppose \(\mu_{j}\) is the only component of \(\mu\) such that \(\mu_{j}\in\{\lambda_{j+2}-1,\lambda_{j+2}-2\}\). Then \(\mu_{j+1}\leq\mu_{j}\leq\lambda_{j+2}-1\), and by the forgoing part, we can assume that \(\lambda_{j+2}-1\geq\mu_{j+1}\geq\lambda_{j+3}\). We consider \(\mu_{j+1}\), and apply the forgoing argument, to conclude that \(m(\lambda,\mu)\) vanishes. Hence we can assume that if \(m(\lambda,\mu)\) does not vanish, then \(\mu\) doubly interlaces \(\lambda\). Let \(\{x_{1},y_{1},x_{2},y_{2},\ldots,x_{n},y_{n}\}\) be the non-decreasing rearrangement of \(\{\lambda_{1},\ldots,\lambda_{n},\)\(\mu_{1},\ldots,\mu_{n-1},0\}\). Then either \(\mu_{j}=y_{k}\) for some \(k\) or \(\mu_{j}=x_{l}\) for some \(l\). We consider \(\mu_{j}=y_{k}\), the other case follows in a similar manner: \[m(\lambda,\mu) =S^{(1)}m(\lambda,\mu+e_{j})-m(\lambda,\mu+2e_{j})\] \[=\left(\prod_{i\neq k}S^{(x_{i}-y_{i})}\right)(S^{(1)}S^{(x_{k}- \mu_{j}-1)}-S^{(x_{k}-\mu_{j}-2)})\] \[=\left(\prod_{i\neq k}S^{(x_{i}-y_{i})}\right)(S^{(x_{k}-\mu_{j} )}+S^{(x_{k}-\mu_{j}-2)}-S^{(x_{k}-\mu_{j}-2)})\] \[=\left(\prod_{i\neq k}S^{(x_{i}-y_{i})}\right)(S^{(x_{k}-\mu_{j} )})\] \[=\prod_{i}S^{(x_{-}y_{i})}.\] This completes the proof in the generic case. #### 4.3.2. Branching: Boundary case We recall that in the boundary case \(\lambda_{i}\geq\mu_{i}\geq\lambda_{i}-1\) for all \(i\). As discussed in the boundary case, we can assume that there is a \(j\) such that \(\mu_{j}=\lambda_{j}-1\). Let \[R^{\prime}_{j} =\{\varepsilon\in\{0,1\}^{n-1}\mid\varepsilon_{j}=0\}\] \[r^{\prime}(\varepsilon) =n-|\varepsilon-(1,1,\ldots,1)|-1\] Hence equation (33) can be seen as: \[\sum_{\varepsilon\in R_{j}^{\prime}}(-S^{(1)})^{r^{\prime}(\varepsilon)}(m( \lambda,\mu+\varepsilon)-S^{(1)}m(\lambda,\mu+\varepsilon+e_{j}))=0. 
\tag{36}\] By lemma 9, we see that all summands corresponding to \(\varepsilon\neq(0,\ldots,0)\) vanish, which gives us that \[m(\lambda,\mu)-S^{(1)}m(\lambda,\mu+e_{j})=0 \tag{37}\] Let \(\{x_{1},y_{1},x_{2},y_{2},\ldots,x_{n},y_{n}\}\) be the non-decreasing rearrangement of \(\{\lambda_{1},\ldots,\lambda_{n},\)\(\mu_{1},\ldots,\mu_{n-1},0\}\). Then \(x_{i}=\lambda_{i}\) and \(y_{i}=\mu_{i}\) if \(i<n\) and \(y_{n}=0\). Hence for \(i<n\), \(x_{i}-y_{i}=1\) if \(\mu_{i}=\lambda_{i}-1\), and vanishes otherwise. Note that \(x_{n}-y_{n}=\lambda_{n}\). Let \(g(\mu)=\#\{i|\mu_{i}=\lambda_{i}-1\}\). By induction hypothesis, \(m(\lambda,\mu+e_{j})=S^{(\lambda_{n})}(S^{(1)})^{g(\mu)-1}\). Hence by above equation, we get that \(m(\lambda,\mu)=S^{(\lambda_{n})}(S^{(1)})^{g(\mu)}\) which completes the proof in the boundary case. #### 4.3.3. Branching: Weyl numerator case As mentioned, \(j=n\) was done as the base case for induction, hence we take \(j\leq n-1\). We break the proof into two lemmas: **Lemma 10**.: _If \(a>b\) are integers, then_ \[S^{(a-b)}S^{(b-1)}-S^{(1)}S^{(a-b)}S^{(b)}+S^{(a-b-1)}S^{(b)}=-S^{(a+1)}. \tag{38}\] Proof.: We simplify the above equation: \[S^{(a-b)}S^{(b-1)}+(S^{(a-b-1)}-S^{(1)}S^{(a-b)})S^{(b)}\] \[=S^{(a-b)}S^{(b-1)}+(S^{(a-b-1)}-S^{(a-b-1)}-S^{(a-b+1)})S^{(b)}\] \[=S^{(a-b)}S^{(b-1)}-S^{(a-b+1)}S^{(b)}.\] By expanding the last tensor product, we get the desired result. The following lemma is crucial to the calculation of the multiplicities \(m(\lambda,\mu):\) **Lemma 11**.: _Suppose for some \(j\),_ \[\mu=\lambda^{(j)}-(1,1,\ldots,1)=(\lambda_{1},\lambda_{2},\ldots,\lambda_{j- 1},\lambda_{j+1}-1,\ldots,\lambda_{n}-1).\] _For \(\varepsilon\in\{0,1,2\}^{n-1}\), define_ \[m^{\prime}(\lambda,\mu+\varepsilon)=\begin{cases}m(\lambda,\mu+\varepsilon)& \text{ if }\varepsilon\neq(0,0,\ldots,0)\\ \prod_{i}S^{(x_{i}-y_{i})}&\text{ if }\varepsilon=(0,0,\ldots,0),\end{cases}\] _where \(\{x_{1},y_{1},\ldots,x_{n},y_{n}\}\) is the non-decreasing rearrangement of \(\{\lambda_{1},\ldots,\lambda_{n},\)\(\mu_{1},\ldots,\mu_{n-1},0\}\). Then,_ \[\sum_{\begin{subarray}{c}\mu+\varepsilon\text{ dominant}\\ \varepsilon\in\{0,1,2\}^{n-1}\end{subarray}}(-S^{(1)})^{n-1-|\varepsilon-(1, \ldots,1)|}m^{\prime}(\mu+\varepsilon)=(-1)^{n-j}S^{(\lambda_{j}+n-j)}, \tag{39}\] Proof.: The proof proceeds by downwards induction on \(j\). The case \(j=n\) is the base case of induction. Suppose \(\mu+\varepsilon\) doubly interlaces \(\lambda\). Then \(\varepsilon_{i}=0\) for all \(i<j\). Hence we can rewrite equation (39) as: \[\sum_{\varepsilon_{j}}\dots\sum_{\varepsilon_{n-2}}(-S^{(1)})^{r(\varepsilon^{ \prime})}\!\!\left(m^{\prime}(\mu+\varepsilon^{\prime})+m^{\prime}(\mu+ \varepsilon^{\prime}\!\!+\!2e_{n-1})-S^{(1)}m^{\prime}(\mu+\varepsilon^{\prime }\!\!+\!e_{n-1})\right)\!\!, \tag{40}\] where \[\varepsilon^{\prime} =(0,\dots,0,\varepsilon_{j},\varepsilon_{j+1},\dots,\varepsilon_ {n-2},0),\] \[r(\varepsilon^{\prime}) =n-|\varepsilon^{\prime}-(1,1,\dots,1)|-1.\] By the induction hypothesis and our assumption, we can write \[m^{\prime}(\mu+\varepsilon^{\prime})=\prod_{i}S^{(x_{i}^{\prime}-y_{i}^{ \prime})},\] where \(\{x_{1}^{\prime},y_{1}^{\prime},\dots,x_{n}^{\prime},y_{n}^{\prime}\}\) is the non-decreasing rearrangement of \(\{\lambda_{1},\dots,\lambda_{n},\mu_{1}+\varepsilon_{1}^{\prime},\dots,\mu_{n- 1}+\varepsilon_{n-1}^{\prime},0\}\). 
Using this, we rewrite the above sum as \[\sum_{\varepsilon^{\prime}}(-S^{(1)})^{r(\varepsilon^{\prime})} \left(\prod_{i}^{n-3}S^{(x_{i}^{\prime}-y_{i}^{\prime})}\right)\times\] \[\quad\quad\left(S^{(x_{n-2}^{\prime}-\lambda_{n})}S^{(\lambda_{n}- 1)}+S^{(x_{n-2}^{\prime}-\lambda_{n}-1)}S^{(\lambda_{n})}-S^{(1)}S^{(x_{n-2}^{ \prime}-\lambda_{n})}S^{(\lambda_{n})}\right).\] By lemma 10, \[S^{(x_{n-2}^{\prime}-\lambda_{n})}S^{(\lambda_{n}-1)}+S^{(x_{n-2}^{\prime}- \lambda_{n}-1)}S^{(\lambda_{n})}-S^{(1)}S^{(x_{n-2}^{\prime}-\lambda_{n})}S^{ (\lambda_{n})}=-S^{(x_{n-2}^{\prime}+1)}.\] If \(j=n-1\), then \(x_{n-2}^{\prime}=\lambda_{n-1}\) and we are done. If \(j<n-1\), then \(x_{n-2}^{\prime}=\lambda_{n-1}-1\) if \(\varepsilon_{n-2}^{\prime}=0\), and equal to \(\lambda_{n-1}\) otherwise. With this substitution, we see that \[\sum_{\varepsilon^{\prime\prime}}(-S^{(1)})^{r(\varepsilon^{ \prime\prime})}\left(\prod_{i}^{n-4}S^{(x_{i}^{\prime}-y_{i}^{\prime})}\right)\times\] \[\quad\quad\left(S^{(x_{n-3}^{\prime}-\lambda_{n-1})}S^{(\lambda_{ n-1})}+S^{(x_{n-3}^{\prime}-\lambda_{n-1}-1)}S^{(\lambda_{n-1}+1)}-S^{(1)}S^{(x_{n-3} ^{\prime}-\lambda_{n-1})}S^{(\lambda_{n-1}+1)}\right)\] where \(\varepsilon^{\prime\prime}=(0,\dots,0,\varepsilon_{j},\varepsilon_{j+1},\dots,\varepsilon_{n-3},0,0)\). By lemma 10, \[S^{(x_{n-3}^{\prime}-\lambda_{n-1})}S^{(\lambda_{n-1})}+S^{(x_{n-3}^{\prime}- \lambda_{n-1}-1)}S^{(\lambda_{n-1}+1)}-S^{(1)}S^{(x_{n-3}^{\prime}-\lambda_{n-1 })}S^{(\lambda_{n-1}+1)}\] \[=-S^{(x_{n-3}^{\prime}+2)}.\] If \(j=n-2\), then \(x_{n-3}^{\prime}=\lambda_{n-2}\), and we are done. Otherwise, we proceed in a similar manner for \(n-j\) times, and apply lemma 10 every time. We would see that \(x_{n-j-1}=\lambda_{n-j}\), which would give us the desired equality. **Corollary 5**.: _Let \(\mu=\lambda^{(j)}-(1,1,\dots,1)\) for some \(j\), then_ \[m(\lambda,\mu)=\prod_{i}S^{(x_{i}-y_{i})},\] _where \(\{x_{1},y_{1},\dots,x_{n},y_{n}\}\) is the non-decreasing rearrangement of \(\{\lambda_{1},\dots,\lambda_{n},\\ \mu_{1},\dots,\mu_{n-1},0\}\)._ Proof.: For \(\mu=\lambda^{(j)}-(1,1,\ldots,1)=(\lambda_{1},\ldots,\lambda_{j-1},\lambda_{j+1}-1,\ldots,\lambda_{n}-1)\), we look at the coefficients of \(\chi_{\lambda^{(j)}}\) in (29), which gives us \[(-1)^{n-j}S^{(\lambda_{j}+n-j)}=\sum_{\begin{subarray}{c}\mu+\varepsilon\text{ dominant}\\ \varepsilon\in\{0,1,2\}^{n-1}\end{subarray}}(-S^{(1)})^{r(\varepsilon)}m( \lambda,\mu+\varepsilon), \tag{41}\] We rearrange the sum, to get \[m(\lambda,\mu)=(-1)^{n-j}S^{(\lambda_{j}+n-j)}-\sum_{\begin{subarray}{c}\mu+ \varepsilon\text{ dominant}\\ \varepsilon\in\{0,1,2\}^{n-1}\\ \varepsilon\neq(0,\ldots,0)\end{subarray}}(-S^{(1)})^{r(\varepsilon)}m( \lambda,\mu+\varepsilon).\] By Lemma 11, we see that the right side of the equation is given by \(\prod_{i}S^{(x_{i}-y_{i})}\), which completes the proof. With this we have given the proof for the multiplicity formula in the three cases, hence this completes the proof of branching rules in the symplectic case. _Remark 3_.: One can arrive at the classical result of the multiplicity, which is seen as a number, by considering the dimension of the \(SL_{2}\) representation \(m(\lambda,\mu).\) It follows canonically that if \(m(\lambda,\mu)=\prod_{i=1}^{n}S^{(x_{i}-y_{i})}\), then \(\dim m(\lambda,\mu)=\prod_{i=1}^{n}(x_{i}-y_{i}+1)\), which is the formula seen in the classical treatment of the subject. ## 5. Branching rules for \((Spin(2n),Spin(2n-1))\) When \((G,H)=(Spin(2n),Spin(2n-1))\), we are not in an equal rank case. 
In this situation, we do not seem to have at hand a relative Weyl character formula and the foregoing method cannot be applied. The main idea of our proof is to relate the Weyl character formula for \(G\) to that of \(GL(n)\). The Weyl group of \(Spin(2n)\) is a semi-direct product of the symmetric group \(S_{n}\) with the group of 'even sign changes' \((\mathbb{Z}/2\mathbb{Z})^{n-1}\subset(\mathbb{Z}/2\mathbb{Z})^{n}\). The starting point of our proof is to decompose the Weyl character of \(Spin(2n)\) in terms of the Weyl character formula for \(GL(n)\), indexed by the sign changes. We then use the branching formula for \((GL(n),GL(n-1))\) as an algebraic identity and stitch the resultant identities along the group \((\mathbb{Z}/2\mathbb{Z})^{n-1}\) to arrive at a proof of the branching formula in this case. We recall the convention we had in Section 3: Given a tuple \(\eta=(\eta_{1},\eta_{2}\ldots,\eta_{n})\), we define \[D^{+}(\eta)=\det|x_{j}^{\eta_{i}}+x_{j}^{-\eta_{i}}|\quad\text{ and }\quad D^{-}( \eta)=\det|x_{j}^{\eta_{i}}-x_{j}^{-\eta_{i}}|.\] The dominant weights of \(G\) are given by \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\), where \(\lambda_{i}\) are all integers or all half integers such that \(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq|\lambda_{n}|\). Without loss of generality, we may assume that \(\lambda_{n}\geq 0\) (otherwise we can apply the outer automorphism of \(G\), to make it non negative). We recall the Weyl character formula for \(G\): \[\chi_{\lambda}=\frac{\frac{1}{2}(D^{-}(\lambda+\rho_{G})+D^{+}(\lambda+\rho_{G}))}{ \frac{1}{2}D^{+}(\rho_{G})}, \tag{42}\] where \(\rho_{G}=(n-1,n-2,\ldots,1,0)\) is the half sum of positive roots for \(G\). (We put the \(\frac{1}{2}\) in the numerator and denominator to keep the product formula for the Weyl denominator coherent.) The Weyl denominator has a product expansion: \[\frac{1}{2}D^{+}(\rho_{G})=\frac{\prod_{1\leq i<j\leq n}(x_{i}-x_{j})(x_{i}x_{j }-1)}{(x_{1}\ldots x_{n})^{n-1}}.\] We first express the numerator in the Weyl character formula of \(Spin(2n)\) as a sum of \(GL(n)-\)type numerators: **Proposition 6**.: _Given a tuple \(\eta=(\eta_{1},\eta_{2}\ldots,\eta_{n})\), we have that_ \[\frac{1}{2}(D^{-}(\eta)+D^{+}(\eta))=\sum_{\begin{subarray}{c}\varepsilon\in \{\pm 1\}^{n}\\ (-1)^{\varepsilon}=1\end{subarray}}\det|x_{j}^{\varepsilon_{i}\eta_{i}}|,\] _where \((-1)^{\varepsilon}=\prod_{i}\varepsilon_{i}\)._ Proof.: We use the multilinearity of the determinant and expand along the columns to get, \[\det|x_{j}^{\eta_{i}}+x_{j}^{-\eta_{i}}|=\sum_{\varepsilon\in\{ \pm 1\}}\det|x_{j}^{\varepsilon_{i}\eta_{i}}|\quad\text{ and }\] \[\det|x_{j}^{\eta_{i}}-x_{j}^{-\eta_{i}}|=\sum_{\varepsilon\in\{ \pm 1\}}(-1)^{\varepsilon}\det|x_{j}^{\varepsilon_{i}\eta_{i}}|.\] Adding the two equations and dividing by \(2\) completes the proof of the lemma. We recall (see [1, p.378]), that we can consider the maximal torus of \(Spin(2n-1)\) as embedded inside the maximal torus of \(Spin(2n)\) by letting \(x_{n}=1\). Using this specialization, we rewrite the Weyl denominator of \(Spin(2n)\) as follows: **Lemma 12**.: _If \(WD(G)\) (resp. \(WD(H)\)) is the weyl denominator of \(G\) ( resp. 
\(H\)), then_ \[WD(G)|_{x_{n}=1}=WD(H)\prod_{i=1}^{n-1}(x_{i}^{1/2}-x_{i}^{-1/2}).\] Proof.: We manipulate the product formula for the Weyl denominator of \(G\): \[WD(G)|_{x_{n}=1} =\frac{\prod_{1\leq i<j\leq n-1}(x_{i}-x_{j})(x_{i}x_{j}-1)\prod_{i= 1}^{n-1}(x_{i}-1)^{2}}{(x_{1}x_{2}\ldots x_{n-1})^{n-1}}\] \[=\frac{\prod_{1\leq i<j\leq n-1}(x_{i}-x_{j})(x_{i}x_{j}-1)}{(x_{1 }x_{2}\ldots x_{n-1})^{n-2}}\prod_{i=1}^{n-1}\frac{(x_{i}-1)^{2}}{x_{i}}\] \[=\left(\frac{\prod_{1\leq i<j\leq n-1}(x_{i}-x_{j})(x_{i}x_{j}-1) \prod_{i=1}^{n-1}(x_{i}^{1/2}-x_{i}^{-1/2})}{(x_{1}x_{2}\ldots x_{n-1})^{n-2}} \right)\left(\prod_{i=1}^{n-1}(x_{i}^{1/2}-x_{i}^{-1/2})\right)\] \[=WD(H)\prod_{i=1}^{n-1}(x_{i}^{1/2}-x_{i}^{-1/2}).\] We reformulate Theorem 1 as a formal algebraic identity involving the variables \(x_{i}\): **Proposition 7**.: _Let \(\lambda_{i}\) be all integers or all half integers such that \(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{n}\geq 0\). Then,_ \[\left.\frac{\det|x_{j}^{\lambda_{i}+n-i}|}{\det|x_{j}^{n-i}|}\right|_{x_{n}=1} =\sum_{\mu\preceq\lambda}\frac{\det|x_{j}^{\mu_{i}+n-i-1}|}{\det|x_{j}^{n-i-1}|}, \tag{43}\] _where \(\mu\preceq\lambda\) refers to saying that \(\mu\) interlaces \(\lambda\) (\(\lambda_{1}\geq\mu_{1}\geq\lambda_{2}\ \geq\cdots\geq\lambda_{n-1}\geq\mu_{n-1}\geq \lambda_{n}\)). Note that the determinants on the left side have size \(n\times n\) whereas those on the right side are of size \(n-1\times n-1\)._ Proof.: If \(\lambda_{i}\) are all integers, then the above identity follows from Theorem 1. If \(\lambda\) has parts that are all half integers, we consider equation (43) for \(\lambda^{\prime}=\lambda-(\frac{1}{2},\frac{1}{2},\ldots,\frac{1}{2})\). By multiplying both sides of the said identity by \((x_{1}x_{2}\ldots x_{n-1})^{1/2}\), we can show that the above identity holds when \(\lambda_{i}\) is half integral. In this case the right side of equation (43) is summed over all the \(\mu\) that are half integers and interlace \(\lambda\). **Corollary 6**.: \[\frac{(\det|x_{j}^{\lambda_{i}+n-i}|)|_{x_{n}=1}}{\prod_{i=1}^{n-1}(x_{i}-1)}= \sum_{\mu\preceq\lambda}\det|x_{j}^{\mu_{i}+n-i-1}|.\] (44) Proof.: The denominators in equation (43) are Vandermonde determinant. Upon cross multiplying we get the above equality. Proof of Theorem 4.: By Proposition 6 and Lemma 12, we have \[\chi_{\lambda}|_{x_{n}=1}=\frac{1}{WD(H)}\sum_{\begin{subarray}{c} \varepsilon\in\{\pm 1\}^{n}\\ (-1)^{\varepsilon}=1\end{subarray}}\frac{\det|x_{j}^{\varepsilon_{i}(\lambda_ {i}+n-i)}|_{x_{n}=1}}{\prod_{i=1}^{n-1}(x_{i}^{1/2}-x_{i}^{-1/2})}. \tag{45}\] We first observe that the following two abelian groups are isomorphic: \[\{\varepsilon\in\{\pm 1\}^{n}\mid(-1)^{\varepsilon}=1\}\cong\{\varepsilon^{ \prime}\in\{\pm 1\}^{n-1}\}.\] Given \(\varepsilon=(\varepsilon_{1},\ldots,\varepsilon_{n})\in\{\pm 1\}^{n},(-1)^{ \varepsilon}=1\),we have the injection \(\varepsilon^{\prime}=(\varepsilon_{1},\ldots,\varepsilon_{n-1})\). To show that it is a surjection, for \(\varepsilon^{\prime}\in\{\pm 1\}^{n-1}\), let \(\varepsilon^{\prime}_{n}=\prod_{i=1}^{n-1}\varepsilon^{\prime}_{i}\). Then \(\varepsilon=(\varepsilon^{\prime}_{1},\ldots,\varepsilon^{\prime}_{n-1}, \varepsilon^{\prime}_{n})\in\{\pm 1\}^{n}\) and \((-1)^{\varepsilon}=1\). Our convention is that by \(\varepsilon\) we mean an element on the left side of the above isomorphism, and \(\varepsilon^{\prime}\) refers to its isomorphic image on the right side. 
For a fixed \(\varepsilon\in\{\pm 1\}^{n}\) with \((-1)^{\varepsilon}=1\), let \(y_{i}=x_{i}^{\varepsilon_{i}}\). Then, \[\prod_{i=1}^{n-1}(x_{i}^{1/2}-x_{i}^{-1/2}) =\prod_{i=1}^{n-1}(y_{i}^{\varepsilon_{i}/2}-y_{i}^{-\varepsilon_{i}/2})\] \[=(-1)^{\varepsilon^{\prime}}\prod_{i=1}^{n-1}(y_{i}^{1/2}-y_{i}^{-1/2})\] \[=(-1)^{\varepsilon^{\prime}}(y_{1}y_{2}\ldots y_{n-1})^{-1/2}\prod_{i=1}^{n-1}(y_{i}-1). \tag{46}\] Using equation (44), we get that \[\frac{(\det|x_{j}^{\varepsilon_{i}(\lambda_{i}+n-i)}|)|_{x_{n}=1}}{\prod_{i=1}^{n-1}(x_{i}^{1/2}-x_{i}^{-1/2})} =(-1)^{\varepsilon^{\prime}}(y_{1}y_{2}\ldots y_{n-1})^{1/2}\frac{\det|y_{j}^{\lambda_{i}+n-i}|_{y_{n}=1}}{\prod_{i=1}^{n-1}(y_{i}-1)}\] \[=(-1)^{\varepsilon^{\prime}}(y_{1}y_{2}\ldots y_{n-1})^{1/2}\sum_{\mu\preceq\lambda}\det|y_{j}^{\mu_{i}+n-i-1}|\] \[=(-1)^{\varepsilon^{\prime}}\sum_{\mu\preceq\lambda}\det|y_{j}^{\mu_{i}+n-i-1/2}|\] \[=(-1)^{\varepsilon^{\prime}}\sum_{\mu\preceq\lambda}\det|x_{j}^{\varepsilon_{j}(\mu_{i}+n-i-1/2)}|\] \[=(-1)^{\varepsilon^{\prime}}\sum_{\mu\preceq\lambda}\det|x_{j}^{\varepsilon_{j}^{\prime}(\mu_{i}+n-i-1/2)}|. \tag{47}\] Combining this with equation (45), we get \[\chi_{\lambda}|_{x_{n}=1} =\frac{1}{WD(H)}\sum_{\begin{subarray}{c}\varepsilon\in\{\pm 1\}^{n}\\ (-1)^{\varepsilon}=1\end{subarray}}\frac{\det|x_{j}^{\varepsilon_{i}(\lambda_{i}+n-i)}|_{x_{n}=1}}{\prod_{i=1}^{n-1}(x_{i}^{1/2}-x_{i}^{-1/2})}\] \[=\frac{1}{WD(H)}\sum_{\varepsilon^{\prime}\in\{\pm 1\}^{n-1}}(-1)^{\varepsilon^{\prime}}\sum_{\mu\preceq\lambda}\det|x_{j}^{\varepsilon^{\prime}_{j}(\mu_{i}+n-i-1/2)}|\] \[=\sum_{\mu\preceq\lambda}\sum_{\varepsilon^{\prime}\in\{\pm 1\}^{n-1}}(-1)^{\varepsilon^{\prime}}\frac{\det|x_{j}^{\varepsilon^{\prime}_{j}(\mu_{i}+n-i-1/2)}|}{WD(H)}\] \[=\sum_{\mu\preceq\lambda}\frac{\det|x_{j}^{(\mu_{i}+n-i-1/2)}-x_{j}^{-(\mu_{i}+n-i-1/2)}|}{WD(H)}\] \[=\sum_{\mu\preceq\lambda}\chi_{\mu}. \tag{48}\] This completes the proof of Theorem 4. ### Acknowledgement The authors thank Dibyendu Biswas, Sourav Ghosh, Sameer Kulkarni, Niladri Patra, Dipendra Prasad and Manodeep Raha for their suggestions and many valuable discussions on the topic of this paper. The authors especially thank Swarnava Mukhopadhayay for his constant support and suggestions. The second author expresses his gratitude to Ashoka University for the warm hospitality and productive working atmosphere during their visit. The first author is indebted to TIFR for providing a great working atmosphere during his tenure there.
2309.06619
RT-LM: Uncertainty-Aware Resource Management for Real-Time Inference of Language Models
Recent advancements in language models (LMs) have gained substantial attentions on their capability to generate human-like responses. Though exhibiting a promising future for various applications such as conversation AI, these LMs face deployment challenges on various devices due to their extreme computational cost and unpredictable inference latency. Such varied inference latency, identified as a consequence of uncertainty intrinsic to the nature of language, can lead to computational inefficiency and degrade the overall performance of LMs, especially under high-traffic workloads. Unfortunately, the bandwidth of these uncertainty sources is extensive, complicating the prediction of latency and the effects emanating from such uncertainties. To understand and mitigate the impact of uncertainty on real-time response-demanding systems, we take the first step to comprehend, quantify and optimize these uncertainty-induced latency performance variations in LMs. Specifically, we present RT-LM, an uncertainty-aware resource management ecosystem for real-time inference of LMs. RT-LM innovatively quantifies how specific input uncertainties, adversely affect latency, often leading to an increased output length. Exploiting these insights, we devise a lightweight yet effective method to dynamically correlate input text uncertainties with output length at runtime. Utilizing this quantification as a latency heuristic, we integrate the uncertainty information into a system-level scheduler which explores several uncertainty-induced optimization opportunities, including uncertainty-aware prioritization, dynamic consolidation, and strategic CPU offloading. Quantitative experiments across five state-of-the-art LMs on two hardware platforms demonstrates that RT-LM can significantly reduce the average response time and improve throughput while incurring a rather small runtime overhead.
Yufei Li, Zexin Li, Wei Yang, Cong Liu
2023-09-12T22:22:10Z
http://arxiv.org/abs/2309.06619v1
# RT-LM: Uncertainty-Aware Resource Management for Real-Time Inference of Language Models ###### Abstract Recent advancements in language models (LMs) have gained substantial attentions on their capability to generate human-like responses. Though exhibiting a promising future for various applications such as conversation AI, these LMs face deployment challenges on various devices due to their extreme computational cost and unpredictable inference latency. Such varied inference latency, identified as a consequence of uncertainty intrinsic to the nature of language, can lead to computational inefficiency and degrade the overall performance of LMs, especially under high-traffic workloads. Unfortunately, the bandwidth of these uncertainty sources is extensive, complicating the prediction of latency and the effects emanating from such uncertainties. To understand and mitigate the impact of uncertainty on real-time response-demanding systems, we take the first step to comprehend, quantify and optimize these uncertainty-induced latency performance variations in LMs. Specifically, we present RT-LM, an uncertainty-aware resource management ecosystem for real-time inference of LMs. RT-LM innovatively quantifies how specific input uncertainties, recognized within the NLP community, adversely affect latency, often leading to an increased output length. Exploiting these insights, we devise a lightweight yet effective method to dynamically correlate input text uncertainties with output length at runtime. Utilizing this quantification as a latency heuristic, we integrate the uncertainty information into a system-level scheduler which explores several uncertainty-induced optimization opportunities, including uncertainty-aware prioritization, dynamic consolidation, and strategic CPU offloading. Quantitative experiments across five state-of-the-art LMs on two hardware platforms demonstrates that RT-LM can significantly reduce the average response time and improve throughput while incurring a rather small runtime overhead. Language model, uncertainty, real-time system ## I Introduction The recent surge in the development and dissemination of language models (LMs) such as ChatGPT has significantly reshaped the landscape of natural language processing (NLP) [1, 2, 3, 4]. This advancement holds immense promise for a multitude of applications, including multi-lingual robots and voice control devices integral to the future of smart homes [5, 6, 7]. Despite the impressive capability to generate human-like responses, these state-of-the-art LMs present a formidable challenge when attempting to deploy them on various devices due to their complex computational behaviors and unpredictable real-time inference capabilities [8, 9]. With the increasing demand for real-time language processing, server-backed systems, such as online chatbots (e.g., ChatGPT manages over 10 million daily queries) and live-translation services, exemplify the need for devices that can efficiently process simultaneous requests from multiple users, especially during peak times. A set of recent works seek to enhance the inference latency of on-device LMs by crafting an array of model optimization techniques, including quantization [10], pruning [11, 12], and distillation [13]. These techniques aim at decreasing model complexity (thus the computational demand) while preserving their accuracy. 
Nonetheless, a knowledge gap persists in understanding and exploring the correlation between an input text and the corresponding inference latency within a given LM from a system-level perspective. The NLP community has recently brought to light various sources of uncertainties [14, 15, 16, 17, 18], which have been shown to negatively impact model's accuracy and may introduce significant variations in the lengths of generated responses. Take, for example, a broad and ambiguous question such as "_Can you tell me the history of art?_". This could prompt a LM to generate lengthier outputs, given that the history of art spans millennia and includes a multitude of cultures, styles, periods, and artistic movements. Intuitively, the longer output a LM generates, the greater the inference latency, as each output token is sequentially generated with negligible computational difference [19, 20]. These sources of uncertainties, often intrinsic to the nature of language understanding and generation, can stem from varying data distributions [21, 22], intricate model architectures [23], or even the non-deterministic parallel computing behaviors at runtime [24], rendering the induced latency more complex and challenging to manage. Consequently, it is critical to understand and mitigate such uncertainties due to their potential to induce non-trivial inference latency and computational inefficiency, or even hinder the prompt delivery of dialogue generation (DG) due to degraded system performance. This work is specifically motivated by the following queries: (i) What is the intrinsic correlation between an input text's uncertainty characteristics and the subsequent computational demand (and thus, the inference latency) for a given LM, such as why two syntactically similar inputs may necessitate dramatically different inference latencies? (ii) Is it feasible to devise a lightweight approach to predict an input's computational demand at runtime? and (iii) Can the system-level resource manager exploit these quantified input characteristics to improve latency performance during inference? Understanding the quantifiable correlation between an input text and its computational demand is critical, as it could unveil novel opportunities for system-level optimization, thereby enhancing the performance and efficiency of LMs deployed on embedded devices, e.g., by deferring the execution of inputs with high computational demand thus reducing head-of-line blocking. Our research attempts to comprehend, quantify, and optimize these uncertainty-induced variations on latency performance in LMs. We propose a cohesive ecosystem that integrates an application-level uncertainty quantification framework with a system-level uncertainty-aware resource manager. The application-level framework aims to precisely quantify task uncertainties and their potential impacts on latency. Simultaneously, the system-level resource manager utilizes the provided estimations to make informed decisions on resource allocation and task scheduling, thereby mitigating the detrimental effects of uncertainties on system performance. **Contributions.** In this paper, we propose an uncertainty-aware resource management ecosystem, namely RT-LM, for real-time on-device LMs. Specifically, RT-LM features three technical novelties: 1) It first quantitatively reveals how major input uncertainties--well-defined by the NLP community--negatively impact latency. 
Our findings demonstrate that uncertainty characteristics of an input text may notably increase the output length, i.e., the number of tokens in the generated response; 2) Building on this insight, we develop a lightweight yet effective method that can quickly correlate and quantify the output length for an input text at runtime, considering a comprehensive set of uncertainties defined by the NLP community; 3) Leveraging this quantification as a heuristic of latency, we incorporate the uncertainty information of each input into a system-level scheduler that performs several optimizations, including uncertainty-aware prioritization, dynamic consolidation, and strategic utilization of CPU cores. We implement RT-LM mainly on an edge server. We evaluate the response time and throughput across five state-of-the-art LMs1, namely DialoGPT [25], GODEL [26], BlenderBot [27], BART [28], and T5 [29]. We evaluate RT-LM on four widely-researched benchmark datasets: _Blended Skill Talk_[30], _PersonaChat_[31], _ConvAI2_[32], and _Empathetic Dialogues_[33]. For both the models and datasets, we use the versions released by Hugging Face. Footnote 1: While there are larger models like ChatGPT that offer impressive capabilities, their resource-intensive nature makes them less viable for deployment. Evaluation results demonstrate that RT-LM achieves: * **Efficiency**: RT-LM outperforms all compared methods by a significant margin in most cases, improving the maximum response time by up to 30% and throughput by up to 40% compared to uncertainty-oblivious baselines. * **Efficacy across a range of behaviors**: The tested workloads include five LMs with diverse task uncertainty characteristics and varied workload settings. * **Robustness under malicious scenarios**: RT-LM is resilient when facing adversarial conditions, effectively mitigating the impact of malicious tasks by resource management. * **Runtime overhead**: The design and implementation of RT-LM are efficient, incurring a rather small runtime latency and memory usage. ## II Background and Challenges ### _Dialogue Generation using LMs_ Recently, pre-trained LMs such as ChatGPT and GPT-4 [1] have emerged as a dominant force in the field of dialogue generation (DG). These models are characterized by their large size and are often trained on vast amounts of textual data, and they demonstrate remarkable capabilities in understanding and generating human-like responses across a wide range of tasks. A key property of these models is the _autoregressive_ generation process [9], where output tokens are generated sequentially with each new token being conditioned on the previously generated tokens. Consequently, the output length plays a pivotal role in determining the inference latency of a LM, as generating longer sequences inherently requires more time. Depending on the nature of inputs, a LM may generate outputs of varied lengths. For instance, a query that has a clear and concise meaning may elicit a brief response, whereas an ambiguous or broad query may demand a considerably longer output. This variability in output length, often called _linguistic uncertainty_[34] by the NLP community, and its subsequent impact on latency can pose significant challenges when deploying LMs on resource-constrained devices, as the performance requirements and computational constraints must accommodate a wide range of potential latencies.
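To make the latency-length relationship concrete, the following sketch times autoregressive generation with a Hugging Face model for increasing token budgets; the model choice, prompt, and token budgets are illustrative assumptions rather than the exact measurement setup used in our experiments.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium").eval()

prompt = "Tell me about the history of art." + tokenizer.eos_token
inputs = tokenizer(prompt, return_tensors="pt")

for max_new in (16, 64, 256):
    start = time.perf_counter()
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new,
                             pad_token_id=tokenizer.eos_token_id)
    elapsed = time.perf_counter() - start
    n_generated = out.shape[1] - inputs["input_ids"].shape[1]
    # Latency grows roughly linearly with the number of generated tokens,
    # since each token is produced sequentially.
    print(f"{n_generated} tokens generated in {elapsed:.3f}s")
```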
### _Sources and Impacts of Linguistic Uncertainty_ Linguistic uncertainty is a challenging and diverse subdomain in NLP, which often leads to multiple interpretations of inputs and potentially varied outputs in dialogue systems. The language and linguistics community has established a well-defined categorization of linguistic uncertainty that encompasses the majority of uncertainty sources, including three types of lexical ambiguity (structural ambiguity [14, 35], syntactic ambiguity [15, 36], semantic ambiguity [37, 38]), vague expressions [16], open-ended questions [17, 39], and multi-part questions [18] that demand comprehensive answers and additional explanations. Their definitions and example statements or questions are listed in Table I. \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline **Type** & **Definition** & **Statement/Question** \\ \hline Structural ambiguity & Uncertainty related to multiple possible parse structures, leading to outputs with varying lengths. & “John saw a boy in the park with a telescope.” \\ \hline Syntactic ambiguity & Uncertainty arising from multiple part-of-speech tags of a word, resulting in different interpretations. & “Rice flies like sand.” \\ \hline Semantic ambiguity & Uncertainty stemming from words with multiple meanings, leading to varying interpretations. & “What’s the best way to deal with bats?” \\ \hline Vague expressions & Uncertainty arising from broad concepts or highly-generalized topics that demand specific analysis. & “Tell me about the history of art.” \\ \hline Open-endedness & Questions or statements that lack a single definitive answer and require providing relevant context, background, and explanations. & “What are the causes and consequences of poverty in developing countries?” \\ \hline Multi-partness & Questions or statements containing multiple sub-questions or topics, which demand detailed answers. & “How do cats and dogs differ in behavior, diet, and social interaction?” \\ \hline \hline \end{tabular} \end{table} TABLE I: Types of linguistic uncertainty, their definitions and example statements or questions. ## III Key Observations and Ideas ### _Uncertainty-Induced Negative Impact on LM Latency and the Root Cause_ We conducted a comprehensive set of studies investigating the correlation between inputs' uncertainty characteristics and the resulting inference latency of several LMs. Specifically, we create 1,000 utterances for each of the six uncertainty types (defined in Sec. II-B) and record the averaged output length as well as inference latency across DialoGPT, GODEL, BlenderBot, BART, and T5, as shown in Fig. 1a. We observe that all types of linguistic uncertainties lead to longer outputs and non-trivially larger latencies to varying degrees. Specifically, vague expressions, open-endedness, and multi-partness are generally more deterministic than the three types of lexical ambiguities. This can be attributed to the fact that modern neural networks (NNs) lack uncertainty awareness and are prone to overconfidence when making decisions [40], which results in LMs settling on one potential interpretation and responding accordingly without seeking further clarifications. Furthermore, semantic ambiguity has a more significant impact on output lengths than structural and syntactic ambiguities.
We speculate that this is because some words with multiple meanings such as _"trunk"_ or _"monitor"_ are more likely to cause confusion for a LM, thereby triggering longer responses, e.g., by enumerating all potential interpretations of a word sense and asking for explanations. Fig. 1b plots the correlation between inference latency and output length for sentences that contain different types of uncertainties. We observe that inference latency is proportional to the output length, with longer outputs generally requiring larger inference latencies. Some sentences with uncertainties such as open-endedness and multi-partness may even take over 700ms for a LM to generate corresponding responses, which is 2\(\sim\)4 times the latency of normal sentences. This presents a substantial opportunity for system-level optimization, as the resource manager can leverage this uncertainty impact as an estimation of task execution times to enhance system efficiency and resource utilization. Fig. 1: Observations of (a) distribution of LM output lengths for inputs with different uncertainty types, and (b) the correlation between LM output lengths and inference latency. ### _Predicting the Output Length for a Given Input_ Upon observing that inference latency is determined by output length, we develop methods that can accurately yet efficiently predict such length for a given input at runtime. As discussed earlier, the uncertainty of an input text may increase the output length and thus negatively impact inference latency. Our methods shall take uncertainty into account when making such predictions. **Uncertainty score**: In this work, we define the uncertainty score of an input text as the estimated number of tokens (output length) required to formulate a comprehensive and unambiguous response that sufficiently addresses the posed inquiry. **Input length.** Intuitively, longer inputs may lead to LMs generating longer outputs, even without considering uncertainty. We demonstrate the impact of this naive heuristic on output lengths in Fig. 2a. We observe that although the correlation is noisy and not deterministic, longer input lengths generally induce longer generated outputs. This inspires us to further improve it by considering uncertainty. **Single rule.** We measure the intensity of each uncertainty using hand-crafted rules introduced in the literature. Specifically, we use the spaCy2 language tool to tokenize input text and obtain the Part-of-Speech (PoS) tag for each token in the original text. Then, we quantify uncertainty scores by searching for pre-defined patterns inherently existing in each uncertainty source using regular expressions. Listing 1 shows an example code for quantifying vague expression uncertainty. Note that for input sentences that do not contain the defined six uncertainty sources, we use input lengths as their single rule scores. We evaluate the correlation between single rule scores and the output lengths for inputs containing the corresponding type of uncertainty in Fig. 2b. We observe that the correlation is slightly more apparent and less noisy, which demonstrates the impact of uncertainty on the LM generation process. Footnote 2: [https://spacy.io/](https://spacy.io/) **Weighted rule.** The previous method assumes a primary uncertainty source for each sentence, which is not generic for real-world test cases that may contain multiple uncertainty sources. Instead, we measure the six defined uncertainty scores for a given text and assign a weight to each category by
learning a linear regression from the six rule scores to the output length. We evaluate the correlation between weighted rule scores and output lengths for inputs with the corresponding type of uncertainty in Fig. 2c. We observe that the dependency between uncertainty scores and output lengths noticeably increases, with more data points getting close to the trend line. **Lightweight model.** While hand-crafted rules can capture certain uncertainties in sentences, they are heuristic methods and not comprehensive enough since the data distribution is not learned. To make such estimation more reliable, we introduce a data-driven black-box lightweight (LW) multi-layer perceptron (MLP) [41] that takes the six rule-based scores as features and predicts the output length for any given query. Specifically, we train a LW model on the training sets of four benchmark datasets and evaluate the correlation between its predictions and output lengths for unseen queries in the test sets in Fig. 2d. We observe the output lengths are almost linearly dependent on our predicted scores, with only a few noisy samples. Fig. 2: Correlation between average output length across the five LMs and (a) input length, (b) single rule-based score, (c) weighted rule-based score, (d) LW model scores for self-generated sentences that contain different types of uncertainties, as well as (e) input length for sentences from the four benchmark datasets. We further evaluate the correlation between the predicted uncertainty scores and averaged inference latency across different LMs on the four benchmark datasets in Fig. 3. The predicted scores are highly consistent with the inference latencies across all datasets, i.e., sentences with larger uncertainty scores generally require larger inference latencies. This suggests that our method can precisely estimate LM execution times for any unseen query in real-world dialogue scenarios. Fig. 3: Distribution of latency and corresponding uncertainty on four benchmark DG datasets, (a) _Blended Skill Talk_, (b) _ConvAI2_, (c) _PersonaChat_, (d) _Empathetic Dialogue_. The data points are ranked by descending order of latency. ### _System-level Optimization Opportunities_ We now illustrate several promising system-level optimization ideas enabled by leveraging the uncertainty score metric. **Prioritization.** Online queries, though without intrinsic deadlines, have _priorities_ (e.g., urgency of the task) that can be specified by RT-LM using the priority point parameter according to their estimated workloads. Leveraging the uncertainty score of each task (i.e., the estimated number of output tokens of each input), the scheduler shall make better prioritization decisions. Intuitively, prioritizing tasks that require shorter execution times and earlier priority points would improve throughput and timing correctness (often due to reduced head-of-line blocking), as illustrated in Fig. 4. In this example, five tasks that arrive at the same time (the length of each block represents its execution time) are scheduled by three strategies, namely Highest Priority Point First (HPF), Least Uncertainty First (LUF), and RT-LM utilizing Uncertainty-aware Prioritization (UP). As a result, HPF and LUF respectively miss two (\(J_{4}\) and \(J_{5}\)) and three (\(J_{2}\), \(J_{4}\), and \(J_{5}\)) priority points, whereas UP misses only one priority point (\(J_{2}\)). Fig. 4: Prioritization example for HPF, LUF, and UP. \(J_{i}\) denotes the \(i\)-th task, \(\tau_{i}\) denotes its arrival/priority point. Tasks depicted in red color denote those missing their priority points. **Consolidation.** In heavily-loaded systems requiring machine learning workload multitasking, batch execution is a commonly-used method to enhance response time and timing correctness. Our estimated uncertainty scores can assist in deciding which tasks shall be batched and executed together to better utilize hardware resources. Fig. 5 describes this idea using an intuitive example comparing two batch executions for eight tasks with a batch size of four. Fig. 5a presents a schedule under uncertainty-oblivious batching, e.g., HPF, where tasks in each batch have similar priority points. Four tasks (\(J_{2}\), \(J_{5}\), \(J_{6}\), \(J_{8}\)) miss priority points with a fairly low GPU utilization. Fig. 5b describes uncertainty-aware batching, where tasks in each batch have similar uncertainty scores. Only two tasks (\(J_{6}\), \(J_{8}\)) miss priority points with an improved GPU utilization and shorter response time. **Strategic offloading to CPU.** Previous works [42, 43] and our experiments indicate that offloading machine learning workloads to CPU cores often introduces non-negligible communication and synchronization overhead, negating the benefits of parallel utilization of both CPUs and GPUs. Fig. 6 depicts an illustrative example, where we compare the layer-wise data transfer cost with layer-wise GPU execution times for running AlexNet [44]. As seen, data transfer takes nearly the same amount of time as GPU execution for the majority of layers. Nonetheless, under overloaded situations or scenarios containing computation-demanding workloads, RT-LM could identify such tasks by checking whether the estimated uncertainty score exceeds a pre-defined threshold. The scheduler can then decide whether offloading such demanding tasks to CPUs can improve the overall efficiency of the system. Since the negative impact due to offloading and communication can likely be offset by freeing up the precious GPU resource for executing other normal tasks, our intuition is to leverage uncertainty scores to reflect different levels of task demand and offload demanding tasks to the CPU. This strategic offloading balances the workload between CPU and GPU, enabling efficient use of system resources and ensuring that the overall system remains responsive and productive. ## IV Design of RT-LM ### _Design Overview_ In this section, we illustrate the overall design of RT-LM, as shown in Fig. 7. RT-LM comprises two major components: an application-level framework that quantifies task uncertainty, and a system-level framework that leverages this information for optimized scheduling (prioritization, dynamic consolidation) and resource allocation (strategic offloading). ### _Uncertainty-aware Prioritization_ For any given input \(J\), our rule generator \(\text{RuleGen}(\cdot)\) first yields a feature vector containing the intensity of the six linguistic uncertainties. Then our LW model \(m_{\theta}\) takes the feature vector and predicts the final uncertainty score: \[u_{J}=m_{\theta}(\text{RuleGen}(J)) \tag{1}\] In some scenarios such as conversational AI in healthcare [45], if an LM request has a user-specified deadline \(t_{J}\), RT-LM can specify the priority point parameter using that deadline (\(d_{J}\) in Eq. 3 is replaced by \(t_{J}\)); whereas most LM-assisted dialogue systems do not have such user-specified deadlines. Based on our observations in Fig.
2e where longer inputs generally induce longer outputs, we empirically define a priority point for each task according to its input length \(d_{J}=\varphi_{f}|J|\), where \(\varphi_{f}\) is a coefficient that projects input length to the latency of an LM \(f\). A straightforward way of factoring both uncertainty and priority point into a system is to use the concept of "slack" (\(\zeta\)), which measures the remaining time until the priority point: \[p_{J}=\frac{1}{\zeta_{J}}=\frac{1}{d_{J}-r_{J}-\eta_{f}\cdot u_{J}} \tag{2}\] Here \(r_{J}\), \(d_{J}\) denote the arrival time and priority point of the task, respectively. The term \(u_{J}\) represents the uncertainty score of the task, reflecting the estimated output length, while \(\eta_{f}\) is a coefficient that projects output lengths to latencies for the LM \(f\). This slack-based approach prioritizes urgent tasks that are close to their priority points, which is suitable for systems with stringent priority point constraints and relatively stable task execution times. However, for on-device LM systems facing workloads with high variability in uncertainties, such as input texts with a large uncertainty range causing LMs to generate outputs with varied lengths, a more flexible approach that can prioritize tasks with shorter execution times when needed ensures more predictable and consistent system performance. In RT-LM, we design Uncertainty-aware Prioritization (UP) where each task is assigned a priority \(p_{J}\) that reflects its weighted criticality: \[p_{J}=\frac{1-\alpha\cdot u_{J}}{d_{J}-r_{J}-\eta_{f}\cdot u_{J}} \tag{3}\] Here \(\alpha\) is a system-level hyper-parameter that provides control over the impact of uncertainty on the priority. Specifically, \(d_{J}-r_{J}-\eta_{f}\cdot u_{J}\) represents the estimated slack for the execution of the task, and \(\alpha\cdot u_{J}\) is a scaled uncertainty score. The fraction thus divides an uncertainty-discounted weight by the estimated remaining slack to represent the criticality of a task. The intuition behind this priority assignment is that a task with a shorter slack window or smaller uncertainty score should have a higher priority. This ensures that tasks with imminent priority points or short execution times are attended to promptly, enhancing the likelihood of meeting their priority points. The factor \(\alpha\) provides a level of adaptability to the system. A larger value of \(\alpha\) implies that the system is placing a higher emphasis on tasks with lower uncertainties, regardless of how soon their priority points are, while a smaller \(\alpha\) value reduces the impact of uncertainty on the priority calculation, placing a higher emphasis on the remaining time until the priority point. We search for an optimal \(\alpha\) value from 0 to 2.0 with an increment of 0.1 by testing the corresponding response time (see Fig. 13a). ### _Dynamic Consolidation_ In the dynamic consolidation process, we aim to enhance the overall system efficiency by executing batches of tasks with similar estimated uncertainties, as they are more likely to have comparable processing requirements. The intuition is that executing tasks with similar workload characteristics as a batch can potentially lead to better resource utilization and reduced overheads, as illustrated in Fig. 5. Specifically, we maintain a queue of tasks sorted by priority based on our UP algorithm (Eq. 3).
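A minimal end-to-end sketch of this pipeline (rule-based features, the lightweight regressor of Sec. III-B, and the UP priority of Eq. (3)) is given below; the regex patterns, dummy profiling data, offset form of the priority point, and slack clamp are illustrative assumptions rather than the exact rules or coefficients used by RT-LM.

```python
import re
import heapq
from sklearn.neural_network import MLPRegressor

# Illustrative rule: crude intensity of "vague expression" uncertainty via regex patterns.
VAGUE_PATTERNS = [r"\btell me about\b", r"\bhistory of\b", r"\bin general\b"]

def vague_rule_score(text):
    return float(sum(len(re.findall(p, text.lower())) for p in VAGUE_PATTERNS))

def rule_features(text):
    # RT-LM produces six rule scores (one per uncertainty type); as a stand-in,
    # the remaining five slots here simply reuse the input length.
    return [vague_rule_score(text)] + [float(len(text.split()))] * 5

# Lightweight regressor m_theta: rule features -> predicted output length (Eq. 1).
lw_model = MLPRegressor(hidden_layer_sizes=(100, 200, 200, 100),
                        learning_rate_init=1e-4, max_iter=500)
lw_model.fit([rule_features("Hi there!"),
              rule_features("Tell me about the history of art.")],
             [12.0, 85.0])  # offline profiling: features vs. observed output lengths

def up_priority(u, d, r, eta=0.05, alpha=1.0):
    """Eq. (3): (1 - alpha*u) / (d - r - eta*u); slack clamped to stay positive."""
    slack = max(d - r - eta * u, 1e-3)
    return (1.0 - alpha * u) / slack

task_queue = []  # max-priority queue via negated priorities

def enqueue(text, arrival, phi=0.08):
    u = float(lw_model.predict([rule_features(text)])[0])  # uncertainty score u_J
    d = arrival + phi * len(text.split())                  # priority point (offset form assumed)
    heapq.heappush(task_queue, (-up_priority(u, d, arrival), arrival, u, text))
```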
We then group tasks with similar uncertainty scores together by introducing two hyper-parameters, \(\lambda\) and \(b\). Among them, \(b\) determines the number of tasks to consider for a batch. Given a pre-defined batch size \(C\), once the current batch accumulates \(b\times C\) tasks from the task queue, we reorder these tasks according to their uncertainty scores. We then select the top-\(C\) tasks from this reordered list for execution. This mechanism ensures that tasks are executed in an order that prioritizes higher urgency as well as shorter execution times. Additionally, parameter \(\lambda\) controls the maximum allowable ratio in uncertainty scores between tasks within a batch. As we traverse the sorted list of tasks within the current batch, if we encounter a situation where the uncertainty score of the current task is more than \(\lambda\) times that of the previous one, we segment the list at this point. The tasks preceding this point are executed as a batch, while the remaining tasks are returned to the queue for future processing. The whole consolidation process unfolds as follows: * Maintain a queue of tasks ordered by descending priority, based on the UP algorithm. * Once accumulating \(b\times C\) tasks in the current batch, reorder them in accordance with their uncertainty scores. * Traverse the reordered batch of tasks. If the uncertainty of a task exceeds \(\lambda\) times the uncertainty of the previous task, or if the batch size \(C\) is met, segment the list at this point. * Execute the tasks before the segmentation point as a batch, while returning the remaining tasks to the queue (see the sketch below). Dynamic consolidation provides flexibility in adjusting to varied workload characteristics and system conditions through the adjustment of the parameters \(b\) and \(\lambda\). For instance, in scenarios where tasks exhibit diverse uncertainty scores, a smaller \(b\) or larger \(\lambda\) can be utilized to ensure that only tasks with similar uncertainties are grouped together. Conversely, if tasks have similar uncertainty scores, a larger \(b\) or smaller \(\lambda\) will form larger batches, potentially achieving higher system throughput. Moreover, dynamic consolidation can help balance the trade-off between throughput and predictability. By executing tasks with similar uncertainties as a batch, the system may exhibit more predictable behaviors, as estimating the execution time of a batch is often simpler than predicting individual task execution times. Meanwhile, by executing tasks in batches, the system can potentially achieve higher throughput compared to executing tasks individually. ### _Strategic Offloading to CPU_ In the dynamic consolidation process described above, tasks are assigned to batches and then executed based on uncertainty scores. However, such a process can lead to the situation where some tasks with high uncertainty scores (e.g., malicious, adversarial tasks) may potentially delay the execution of the whole batch, negatively affecting the overall system performance. To address this, we propose a protective mechanism, termed 'strategic offloading', to offload potentially malicious tasks and execute them separately on CPU cores.
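The sketch below ties together the consolidation steps above and the offloading rule just introduced; the parameter values, the tuple layout of queued tasks, and the helper structure are illustrative assumptions rather than the exact implementation, and the choice of the threshold \(\tau\) is described next.

```python
def form_batches(queue, C=24, b=1.8, lam=1.5, tau=30.0):
    """Group tasks with similar uncertainty; divert suspected malicious ones to CPU.

    queue: list of (priority, arrival, uncertainty, text) sorted by descending priority.
    C: GPU batch size; b: accumulation factor; lam: max allowed ratio between
    consecutive uncertainty scores; tau: offloading threshold on the uncertainty score.
    """
    window, rest = queue[: int(b * C)], queue[int(b * C):]

    # Strategic offloading: route high-uncertainty (possibly malicious) tasks to a CPU batch.
    cpu_batch = [t for t in window if t[2] > tau]
    window = sorted((t for t in window if t[2] <= tau), key=lambda t: t[2])

    gpu_batch, deferred = [], []
    for i, task in enumerate(window):
        too_dissimilar = gpu_batch and task[2] > lam * gpu_batch[-1][2]
        if len(gpu_batch) == C or too_dissimilar:
            deferred = window[i:]  # segment here; the remainder is deferred
            break
        gpu_batch.append(task)

    return gpu_batch, cpu_batch, deferred + rest  # remainder goes back to the queue
```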
In our implementation, we define a parameter \(k\) (\(0<k<1\)) which denotes the top-\(k\) percentage of uncertainty scores in the training set to control the malicious threshold \(\tau\): \[\tau=\text{quantile}_{k}\left(\{m_{\theta}(\text{RuleGen}(J))|J\in\mathcal{D} _{train}\}\right) \tag{4}\] In essence, \(\tau\) corresponds to the \(k\)-th quantile of the uncertainty scores, so that only the highest-scoring tasks exceed it. If the uncertainty score of a task is larger than \(\tau\), it is offloaded to a CPU batch for separate execution. Otherwise, it is assigned to a GPU batch for grouped execution. Furthermore, we ensure that there is always a batch of tasks ready for execution. If the task queue is empty and there are remaining tasks in the GPU batch, these tasks are dispatched for execution. Similarly, if there are no tasks in the GPU and CPU batches, the remaining tasks from the task queue are moved to the appropriate execution batch based on their uncertainty scores. This strategic offloading mechanism provides a layer of protection against extreme execution times, ensuring malicious tasks do not excessively delay the execution of a batch and providing more predictable and reliable system performance, particularly under workloads with high variability. By carefully controlling the offloading parameter \(k\), this mechanism can be tuned to balance the benefits of grouping tasks for efficient execution against the potential delays caused by malicious tasks. ### _Pseudo Code and Illustration_ Algorithm 1 illustrates the whole framework of RT-LM, known as UASched. It takes the aforementioned control parameters, \(\alpha\), \(\lambda\), \(k\), and \(b\), and operates in two main phases: offline profiling and online scheduling. **Offline profiling.** The algorithm starts by initializing a LW regressor \(m_{\theta}\). For each task in the training set, \(\text{RuleGen}(\cdot)\) generates rule scores \(\mathbf{r}_{J}\), which are taken by \(m_{\theta}\) as features to estimate the output length, while the LM provides the actual output length. The algorithm then minimizes the Mean Squared Error (MSE) between the estimated output lengths and the LM output lengths, thereby updating the LW model. It also records GPU utilization to determine the minimum batch size \(C_{f}\) for the LM \(f(\cdot)\) that can better utilize hardware resources, e.g., when GPU usage reaches 100%. Finally, it determines the malicious threshold \(\tau\) according to the uncertainty score distribution. Fig. 8: Offline decisions on (a) optimal batch size \(C\) and (b) malicious threshold \(\tau\) (\(k=0.9\)) for the five LMs. \begin{table} \begin{tabular}{c|c|c} \hline & **Edge Server** & **NVIDIA AGX Xavier** \\ \hline \multirow{3}{*}{CPU} & 96-core AMD & 8-core NVIDIA \\ & EPYC 7352 & Carmel Armv8.2 \\ & 24-Core Processor & 64-bit CPU \\ \hline GPU & NVIDIA RTX A4500 & NVIDIA Volta GPU \\ \hline Memory & 512GB & 16GB LPDDR4x \\ \hline Storage & 8TB SSD & 32GB eMMC \\ \hline \end{tabular} \end{table} TABLE II: Hardware platforms used in our experiments. **Online scheduling.** The algorithm iterates over tasks in the test set, calculating uncertainty scores using the pre-trained LW model \(m_{\theta}\), and then placing them into a task queue. The tasks are then popped and processed in descending order of priority scores. If a task's uncertainty score is greater than the threshold, it is offloaded to a CPU batch; otherwise, it is placed in a temporary batch.
If the temporary batch reaches a size of \(b\cdot C_{f}\), the scheduler sorts tasks in the batch in ascending order of uncertainty scores. It then segments the batch at a point where the current uncertainty score is larger than \(\lambda\) times that of the previous one or where the pre-defined batch size \(C_{f}\) has been reached. The segmented tasks are offloaded to a GPU batch, while the remaining ones are put back into the queue. ## V Implementation and Evaluation ### _Experiment Setup_ **Testbeds.** We implement RT-LM and conduct an extensive set of experiments on an edge server, as shown in Table II, simulating the single-device multitasking scenarios of online chatbot and live-translation services. **Benchmark.** We evaluate RT-LM across five state-of-the-art LMs that are widely used in dialogue systems--DialoGPT [25], GODEL [26], BlenderBot [27], BART [28], and T5 [29]--on four benchmark datasets: _Blended Skill Talk_[30], _PersonaChat_[31], _ConvAI2_[32], and _Empathetic Dialogues_[33]. We use the pre-trained versions of these models--_DialoGPT-medium_, _GODEL-v1_1-base-seq2seq_, _blenderbot-400M-distill_, _bart-base_, _t5-base_--and the annotated datasets released by Hugging Face3. Footnote 3: [https://huggingface.co/](https://huggingface.co/) **Metrics.** We evaluate RT-LM's performance w.r.t. the average response time, throughput, and runtime overhead. We also delve deeper into the effect of different components of RT-LM on the system-level performance, the robustness of RT-LM against different parameter settings, and its effectiveness under different proportions of malicious tasks. **Hyper-parameters.** For the offline profiling, we initialize a lightweight MLP which has four layers of hidden size [100, 200, 200, 100], and train the model with a learning rate of 1e-4. We record the average GPU usage for the five LMs with different batch sizes in Fig. 8a. Specifically, we choose an optimal batch size (i.e., the minimum batch size at which a LM reaches 100% GPU usage) of 11, 24, 33, 11, 33 for DialoGPT, GODEL, BlenderBot, BART, T5, respectively. We further record the distribution of uncertainty scores for each LM in Fig. 8b, and select a malicious threshold of 35, 34, 29, 26, 22 for DialoGPT, GODEL, BlenderBot, BART, T5, respectively. We set the uncertainty-weight \(\alpha\) as 1.0, the output-latency coefficients \(\eta\) as 0.05, 0.04, 0.1, 0.05, 0.04, and the input-latency coefficients \(\varphi\) as 0.08, 0.10, 0.13, 0.08, 0.07 across the five LMs for priority assignment; \(\lambda\), \(b\) as 1.5, 1.8, respectively, for dynamic consolidation; and \(k\) as 0.9 for the protective mechanism. To gather necessary statistics, we employ the tegrastats utility for recording GPU and CPU memory usage. Additionally, we use Python's time library to track the arrival and end time of each task, as well as the latency incurred by RT-LM. **Workload setup.** Real-world human-generated processes, such as phone calls to a call center, can often be represented as a Poisson process, where the number of arrivals within a specific time interval is governed by a Poisson distribution [46, 47]. Given the independent nature of user queries in our context, we adopt a similar model to simulate task arrivals. This model is principally defined by its average arrival rate, denoted as \(\beta\) (representing queries per minute). We generated synthetic traces by sampling inter-arrival times from an exponential distribution with differing mean \(\mu=\frac{1}{\beta}\) to modulate the arrival rate.
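A minimal sketch of this sampling step is shown below; the seed, the one-minute horizon, and the returned representation of a trace are illustrative assumptions.

```python
import numpy as np

def one_minute_arrivals(beta, seed=0):
    """Arrival times within one minute, with exponential inter-arrival gaps of mean 1/beta."""
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / beta)  # mu = 1 / beta (minutes)
        if t > 1.0:
            break
        times.append(t)
    return times

# On average, ~60 arrivals are generated for beta = 60 queries per minute.
print(len(one_minute_arrivals(beta=60)))
```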
To create time-varying synthetic workloads, we continuously evolve the workload generator across different exponential distributions throughout the process. This involves iterating through integer values of \(\beta\) ranging from 10 to 150. For each minute, we sample from the corresponding exponential distribution, ensuring a comprehensive representation of workload scenarios, from light-load phases to high-traffic peaks. Following the generation of these traces, we shuffle the test dataset and map them to the created arrival patterns. To enhance realism, acknowledging that users may require some time to complete a query, we introduced a wait time interval \(\xi=2\) seconds so that tasks arriving within this span are processed as either a single batch or multiple batches4. Footnote 4: We conducted supplementary experiments using diverse sets of \(\mu\) and \(\xi\) values. The findings consistently align with the trends observed in Fig.9\(\sim\)11. ### _Latency Performance_ We evaluate the latency performance of various strategies by calculating their response time - the time elapsed between a task's end time and its arrival time across the five LMs. Naturally, a lower average response time indicates a more efficient system. We compare RT-LM to the following baselines: * First-In-First-Out (FIFO): Tasks are queued based on their arrival times, creating uncertainty-oblivious random batches with a fixed size for execution. * Highest Priority-Point First (HPF) [48]: Tasks with higher priority points are prioritized. This approach batches tasks with similar priority points together, maintaining a fixed batch size, yet remains uncertainty-oblivious. * LUF: Tasks with lower uncertainty scores are given precedence. Those with comparable uncertainty scores (or execution times) are batched together using a fixed size. * Maximum Uncertainty First (MUF): This strategy prioritizes tasks with higher uncertainty scores. Those with analogous scores are batched together with a set size. To gauge the impact of uncertainty on system-level performance, we evaluate all methods across three subsets of tasks featuring small, medium, and large variance of uncertainty scores on the edge server. Fig. 9 demonstrates the distribution of response time values, while Table III records the worst-case response time for each method across task subsets. From our observations: 1) Uncertainty-aware strategies tend to surpass uncertainty-oblivious ones, especially when input data exhibits varied uncertainty scores. For the small-variance subset, all methods display similar response times in Fig. 9a, with the maximum values of LUF, MUF, RT-LM even larger than FIFO, HPF in some cases in Table III, but on the large-variance subset, LUF, MUF, RT-LM consistently outperform FIFO and HPF. This is because when tasks exhibit similar workloads, all strategies essentially mimic FIFO. However, when there's significant variance in task uncertainty, grouping tasks with analogous uncertainty scores reduces the likelihood of computation-intensive tasks holding up the entire batch. 2) Generally, LUF produces a better performance than MUF. By prioritizing tasks with high uncertainty, MUF can inadvertently cause the entire system to lag, thus compromising average response times. 3) RT-LM consistently exhibits superior performance, achieving the most efficient response times across all LMs. The average response time of RT-LM is roughly 0.8s less than FIFO for BART in Fig. 
9c; and its maximum response time is up to 30% smaller than FIFO for BlenderBot in Table III. This suggests that considering both execution times and priority points in task prioritization can further optimize latency performance. This dual consideration ensures RT-LM is versatile across varied workload distributions. 4) Larger LMs are more sensitive to variations in task uncertainty, requiring even longer execution times for tasks with high uncertainty scores, thereby benefiting more from uncertainty-aware strategies, e.g., RT-LM improves the maximum response time over FIFO to a larger extent for GODEL and BlenderBot (20% and 30%) than other LMs. ### _Throughput Performance_ We further evaluate the throughput of various strategies as the average number of completed tasks per minute, across the five LMs, on the edge server. As expected, a higher throughput implies a more efficient system. Table IV summarizes the results on the three subsets. We observe the throughput profiles of all methods are highly consistent with their latency performance metrics. Specifically, uncertainty-aware strategies notably exhibit larger advantages over uncertainty-oblivious ones when the uncertainty variance of test inputs grows, e.g., RT-LM can process over 6 more tasks per minute than FIFO, with DialoGPT in the large-variance subset. Among these, LUF is generally superior to MUF. RT-LM, however, stands out by consistently outperforming all other strategies. Moreover, uncertainty-aware strategies, particularly on larger LMs, can significantly boost system efficiency, e.g., RT-LM boosts the average throughput by 10% to 30% for BART and GODEL. \begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{DialoGPT} & \multicolumn{3}{c|}{GODEL} & \multicolumn{3}{c|}{BlenderBot} & \multicolumn{3}{c|}{BART} & \multicolumn{3}{c}{T5} \\ \cline{2-16} & Small & Normal & Large & Small & Normal & Large & Small & Normal & Large & Small & Normal & Large & Small & Normal & Large \\ \hline FIFO & 2.25 & 3.75 & 3.90 & **2.15** & 3.06 & 3.93 & **2.24** & 2.52 & 3.41 & **1.87** & 1.93 & 1.95 & **2.90** & 2.95 & 3.30 \\ HPF & 3.25 & 3.92 & 4.68 & 2.75 & 3.79 & 4.53 & 2.90 & 3.54 & 3.37 & 2.34 & 2.13 & 2.63 & 3.56 & 4.13 & 3.97 \\ \hline LUF & 2.85 & **2.77** & 3.55 & 2.47 & 3.06 & 3.41 & 2.86 & 2.79 & 2.93 & 1.98 & 1.82 & 2.19 & 3.24 & 3.43 & **3.17** \\ MUF & 3.03 & 3.68 & 3.93 & 3.52 & 3.74 & 4.21 & 3.36 & 3.52 & 3.10 & 2.97 & 3.00 & 2.38 & 4.01 & 3.30 & 3.98 \\ RT-LM & **2.24** & 2.96 & **3.18** & 2.52 & **2.80** & **3.17** & 2.92 & **2.26** & **2.38** & 1.93 & **1.66** & **1.86** & 3.45 & **2.64** & 3.25 \\ Improvement & -0.4\% & -21.1\% & -18.5\% & +17.2\% & -8.5\% & -19.3\% & +30.4\% & -10.3\% & -30.2\% & +3.2\% & -14.0\% & -4.6\% & +19.0\% & -23.4\% & -1.5\% \\ \hline \hline \end{tabular} \end{table} TABLE III: Maximum response time (s) and percentage of improvement for sentences with small, normal, and large uncertainty variance on the edge server. The evaluated methods consist of uncertainty-oblivious (former) and uncertainty-aware (latter) ones. **Bold** numbers denote the best metric values among them. ### _Ablation Study_ To elucidate the superiority of RT-LM, we conduct an ablation study investigating the individual contributions of each component of our method to the response time and throughput performance: * Uncertainty-aware prioritization (UP): We compare uncertainty-oblivious prioritization strategies, namely FIFO and HPF, with UP for response time and throughput evaluation, respectively.
* Dynamic consolidation: We contrast UP (using static batching) with its dynamic consolidation counterpart (UP+C). * Strategic offloading: We compare UP+C with RT-LM, which facilitates execution of malicious tasks on the CPU. Fig. 10 illustrates the subtle improvements of each component-enabled method over its component-oblivious counterpart in terms of reduced response times on the edge server, with RT-LM consistently outperforming the rest. For example, UP achieves an average response time of 0.2\(\sim\)0.7s less than FIFO. This indicates all three components of RT-LM are integral to its superior performance. Notably, the performance boost derived from prioritization and consolidation is typically larger than that from offloading, e.g., the average response time gap between UP+C and RT-LM is smaller than other pairs in most cases. This suggests that our prioritization and consolidation are more consequential in improving efficiency. Interestingly, strategic offloading has a slightly more significant impact on larger LMs, e.g., RT-LM reduces the average response time over UP+C by 0.4s for GODEL, while their performance is nearly the same for BART. This is because computationally demanding tasks have a larger impact on sophisticated LMs, causing even more severely overloaded systems. ### _On-Device Evaluation_ Emerging embedded devices, augmented with powerful computing capabilities and LM intelligence [49], have the potential to serve as the local central service in future smart homes. These devices may support hundreds of IoT devices, facilitating concurrent multi-user or multi-device (e.g., refrigerator, air conditioner) communications with a single LM, a concept known as connected intelligence. In this context, we delve into the performance evaluation of various methods on an NVIDIA AGX Xavier (see Table II), which is widely used in various applications such as autonomous driving [50, 51] and robotics [52, 53, 54, 55], to reflect the feasibility of RT-LM in on-device multitasking scenarios. Fig. 11 showcases the response time of all evaluated methods across three subsets on the AGX Xavier. The observed patterns largely mirror those seen on the edge server. For instance, uncertainty-aware strategies excel, particularly in subsets with diverse uncertainty characteristics. LUF is generally more efficient than MUF, RT-LM consistently outperforms other baselines across all LMs, and uncertainty-aware strategies derive greater efficiency benefits from larger LMs, such as GODEL. Furthermore, a comparative analysis between the two platforms reveals an interesting insight: high-performance devices, being quicker in execution, tend to display a smaller disparity in performance across different methods compared to embedded devices. This subtly hints at a diminished relative advantage for RT-LM on more powerful devices. Fig. 12 depicts the individual contributions of each RT-LM component, in terms of reduced response time on the embedded device. The findings align with those on the edge server: all three components collectively boost its performance, prioritization and consolidation emerge as more influential factors in enhancing efficiency than offloading, and larger LMs generally derive more pronounced benefits from offloading. ### _Parameter Study_ We explore the impact of two key hyperparameters, \(\alpha\) and \(b\), which control the influence of uncertainty in priority computation and the batch size determined by the number of tasks, on RT-LM.
We vary \(\alpha\) from 0.1 to 2.0 (with a fixed \(b=2.0\)) and \(b\) from 1.0 to 3.0 (with a fixed \(\alpha=1.0\)), incrementing by 0.1 in both cases, and assess the resulting average response time of RT-LM across different LMs. Fig. 13a shows that RT-LM is robust to changes in \(\alpha\), with a maximum divergence in response time not exceeding 0.35s for each LM. This resilience indicates that UP functions as a well-balanced, uncertainty-aware priority, aptly mediating between priority points and execution times for tasks. An optimal \(\alpha\) value of 1.0 is indicated by our performance metrics. Placing a higher emphasis on either uncertainty (larger \(\alpha\)) or remaining time until the priority point (smaller \(\alpha\)) results in a slight increase in response time. Fig. 13b reveals that \(b\) has a more significant impact on latency performance than \(\alpha\), with the maximum deviation in response time reaching about 0.75s for T5. This indicates a considerable dependence of dynamic consolidation on the number of tasks considered for a batch. Optimal performance is achieved at \(b=1.8\). Values below or above this introduce inefficiencies, either mimicking static batching or causing delays in task completion due to longer wait time. Fig. 13: Study of average response time with different values of (a) \(\alpha\) and (b) \(b\) across five LMs on the edge server. \begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{DialoGPT} & \multicolumn{3}{c|}{GODEL} & \multicolumn{3}{c|}{BlenderBot} & \multicolumn{3}{c|}{BART} & \multicolumn{3}{c}{T5} \\ \cline{2-16} & Small & Normal & Large & Small & Normal & Large & Small & Normal & Large & Small & Normal & Large & Small & Normal & Large \\ \hline FIFO & 21.68 & 18.00 & 15.68 & 17.89 & 18.28 & 13.36 & 21.15 & 17.90 & 17.63 & 32.30 & 30.62 & 25.92 & 17.86 & 16.57 & 16.15 \\ HPF & 20.26 & 19.27 & 16.41 & 19.55 & 18.69 & 13.99 & 21.26 & 18.28 & 17.32 & **33.59** & 30.22 & 26.56 & 18.10 & 16.75 & 17.18 \\ \hline LUF & 23.19 & 21.09 & 19.97 & 19.71 & 19.17 & 17.68 & **21.34** & 19.48 & 19.81 & 32.86 & 31.02 & 28.52 & 19.75 & 18.94 & 18.84 \\ MUF & 22.40 & 20.06 & 19.44 & 19.14 & 18.34 & 16.76 & 21.20 & 18.92 & 20.08 & 32.03 & 31.28 & 27.97 & 19.58 & 17.78 & 17.52 \\ RT-LM & **24.61** & **23.89** & **22.34** & **23.73** & **21.54** & **19.78** & 21.12 & **20.66** & **20.80** & 32.14 & **31.71** & **28.64** & **22.28** & **21.94** & **20.03** \\ \hline \hline \end{tabular} \end{table} TABLE IV: Average throughput for sentences with small, normal, and large uncertainty variance on the edge server. ### _Evaluating Malicious Scenarios_ To evaluate the robustness of RT-LM against malicious inputs, we apply a state-of-the-art adversarial attack method [56] that crafts provided input texts to elongate LM outputs. Table V presents an example of a malicious sentence designed to prompt an LM to generate a longer output \(\hat{\mathbf{A}}\) than the original one \(\mathbf{A}\), leading to a computational burst and degraded system performance. Tasks are deemed malicious if their uncertainty scores exceed a predefined threshold (see Eq. 4). To assess the response, we control the proportion of deliberately crafted malicious tasks within a range of 0% to 100%, increasing in increments of 10%, and evaluate subsequent system latency performance. Fig.
14 shows the effects of varying ratios of malicious tasks on the average response time of both FIFO and RT-LM, as well as the associated average inference latency across different LMs. As seen, RT-LM is proficient in managing extreme conditions wherein a large proportion of malicious tasks need to be processed, outperforming the uncertainty-oblivious FIFO. When the malicious task ratio exceeds 30%, FIFO exhibits high sensitivity, with the average response time increasing from around 2.0s to 3.0s. In contrast, RT-LM is resilient against malicious tasks, maintaining a steady average response time of around 1.5\(\sim\)1.9s. Our results confirm that RT-LM effectively prevents malicious tasks from hindering the execution of other critical tasks. This resilience enhances RT-LM's suitability for applications like chatbots [57], personal assistants [58], and conversational AI in healthcare [45] where defense against adversarial attacks is crucial. ### _Overhead Analysis_ Analyzing overhead is crucial in practical real-time systems, which are complex and highly variable. A solution with high overhead may undermine response time and throughput, as the scheduling process may severely block task execution. We present an analysis of both latency and memory usage introduced by RT-LM on the edge server, offering insights into the practical efficiency of our design. **Offline Profiling.** We initialize an LW model and train it for 100 epochs, using the LM outputs as ground truths. We report both the average training time per epoch and its proportion relative to the LM inference time. Memory usage during this phase is also recorded. As shown in Table VI, our training consumes merely around 3\(\sim\)4% of the LM inference latency, and less than 3% of the total available memory (512 GB), demonstrating the overhead efficiency of RT-LM. **Online Scheduling.** We evaluate the average per-task latency of each component of RT-LM and compare the combined latency to the LM inference time. We also record the average memory usage as well as CPU/GPU utilization during online scheduling. Table VII reveals that RT-LM introduces less than 3% additional latency overhead relative to the LM inference time (around 415 milliseconds per task). Such small overheads are unlikely to affect real-time dialogue systems noticeably. Notably, prioritization accounts for the majority of scheduling time, as uncertainty is computed and queued at this stage. For all LMs, CPU/GPU utilization reaches over 90%, which suggests effective resource allocation under RT-LM. ## VI Related Work and Discussion **Real-time DNN Inference.** Recent research has improved real-time Deep Neural Network (DNN) performance with strategies optimizing performance-accuracy trade-offs [59, 60, 61], and exploring system design for DNN execution [62, 63, 64, 65, 66, 67]. Despite these advancements, previous works do not consider the dynamic behavior of DNNs with respect to the different execution times of inputs. In contrast, our proposed method, RT-LM, builds upon these existing scheduling algorithms by incorporating uncertainty estimation to further enhance performance and resource allocation. \begin{table} \begin{tabular}{l|c} \hline \hline **Q**: Not really. Let's **table**_ think_ about food. What do you like to eat? \\ **I love**_like_ fish. \\ **A**: I love fish too! What is your favorite kind? I like pasta, filipino, streak, etc. It talk a lot on IRC and it is fun to learn about it with some \\ other guys. \\ \hline \hline \end{tabular} \end{table} TABLE V: An example of a crafted sentence that causes DialoGPT to generate much longer outputs. _Italics_ and **strike through** denote added and removed tokens, respectively. \begin{table} \begin{tabular}{c|c c c|c c|c} \hline \hline \multirow{2}{*}{LM} & \multicolumn{3}{c|}{Avg. per-task latency (ms)} & \multicolumn{2}{c|}{Memory} & \multicolumn{1}{c|}{CPU / GPU util.} \\ \cline{2-7} & Prior. & Consol. & Off. & Ratio & Test & Ratio \\ \hline DialoGPT & 8.04 & 0.42 & 0.37 & 2.10\% & 11,293 MB & 97\% / 92\% \\ GODEL & 7.78 & 0.43 & 0.49 & 2.04\% & 12,795 MB & 93\% / 97\% \\ BlenderBot & 9.24 & 0.53 & 0.40 & 2.93\% & 12,366 MB & 99\% / 95\% \\ BART & 7.84 & 0.35 & 0.10 & 2.06\% & 11,979 MB & 97\% / 91\% \\ T5 & 8.39 & 0.33 & 0.18 & 2.27\% & 11,653 MB & 95\% / 90\% \\ \hline \hline \end{tabular} \end{table} TABLE VII: Latency, memory, and CPU/GPU utilization of online scheduling. Prior., consol., and off. denote prioritization, consolidation, and offloading. Fig. 14: Average response time and LM inference latency on the edge server, under varying ratios of malicious tasks. \begin{table} \begin{tabular}{c|c c|c} \hline \hline \multirow{2}{*}{LM} & \multicolumn{2}{c|}{Total LW latency (s)} & Memory \\ \cline{2-4} & Train & Ratio & Train \\ \hline DialoGPT & 351 & 3.01\% & 14,607 MB \\ GODEL & 490 & 3.96\% & 14,768 MB \\ BlenderBot & 448 & 3.71\% & 14,723 MB \\ BART & 392 & 3.25\% & 14,631 MB \\ T5 & 369 & 3.06\% & 14,639 MB \\ \hline \hline \end{tabular} \end{table} TABLE VI: Latency and memory of offline profiling. **Uncertainty Estimation.** Uncertainty estimation has been a topic of interest in the machine learning and NLP community, particularly in the context of deep learning [68]. Methods like Monte Carlo dropout [23] and Bayesian neural networks [69] have been proposed to quantify the uncertainty in model predictions. Previous works [17, 39, 70] also show that uncertainty may cause an LM to generate outputs with varied lengths. Our method employs a lightweight regressor to estimate the uncertainty in terms of the output length of an LM inference, which can be used to inform the scheduling process, improving resource utilization and response time. **Intelligent Edge Server Systems.** In cloud-edge-client hierarchical systems, AI models are co-deployed on the cloud and edge servers [49, 71], where multiple requests from diverse users via edge devices can be processed concurrently by the DNNs. Notable examples of such applications include online chatbots and live translation services. Additionally, cloud servers frequently grapple with load balancing across multiple workers [72]. RT-LM could prioritize critical requests and redirect malicious tasks to CPU cores, thereby enhancing overall system performance and reducing the threat of performance attacks against DNNs [73, 74, 75, 76, 56, 77]. **Limitations of RT-LM.** RT-LM mainly targets system-level optimization in heavy-workload scenarios, emphasizing concurrent task processing by taking into account the uncertainty characteristics of each task. In real-world on-device LM-embedded systems, where queries typically arrive sequentially, there's room for further improvement, e.g., optimizing performance for each individual task by leveraging the correlation between uncertainty and layer-level LM inference/training efficiency could be pursued. Additionally, our current approach is designed for single-machine scenarios. Expanding to hybrid deployment setups, such as server-edge combinations, is an avenue worth exploring.
Moreover, RT-LM does not account for memory and power constraints, which could cause potential out-of-memory (OOM) issues in edge environments and pose challenges when deploying on low-power devices. Although deep learning compilers [78, 79] may mitigate the challenges posed by limited resources in such scenarios, adapting RT-LM to work efficiently in memory-constrained edge settings and optimizing LM inference from a power-efficiency standpoint are areas yet to be addressed. ## VII Conclusion In this paper, we introduced RT-LM, a novel uncertainty-aware resource management ecosystem for real-time on-device LMs. Our extensive evaluations demonstrated the superior performance of RT-LM in terms of response time, system throughput, and robustness to various system settings, while maintaining low overhead and excellent memory efficiency. In the future, we will focus on further optimizing the uncertainty estimation mechanism and expanding the applicability of RT-LM to more diverse and dynamic real-world workloads. ## Acknowledgment This research was supported by the National Science Foundation under Grants CNS Career 2230968, CPS 2230969, CNS 2300525, CNS 2343653, CNS 2312397.
2309.07507
The Os$^{16+}$ and Ir$^{17+}$ ions as candidates for accurate optical clock sensitive to physics beyond standard model
We perform detailed calculations of the electronic structure of the Os$^{16+}$ ion and demonstrate that it has several metastable states which can be used for very accurate optical clocks. The clocks are highly sensitive to manifestations of physics beyond the standard model, such as time variation of the fine structure constant $\alpha$, interaction with scalar and pseudoscalar (axion) dark matter fields, local Lorentz invariance and local position invariance violations, and interaction of atomic electrons with the nucleus mediated by a new boson. The latter can be studied by analysing the King plot for isotope shifts and its possible non-linearities since Os has 5 stable isotopes with zero nuclear spin. Similar calculations for the Ir$^{17+}$ ion spectra demonstrate very good agreement between theory and experiment. This helps to validate the method of the calculations and demonstrate that both ions are excellent candidates for the search for new physics.
V. A. Dzuba, V. V. Flambaum
2023-09-14T08:19:15Z
http://arxiv.org/abs/2309.07507v2
The Os\({}^{16+}\) and Ir\({}^{17+}\) ions as candidates for accurate optical clock sensitive to physics beyond standard model. ###### Abstract We perform detailed calculations of the electronic structure of the Os\({}^{16+}\) ion and demonstrate that it has several metastable states which can be used for very accurate optical clocks. The clocks are highly sensitive to manifestations of physics beyond the standard model, such as time variation of the fine structure constant \(\alpha\), interaction with scalar and pseudoscalar (axion) dark matter fields, local Lorentz invariance and local position invariance violations, and interaction of atomic electrons with the nucleus mediated by a new boson. The latter can be studied by analysing the King plot for isotope shifts and its possible non-linearities since Os has 5 stable isotopes with zero nuclear spin. Similar calculations for the Ir\({}^{17+}\) ion spectra demonstrate good agreement between theory and experiment. This helps to validate the method of the calculations and demonstrate that both ions are excellent candidates for the search for new physics. ## I Introduction It was suggested in Refs. [1] to use highly charged ions (HCI) to search for optical transitions highly sensitive to the time variation of the fine structure constant \(\alpha\). The idea is based on the phenomenon of _level crossing_[2]. Usually intervals between electron energy levels are very large in HCI compared to neutral atoms. However, due to different level ordering in neutral atoms and hydrogen-like ions, the energy interval between states of different configurations, drawn as a function of the ionisation degree \(Z_{i}\), must cross at some point, bringing the energy interval into the optical region. Since states of different configurations have different dependence on the value of the fine structure constant \(\alpha\), the energy intervals are very sensitive to the variation of \(\alpha\). The sensitivity is proportional to \(Z^{2}(Z_{i}+1)^{2}\) and strongly depends on the electron orbital angular momentum. The largest sensitivity can be found in electron transitions in heavy ions which in the single-electron approximation can be described as \(s_{1/2}\) - \(f_{5/2},f_{7/2}\) or \(p_{1/2}\) - \(f_{5/2},f_{7/2}\) (\(s\)-\(f\) or \(p\)-\(f\)) transitions [1; 2; 3]. Use of metastable states brings the additional advantage of potentially very high accuracy of the measurements typical for atomic optical clocks. The accuracy for HCI clocks can be even higher than that for optical clocks in neutral atoms due to the fact that states of HCI are less sensitive to perturbations due to the compact size of HCI, small polarisability and large energies of excitations [4]. A number of candidate systems were suggested in earlier works [4; 5; 6; 7] (see also reviews [8; 9] and references therein). Experimental studies were performed for the Ho\({}^{14+}\)[10; 11] and Ir\({}^{17+}\)[12] ions. Further work is in progress [8; 9]. In the present work we study the Os\({}^{16+}\) ion. It has some important features which make it an attractive candidate for experimental study. It has several metastable states which can be used for clock transitions. At least one transition is an \(s\)-\(f\) transition, so that it is very sensitive to the variation of the fine structure constant \(\alpha\) and a dark matter field which may be a source of such variation [14; 15; 16]. Other transitions are less sensitive to \(\alpha\) variation and can serve as _anchor_ lines.
In addition, they are sensitive to other manifestations of new physics such as local Lorentz invariance and local position invariance violations, etc. The energy diagram for the Os\({}^{16+}\) ion is presented in Fig. 1. This diagram is the result of the calculations in the present work. Experimental energy intervals between states of different configurations are not known. The Os\({}^{16+}\) ion is similar to the Ir\({}^{17+}\) ion studied before [12; 13]. However, it has the important advantage of having five stable isotopes with zero nuclear spin (Ir has none). This makes the ion suitable for searching for new interactions by looking at possible non-linearities of the King plot [21; 22]. The minimum requirements for such a study include having two clock transitions and four stable isotopes. Isotopes with zero nuclear spin have the further advantage of having no hyperfine structure, which complicates the analysis of the isotope shift. Table 1 lists five stable isotopes of Os which have zero nuclear spin. It also presents the parameters \(\beta\) of nuclear quadrupole deformation. These parameters come from nuclear calculations [20]. Nuclear deformation can lead to non-linearities of the King plot [23; 24], presenting an important systematic effect in the search for new interactions. Note however that the parameters of deformation have similar values for all stable isotopes (see Table 1). This means that significant cancellation of the effect of deformation is possible in the isotope shift. Finally, the Os\({}^{16+}\) and Ir\({}^{17+}\) ions are suitable for searching for the effects of local Lorentz invariance (LLI) \begin{table} \begin{tabular}{c c c c c c} \(A\) & 184 & 186 & 188 & 190 & 192 \\ \(\beta\) & 0.281 & 0.257 & 0.223 & 0.185 & 0.164 \\ \end{tabular} \end{table} Table 1: A list of stable isotopes of Os with zero nuclear spin. Parameters \(\beta\) of the quadrupole deformation of the proton distribution are taken from Ref. [20].
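To make the King-plot test described above concrete, the short sketch below fits a straight line to modified isotope shifts of two clock transitions across the isotope pairs of Table 1 and reports the residuals as a measure of non-linearity. The numerical shift values are purely hypothetical placeholders (no measured Os\({}^{16+}\) isotope shifts are quoted here), and mass numbers stand in for atomic masses.

```python
import numpy as np

# Stable spin-zero Os isotopes of Table 1, grouped into neighbouring pairs.
pairs = [(184, 186), (186, 188), (188, 190), (190, 192)]
mu = np.array([1.0 / a - 1.0 / b for a, b in pairs])   # inverse-mass differences

# Hypothetical isotope shifts (Hz) of two clock transitions for each pair.
d_nu1 = np.array([1.00e9, 0.97e9, 0.95e9, 0.92e9])
d_nu2 = np.array([2.11e9, 2.05e9, 2.00e9, 1.95e9])

m1, m2 = d_nu1 / mu, d_nu2 / mu                        # modified isotope shifts
slope, offset = np.polyfit(m1, m2, 1)                  # King plot: m2 versus m1
residuals = m2 - (slope * m1 + offset)                 # deviations from linearity
print(residuals)  # a significant pattern here would hint at a new electron-nucleus interaction
```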
2309.03761
Hyperpolarisation of nuclear spins: polarisation blockade
Efficient hyperpolarisation of nuclear spins via optically active defect centers, such as the nitrogen vacancy (NV) center in diamond, has great potential for enhancing NMR based quantum information processing and nanoscale magnetic resonance imaging. Recently, pulse-based protocols have been shown to efficiently transfer optically induced polarisation of the electron defect spin to surrounding nuclear spins -- at particular resonant pulse intervals. In this work, we investigate the performance of these protocols, both analytically and experimentally, with the electronic spin of a single NV defect. We find that whenever polarisation resonances of nuclear spins are near-degenerate with a `blocking' spin, which is a single spin with stronger off-diagonal coupling to the electronic central spin, they are displaced out of the central resonant region -- without, in general, significant weakening of the resonance. We analyse the underlying physical mechanism and obtain a closed form expression for the displacement. We propose that spin blocking represents a common but overlooked effect in hyperpolarisation of nuclear spins and suggest solutions for improved protocol performance in the presence of (naturally occurring) blocking nuclear spins.
O. T. Whaites, C. I. Ioannou, B. J. Pingault, G. L. van de Stolpe, T. H. Taminiau, T. S. Monteiro
2023-09-07T15:02:54Z
http://arxiv.org/abs/2309.03761v1
# Hyperpolarisation of nuclear spins: polarisation blockade ###### Abstract Efficient hyperpolarisation of nuclear spins via optically active defect centers, such as the nitrogen vacancy (NV) center in diamond, has great potential for enhancing NMR based quantum information processing and nanoscale magnetic resonance imaging. Recently, pulse-based protocols have been shown to efficiently transfer optically induced polarisation of the electron defect spin to surrounding nuclear spins -- at particular resonant pulse intervals. In this work, we investigate the performance of these protocols, both analytically and experimentally, with the electronic spin of a single NV defect. We find that whenever polarisation resonances of nuclear spins are near-degenerate with a 'blocking' spin, which is a single spin with stronger off-diagonal coupling to the electronic central spin, they are displaced out of the central resonant region -- without, in general, significant weakening of the resonance. We analyse the underlying physical mechanism and obtain a closed form expression for the displacement. We propose that spin blocking represents a common but overlooked effect in hyperpolarisation of nuclear spins and suggest solutions for improved protocol performance in the presence of (naturally occurring) blocking nuclear spins. ## I Introduction There is significant current interest in techniques for the control of nuclear spins using solid-state defects like nitrogen vacancy (NV) centers in diamond [1; 2]. Many of these techniques rely on protocols of periodically applied microwave pulses. Although they were originally developed to dynamically decouple the electron spin from the environment [3; 4; 5], it was subsequently found that when pulses are applied at intervals resonant with surrounding \({}^{13}\)C precession frequencies, the resulting entanglement between an individual nuclear spin and the electronic spin of the defect offers a very effective technique for sensing and controlling nuclear spin states [1; 6; 7]. Such pulse-based control has been exploited for nuclear polarisation and state initialisation with applications ranging from quantum error correction and quantum information [6; 7; 8; 9; 10; 11], to nanoscale NMR and other sensing applications [12; 13]. Dynamical nuclear polarisation (DNP) [14], the transfer of polarisation from electrons to nuclear spins, originally developed for NMR, is also being developed in this context. Recently proposed pulse-based DNP protocols explicitly aimed at nuclear polarisation with NVs, PulsePol [15; 16] and PolCPMG [17], have been demonstrated to polarise \({}^{13}\)C nuclei in diamond. Polarisation of spins external to the diamond sample using PulsePol has been achieved using an ensemble of NV centers [18]. However, while these protocols were designed in the setting of polarisation transfer to a single spin, in a realistic setting, the central spin couples to multiple spins. In this study, we investigate polarisation transfer from a single spin simultaneously coupled to several environmental spins. In particular, we find that polarisation transfer, at the expected polarisation resonance \(T=T_{r}\), can be suppressed by a blocking spin, which is a simultaneously coupled environmental spin, with similar precession frequencies but with stronger coupling. This effect is illustrated in Fig.1: the presence of the blocking spin expels the polarisation resonances of the weaker-coupled spins from the central \(T\approx T_{r}\) region. 
We analyse the underlying physical mechanism and clarify its relation to dark states, which are known to suppress polarisation of nuclear spins [20; 22]. Spin blocking is quite distinct and, to our knowledge, not previously investigated: while dark-states suppress polarisation by decoupling a subspace of states from the dynamics, spin-blocking acts by shifting a subset of spins off-resonance. We experimentally verify this analysis using a single NV whose microscopic spin environment has been precisely characterised [24; 25]. Figure 1: Illustration of nuclear spin polarisation in the presence of a blocking spin: **(a)** shows an NV center in diamond. A pulse-based polarisation protocol, characterised by a pulse interval \(\tau\), is applied to polarise a distant cluster of nuclear spins (in light green) **(b)** Efficient polarisation of the weakly-coupled spins (green dots) is expected near the resonant pulse period \(T=T_{r}\). However, in the presence of a spin that is near-degenerate but interacts more strongly with the NV (blocking spin B), the cluster’s resonances are displaced, to \(T\simeq T_{r}^{B}\) (solid green line). The resonance of spin B (red line) is unperturbed. We term this effect ‘polarisation blockade,’ in analogy to blockade effects encountered in other fields of physics [19]. In section II we review pulse-based control of nuclear spins and introduce the theoretical Floquet-based models used to analyse the joint dynamics of the NV and nuclear spins and simulate the experiments. In section III we present our results, including theory-experimental comparisons as well as an expression for the resonance displacement, equation 6, a key new result of the work. In Sec.IV we discuss the implications for nuclear polarisation and more efficient control of nuclear registers. ## II Methods: Pulse-based control An NV electron spin system surrounded by \(N_{\mathrm{nuc}}\) nuclear spins may be described by the Hamiltonian \(\hat{H}(t)=\hat{H}_{p}(t)+\hat{H}_{0}\) where : \[\hat{H}_{0}=\omega_{\mathrm{L}}\sum_{n=1}^{N_{\mathrm{nuc}}}\hat{I}_{z}^{(n)}+ \hat{S}_{z}\sum_{n=1}^{N_{\mathrm{nuc}}}\mathbf{A}^{(n)}\cdot\mathbf{I}^{(n)}. \tag{1}\] The operators for the electronic spin in subspace \(\{|0\rangle,|-1\rangle\}\) and the nuclear spins are labelled \(\hat{S}\), \(\hat{I}\) respectively. \(\omega_{\mathrm{L}}\) is the nuclear Larmor frequency; the hyperfine field \(\mathbf{A}^{(n)}\) acting on the nuclear spin has components \(A_{\perp}^{(n)},A_{z}^{(n)}\) relative to the \(z\)-axis; we take \(A_{\perp}^{(n)}\equiv A_{x}^{(n)}\), without loss of generality. \(\hat{H}_{p}(t)=\Omega(t)\hat{S}_{k}\) is the pulse control Hamiltonian. \(\Omega(t)\) is set by the microwave control field, while \(k\equiv x,y\) for common protocols. For pulse-based control, there is a resonant pulse spacing \(\tau=\tau_{r}\) for which the electron and nuclear spin states selectively interact, allowing efficient control of the nuclear states. For example, in the well-known CPMG sequence [23], microwave pulses are applied along the \(x\)-axis at regular intervals, \(\tau\); its resonant pulse spacing is \(\tau_{r}=j\pi/\omega_{I}\), where \(j\) is an odd integer and the resonant frequency of a given nucleus is \(\omega_{I}\simeq\omega_{L}-A_{z}/2\). For common protocols, the full protocol period \(T\) is an integer multiple of \(\tau\). For CPMG specifically, it is \(T_{r}=2\tau_{r}\). Note that we omit the \(n\) superscript for single nuclear spin calculations. 
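As a concrete illustration of Eq. (1), the sketch below (ours, not the authors' code) assembles \(\hat{H}_{0}\) numerically for the NV two-level subspace and a few \({}^{13}\)C spins, taking \(\hat{S}_{z}=\mathrm{diag}(0,-1)\) in the \(\{|0\rangle,|-1\rangle\}\) subspace and keeping only the \(A_{x}\), \(A_{z}\) hyperfine components as stated above; the Larmor frequency used is a hypothetical placeholder.

```python
import numpy as np

# Spin-1/2 nuclear operators (hbar = 1).
Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
# Electron operator in the {|0>, |-1>} subspace, taken as diag(0, -1).
Sz = np.diag([0.0, -1.0]).astype(complex)

def embed(ops):
    """Tensor product of a list of single-site operators (electron first, then nuclei)."""
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def H0(omega_L, hyperfine):
    """Eq. (1): omega_L * sum_n Iz^(n)  +  Sz (x) sum_n [Ax^(n) Ix^(n) + Az^(n) Iz^(n)]."""
    n = len(hyperfine)
    I2 = np.eye(2, dtype=complex)
    H = np.zeros((2 ** (n + 1), 2 ** (n + 1)), dtype=complex)
    for k, (Ax, Az) in enumerate(hyperfine):
        site = lambda op_e, op_n: embed([op_e] + [op_n if j == k else I2 for j in range(n)])
        H += omega_L * site(I2, Iz)            # nuclear Zeeman term
        H += site(Sz, Ax * Ix + Az * Iz)       # hyperfine term, with A_perp taken as A_x
    return H

kHz = 2 * np.pi * 1e3
# Couplings of spins C21 and C16 quoted later in the text; omega_L is a hypothetical value.
H = H0(omega_L=1.72e3 * kHz, hyperfine=[(5.0 * kHz, -9.7 * kHz), (5.3 * kHz, -19.8 * kHz)])
print(H.shape)   # (8, 8): one NV two-level subspace and two nuclear spins
```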
_Polarisation protocols:_ recently, new pulse-based protocols were identified [15; 17] that split the resonance, such that each component selectively addresses one nuclear spin state, allowing polarisation. In the present work we focus on the DNP protocol PulsePol, commonly used due to its robustness to detuning [15]. PulsePol combines a series of \(x\)- and \(y\)-directional MW pulses to map the NV state onto a nuclear spin. It has pulse period \(T=4\tau\) where \(\tau\) is the pulse interval. Its resonant pulse interval is \(\tau_{r}=j\pi/(2\omega_{L})\), for \(j=1,2...\) and the \(3^{\mathrm{rd}}\) harmonic (\(j=3\)) is often selected for its effectiveness [15]. By averaging over the period, an effective, time-independent single nuclear-spin Hamiltonian \(\hat{H}^{(n)}\equiv g^{(n)}(\hat{S}_{+}\hat{I}_{-}^{(n)}+\hat{S}_{-}\hat{I}_{+} ^{(n)})\) can be obtained, corresponding to a flip-flop type interaction between the NV spin (\(S\)) and the \(n\)-th nuclear spin (\(I\)). The corresponding flip-flop rate at this resonance was found to be \(g^{(n)}=A_{x}^{(n)}(\sqrt{2}+2)/(6\pi)\), [15]. _Repetitions:_ in general, in order to achieve high levels of polarisation, repetitions of the polarisation protocol are required. For each repetition, \(N_{p}\) cycles are applied at each value of \(\tau\). This is the periodic component, where the NV-nuclear evolution is largely coherent. However, the NV electronic spin is reinitialised optically after \(N_{p}\) cycles to its \(m=0\) state. The reduced nuclear bath evolution is (ideally) uninterrupted. The \(N_{p}\) sequence is then repeated. A series of NV reinitialisations, interspersed with \(N_{p}\) protocol cycles, is repeated \(R\) times. Typically, protocols employ a short run \(N_{p}=2-8\) that yields appreciable polarisation for only the most strongly coupled spins; but if this is followed by many \(R\gg 1\) repetitions, polarisation of even nuclear spins with weak coupling is gradually achieved. Corresponding theoretical simulation involves \(R\) sets of coherent Hamiltonian evolution for \(t=N_{p}T\) interspersed with calculation steps where the NV states are traced out in order to simulate re-initialisation in \(m_{s}=0\). ### Experimental set-up We study nuclear polarisation dynamics surrounding a single NV center at cryogenic temperatures (4 K). The NV electron spin is initialised and read out via resonant optical excitation. The NV sample employed here was previously characterised in detail, allowing for accurate modelling of the microscopic nuclear environment [24; 25]. The individual nuclear spins are labelled as C1, C2...Cn and their hyperfine coupling strengths \(A_{x}^{(n)},A_{z}^{(n)}\), taken from [24], are tabulated in the appendix. The nuclear spin expectation values are read out by applying a combination of nuclear-nuclear and electron-nuclear gates, and subsequently reading out the electron spin state as detailed in [22]. As in previous work, [22], we systematically correct for pulse errors and amplitude damping during the readout pulse sequences, in order to get a best estimate for the spin expectation values. ### Theoretical methods: Floquet methods We analyse the coherent dynamics with Floquet theory, a general framework for periodically driven physical systems that has found wide applicability, ranging from NMR continuous driving [21] to also pulse-based control of NV centers [28]. However, Floquet theory encompasses several different analytical tools. 
Floquet engineering (FE) [26], where a system driven by a typically strong or high-frequency (non-resonant) field can be shown to correspond to an effective, static Hamiltonian with renormalised parameters, by averaging over the period of the driving. Varying the _amplitude_ of the non-resonant drive, one may tune the effective Hamiltonian to polarise the bath [27]. A common and widely used approach is the Fourier series decomposition of the one-period Hamiltonian, in a suitable rotating frame, which has also been employed for NV pulse-based control of a nuclear-spin bath [29]. Floquet spectroscopy [28] has been introduced in this context: resonances for pulse-based protocols were shown to correspond to avoided crossings of the underlying Floquet quasi-energies of the pulse-protocol unitary. Thus the morphology of these single or multiple avoided crossings has proved insightful for analysis of NV-nuclear entanglement and polarisation in terms of Landau-Zener dynamics [30]. Here we employed both Fourier analysis and Floquet spectroscopy to analyse our results. _Floquet spectroscopy:_ for a system with a temporally periodic Hamiltonian, \(\hat{H}(t+T)=\hat{H}(t)\), Floquet's theorem allows one to write solutions of the Schrödinger equation in terms of quasi-energy states \(|\psi_{l}(t)\rangle=\exp{(-i\epsilon_{l}t)}|\Phi_{l}\rangle\), where \(\epsilon_{l}\) is the quasi-energy and \(|\Phi_{l}(t)\rangle=|\Phi_{l}(t+T)\rangle\); \(T\) is the period, while \(l=1,\ldots,D\) (\(D\) is the dimension of the state space). One may also obtain eigenstates of the one-period unitary evolution operator \(\hat{U}(T)\equiv\hat{U}(T,0)\). The Floquet states \(|\Phi_{l}\rangle\) obey the eigenvalue equation: \[\hat{U}(T)|\Phi_{l}\rangle=\lambda_{l}|\Phi_{l}\rangle\equiv\exp{(-i\mathcal{E}_{l})}|\Phi_{l}\rangle \tag{2}\] where \(\mathcal{E}_{l}(T)\equiv\tan^{-1}(\mathrm{Im}\,\lambda_{l}/\mathrm{Re}\,\lambda_{l})\) is the eigenphase (the Floquet phase). For Floquet spectroscopy numerics, we diagonalise the full state space of the NV plus a cluster of \(N_{nuc}\sim 1-7\) nuclear spins. Thus we can readily calculate and plot \(\mathcal{E}_{l}(T)\) as a function of period \(T\), to investigate resonances and gain insight on the role played here by overlapping avoided crossings. For Fourier series analysis, a transformation to the toggling frame (the frame of the pulses, see Appendix for details) is widely used, including for analysis of polarisation protocols [15; 17] and their resonances; a key step is to average over a single period. However, in order to understand experimental traces as a function of \(\tau\), we must also consider off-resonant behavior (away from \(\tau=\tau_{r}\)), as shown below. Figure 2: Comparisons with experiment and spectral analysis. **(a)** Polarisation of weakly coupled nuclear spin C21 with \((A_{x},A_{z})\equiv(\approx 5.0,-9.7)\) kHz \(\times 2\pi\) employing the PulsePol protocol, with \(N_{p}=4,R=100\). The NV environment contains a blocking spin C3 with \((A_{x}^{B},A_{z}^{B})\equiv(59.0,-11.3)\) kHz \(\times 2\pi\). Nuclear spin C21 shows the asymmetric displaced resonance expected for \(N_{p}=4\). Lower panels show experiment (blue), single-spin simulation of C21 (black dashed line), and simulation of C21 and C3 (orange line). The upper panel shows the corresponding Floquet spectra, and offers an intuitive spectroscopic understanding of spin blocking. The wide, broad avoided crossing is associated primarily with C3. 
It overlaps with the much narrower avoided crossing corresponding to the polarisation resonance of C21. This means the narrow C21 crossing is pushed away from \(T_{r}\) to _lower_\(T_{r}^{\prime}<T_{r}\). **(b)** For C16, with \((A_{x},A_{z})\equiv(5.3,-19.8)\) kHz \(\times 2\pi\), the overlap with the strong C3 avoided crossing results in the narrow C16 crossing being pushed towards larger \(T_{r}^{\prime}>T_{r}\). Eq.6 quantifies the magnitude and clarifies that the sign of the displacement depends on \(A_{z}^{B}-A_{z}\). ## III Results ### Single spin polarisation: off-resonant behavior Details of our analysis are given in the Appendix and here we summarise the key steps. Away from \(\tau=\tau_{r}\), we introduce a small nuclear detuning, slightly altering the PulsePol Hamiltonian to: \[\hat{H}\equiv\sum_{n=1}^{N_{\rm nuc}}g^{(n)}(\hat{S}_{+}\hat{I}_{-}^{(n)}+\hat{ S}_{-}\hat{I}_{+}^{(n)})+(\omega_{I}^{(n)}-\omega)\hat{I}_{z}^{(n)} \tag{3}\] where the detuning of each nucleus corresponds to \(\delta_{n}(\tau)=\omega_{I}^{(n)}-\omega\ll\omega_{L}\) and the protocol frequency \(\omega=6\pi/T=3\pi/(2\tau)\). The resonant nuclear precession frequency is \(\omega_{I}^{(n)}=\sqrt{(\omega_{L}-A_{z}^{(n)}/2)^{2}+(A_{x}^{(n)}/2)^{2}}\). For coherent evolution over \(N_{p}\) pulses, we can readily show that the polarisation \(2\langle\hat{I}_{z}^{(n)}\rangle\) of a single nuclear spin, for moderate detuning, takes the simple form: \[\mathcal{P}(N_{p}T)=\left(\frac{2g}{\Omega_{r}}\right)^{2}\sin^{2}\left(\frac {\Omega_{r}N_{p}T}{2}\right) \tag{4}\] where the generalised Rabi frequency \(\Omega_{r}=\sqrt{\delta^{2}+(2g)^{2}}\). Hence, the maximum population transfer into this state is \(\mathcal{P}_{max}=1/(1+(\delta/2g)^{2})\) at the integer closest to the pulse number \(N_{p}=2\pi/(\Omega_{r}T)\). At resonance, \(\delta=0\) and the maximum saturation \(\mathcal{P}_{max}=1\) and the Rabi frequency is \(\Omega_{r}=2g\). For \(N_{p}\) greater than this maximal value, the polarisation oscillates cyclically. Here, by convention \(\mathcal{P}\in[-1,1]\). We adopt \(\mathcal{P}\in[-1/2,1/2]\), where results can be recovered with the appropriate rescaling. _Asymptotic behavior:_ we note the above result is for a single repetition, \(R=1\), and experimental results range from \(R\sim 10^{2}-10^{4}\). One may show that single spin polarisation behaviour tends to an asymptotic envelope in \(R\rightarrow\infty\) limit. This is illustrated in Fig.1 (right panel, red solid line). Stronger-coupled spins attain the asymptotic form after a few repetitions. For some weaker coupled blocked spins, simulations indicate that even \(R=10000\) may be insufficient to reach the asymptotic limit. A notable feature of the polarisation traces is that they exhibit sharp 'dips' at period \(T=T_{dip}\), seen in Fig.1 (red solid line) and also seen in the experiments. Here we show that these dips (see Appendix for further details) occur for: \[T_{\rm dip}\simeq\frac{T_{r}}{(1+\mu^{2})}\left[1\pm\frac{n}{3N_{p}}\sqrt{1+ \mu^{2}\left(\frac{9N_{p}^{2}}{n^{2}}-1\right)}\right] \tag{5}\] where \(n\in\mathbb{Z}^{+}\), \(n>0\), \(T_{r}=6\pi/\omega_{I}\) and \(\mu=2g/\omega_{I}\ll 1\) thus \(T_{\rm dip}\simeq T_{r}\left[1\pm\frac{n}{3N_{p}}\right]\). The experimental traces also contain additional fine-structure due to dephasing arising from the instrumental waiting time \(\sim 10\,\mu\)s in between repetitions. 
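To make Eqs. (3)-(4) concrete, this sketch (ours, not the authors' code) evaluates the single-spin transfer \(\mathcal{P}(N_{p}T)\) over a sweep of the protocol period for the C21 couplings quoted above, using the PulsePol flip-flop rate \(g=A_{x}(\sqrt{2}+2)/(6\pi)\) from Sec. II. The Larmor frequency is a hypothetical placeholder chosen so that \(T_{r}\) lands near the \(\simeq 1.74\,\mu\)s scale quoted in the Fig. 4 caption.

```python
import numpy as np

def single_spin_polarisation(T, Np, omega_L, A_x, A_z):
    """Eq. (4): transfer after Np coherent PulsePol cycles of period T (angular frequencies)."""
    g = A_x * (np.sqrt(2) + 2) / (6 * np.pi)                      # flip-flop rate (Sec. II)
    omega_I = np.sqrt((omega_L - A_z / 2) ** 2 + (A_x / 2) ** 2)  # resonant precession frequency
    delta = omega_I - 6 * np.pi / T                               # detuning from omega = 6*pi/T
    Omega_r = np.sqrt(delta ** 2 + (2 * g) ** 2)                  # generalised Rabi frequency
    return (2 * g / Omega_r) ** 2 * np.sin(Omega_r * Np * T / 2) ** 2

kHz = 2 * np.pi * 1e3
omega_L = 1.72e3 * kHz                 # hypothetical Larmor frequency, giving T_r ~ 1.74 us
A_x, A_z = 5.0 * kHz, -9.7 * kHz       # couplings of C21 quoted in the text
T = np.linspace(1.70e-6, 1.78e-6, 801)
P = single_spin_polarisation(T, Np=4, omega_L=omega_L, A_x=A_x, A_z=A_z)
# The per-run transfer for this weakly coupled spin is small, which is why many
# repetitions R are needed in practice, as discussed above.
print(f"peak transfer {P.max():.2e} at T = {T[P.argmax()] * 1e6:.3f} us")
```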
### Blockade spins: theory and experiment If an NV center has a proximate \({}^{13}\)C nuclear spin, at a relative orientation such that the spin has \(A_{z}^{B}\sim 0\) but reasonably strong off-diagonal coupling \(A_{x}^{B}\), we label this spin (see Fig.1(a)) with superscript \(B\), as this 'blockade' spin in effect expels the resonances of weakly-coupled spins (with \(A_{z}^{s}\sim 0\), \(s=1,2,\ldots\)) from the region around \(T_{r}=4\tau_{r}^{(B)}\), the expected resonance. The resonance of the more strongly coupled blocking spin is unperturbed and remains at \(T=T_{r}\). The effect was illustrated in Fig.1. Although full numerical simulation of clusters of 5-8 nuclei is feasible, for insight, our analysis of spin blockade requires consideration of the NV electron spin as well as the nuclear spin _pair_ comprising B and one more nuclear spin. Pair dynamics involves consideration of an 8-state space. However, two states are largely decoupled and the analysis reduces to two triplets of coupled states (numerics involve full diagonalisation but, for insight, a simpler model is analysed). From the Floquet spectroscopy, this scenario corresponds to two sets of avoided crossings in the Floquet eigenphases. This is illustrated in the upper panels of Fig.2. It shows the pair of avoided crossings: a very broad crossing of width \(\sim A_{x}^{B}\) for the case of a strong-coupled spin and, within it, a very narrow crossing due to the weaker coupled spin, since \(A_{x}^{B}\gg A_{x}^{s}\). In other words, the strong avoided crossing involves a pair of states which mostly overlap with states of the single-spin avoided crossing of spin \(B\); while the overlapping narrow crossing involves states that mostly overlap with the weaker spin states. The lower panels show the corresponding experimental profiles for the weakly coupled nuclear spins C21 with \((A_{x},A_{z})\equiv(\approx 5.0,-9.7)\) kHz\(\times 2\pi\) and C16, with \((A_{x},A_{z})\equiv(5.3,-19.8)\) kHz\(\times 2\pi\) respectively. Their resonances are displaced by the stronger blockade spin C3, which has experimentally measured couplings \((A_{z}^{B},A_{x}^{B})=(-11,-59)\) kHz\(\times 2\pi\). There is excellent agreement with simulations. As \(N_{p}=4\), the resonance takes a characteristic wedge shape, whereas for \(N_{p}=8\) it is predicted to be fully displaced. A striking result is that while the C21 resonance is displaced to lower \(T\), for the C16 resonance, the converse is true. Figure 3: Upper panel tests the analytical expression for the position of the resonance, Eq.6, against numerical simulations. Simulations for this colour map use C3 as a blockade spin and a spin with \(A_{x}/(2\pi)=5\) kHz and a variable \(A_{z}\). Although agreement with the displaced peak is not exact, the expression tracks the displacement quite well. The lower panel compares experimental polarisation of C16 (blue) in the presence of blockade spin C3, demonstrating good agreement with simulation (orange) of C3 and C16. For comparison, the undisplaced single C16 simulation is shown (black dashed line). Eq.6 is shown to give reasonable agreement with the displaced peak position. All simulations and experiments in this figure use parameters \(N_{p}=8\), \(R=100\), which results in a displaced resonance rather than the ‘wedge’ profile obtained for \(N_{p}=4\). 
Analysis of the 3-state matrix and its eigenvalues gives the position of the weak spin resonance and the magnitude of the displacement (see Appendix for derivation): \[\Delta T_{r}/T_{r}\simeq-\frac{(A_{x}^{B})^{2}}{\omega_{I}^{(s)}(\omega_{I}^{(B)}-\omega_{I}^{(s)})} \tag{6}\] Eq.6 is a key result of this work. Fig.3 tests this expression against numerics and experimental data. In the upper panel, the numerical colour map shows polarisation as a function of \(T\) and detuning \(\omega_{I}^{(B)}-\omega_{I}^{(s)}\). The overlaid white dots are from Eq.6 and demonstrate that it provides a robust estimate of the magnitude of the displacement of the polarisation resonance peak. The lower panel illustrates an example of the displacement for \(N_{p}=8\) and spin C16. In contrast to the observed resonance displacement \(\Delta T_{r}\), the Rabi frequency, or width of the avoided Floquet crossing, is not strongly affected by the blockade spin provided that \(|\omega_{I}^{(B)}-\omega_{I}^{(s)}|/A_{x}^{B}\ll 1\) (see Appendix for details). In general, the measured polarisation is not very significantly reduced, but rather is shifted to a different period \(T_{r}+\Delta T_{r}\). There are particular exceptions, such as the case of experimental data for a spin simultaneously perturbed by two blockade spins (discussed in the Appendix). ## IV Discussion As natural diamond contains of order 1.1% of \({}^{13}\)C, we estimate that of order 20% of NV defects will have a nuclear spin with reasonably strong \(A_{x}\) coupling but with \(A_{z}\sim 0\), and is thus able to produce a blocking effect on distant, weakly coupled nuclear spins. However, Eq.6 makes clear that the blocking effect is more generic and will occur wherever a weaker coupled spin is near-degenerate with a stronger coupled spin; it can thus occur for arbitrary \(A_{x},A_{z}\), provided \(A_{x}^{(B)}\gg A_{x}\) and \(A_{z}^{(B)}\sim A_{z}\). Thus it should be a relatively common feature in such studies, and spin-blocking is identifiable via its distinctive spectral profiles, such as the 'wedge' shape for \(N_{p}=4\). Figure 4: Compares the conventional polarisation method of applying PulsePol at a constant \(T\) (or \(\tau\)) to our proposed adaptation of applying two different regimes of \(T\) in the presence of blocking spin C3. (a) Simulated polarisation against periodicity of PulsePol, \(T\), with parameters \(N_{p}=8\), \(R=100\), of both C3 (blue) and C16 (red). The two regimes are highlighted as \(T_{1}\simeq T_{r}+\Delta T_{r}\simeq 1.85\)\(\mu\)s, which is a periodicity at the displaced resonance, and \(T_{2}\simeq T_{r}\simeq 1.74\)\(\mu\)s near the original resonance. (b) Shows simulated polarisation of both spins as a function of total time. The solid line is the application of 200 repetitions (grey region) at \(T_{1}\) followed by 200 repetitions at \(T_{2}\); the dashed line is the application of 400 repetitions at \(T_{2}\). A higher level of polarisation in less time is achieved for C16. For the polarisation of C3, although polarisation rises rapidly for both values of \(T\), driving close to its resonant value at \(T=T_{2}\), rather than far off resonance at \(T=T_{1}\), is important for ensuring robust polarisation, to the 0.5 limit. The scenario of two blocking spins acting simultaneously on a weaker spin is less common; but in the present data, we observed the case where two blocking spins act to provide displacements of opposite signs (presented in the Appendix). 
The result is a sort of destructive cancellation that fully suppresses the resonance peak of the weaker spin. While the present study considered PulsePol, our simulations show that similar behavior also occurs for PolCPMG. Understanding of the blocking spin mechanism allows one to propose approaches to improve polarisation of weak spins. Figure 4 demonstrates a method for drastically improving the polarisation efficiency by employing two different \(T\). The upper panel highlights the shifted resonance of spin C16 in the presence of blockade spin C3. First, PulsePol is applied with \(T\simeq T_{r}+\Delta T_{r}\) in the region with the shifted resonance to polarise weak spins. Following this, PulsePol with \(T\simeq T_{r}\) is applied to maximise polarisation: the asymptotic, saturated polarisation is maximal for \(T\simeq T_{r}\). The lower panel compares the effectiveness of the two-\(T\) method with the standard technique of applying the protocol at \(T=T_{r}\) only. The two-spin system of C16 (weak spin) and C3 (blocking spin) was used. The initial stage (grey region) with \(T\simeq T_{r}+\Delta T_{r}\) shows rapid polarisation of both spins (solid lines); the second stage with \(T\simeq T_{r}\) then yields an improvement in overall polarisation relative to the single-\(T\), on-resonance polarisation (dashed line). Spin blocking effects are relevant to DNP of \({}^{13}\)C in the diamond crystal and, potentially, external nuclear spins as well. Even for different spin species with different Larmor frequencies, accidental resonances [31], such as those that occur between harmonics for \({}^{1}\)H and \({}^{13}\)C, might come into play, but this has not been investigated here. #### Spin blocking versus dark modes. _Dark-bright_ modes are a ubiquitous effect occurring in the physics of 3-level systems [32; 33]: if we consider two degenerate modes, with eigenvalues \(\epsilon_{1,2}\approx\epsilon\), independently interacting with a third mode \(\epsilon_{3}\sim\epsilon\) with finite coupling strengths \(g_{1}=g_{2}\equiv g\), but not with each other, then modes \(1,2\) hybridize such that one of the hybrid modes fully decouples from mode 3 (dark mode, effective \(g=0\)) while the others acquire enhanced coupling \(\sqrt{2}g\) (bright modes). The spectral signature is generic [32]: instead of two independent avoided crossings of width \(g_{1}\) and \(g_{2}\), the mixing/hybridisation produces a single wider avoided crossing of width \(\sqrt{2}g\) and a completely decoupled state. The role of dark states in impeding polarisation has been noted [22] and investigated in a many-body context [27]. _Spin blocking_ is a distinct polarisation suppression mechanism. It occurs in similar regimes, but for the case where the coupling is highly anisotropic, \(g_{1}\gg g_{2}\). The spectral signature is also generic. There are once again two separate crossings, of width not far from the unperturbed widths \(g_{1}\) and \(\sim g_{2}\), like the unhybridised case. However, only mode 1 remains at \(\epsilon\simeq\epsilon_{1}\). The weaker coupled mode has its crossing pushed out of the \(\epsilon\approx\epsilon_{1,2}\) region. For the polarisation protocols, the weaker spin is in effect pushed off-resonance, which is different from having its effective coupling suppressed, as is the case for a dark state. Off-resonant driving in general does not fully suppress polarisation, and in principle, all spins should eventually tend to the \(R\rightarrow\infty\) limit, which is only maximal on-resonance. 
However it may make it extremely inefficient and slow, potentially allowing imprecisions and decoherent processes to perturb the protocol in a real experiment. However, unlike dark-mode suppression, the effect may be mitigated by adjusting the protocol period to account for the shifted resonance. ## V Conclusions In conclusion: in the present work we introduce, and theoretically and experimentally investigate, the spin-blockade effect, thus named in analogy to blockade effects [19] encountered in other fields of physics. We show that nuclear spins with strong interactions to the central spin can block polarisation transfer by detuning weaker-coupled spins away from resonance. This many-body effect, detrimental for polarisation efficiency, can be mitigated by pulse sequences that are tailored to the microscopic configuration of the spin system. Polarisation transfer to a complex spin system can be highly dependent on the microscopic configuration of the spins and our results thus provide an opportunity for optimization of dynamical nuclear polarisation in various settings. **Acknowledgements** Oliver Whaites acknowledges support from an EPSRC DTP grant. This work was supported by the Netherlands Organisation for Scientific Research (NWO/OCW) through a Vidi grant. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 852410). This work is part of the research programme NWA-ORC (NWA.1160.18.208 and NWA.1292.19.194), (partly) financed by the Dutch Research Council (NWO). This work was supported by the Dutch National Growth Fund (NGF), as part of the Quantum Delta NL programme. We acknowledge funding from the Dutch Research Council (NWO) through the project "QuTech Phase II funding: Quantum Technology for Computing and Communication" (Project No. 601.QT.001). B.P. acknowledges financial support through a Horizon 2020 Marie Sklodowska-Curie Actions global fellowship (COHESiV, Project Number: 840968) from the European Commission. We thank M.Markham and D.J.Twitchen from Element Six for providing the diamond.
2309.13572
Causal Asymmetry of Classical and Quantum Autonomous Agents
Why is it that a ticking clock typically becomes less accurate when subject to outside noise but rarely the reverse? Here, we formalize this phenomenon by introducing process causal asymmetry - a fundamental difference in the amount of past information an autonomous agent must track to transform one stochastic process to another over an agent that transforms in the opposite direction. We then illustrate that this asymmetry can paradoxically be reversed when agents possess a quantum memory. Thus, the spontaneous direction in which processes get 'simpler' may be different, depending on whether quantum information processing is allowed or not.
Spiros Kechrimparis, Mile Gu, Hyukjoon Kwon
2023-09-24T07:32:14Z
http://arxiv.org/abs/2309.13572v1
# Causal Asymmetry of Classical and Quantum Autonomous Agents ###### Abstract Why is it that a ticking clock typically becomes less accurate when subject to outside noise but rarely the reverse? Here, we formalize this phenomenon by introducing _process causal asymmetry_ - a fundamental difference in the amount of past information an autonomous agent must track to transform one stochastic process to another over an agent that transforms in the opposite direction. We then illustrate that this asymmetry can paradoxically be reversed when agents possess a quantum memory. Thus, the spontaneous direction in which processes get 'simpler' may be different, depending on whether quantum information processing is allowed or not. Suppose we observe two streams of data: the first, \(\mathcal{A}=\ldots 010101\ldots\), is an alternating stream of 0s and 1s, generated by a binary switch that flips each time-step; the second, \(\mathcal{B}=\ldots 01230123\ldots\), represents configurations of a revolving object that completes a full cycle every four time-steps. We are asked to decide between two possible causal explanations: (1) \(\mathcal{A}\) causes \(\mathcal{B}\), such that each \(b_{t}\) of \(\mathcal{B}\) at time \(t\) is generated by some channel acting on \(a_{t}\); or (2) \(\mathcal{B}\) causes \(\mathcal{A}\), such that the input and output of this channel are reversed. While we cannot conclusively rule out either causal explanation, the two causal structures are not symmetric. Let \(a_{t}\) and \(b_{t}\) be the respective outputs of \(\mathcal{A}\) and \(\mathcal{B}\) at time \(t\). A simple channel enacting \(a_{t}=b_{t}\mod 2\) transforms \(\mathcal{B}\) to \(\mathcal{A}\). This channel is memoryless and time-independent. Any agent implementing it does not need to adapt its action to \(a_{t}\) or \(b_{t}\) at times prior to \(t\). In contrast, transforming from \(\mathcal{A}\) to \(\mathcal{B}\) is more complex. An agent that sees \(a_{t}=0\) cannot decide whether \(b_{t}=0\) or \(b_{t}=2\) without at least 1 bit of information about the past (see Fig. 1). In complexity science, each piece of information a machine must track is considered to be a necessary cause of future behaviour [1; 2]. As such, our thought experiment highlights a variant of _causal asymmetry_ - causal explanations in one direction appear more natural than the other [3; 4; 5]. Here, \(\mathcal{A}\) can be thought of as a rudimentary discrete-time clock that tracks 1 bit of data about the current time, while \(\mathcal{B}\) is a slightly more sophisticated clock that tracks 2 bits [6; 7]. Causal asymmetry then represents the intuition that it is easier to degrade a clock than to make it more accurate. Adding noise needs only a memoryless noise channel, while reversing this requires adaptive operations. The goal of this article is to (1) formalize the above intuitions using causal transducers - a mathematical description of autonomous agents that transform one stochastic process to another [2] - and (2) determine how quantum-enhanced agents can change what causal direction we consider to be more natural. This involves defining _process causal asymmetry_ as the difference in the amount of past data an agent must track to transform \(\mathcal{A}\) to \(\mathcal{B}\) versus from \(\mathcal{B}\) to \(\mathcal{A}\); and examining how this difference changes when we deploy agents with quantum memory [8]. 
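The asymmetry in this opening example can be checked directly: degrading \(\mathcal{B}\) to \(\mathcal{A}\) is a memoryless map, while producing \(\mathcal{B}\) from \(\mathcal{A}\) requires one bit of internal memory, following the update rule of Fig. 1. The sketch below is a minimal illustration of those two agents (the function names are ours).

```python
def b_to_a(b_stream):
    """Memoryless channel B -> A: y = x mod 2, no past information needed."""
    return [x % 2 for x in b_stream]

def a_to_b(a_stream, j=0):
    """One-bit agent A -> B (Fig. 1): emit y = 2*j + x, then update j <- (j + x) mod 2."""
    out = []
    for x in a_stream:
        out.append(2 * j + x)
        j = (j + x) % 2
    return out

a = [0, 1] * 4          # ...010101...
b = [0, 1, 2, 3] * 2    # ...01230123...
assert b_to_a(b) == a   # degrading the finer clock is memoryless
assert a_to_b(a) == b   # refining it requires tracking one bit of the past
```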
We make a surprising observation: quantum processing can reverse causal asymmetry, such that \(\mathcal{A}\) causing \(\mathcal{B}\) may be more natural when considering only classical agents, while the reverse is the more natural one when quantum agents are allowed. _Framework.--_ We adopt computational mechanics to describe stochastic processes and agents that transform them from one to another [1; 2; 9]. Formally, a stochastic process \(\mathcal{A}\) can be described by a bi-infinite sequence of random variables \(\overrightarrow{X}=\ldots X_{-1}X_{0}X_{1}\ldots\), where \(X_{t}\) denotes the probability the process emits \(x_{t}\in\mathcal{X}\) at time \(t\). Each instance of a process has a particular past \(\overleftarrow{x}=\ldots x_{-1}x_{0}\), and a conditional future governed by \(\overrightarrow{X}\big{|}_{\overrightarrow{x}}\). Here we consider stationary stochastic processes, such that \(\overleftrightarrow{X}\) is time-translation invariant. Autonomous agents are finite-state machines that can Figure 1: Transformation between processes \(\mathcal{A}\) and \(\mathcal{B}\). To transform \(\mathcal{A}\) to \(\mathcal{B}\), memory is necessary, and the memory state \(m_{j}\) is updated as \(m_{j}\gets m_{(j+x)\mod 2}\) for the input bit \(x\) while the output is generated as \(y=2j+x\). On the other hand, to transform \(\mathcal{B}\) to \(\mathcal{A}\), the output is obtained as \(y=x\mod 2\), and thus no memory is necessary. Inside the boxes, we show the finite state machine presentation of the input-output processes. Each transition emitting output \(y\) when receiving input \(x\), which occurs with probability \(p\), is represented as \(y|x:p\). transform one stochastic process to another [2; 8]. At each time-step \(t\), an agent accepts input \(x_{t}\in\mathcal{X}\) and emits a corresponding output \(y_{t}\in\mathcal{Y}\). Taking the present as \(t=0\), the agent then experiences a history \(\overleftarrow{x}=(\overleftarrow{x},\overleftarrow{y})\). The operational behaviour of the agent is defined by an _input-output process_, a family of random variables \(\overrightarrow{Y}|_{\overrightarrow{x},\overleftarrow{z}}\), which governs the probability the agent will emit \(\overrightarrow{y}=y_{1}y_{2}\ldots\) when given future inputs \(\overrightarrow{x}=x_{1}x_{2}\ldots\). We say that an agent transforms \(\mathcal{A}\) to \(\mathcal{B}\) when, given inputs sampled from \(\mathcal{A}\), its corresponding outputs \(\overleftarrow{Y}\) align with \(\mathcal{B}\). When \(\overrightarrow{Y}|\overrightarrow{x},\overleftarrow{z}=\overrightarrow{Y}| \overrightarrow{x}\) is history independent, the agent is _non-adaptive_. This represents the simplest input-output process. Agents implementing such transforms can be _memoryless_. They just need to apply the same transformation on \(x_{t}\) at each time. The set of possible transforms, however, is extremely limited. There is no capability to generate temporal structure, such as converting a sequence of \(\mathcal{A}=\ldots 0101\ldots\) to \(\mathcal{B}=\ldots 01230123\ldots\). To do this would require tracking some information about \(\overleftarrow{z}\). That is, the agent must possess an internal memory \(\mathcal{M}\) that is configured in some state \(s_{\overleftarrow{z}}=k(\overleftarrow{z})\), such that there is a systematic method to sample from \(\overrightarrow{Y}|_{\overrightarrow{x},\overleftarrow{z}}\) for each possible \(\overrightarrow{x}\) and \(\overleftarrow{z}\). 
Here, we assume that all our agents are causal, such that \(\mathcal{M}\) does not contain any oracular information - information about the future that is not already contained in the past [5; 10; 11]. To quantify the memory cost of the transformation, we make use of \(\epsilon\)-transducers, the provably optimal classical agents for executing a given input-output process [2]. An \(\epsilon\)-transducer operates on the rationale that an agent needs not distinguish two histories \(\overleftarrow{z}\) and \(\overleftarrow{z}\)' if its future decisions never depend on this information, i.e. when \(\overrightarrow{Y}|_{\overrightarrow{x},\overleftarrow{z}}=\overrightarrow{Y}|_ {\overrightarrow{x},\overleftarrow{z}}\), for all \(\overrightarrow{x}\); which we abbreviate to the equivalence relation \(\overleftarrow{z}\sim_{\epsilon}\overleftarrow{z}^{\prime}\). Thus, \(\epsilon\)-transducer features an encoding function \(k\) such that \(k(\overleftarrow{z})=k(\overleftarrow{z}\)') if and only if \(\overleftarrow{z}\sim_{\epsilon}\overleftarrow{z}\)'. The resulting memory states \(\mathcal{S}=\{s_{i}\}\) are then in 1-1 correspondence with the equivalence classes induced by \(\sim_{\epsilon}\). The dynamics of the input-output process can be realized by stochastic transitions between causal states, i.e., a set of transition matrices \(\mathcal{T}\equiv\{T^{(y|x)}\}\). Their elements \(T^{(y|x)}_{ij}\) then represent the probability an \(\epsilon\)-transducer in state \(s_{i}\) will emit the symbol \(y\) upon receiving input \(x\), and transition to state \(s_{j}\). The memory states \(\{s_{i}\}\) are referred to as the causal states, owing to their interpretation as the minimal set of belief states an agent must hold to be able to execute \(\overrightarrow{Y}|_{\overrightarrow{x},\overleftarrow{z}}\) to perfect statistical fidelity. For each input stochastic process \(\mathcal{A}\), the resulting agent memory will be driven to some stationary distribution, \(\{\pi_{i}\}\), where \(\pi_{i}\) denotes the probability of being at causal state \(s_{i}\) when driven by input process \(\mathcal{A}\). The resulting memory cost \(C_{\mathcal{A}}=-\sum\pi_{i}\log_{2}\pi_{i}\) is then known as the statistical complexity of the process on input \(\mathcal{A}\) and is considered a fundamental measure of structure for \(\overrightarrow{Y}|_{\overrightarrow{x},\overleftarrow{z}}\) that characterizes how much past information any agent executing \(\overrightarrow{Y}|_{\overrightarrow{x},\overleftarrow{z}}\) must hold in memory. By minimizing the statistical complexity over all input-output processes that transform \(\mathcal{A}\) to \(\mathcal{B}\), we obtain the minimal memory cost of the transformation, \(C_{\mathcal{A}\rightarrow\mathcal{B}}\). Note also that this quantity can always be bounded above by \(C_{\mathbf{0}\rightarrow\mathcal{B}}\), the memory cost of transforming a sequence of \(\mathbf{0}=\ldots 0000\ldots\) (or any other i.i.d sequence) to \(\mathcal{B}\). This is because we can always choose to first erase \(\mathcal{A}\) using a memoryless channel at no cost. The resulting machine is known as the optimal predictive (or causal) model for \(\mathcal{B}\)[1; 5; 9]. The states of such a machine are referred to as the causal states of \(\mathcal{B}\). \(C_{\mathbf{0}\rightarrow\mathcal{B}}\) is referred to as the statistical complexity of \(\mathcal{B}\), representing the minimal information we need to store about its past to generate statistically correct future predictions [1]. 
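As a minimal numerical illustration of the statistical complexity just defined, the sketch below evaluates \(C=-\sum_{i}\pi_{i}\log_{2}\pi_{i}\) from a memory-state transition matrix. The matrix used is the one-bit agent of the opening example driven by \(\mathcal{A}\) (inputs 0 and 1 equally likely; the memory bit stays on input 0 and flips on input 1), which gives 1 bit, matching the single bit of past information identified earlier. The helper functions are ours.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of a row-stochastic memory-state transition matrix P."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def statistical_complexity(P):
    """C = -sum_i pi_i log2 pi_i over the stationary occupation of the memory states."""
    pi = stationary_distribution(P)
    pi = pi[pi > 1e-12]
    return float(-np.sum(pi * np.log2(pi)))

# Memory-state transitions of the one-bit A -> B agent, averaged over the inputs of A
# (0 and 1 each occur half the time: the memory bit stays on 0 and flips on 1).
P_AtoB = np.array([[0.5, 0.5],
                   [0.5, 0.5]])
print(statistical_complexity(P_AtoB))   # -> 1.0 bit of past information
```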
When \(C_{\mathcal{A}\rightarrow\mathcal{B}}\) saturates this bound, it suggests that tracking \(\mathcal{A}\) has no benefit for making predictions on \(\mathcal{B}\). Given two processes \(\mathcal{A}\) and \(\mathcal{B}\), let \(C_{\mathcal{A}\rightarrow\mathcal{B}}\) denote the minimal past data needed to transform \(\mathcal{A}\) to \(\mathcal{B}\), and \(C_{\mathcal{B}\rightarrow\mathcal{A}}\) as the equivalent when transforming from \(\mathcal{B}\) to \(\mathcal{A}\). Then, we define _process causal asymmetry_ by the difference \[\Delta_{C}(\mathcal{A},\mathcal{B})=C_{\mathcal{A}\rightarrow\mathcal{B}}-C_{ \mathcal{B}\rightarrow\mathcal{A}}\,. \tag{1}\] A positive \(\Delta_{C}(\mathcal{A},\mathcal{B})\) indicates that more memory is required to transform \(\mathcal{A}\) to \(\mathcal{B}\) than from \(\mathcal{B}\) to \(\mathcal{A}\). Our opening example illustrates this, with \(C_{\mathcal{A}\rightarrow\mathcal{B}}=1\) and \(C_{\mathcal{B}\rightarrow\mathcal{A}}=0\), such that \(\Delta_{C}(\mathcal{A},\mathcal{B})=1\). So far, we assumed that \(\mathcal{M}\) stores only classical states. However, quantum agents encoding each causal state \(s_{i}\) to corresponding quantum state \(|s_{i}\rangle\) exhibit a memory advantage [8; 12]. The quantum causal states can always be taken to be pure states [8], and the quantum complexity is defined as the von Neumann entropy \(Q_{\mathcal{A}}=S(\rho_{\mathcal{A}})=-\mathrm{tr}\,(\rho_{\mathcal{A}}\log\rho_{ \mathcal{A}})\) of the average state of the quantum memory, \(\rho_{\mathcal{A}}=\sum_{i}\pi_{\mathcal{A},i}|s_{i}\rangle\!\langle s_{i}|\), where \(|s_{i}\rangle\!\langle s_{i}|\) is the quantum state representing the classical causal state \(s_{i}\). A quantum model is at least as efficient as the best classical model, i.e., \(Q_{\mathcal{A}}\leq C_{\mathcal{A}}\)[8; 12]. By minimizing the quantum complexity over all input-output processes that transform \(\mathcal{A}\) to \(\mathcal{B}\), we obtain \(Q_{\mathcal{A}\rightarrow\mathcal{B}}\), the minimal memory needed by quantum agents to transform \(\mathcal{A}\) to \(\mathcal{B}\). For certain \(\mathcal{A}\) and \(\mathcal{B}\), we have that \(Q_{\mathcal{A}\rightarrow\mathcal{B}}<C_{\mathcal{A}\rightarrow\mathcal{B}}\). In analogy to the classical case, we define process causal asymmetry for quantum agents as \[\Delta_{Q}(\mathcal{A},\mathcal{B})=Q_{\mathcal{A}\rightarrow\mathcal{B}}-Q_{ \mathcal{B}\rightarrow\mathcal{A}}.\] _Main Results.--_ We now state our main results. The first shows that the direction of causal asymmetry can be reversed by quantum agents. **Result 1** (Inconsistent Causal Asymmetry).: _There exist processes \(\mathcal{A}\) and \(\mathcal{B}\) with process causal asymmetries simultaneously obeying \(\Delta_{C}(\mathcal{A},\mathcal{B})>0\) and \(\Delta_{Q}(\mathcal{A},\mathcal{B})<0\) _which implies that the classical memory resources necessary to map \(\mathcal{A}\) to \(\mathcal{B}\) are higher than that of mapping \(\mathcal{B}\) to \(\mathcal{A}\), while quantum mechanically it is the other way around. Causal asymmetry can thus reverse depending on whether quantum agents are allowed or not._ Our second result shows that not only an inconsistent causal asymmetry exists, but the magnitude of this inconsistency can be arbitrarily large. 
**Result 2** (Unbounded Inconsistent Causal Asymmetry).: _There exist processes \(\mathcal{A}\) and \(\mathcal{B}_{n}\), for \(n\in\mathbb{N}\), such that the classical process asymmetry \(\Delta_{C}(\mathcal{A},\mathcal{B})\) diverges with increasing \(n\), while the quantum one, \(\Delta_{Q}(\mathcal{A},\mathcal{B})\), remains bounded but with the opposite sign._ Our results thus illustrate - for the first time to our knowledge - that quantum and classical theories can result in different conclusions about the most natural order of causal transformations. The possibility of storing information on non-orthogonal quantum states not only leads to a quantitative reduction of memory cost [8; 12], but also to a qualitatively different behaviour in the direction of causation. _Explicit Demonstration -_ Let us consider two processes that consist of a ground state to which the probability of transition is always \(\nicefrac{{1}}{{2}}\) from any of the excited states. The first process has only one excited state and is a coarse-graining of the second one, which has multiple excited states. In detail, let \(\mathcal{A}\) be a process with alphabet \(x\in\mathcal{X}=\{0,1,2\}\), whose statistics can be generated by a machine with two internal states, ground state \(r_{0}\) and excited state \(r_{1}\) (see Fig. 2). An output of \(x=0\) is followed by a transition to state \(r_{0}\), while outputs \(x=1\) or \(x=2\) lead to a transition to state \(r_{1}\), with probability \(\nicefrac{{1}}{{2}}\). We also consider a family of stochastic processes, \(\mathcal{B}_{n}\), labeled by an index \(n\in\mathbb{N}\) with \(n\geq 3\). Each can be modeled by a machine with \(n\) causal states, \(s_{0},\ldots,s_{n-1}\), and \(n\) outputs, \(y\in\mathcal{Y}=\{0,\ldots,n-1\}\), with the following properties: (i) an emission of symbol \(y_{j}\) leads to a transition to the state of the same index \(s_{y_{j}}\), (ii) any excited state \(s_{j\neq 0}\) either transitions to the ground state \(s_{0}\) with probability \(\nicefrac{{1}}{{2}}\) or transitions to another excited state \(s_{k\neq 0}\) with probability \(\nicefrac{{p_{jk}}}{{2}}\), (iii) \(s_{0}\) transitions to one of the excited states \(s_{j\neq 0,2}\) with probability \(\nicefrac{{p_{0j}}}{{2}}\), except for \(j=2\), which occurs with probability \(\nicefrac{{(1+p_{02})}}{{2}}\). Moreover, the probabilities \(p_{jk}\), satisfying \(\sum_{k=1}^{n-1}p_{jk}=1\) for all \(j\), are all assumed to be close to the value \(\nicefrac{{1}}{{n-1}}\) but different from each other. These processes are shown in Fig. 2. We now turn to \(\epsilon\)-transducers that transform \(\mathcal{A}\) to \(\mathcal{B}_{n}\). Briefly speaking, such an agent has one state for each different state of the output \(\mathcal{B}_{n}\) and modifies the probabilities of the outputs of \(\mathcal{A}\) accordingly in order to reproduce the probabilities of \(\mathcal{B}_{n}\). These constraints alone are not sufficient to single out a unique construction, and thus, there exists a family of \(\epsilon\)-transducers that take \(\mathcal{A}\) to \(\mathcal{B}_{n}\) with minimal complexity. Nevertheless, as they all have the same classical complexity (see Appendix A), we choose a representative \(\epsilon\)-transducer \(\mathcal{T}_{n}\) with \(n\) states, \(\Sigma=\{\sigma_{i}\}_{i=0\ldots n-1}\), a 3-symbol input alphabet \(\mathcal{X}=\{0,1,2\}\), and an \(n\)-symbol output alphabet \(\mathcal{Y}=\{0,\ldots,n-1\}\) (see Fig. 3).
Whenever an emission of a certain symbol is made, a transition to the internal state of the same label occurs. Specifically, if a symbol \(x\neq 1\) is input, the same symbol \(y=x\) is output, and a transition to state \(\sigma_{x}\) is made, respectively. For input \(1\), however, output \(y=j\) is emitted with probability \(p_{ij}\) if the \(\epsilon\)-transducers are at state \(\sigma_{i}\). Similarly, the \(\epsilon\)-transducers that transform \(\mathcal{B}_{n}\) to \(\mathcal{A}\) keep intact the emissions of symbol \(0\) but have to 'erase' all the probabilities associated with the remaining \(n-1\) outputs. We denote this with \(\hat{\mathcal{T}}_{n}\), and a graphic representation is shown in Fig. 3. Once again, more than one channel yielding the same transformation exists, but they all have the same classical statistical complexity (see Appendix A). We now show that the classical complexities \(C_{\mathcal{A}\rightarrow\mathcal{B}_{n}}\) of the \(\epsilon\)-transducers \(\mathcal{T}_{n}\) that transform from \(\mathcal{A}\) to \(\mathcal{B}_{n}\) are increasing with \(n\), while the quantum complexities \(Q_{\mathcal{A}\rightarrow\mathcal{B}_{n}}\) get arbitrarily close to \(0\) when the probabilities \(p_{ij}\) get closer. In the other direction, however, both classical, Figure 3: The \(\epsilon\)-transducers \(\mathcal{T}\) and \(\hat{\mathcal{T}}_{n}\). Labels of transitions with the same colour share the same pattern. Figure 2: Processes \(\mathcal{A}\) and \(\mathcal{B}_{n}\). Labels of transitions with the same colour share the same pattern. \(C_{\mathcal{B}_{n}\to\mathcal{A}}\), and quantum, \(Q_{\mathcal{B}_{n}\to\mathcal{A}}\), complexities assume a finite value between \(0\) and \(1\). We start with the classical complexity of the \(\epsilon\)-transducer \(\tilde{\mathcal{T}}_{n}\) from \(\mathcal{A}\) to \(\mathcal{B}_{n}\). We first obtain the stationary distribution of the process \(\mathcal{A}\) itself and then the long-term probabilities of symbol \(x\) being emitted. We explicitly find the values \(\{\Pr(0),\Pr(1),\Pr(2)\}=\{\nicefrac{{1}}{{3}},\nicefrac{{1}}{{2}},\nicefrac{{ 1}}{{6}}\}\). By also noting that every output of the \(\epsilon\)-transducers leads to a transition to the state with the same label, the stationary distribution of the \(\epsilon\)-transducers can be evaluated (see Appendix B). For \(n\geq 3\) and under the assumption \(p_{i,j}\approx\nicefrac{{1}}{{n-1}}\,\forall i,j\), we find the stationary distribution \(\varphi_{0}=\Pr(0),\varphi_{2}=\nicefrac{{\Pr(1)}}{{n-1}}+\Pr(2)\), and \(\varphi_{j}=\nicefrac{{\Pr(1)}}{{n-1}}\), for \(j=1,3,\ldots,n-1\). The complexity is obtained as \(C_{\mathcal{A}\to\mathcal{B}_{n}}=-\sum_{i}\varphi_{i}\log\varphi_{i}\); in the limit of large \(n\) it grows logarithmically with \(n\), i.e., \(C_{\mathcal{A}\to\mathcal{B}_{n}}\sim\log n\), and thus it is unbounded. On the other hand, the \(\epsilon\)-transducers that map \(\mathcal{B}_{n}\) to \(\mathcal{A}\) can be shown to have a classical complexity independent of \(n\). The stationary distribution is \(\hat{\varphi}_{0}=\Pr(0)=1/3\,,\hat{\varphi}_{1}=1-\Pr(0)=2/3\), where now \(\Pr(0)\) is the probability of process \(\mathcal{B}_{n}\) emitting a \(0\). However, since \(\mathcal{T}_{n}\) preserved the emissions with \(0\), \(\Pr(0)\) for \(\mathcal{B}_{n}\) is the same as that of the input \(\mathcal{A}\), from which we obtain \(C_{\mathcal{B}_{n}\to\mathcal{A}}=-\sum_{i=0,1}\hat{\varphi}_{i}\log\hat{ \varphi}_{i}\approx 0.918\) independent of \(n\). 
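The two classical complexities just obtained can be reproduced directly from the stationary distributions quoted above. The sketch below (ours) evaluates \(C_{\mathcal{A}\to\mathcal{B}_{n}}\), which grows logarithmically with \(n\), and \(C_{\mathcal{B}_{n}\to\mathcal{A}}\approx 0.918\), under the assumption \(p_{ij}\approx 1/(n-1)\) made in the text.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

Pr = {0: 1 / 3, 1: 1 / 2, 2: 1 / 6}                      # long-run symbol probabilities of A

def C_A_to_Bn(n):
    """Entropy of the stationary distribution of T_n driven by A, as given above (n >= 3)."""
    phi = [Pr[0]] + [Pr[1] / (n - 1)] * (n - 1)           # phi_0 and phi_j for j = 1..n-1
    phi[2] += Pr[2]                                        # sigma_2 also receives every input x = 2
    return entropy_bits(phi)

C_Bn_to_A = entropy_bits([Pr[0], 1 - Pr[0]])               # ~ 0.918 bits, independent of n
for n in (3, 10, 100, 1000):
    print(n, round(C_A_to_Bn(n), 3), round(C_Bn_to_A, 3))  # first entry grows logarithmically with n
```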
It follows that the classical process causal asymmetry is \[\Delta_{C}(\mathcal{A},\mathcal{B}_{n})\sim\log n,\] which grows as the logarithm of \(n\). In other words, transforming process \(\mathcal{A}\) to \(\mathcal{B}_{n}\) is increasingly costly in terms of memory when a classical memory is employed. We now turn to the derivation of the quantum complexity, \(Q_{\mathcal{A}\to\mathcal{B}_{n}}\). We explicitly construct the following quantum models where the classical causal states are encoded as \(|s_{i}\rangle=\otimes_{x}|s_{i}^{x}\rangle\) with \(|s_{i}^{x}\rangle=\sum_{k}\sum_{y}\sqrt{T_{ik}^{(y|x)}}|y\rangle|k\rangle\)[5; 8]. The \(\epsilon\)-transducers \(\mathcal{T}_{n}\) are such that for inputs \(x=0,2\) we have that \(|s_{i}^{x}\rangle=|x\rangle|x\rangle\) independent of the state index \(i\). Also, one can easily check that \(|s_{i}^{1}\rangle=\sum_{j\neq 0}\sqrt{p_{ij}}|j\rangle|j\rangle\). As a result, the overlap between causal states is given by \(\langle s_{i}|s_{j}\rangle=\sum_{k\neq 1}\sqrt{p_{ik}\,p_{jk}}\). The quantum complexity is the von Neumann entropy of the state \(\rho=\sum_{i}\varphi_{i}|s_{i}\rangle\langle s_{i}|\) where \(\varphi_{i}\) denotes the \(i-\)th element of the stationary distribution of the \(\epsilon\)-transducer \(\mathcal{T}_{n}\) when driven by the input \(\mathcal{A}\). From the fact that state \(\rho\) and the Gram matrix of overlaps, with elements \(G_{ij}=\sqrt{\varphi_{i}\varphi_{j}}\langle s_{i}|s_{j}\rangle\), have the same non-zero eigenvalues [13], thereby their von Neumann entropies coincide. When all the probabilities \(p_{ij}\) get arbitrarily close to each other, we have that \(G_{ij}\approx\sqrt{\varphi_{i}\varphi_{j}}\). In this case, the Gram matrix \(G\approx vv^{\top}\), with \(v^{\top}=(\sqrt{\varphi_{1}}\,,\ldots,\sqrt{\varphi_{n}})\) has approximately one eigenvalue equal to \(1\) and all others equal to \(0\), leading to a quantum complexity of \(0\). In Appendix C we give a precise argument and provide an upper bound for a small perturbation of the probabilities around the value \(\nicefrac{{1}}{{n-1}}\) so that \(p_{ij}=\frac{1+\delta_{ij}}{n-1}\), with \(|\delta_{ij}|\leq\delta\) for some small \(\delta>0\), and we show that the optimal quantum complexity is bounded as \[Q_{\mathcal{A}\to\mathcal{B}_{n}}^{\mathrm{upper}}(\delta)\geq Q_{\mathcal{A} \to\mathcal{B}_{n}}\geq 0\,, \tag{2}\] where \(Q_{\mathcal{A}\to\mathcal{B}_{n}}^{\mathrm{upper}}(\delta)\) can be made arbitrarily small by an appropriate choice of \(\delta\). Turning to the quantum complexity of the \(\epsilon\)-transducers from \(B_{n}\) to \(\mathcal{A}\), the situation is more intricate. We consider all possible classical channels and show that the complexity of all possible quantum models is bounded (see Appendix A). This follows from a maximal fidelity constraint that any model has to obey to correctly reproduce the channel [8]. Specifically, we can show that the optimal quantum complexity is bounded from below and above according to the inequality \[Q_{\mathcal{B}_{n}\to\mathcal{A}}^{\mathrm{upper}}\geq Q_{\mathcal{B}_{n}\to \mathcal{A}}\geq 0.55\,, \tag{3}\] where \(Q_{\mathcal{B}_{n}\to\mathcal{A}}^{\mathrm{upper}}\) depends on the value of \(n\); it takes the value \(0.682\) for \(n=3\) and approaches the classical complexity of \(0.918\) with increasing values of \(n\). Using Eqs. 
Using Eqs. (2) and (3), we obtain the following bounds on the quantum process causal asymmetry, \(\Delta_{Q}(\mathcal{A},\mathcal{B}_{n})\), \[Q_{\mathcal{A}\to\mathcal{B}_{n}}^{\mathrm{upper}}(\delta)-0.55\geq\Delta_{Q}(\mathcal{A},\mathcal{B}_{n})\geq-Q_{\mathcal{B}_{n}\to\mathcal{A}}^{\mathrm{upper}}. \tag{4}\] For any \(n\), there exists a range of \(\delta\) that makes the left-hand side of Eq. (4), and hence \(\Delta_{Q}(\mathcal{A},\mathcal{B}_{n})\), negative (see Appendix A). This implies that mapping process \(\mathcal{B}_{n}\) to \(\mathcal{A}\) has a higher memory cost than mapping \(\mathcal{A}\) to \(\mathcal{B}_{n}\) when a quantum memory is employed. In Fig. 4, we plot the classical and quantum process causal asymmetries, \(\Delta_{C}(\mathcal{A},\mathcal{B}_{n})\) and \(\Delta_{Q}(\mathcal{A},\mathcal{B}_{n})\), for \(n=3,\ldots,20\) and \(\delta=10^{-2}\). We see that the classical asymmetry grows logarithmically with \(n\) while the quantum one approaches a finite negative value.
Figure 4: (a) Classical complexities \(C_{\mathcal{A}\to\mathcal{B}_{n}}\) (blue, solid line) and \(C_{\mathcal{B}_{n}\to\mathcal{A}}\) (red, solid line), as well as the quantum complexities \(Q_{\mathcal{A}\to\mathcal{B}_{n}}\) (light blue shaded area) and \(Q_{\mathcal{B}_{n}\to\mathcal{A}}\) (orange shaded area). (b) Classical and quantum process causal asymmetries between \(\mathcal{A}\) and \(\mathcal{B}_{n}\). In magenta we show \(\Delta_{C}(\mathcal{A},\mathcal{B}_{n})\), while \(\Delta_{Q}(\mathcal{A},\mathcal{B}_{n})\) lies in the brown-shaded area.
_Discussion.--_ Transforming one stochastic process into another generally requires memory. Here, we introduced _process causal asymmetry_ to investigate the difference in memory costs required for transformations between a pair of stochastic processes \(\mathcal{A}\) and \(\mathcal{B}\). We demonstrated that agents with quantum memory exhibit dramatically different behaviour from those with classical memory. In particular, we provided the first explicit example where the direction of causation is flipped between classical and quantum models, and the gap grows unboundedly with system size. There are several natural directions for further research. The first involves linking our work to causal asymmetry in the context of causal vs. retrocausal models, where the memory costs of modeling stochastic processes in forward time vs. reverse time can differ [3]. In that setting, quantum models could remove this asymmetry, but a full reversal remains unknown [5]. Meanwhile, our quantum agents transform classical data to classical data, while quantum agents (combs) that transform quantum processes also exist [14; 15; 16]. It would be exciting to unify such settings and consider causal asymmetry when both time and the input-output roles are reversed, for both classical and quantum outputs. Conceptually, process causal asymmetry also hints at an underlying resource theory of temporal correlations. For example, a sequence of i.i.d. random variables would take no memory to generate, while transforming it into a highly non-Markovian process requires adaptive agents with increasing memory resources [17; 18]. This opens an interesting research direction towards constructing a formal hierarchy of non-adaptive agents and relating it to existing ideas from resource theories of non-Markovianity [19; 20].
Indeed, the power of memory and adaptive operations has already been recognized in quantum state preparation [21; 22], gate synthesis [23; 24], exhibiting contextuality [25; 26; 11; 27], and generating data-strings [18] as well as being a key differentiator between quantum error mitigation and error correction [28]. Therefore, this direction could provide valuable insights on when quantum agents have greatest advantage and where they are most useful. _Acknowledgements.--_ We thank Paul Riechers, Jayne Thompson, Thomas Eliott, Andrew Garner, Felix Binder and Alec Boyd for interesting discussions. This work is supported by the National Research Foundation of Singapore, and the Agency for Science, Technology and Research (A*STAR) under its QEP2.0 programme (NRF2021-QEP2-02-P06), the Singapore Ministry of Education Tier1 Grant RG77/22, the Singapore Ministry of Education Tier 2 Grant MOET2EP50221-0005, FQXI under grant nos. FQXi-RFP-IPW-1903 ("Are quantum agents more energetically efficient at making predictions?") and FQXi-RFP-1809 ("The role of quantum effects in simplifying adaptive agents") from the Foundational Questions Institute and Fetzer Franklin Fund (a donor-advised fund of Silicon Valley Community Foundation). H.K. and S.K. are supported by KIAS Individual Grant Nos. CG085301 (H.K.) and CG086201 (S.K.) at the Korea Institute for Advanced Study.
2309.05310
ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting, enabling robots to mimic human poses accurately. Contrary to prior deep-learning-based works, our method does not require paired human-to-robot data, which facilitates its translation to new robots. First, we construct a shared latent space between humans and robots via adaptive contrastive learning that takes advantage of a proposed cross-domain similarity metric between the human and robot poses. Additionally, we propose a consistency term to build a common latent space that captures the similarity of the poses with precision while allowing direct robot motion control from the latent space. For instance, we can generate in-between motion through simple linear interpolation between two projected human poses. We conduct a comprehensive evaluation of robot control from diverse modalities (i.e., texts, RGB videos, and key poses), which facilitates robot control for non-expert users. Our model outperforms existing works regarding human-to-robot retargeting in terms of efficiency and precision. Finally, we implemented our method in a real robot with self-collision avoidance through a whole-body controller to showcase the effectiveness of our approach. More information on our website https://evm7.github.io/UnsH2R/
Yashuai Yan, Esteve Valls Mascaro, Dongheui Lee
2023-09-11T08:55:04Z
http://arxiv.org/abs/2309.05310v3
# ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space ###### Abstract This paper introduces a novel deep-learning approach for human-to-robot motion retargeting, enabling robots to mimic human poses accurately. Contrary to prior deep-learning-based works, our method does not require paired human-to-robot data, which facilitates its translation to new robots. First, we construct a shared latent space between humans and robots via adaptive contrastive learning that takes advantage of a proposed cross-domain similarity metric between the human and robot poses. Additionally, we propose a consistency term to build a common latent space that captures the similarity of the poses with precision while allowing direct robot motion control from the latent space. For instance, we can generate in-between motion through simple linear interpolation between two projected human poses. We conduct a comprehensive evaluation of robot control from diverse modalities (i.e., texts, RGB videos, and key poses), which facilitates robot control for non-expert users. Our model outperforms existing works regarding human-to-robot retargeting in terms of efficiency and precision. Finally, we implemented our method in a real robot with self-collision avoidance through a whole-body controller to showcase the effectiveness of our approach. ## I Introduction In recent years, human-robot interaction (HRI) has gained significant attention as it plays a leading role in deploying robots into our daily lives. For a natural HRI, the robot needs not only to capture the human movements but also to understand the human motion intentions behind them. To enhance HRI, it is also crucial to intuitively retarget these human motions onto robots while preserving their similarity and improving robot autonomy. This paper addresses the challenge of enabling robots to mimic human motions while preserving the likeness of the original movement. However, retargeting human motions to robots is a complex task due to the fundamental differences between human and robot anatomies, kinematics, and motion dynamics. Unlike humans, robots possess rigid bodies, different form factors, and distinct physical limitations. Consequently, directly mapping human motion to robot actuators often leads to unnatural and suboptimal robot behavior, undermining the objective of achieving human-like movements. For example, when retargeting the motion of a human touching his head with his right hand, it is crucial that the retargeted robot poses also reproduce this touching behavior in their motion. Solely replicating the specific arm movements could lead to the robot hand not being close to the head due to the different robot kinematics. Encoding such motions in the retargeting task is essential to ensure that robots have more natural and intuitive behaviors, leading to better and easier HRI. While motion retargeting is a long-standing challenge in the robotic and animation community, most recent research has been focused on the exploration of large human motion capture datasets [1, 2] to learn and synthesize human motions from different modality inputs: text [3], 3D scene [4], audio [5] or conditioned by key poses [6]. Our primary goal in this research is to develop a novel method that eliminates the reliance on data annotation, thereby accomplishing the learning of a shared representation space in which human and robot poses are mutually and integrally represented. 
A good representation space ensures that similar poses from both domains are positioned close to each other while dissimilar poses are far apart. While previous research [7, 8] requires manually annotating human and robot pairs performing the same pose to learn this retargeting process, we consider an unsupervised training technique that does not require pairing data. Consequently, we can reduce the implementation costs for retargeting human poses to new robots. To this end, we propose an encoder-decoder architecture to construct a latent space that preserves the spatial relationships between human joints as well as the likeness of the original human motion. We achieve this process through the synergy of multiple losses. First, we adopt adaptive contrastive learning to autonomously construct the common latent space based on a proposed similarity metric. Then, we incorporate a reconstruction loss on robot data to ensure the regeneration of the same motion from the latent space. Finally, we enforce a consistency term to constrain that the robot faithfully follows the movement of the human. As a consequence, the constructed latent space remains tractable via simple operations. For instance, we are able to generate smooth robot motions between key poses by simply using linear interpolation in the latent space. This intuitive behavior facilitates motion control and also showcases the robustness of our learned latent space. Finally, our decoder can translate the latent representations to robot motion control commands. Contrary to prior methods that adopt soft safety measures in a learned approach [9], we implement our method in a real robot with a whole-body controller that ensures self-collision avoidance in the retargeted motion. Our pipeline allows for the seamless and real-time translation of human skeleton data into robot motion control. Additionally, our model can be easily integrated into the aforementioned deep learning architectures [3, 4, 5] to accommodate robot motion control from various modalities, enabling flexible and intuitive control over robot behavior. By addressing this challenge, we anticipate significant advancements in HRI. Our research has broad applications, including robot-assisted therapy, entertainment, teleoperation, and industrial robotics. Enabling robots to replicate human motion and intention opens up new possibilities for intuitive and natural HRI, enhancing user experience and fostering acceptance of integrating robots into our daily lives. Our work leads to the following contributions. 1. Unsupervised deep learning approach to learn human-to-robot retargeting without any paired human and robot motion data. 2. Robust and tractable latent space to generate smooth robot motion control through simple linear interpolation. 3. Direct mapping from human skeletons to robot control commands via an encoder-decoder neural network. 4. Evaluating control of a real robot from various modalities: text, video, or conditioned by key poses, which ensures user-friendly robot control, particularly for non-experts. ## II Related Work Existing literature on human-to-robot motion retargeting techniques is reviewed next, highlighting limitations and the need for advancements in translating human motion's overall expressivity and naturality. ### _Motion retargeting in animation_ Human motion retargeting onto animated characters has been a long-standing challenge in the computer graphics community.
By bridging the gap between human motion and animation, motion retargeting enhances the quality and naturality of character animation, opening up possibilities for various applications in fields such as film, gaming, and virtual reality. Classical motion retargeting approaches [10, 11, 12, 13] involved manually defining kinematic constraints and simplifying assumptions to map human motion onto animated characters. These methods were limited in their ability to handle complex motions and could not accurately capture human movement's nuances. However, with the increased availability of motion capture data [1, 2], data-driven approaches emerged as a more attractive alternative. These approaches offer the potential to overcome the limitations of classical methods and achieve more natural and nuanced motion transfer. [7, 8] learned a shared latent representation to translate motions between different kinematic agents. However, they required paired training data, which is costly and specific for each robot. To cope with the cost of pairing data, [14] used a recurrent neural network to learn motion retargeting without those pairs using adversarial training and cycle consistency. [15] showed that disentangling pose from movement in the retargeting process leads to more natural outcomes. However, these data-driven approaches required the same source and target kinematics. Inspired by the intuition that different kinematics can be reduced to a common primal skeleton, [16] proposed explicitly encoding the different skeleton topologies and projecting those into a shared latent space without pairing data. [16] adopted a latent consistency loss to ensure that the retargeted poses remain faithful to the source. Our work is inspired by their consistency idea, but we construct a more robust shared latent space through a contrastive loss which improves the retargeting outcome. Recently, [17, 18] focused on the motion retargeting but considering the mesh constraints of the animated characters, and thus adjusting motions to reduce interpenetration and feasibility of the motions. Contrary to the aforementioned works that consider self-collision avoidance as an additional feature for more realistic animation, our work ensures the feasibility of the retarget motion by implementing self-collision in the whole-body control of a real robot while preserving the source motion likeness. Finally, [18] proposed an Euclidean distance matrix to account for the motion retargeting, which is relevant for skeletons with similar proportions but underperforms when the targets have different trunk-to-arms ratios, as in our case. On the contrary, we propose to formulate this similarity through global rotations, which precisely capture the likeness in the retargeting task. ### _Motion retargeting in robotics_ Despite the great success of motion retargeting for character animation, their community has only been considering the feasibility of the movements in terms of physical constraints [17, 18, 19, 12]. Besides ensuring motion's feasibility, robotics research also requires adequate control of the appropriate robot based on the source motion. [19, 20] considered constrained optimization algorithms to retarget a human motion in a simulated robot but required learning a given trajectory and can not quickly overcome new variations. 
[21] proposed Bayesian optimization and inverse kinematics (IK) to tackle natural retargeting, but their approach required manually selecting joints of interest and was constrained to a few specific motions. Likewise, [19, 22, 23] considered whole-body retargeting by mapping human link orientation to robots and solving IK. [22] introduced a dynamic filter to enforce robot stability, which also over-smoothed the robot poses, thus failing to capture the motion nuances in the retargeting. Moreover, the method of [22] did not generalize to new kinematics. To cope with that issue, [23] proposed to solve the IK over the robot model, which facilitated the generalization to new robots. For that, [23] orients the robot links closer to the corresponding human links to better capture the likeness in the retargeting. We adopt a similar approach by considering the global rotation of body links as the similarity measurement between humans and the retargeted robot pose. However, all these previous works failed to overcome the manual morphing problem [24]: the challenge of mapping in the joint space from human to robot, which requires similar joint orders among the human and robot. On the contrary, our work does not focus on the task of retargeting the poses while keeping the robot balanced [22, 23], but on the generalization of a unique method for human-robot retargeting with accuracy and capturing the nuances. Closer to our work, [25] proposed a learning-based footstep planner and a whole-body controller to retarget the human locomotion to a robot while being coherent with the generated footsteps. However, [25] only considered locomotion retargeting and assumed that the robot had at least one known contact with the environment at any time. Therefore, [25] was inappropriate for contact-free motions such as jumping or running. Deep learning has become a solution to ensure the retargeting process generalizes in terms of kinematics and diversity in the motions while being efficient. First, [26] proposed to construct a shared latent space to retarget human motion to humanoid robots, where the shared latent space is constructed with annotated human-to-robot pair data. Gathering a sufficient quantity of paired data for constructing the latent space is a laborious and time-intensive process and hinders the generalization to new configurations. [9] extended this approach by creating an automated paired data generation process. However, both works have to use nonparametric optimization in the latent space to retrieve similar robot poses to control the robot, which is inefficient if the dataset to retrieve is large. Contrastingly, our method learns a direct mapping from human poses to robot control commands. Therefore, our approach can control a robot at a high rate without being constrained by the quantity of training data. ## III Methodology In this section, we present an overview of our proposed framework for unsupervised human-to-robot motion retargeting via a shared latent space. First, we formulate the human-to-robot retargeting task. Then, we describe our encoder-decoder deep learning architecture, illustrated in Figure 2. ### _Problem Formulation_ Let \(\mathbf{x}_{h}=[x_{h,1},\cdots,x_{h,J_{h}}]\in\mathbb{R}^{J_{h}\times n}\) be a human pose composed of \(J_{h}\) joints. Similarly, \(\mathbf{x}_{r}=[x_{r,1},\cdots,x_{r,J_{r}}]\in\mathbb{R}^{J_{r}\times s}\) represents a robot pose.
Then, the task of human motion retargeting can be formulated as finding a function \(f\) that maps a \(\mathbf{x}_{h}\) to \(\mathbf{x}_{r}\) (\(f:\mathbf{x}_{h}\longmapsto\mathbf{x}_{r}\)) so that \(\mathbf{x}_{r}\) preserves the human-like naturality of the pose \(\mathbf{x}_{h}\). However, the joints for humans and robots usually have different configurations: a human joint (e.g., wrist joint) can have more than 1DoF, while one robot joint usually has only 1DoF. To cope with such differences, we describe each human joint \(x_{h,j}\) as its quaternion representation referring to its parent (\(n=4\)), while each robot joint \(x_{r,j}\) (i.e., revolute joint) is described as its joint angle (\(s=1\)). In our particular case, and contrary to all works focusing on character animation, we are interested in the direct control of a robot. Robots can be controlled via their joint angles. As joint angles for robots and humans have different configurations, it makes little sense to compare joint angles to measure their similarity. Inspired by [23], we propose to use the global rotation of body links to compare the similarity between human and robot poses, which better captures their likeness and allows for better generalization to different kinematics. The similarity metric is defined in Section III-B. Previous works [9, 26] rely on the acquisition of a dataset of mapped motions between the human and the robot to retarget, which we describe as a \(\{\mathbf{x}_{h},\mathbf{x}_{r}\}\) pair. These works learn the retargeting function \(f\) in a supervised manner. On the contrary, we consider the retargeting task without collecting the correct \(\{\mathbf{x}_{h},\mathbf{x}_{r}\}\) pair and learn without supervision how to approximate \(f\) better. To this end, our model first learns to project human \(\mathbf{x}_{h}\) and robot \(\mathbf{x}_{r}\) poses to the same representation space. Then, we decode the learned representation to robot joint angles, which allows us to control the robot directly. ### _Cross-domain similarity metric_ To create a shared latent space in an unsupervised way, we initially define a similarity metric that captures the likeness of the poses between humans and robots. Contrary to prior works that use the local quaternions [16] or the relative XYZ position of the end effector [18], we consider the global rotation of body limbs as the similarity metric that better preserves the skeleton visual appearance. By using global rotation, our model captures the complete 3D orientation and remains invariant to coordinate systems and articulation variations. Let \(q_{h,j}\) and \(q_{r,j}\) represent the global quaternions of the same limbs (e.g., shoulder-to-elbow, elbow-to-wrist, etc.) of a human pose \(\mathbf{x}_{h}\) and a robot pose \(\mathbf{x}_{r}\). As a human pose is represented as limb quaternions, it is straightforward to obtain \(q_{h,j}\) from \(\mathbf{x}_{h}\). To get limb quaternions of a robot, we utilize forward kinematics to map robot joints \(\mathbf{x}_{r}\) to its limb quaternions \(q_{r,j}\). Then, the distance between the two poses can be computed as shown in Equation 1, where \(<,>\) denotes the dot product between two vectors. \[S_{GR}(\mathbf{x}_{h},\mathbf{x}_{r})=\sum_{j}(1-<q_{h,j},q_{r,j}>^{2}) \tag{1}\] \(S_{GR}\) is employed to measure the similarity between two poses used for contrastive learning in Section III-C. ### _Human-to-Robot shared representation_ We formulate the task of motion retargeting as the translation between two domains. 
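Before detailing the shared representation, a minimal sketch of the global-rotation distance of Eq. (1) (our illustration, assuming the limb quaternions are already matched across domains and normalized):

```python
import numpy as np

def s_gr(q_human, q_robot):
    """Cross-domain distance of Eq. (1): sum over matched limbs of
    1 - <q_h, q_r>^2, where each row is a unit quaternion giving a limb's
    global rotation.  Identical orientations (up to sign) contribute 0."""
    dots = np.sum(q_human * q_robot, axis=1)    # per-limb quaternion dot products
    return float(np.sum(1.0 - dots ** 2))

# Example: the first limb matches exactly, the second is rotated about one axis.
q_h = np.array([[1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0]])
q_r = np.array([[1.0, 0.0, 0.0, 0.0], [np.cos(np.pi / 8), np.sin(np.pi / 8), 0.0, 0.0]])
print(s_gr(q_h, q_r))   # 0 for the matching limb, > 0 for the rotated one
```

Because the dot product is squared, \(q\) and \(-q\) (which encode the same rotation) give the same contribution, which is one reason the metric is robust to quaternion sign conventions.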
We adopt two multi-layer perceptron (MLP) encoders (\(Q_{h}\), \(Q_{r}\)) to project the human and robot poses to a shared representation space, respectively. This way, \(Q_{h}\) projects \(\mathbf{x}_{h}\in\mathbb{R}^{J_{h}\times n}\) to \(z\in\mathbb{R}^{d}\) while \(Q_{r}\) translates \(\mathbf{x}_{r}\in\mathbb{R}^{J_{r}\times s}\) to \(z\in\mathbb{R}^{d}\). Given a human pose \(\mathbf{x}_{h}\), our shared latent space is used as a bridge to generate \(\mathbf{x}_{r}\) while conserving its similarity defined in Section III-B. We propose to learn the retargeting function \(f:\mathbf{x}_{h}\longmapsto\mathbf{x}_{r}\) without any paired human and robot motion data. Inspired by the recent success of contrastive learning methods (e.g., CLIP [27]), we propose to construct a shared latent space between two domains (here human and robot poses) in an unsupervised manner. Contrastive learning is a training technique that aims to learn from unlabeled data by comparing and contrasting different instances according to given similarity metrics. To do that, a neural network is optimized to maximize the agreement between positive pairs (similar instances) and minimize the agreement between negative pairs (dissimilar instances). Let us assume a large set of data that contains feasible human poses \(\mathbf{x}_{h}\) and robot poses \(\mathbf{x}_{r}\). Our method randomly selects triplets of projections from these data instances. As shown in Figure 2, \(\mathbf{x}_{h}^{i},\mathbf{x}_{h}^{j}\) and \(\mathbf{x}_{r}^{k}\) are a triplet. Then, we first encode them to the shared latent space through \(Q_{h}\) and \(Q_{r}\), respectively. For the encoded triplet \((z^{i},z^{j},z^{k})\), \(z^{i}\) is randomly selected as an anchor \(z_{o}^{i}\), which serves as the reference. We compute the global rotation distance \(S_{GR}\) detailed in Equation 1 to obtain the similarity between our anchor pose \(z_{o}^{i}\) and the two other poses \((z^{j},z^{k})\). The dissimilar \(z^{j}\) is a negative sample \(z_{-}^{j}\) while \(z^{k}\) is a positive sample \(z_{+}^{k}\). Then, we adopt the Triplet Loss [28] that pulls similar samples (anchor \(z_{o}^{i}\) and positive \(z_{+}^{k}\)) close while simultaneously pushing dissimilar samples (anchor \(z_{o}^{i}\) and negative \(z_{-}^{j}\)) away in the latent space. This allows a representation space where similar instances are clustered together and dissimilar instances are pushed apart. Equation 2 shows the Triplet Loss \(\mathcal{L}_{triplet}\) used in our scenario, where \(\alpha=0.05\). \[\mathcal{L}_{triplet}=\max(||z_{o}^{i}-z_{+}^{k}||_{2}-||z_{o}^{i}-z_{-}^{j}||_{2}+\alpha,0) \tag{2}\]
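A minimal PyTorch-style sketch of the contrastive term in Eq. (2) (the framework and batching are our assumptions; the paper does not list its implementation):

```python
import torch

def triplet_loss(z_anchor, z_pos, z_neg, alpha=0.05):
    """Eq. (2): pull the positive latent towards the anchor and push the
    negative latent away by at least the margin alpha.
    All arguments are batches of latent vectors of shape (B, d)."""
    d_pos = torch.norm(z_anchor - z_pos, dim=-1)   # ||z_o - z_+||_2
    d_neg = torch.norm(z_anchor - z_neg, dim=-1)   # ||z_o - z_-||_2
    return torch.clamp(d_pos - d_neg + alpha, min=0.0).mean()
```

Which of the two non-anchor samples acts as positive and which as negative is decided by comparing their \(S_{GR}\) distance to the anchor, as described above.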
### _Shared representation to robot control_ Our proposed encoders allow us to project human poses and robot poses into a shared representation space. Therefore, the next step is to learn how to decode latent variables \(\mathbf{z}\) sampled from the shared space into the robot joint space that can be directly used to control the robot. As shown in Figure 2, the decoder \(D_{r}\) decodes the latent variables \(z^{j}\) and \(z^{k}\) to robot data \(\mathbf{\hat{x}}_{r}^{j}\) and \(\mathbf{\hat{x}}_{r}^{k}\), respectively. As \(z^{k}\) is encoded from the robot data \(\mathbf{x}_{r}^{k}\), we employ a standard reconstruction loss over \(\mathbf{\hat{x}}_{r}^{k}\) and \(\mathbf{x}_{r}^{k}\), as shown in Equation 3. Additionally, to ensure that the predicted robot data \(\mathbf{\hat{x}}_{r}^{j}\) from human data \(\mathbf{x}_{h}^{j}\) is from the same distribution as the real robot data, we adopt the latent consistency loss shown in Equation 4 to encourage direct mapping in the retargeting process, similar to [16]. \[\mathcal{L}_{rec}=||\mathbf{x_{r}}-D_{r}(Q_{r}(\mathbf{x_{r}}))||_{1} \tag{3}\] \[\mathcal{L}_{ltc}=||Q_{h}(\mathbf{x_{h}})-Q_{r}(D_{r}(Q_{h}(\mathbf{x_{h}})))||_{1} \tag{4}\] Our approach employs an end-to-end training strategy, enabling the encoders to learn a shared representation space for both human and robot poses in an unsupervised manner while ensuring that this representation space is reconstructible to robot control through our decoder. The total loss employed during training is a weighted sum as described in Equation 5, where \(\lambda_{triplet}=10,\lambda_{rec}=5\). \[\mathcal{L}=\lambda_{triplet}\mathcal{L}_{triplet}+\lambda_{rec}\mathcal{L}_{rec}+\mathcal{L}_{ltc} \tag{5}\] ## IV Experiments The experimental setup and datasets used to evaluate the performance of our model are presented, along with the metrics and benchmarks employed to assess the accuracy and fidelity of the retargeted robot motions. ### _Experiment Settings_ The hyperparameter configurations used in our framework are listed in this subsection. The network consisting of two encoders and one decoder is trained end-to-end with a learning rate of 0.001 and a batch size of 256. The encoder and decoder are multi-layer perceptrons with the same structure; they have 6 hidden layers, each with 128 units. The shared latent space has 8 dimensions. Adam [29], a momentum-based method, is utilized to optimize the loss function during training. We trained our model for 2.5 hours until the losses reached convergence. We did not experiment with the hyperparameters but chose default values to simplify the training. We acknowledge that further finetuning of those parameters could result in improvements in our results. We use Ubuntu 22.04 and an RTX A4000 graphics card for our experiments. Additionally, we employ a bi-manual TiaGo++ robot that integrates two 7-DoF arms. In this paper, we focus on the motion of the upper and lower arm parts. We ignore the motion of the two hands because the HumanML3D human motion dataset [2] used does not contain hand motions. Therefore, the similarity metric \(S_{GR}\) in Equation 1 is defined on four limbs: left shoulder-to-elbow, left elbow-to-wrist, right shoulder-to-elbow, and right elbow-to-wrist. To control the robot in the real world, we send joint commands to the whole-body controller [30] integrated in the TiaGo++ robot. The whole-body controller handles joint angle limits, joint velocity limits, and self-collision avoidance. ### _Data collection_ We present a robot pose generation procedure that requires only the robot's kinematic information. First, we sample the robot joint angles from its configuration space. The robot pose can be computed by following its forward kinematics. In such a way, we collect around 15M poses from the TiaGo++ robot by randomly sampling angles per joint. For human motions, we use the HumanML3D dataset [2] that consists of 14616 motions with a total length of 28.59 hours, summing up to around 20M poses. HumanML3D covers human daily activities (e.g., 'walking', 'jumping'), sports (e.g., 'playing golf'), acrobatics (e.g., 'cartwheel'), and artistry (e.g., 'dancing'). In HumanML3D, a human pose is represented by its skeleton.
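A sketch of the robot-pose generation just described (our illustration: the joint limits and the forward-kinematics routine are placeholders, not values from the paper):

```python
import numpy as np

def sample_robot_poses(num_poses, joint_low, joint_high, forward_kinematics, rng=None):
    """Draw joint-angle vectors uniformly from the robot's configuration space
    and convert each one to limb rotations via the supplied forward kinematics."""
    if rng is None:
        rng = np.random.default_rng()
    angles = rng.uniform(joint_low, joint_high, size=(num_poses, len(joint_low)))
    limb_quats = np.stack([forward_kinematics(a) for a in angles])
    return angles, limb_quats   # joint-space samples and their limb rotations

# Illustrative limits for a 14-DoF dual-arm configuration (placeholder values):
low = np.full(14, -np.pi / 2)
high = np.full(14, np.pi / 2)
# angles, quats = sample_robot_poses(10_000, low, high, tiago_forward_kinematics)
# `tiago_forward_kinematics` is a hypothetical routine returning limb quaternions.
```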
As robot poses are sampled randomly from the configuration space, they are not matched to human poses in HumanML3D. ### _Baseline_ We implement S\({}^{3}\)LE [9] as our baseline. To train S\({}^{3}\)LE, we use a similar method as mentioned in [9] to generate paired data. We generate the same amount of paired data as in [9], 200K, by selecting the pairs with minimal rotation distance measured by Equation 1. The paired data is only used to train our baseline method. ### _Quantitative evaluation_ To evaluate the performance of each retargeting method, we annotated 11 distinct motions that were not observed while training. The annotated motions serve as the ground truth for our evaluation. We employ the Mean Square Error (MSE) of joint angles between ground truth and predicted results to quantify our proposed method. Furthermore, our method endeavors to address motion retargeting in real-time scenarios. We thoroughly evaluated the computational efficiency and speed at which our model operates. Table I compares our method with the baseline in Section IV-C. Our method outperforms the baseline in terms of MSE of joint angles. Furthermore, our novel approach demonstrates a notable increase in operational efficiency, surpassing the baseline by more than a factor of three. With a speed of 1.5kHz, our method readily fulfills the requirements of most advanced robot control systems. ### _Qualitative evaluation_ Visually compelling examples and comparisons between the original human and retargeted robot motions are showcased in Figure 3. For the selected human motions, we annotated their ground truth shown in the second row. Our method accurately retargets the motion when the input skeleton lifts hands above his head, lifts hands to his chest, or performs T-pose, whereas the baseline fails. ### _Ablation Study_ An ablation study is conducted to systematically analyze the impact of individual loss components in our proposed model. We utilize three loss components in our approach to optimize retargeted motions. When analyzing the results in Table II, it becomes apparent that the removal of the latent consistency loss \(\mathcal{L}_{ltc}\) results in a slight reduction in the performance of our method. On the contrary, the Triplet loss \(\mathcal{L}_{triplet}\) is indispensable for the optimization process. As supported by the experimental results, eliminating \(\mathcal{L}_{triplet}\) significantly increases the loss value, rising from 0.21 to 0.57. This underscores the crucial role played by \(\mathcal{L}_{triplet}\) in achieving improved optimization outcomes, contrary to all previous works that do not explore our contrastive training. ### _From RGB videos to robot motions_ The proposed method can generate natural and visually similar motions from RGB videos. We adopt [31] to obtain human 3D skeletons from RGB images in real-time. We extended [31] with the state-of-the-art YOLOv8 [32] for human detection and tracking to optimize the speed. Since there is no ground truth, we only show snapshots of reference images and corresponding TaGo poses in Figure 4 for qualitative evaluation. We implement the whole pipeline that runs in real-time to control the robot's motions based on the human video. ### _From texts to robot motions_ Text is an essential modality for human motions. Using a pre-trained motion synthesis model, Text-to-Motion Retrieval [33], our method can generate robot motions with texts. 
To this end, we first retrieve human motions from texts with Text-to-Motion Retrieval and then retarget human motions to the TiaGo++ robot. Figure 5 shows two examples of retargeting motion from texts. More examples can be found on our webpage. ### _From key poses to robot motions_ Our training strategy allows us to build a shared latent space that covers diverse motions. The contrastive loss \(\mathcal{L}_{triplet}\) makes similar poses close and dissimilar poses far away in latent space. In such a way, our proposed method learns a smooth latent space, which enables us to interpolate motions between key poses. In Figure 6, we show three key poses: A, B, and C, and the interpolated in-between motions. For interpolation, two key poses (e.g., A and B) are mapped into two points in latent space, and intermediate steps can be linearly interpolated in between them. The in-between motions are decoded from these interpolated steps. ### _Future work_ Our work proposed to construct a likeness-aware latent space that unifies human and robot representations seamlessly and allows for real-time robot control. While our model exhibits high precision in the retargeting process, we still observe room for improvement. Better exploring the similarity metrics between the different domains, as well as connecting the shared space to higher-level representations (textual descriptions of the poses), will be considered in the future to enhance human-to-robot retargeting.

TABLE I: Performance of our proposed method and the baseline. The Mean Square Error (MSE) of joint angles between ground truth and predicted results is compared; bold fonts indicate better results.

| | Joint Angles (MSE) | Control Frequency (kHz) |
| --- | --- | --- |
| Baseline | 0.44 | 0.4 |
| Ours | **0.21** | **1.5** |

TABLE II: Ablation study of proposed loss components. Mean Square Error (MSE) of joint angles between ground truth and predicted results; bold fonts indicate better results.

| \(\mathcal{L}_{triplet}\) | \(\mathcal{L}_{rec}\) | \(\mathcal{L}_{ltc}\) | MSE |
| --- | --- | --- | --- |
| ✓ | ✓ | ✗ | 0.24 |
| ✗ | ✓ | ✓ | 0.57 |
| ✓ | ✓ | ✓ | **0.21** |

Fig. 3: **Human Retargeting comparison for different key poses.** Various human skeleton key poses are retargeted to the TiaGo robot. Our model captures the initial pose's visual similarity and is closely related to the manually annotated ground-truth poses.

## V Conclusions In this paper, we presented an unsupervised motion retargeting method that ensures a shared latent space for motion generation. To this end, we use contrastive learning combined with deep latent space modeling to incorporate human and robot motion data. To construct a shared representation of human and robot motion, we define a cross-domain similarity metric based on the global rotation of different body links. Similar motions are clustered together, and dissimilar motions are pushed apart while constructing the latent space. Furthermore, our decoder maps the shared representation to robot joint angles to control a robot directly without any additional optimization process. Additionally, we connect our model with existing pre-trained models to achieve motion retargeting from different modalities, such as controlling the robot with given texts or retargeting from RGB videos.
Moreover, our learned latent space remains tractable and allows for the generation of smooth motion inbetweening between two distinct key poses through linear interpolation in the projected latent space. We showcase all results and the robustness of our model through various experiments, both quantitatively and qualitatively. ## Acknowledgment This work is funded by Marie Sklodowska-Curie Action Horizon 2020 (Grant agreement No. 955778) for project 'Personalized Robotics as Service Oriented Applications' (PERSEO).
2309.13214
Assessing the Impact of Personality on Affective States from Video Game Communication
Individual differences in personality determine our preferences, traits and values, which should similarly hold for the way we express ourselves. With current advancements and transformations of technology and society, text-based communication has become ordinary and often even surpasses natural voice conversations -- with distinct challenges and opportunities. In this exploratory work, we investigate the impact of personality on the tendency of players of a team-based collaborative alternate reality game to express themselves affectively. We collected chat logs from eleven players over two weeks, labeled them according to their affective state, and assessed the connection between them and the five-factor personality domains and facets. After applying multi-linear regression, we found a series of reasonable correlations between (combinations of) personality variables and expressed affect -- as increased confusion could be predicted by lower self-competence (C1), personal annoyance by vulnerability to stress (N6), and expressing anger occurred more often in players that are prone to anxiety (N1), less humble and modest (A5), think less carefully before they act (C6) and have higher neuroticism (N). Expanding the data set, sample size and input modalities in subsequent work, we aim to confirm these findings and reveal even more interesting connections that could inform affective computing and games user research equally.
Atieh Kashani, Johannes Pfau, Magy Seif El-Nasr
2023-09-22T23:24:37Z
http://arxiv.org/abs/2309.13214v1
# Assessing the Impact of Personality on Affective States from Video Game Communication ###### Abstract Individual differences in personality determine our preferences, traits and values, which should similarly hold for the way we express ourselves. With current advancements and transformations of technology and society, text-based communication has become ordinary and often even surpasses natural voice conversations - with distinct challenges and opportunities. In this exploratory work, we investigate the impact of personality on the tendency of players of a team-based collaborative alternate reality game to express themselves affectively. We collected chat logs from eleven players over two weeks, labeled them according to their affective state, and assessed the connection between them and the five-factor personality domains and facets. After applying multi-linear regression, we found a series of reasonable correlations between (combinations of) personality variables and expressed affect - as increased confusion could be predicted by lower self-competence (C1), personal annoyance by vulnerability to stress (N6), and expressing anger occurred more often in players that are prone to anxiety (N1), less humble and modest (A5), think less carefully before they act (C6) and have higher neuroticism (N). Expanding the data set, sample size and input modalities in subsequent work, we aim to confirm these findings and reveal even more interesting connections that could inform affective computing and games user research equally. Affective Computing, Individual Differences, Five Factor Model, Alternate Reality Games + Footnote †: This work is funded by James S McDonnell Foundation (Grant Title: A Methodology for Studying the Dynamics of Resilience of College Students). ## I Introduction Communication is a complex subject that can be influenced by numerous factors including individual differences and their emotional or affective states. During a communication act, individuals express affects in different ways by their choice of words, facial expression, vocal features, gesture and body language. Both verbal and nonverbal cues play an important role in the way that affect is expressed and interpreted through communication [1, 2]. With the rise of digital media, communications are increasingly performed using text-based computer-mediated communication. The lack of nonverbal cues in mediated communication has led to the assumption that text-based communication has a reduced capacity for exchanging affective states [3]. However, text-based communication can convey various ranges of emotions and affects by adapting forms that are distinct from those found in nonverbal communication [4, 5]. During text-based communication, communicators encode the emotions and affects that they would normally communicate through nonverbal cues into other forms such as emoticons, deformed spellings, punctuation, acronyms and special abbreviations [6, 7, 8]. In addition, synchronous real-time text communication can capture some of the synchronicity that is associated with voice or face-to-face communication [9]. Thus, the emergence of real-time and online communication platforms has created new avenues for studying the verbal behaviour phenomenon and its psychological correlates.
Individual differences refer to the variations that exist between humans with regard to personality, cognition and behaviours. Personality has been defined as "a stable, organized collection of psychological traits and processes in the human being that influences his or her interactions with and modifications to the psychological, social and physical environment surrounding them" [10]. The different personality traits can manifest in various ways including how individuals experience and express affects or emotions in verbal communication. The Five Factor Model (FFM) [11] is the most accepted and widely used personality theory that provides a systematic assessment of emotional, interpersonal, experiential, attitudinal, and motivational styles. While the five overarching domains of the FFM are too broad to capture the complex human personality in detail, underlying individual facets form a more precise description of personality to differentiate between individuals and their behaviours, including the expression of affects [12]. Video games have the potential to place individuals in the continuous mode of interaction that evoke emotional and affective responses. Players are drawn to play games not only for enjoyment and achieving rewards but also for engaging in experiences that may even elicit negative emotions like sadness, anxiety, and frustration [13]. Game features such as mechanics, interactive gameplay, storyline and immersive graphics make them a unique platform in affective computing research [14] for studying psychological constructs and social phenomena. In particular, Alternative Reality Games (ARG) can construct a close connection to reality, as they embed players in a fictional narrative that unfolds through interaction with real-world applications, such as mobile phones, text messages and social networks [15]. In ARGs, the interactions and in-game events often mimic real life situations that can engage participants for an extended period of time. Utilizing ARGs allows researchers to incorporate engaging and ecologically valid methods to study various aspects of human behavior by capturing multi-dimensional data on humans' interactions and communications. Altogether, this creates a unique opportunity to study the impact of personality on verbal expression of affects set in ARG-mediated communication, which we approach in this work. Investigating the connection between personality and expression of affect can lead to several potential benefits such as more inclusive design, adaptive personalization and tailored interventions through understanding individual differences. Thus, we formulate our research endeavor into the following research question: * Can we identify connections between individual personality differences and the tendencies to express oneself in distinct affective categories from in-game chat conversation? By exploring and presenting initial relations between personality and affect expression through game communication, we contribute to games user research and affective computing. ## II Related Work Previous studies show that the expression of emotions or affects in conversation varies as a function of individual differences and personality traits [16, 17, 18, 19, 20]. Holtgraves investigated the correlations between the five-factor model of personality (extroversion, neuroticism, agreeableness, conscientiousness, and openness to experience) and how it impacted the use of language in text messaging [16]. 
He reported that increased neuroticism was associated with the more frequent occurrence of negative emotion words, higher scores on extroversion were associated with the occurrence of fewer negative words, and agreeableness was negatively correlated with the use of negative emotion words. Another study also found that agreeableness is positively correlated with the occurrence of positive emotion words and negatively with negative emotion words [20]. Komulainen et al. reported that conscientiousness positively associate with positive affect and negatively associate with negative affect [18]. Consistent with previous findings, recent studies show that individuals high on self-reported extroversion tend to use more positive emotion words [21] and individuals high in conscientiousness demonstrate their prudence by refraining from expressing negative emotions [22]. For different application purposes, Volkmar et al. tailored in-game achievements to individual differences and measured an increase of player experience if matching properly [23]. Teng et al. used player journey map segmentation to investigate differences in gameplay based on - or influencing - higher-level metrics, which are not limited to personality variables [24, 25]. Habibi et al. measured differences in physiological responses between different personalities, especially higher impact of stress on more extroverted persons [26]. In subsequent work, they also predicted individual personality differences from low-level in-game behavior and (pre-defined) communication choices [27], which were yet far from unconstrained, natural speech. The mentioned studies considered only five factors of personality traits and none of them examined how facets manifested in natural text-based communication. In addition, these studies reported the impact of personality traits on the broad emotional or affective states (positive and negative). Therefore, the specific and discrete expression of emotions and affects were not examined. The current study attempts to address those limitations by utilizing a serious ARG to examine how the occurrence of expression of distinct affects in verbal communication varies as a function of individuals' personality traits and facets. ## III Study To situate our investigation into a suitable Alternate Reality Game, we draw on the game called _LUX_[28], which was developed and the data were collected by a group of researchers and developers with the aim of measuring resilience and coping strategies in first-year undergraduate students. _LUX_ is a multiplayer team-based cooperative game designed to foster communication within solving complex puzzles and challenges. It is set in a fictional narrative and takes place and interacts with the real world, while presenting challenges and stressors to assess emotional and affective responses. The game is composed of multiple episodes, and each episode consists of a series of puzzles in which players need to communicate with a bot and other team members through Discord in order to solve them. We collected the players' chat data, identified affective states throughout the messages and linked them to their self-reported FFM personality variables (cf. Section III-A). 
### _Measures_ For measuring the participants' personality, we have utilized the Revised NEO Personality Inventory (NEO PI-R) [12] as the standard self-report questionnaire measure of the Five Factor Model (FFM), which provides a systematic assessment of emotional, interpersonal, experiential, attitudinal, and motivational styles. The NEO PI-R is a concise measure of the five major domains of personality (Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness). Within each of those broader domains, six specialized traits (facets) together represent a given domain score (which add up to 30 facets in total). For qualitative classification of the chat data, we have employed Plutchik's Wheel of Emotion [29] and the taxonomy of affects as discussed by [9] to develop a set of labels. In total, we ended up with ten labels as outlined in Table I. This label set was then utilized to label the players' conversation.. We also considered a "no affect" label to exclude messages that do not express any affective state. For this investigation, one researcher served as an annotator for all players' conversations. They considered the impact of the situational context and surrounding messages in the conversation to apply the code that captures the affect expressed in the message. They also took into account the impact of verbal cues such as emojis, slang and abbreviations that influenced the affective meaning of the entire sentence. In some cases, these did express actual affects, or were used to avoid misleading others (e.g. after being sarcastic). A total number of 3748 lines of utterances has been labeled line by line, with up to one affective label each. ### _Procedure_ We recruited participants through an on-boarding event on campus, where they agreed to informed consent and data collection. We then asked participants to form a team with three members to start the game, resulting in five teams in total: four teams with three members each and one team with four members. A total of 16 players played the game through two weeks of playtesting, and submitted a post-study questionnaire containing the discussed metrics afterwards. Yet, one team with four members did not finish the game and another single player failed to submit the NEO personality questionnaire, which we excluded from further investigation. From the remaining eleven players, six identified as male and five identified as female, distributed into four teams. ## IV Results We applied a multi-linear regression model using Python Scikit-learn library and analyzed the data to predict the occurrence of each affect based on a personality domain/facet or a combination of up to four personality domains/facets. To evaluate the results of the prediction, we calculated the Mean Square Error (MSE) values for each personality facet/domain combination. Table II shows the combination of personality domains/facets that can predict the conversational affect occurrence with highest accuracy (lowest MSE) after five-fold cross-validation. In addition, we calculated the coefficient associated with each particular personalty domain/facet to assess the direction and effect size onto the expressed affect. To benchmark these outcomes against a control condition, we considered two baselines that follow assumptions that personality would have no impact on the prediction of affect. In the first, the possibility of the occurrence of each affective state in the conversation is equal for all the affects. 
Considering the ten different affective states in our sample, the probability of occurrence of each affective state in conversation is thus 10%. The second baseline acknowledges that different affective states are differently likely to appear in the data and is thus constructed based on the mean of the total occurrence of each affect in our sample (cf. columns BL1 and BL2, respectively). Since the number of the "Supportive" affect label in the players' conversation is higher than the other affect labels, the naive BL1 and BL2 would show especially high MSE in contrast. The results showed that the combination of four personality domains/facets predicts the conversational affect occurrence with the highest accuracy (lowest MSE) on the testing set. We included the top three combinations together with their coefficients towards the affective state. For example, when predicting the occurrence of "Anger", the combination of "Anxiety", "Modesty", "Deliberation" and "Neuroticism" had a comparatively low MSE of 0.84, as compared to the two baselines (\(MSE_{BL1}=60.6\) and \(MSE_{BL2}=9.8\)). When taking the coefficients into account as well, personalities with higher "Modesty" and "Deliberation" indicated fewer expressions of "Anger", while higher "Anxiety" and "Neuroticism" correlated rather positively with "Anger". ## V Discussion When interpreting the results of the prediction (as summarized in Table II), certain relationships could be identified that are arguably reasonable, while others do not necessarily align with the background literature, or display inconclusive results, which we outline in the following. Although we attempted to support our findings with relevant psychological literature, we encountered a lack of sufficient research in some areas. Therefore, we proceeded with interpreting the outcomes. "Confusion" was best predicted by the personality facets of E5, C1, N and O. The strong negative correlation between a person's perceived self-competence (C1) and the probability of being confused by a logical puzzle of the game seems coherent. Also, as this method does not necessarily measure confusion itself but rather the chance of expressing that one is confused (in comparison to other personalities), it is reasonable that this willingness to admit one's own confusion goes along with higher Openness (O) and lower Neuroticism (N) in general. Individuals who are more vulnerable to stress (N6) also expressed being "Annoyed" more often. The same holds for people with less emotional warmth (E1) and less modesty or humbleness (A5), which definitely stands to reason. This is only underlined by the positive correlation to straightforward personalities (A2), as they are arguably less likely to withhold their frustration.
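For reference, the per-affect regression with five-fold cross-validation described in Section IV can be sketched as follows (our illustration: the data layout, facet encoding and exhaustive search are assumptions, not the authors' pipeline):

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def best_facet_combination(X, facet_names, y, max_facets=4):
    """Search combinations of up to `max_facets` personality columns in X and
    return the combination whose linear model predicts the affect counts y with
    the lowest five-fold cross-validated MSE, together with its coefficients."""
    best = (np.inf, None, None)
    for r in range(1, max_facets + 1):
        for idx in combinations(range(X.shape[1]), r):
            cols = X[:, list(idx)]
            mse = -cross_val_score(LinearRegression(), cols, y,
                                   scoring="neg_mean_squared_error", cv=5).mean()
            if mse < best[0]:
                coefs = LinearRegression().fit(cols, y).coef_
                best = (mse, [facet_names[i] for i in idx], coefs)
    return best   # (MSE, chosen facets, signed coefficients)
```

The signs of the returned coefficients correspond to the positive or negative correlations discussed in this section.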
The tendency to experience anger, frustration and bitterness (N2) consistently correlates negatively with players that expressed their "Excitement" more often, which arguably makes sense. Same might hold for personalities that highly value other people's welfare and experience (A3) or tend to be less organized (O2). Yet, one would have hypothesized that the personality facet of seeking excitement (E5) would also have a stronger connection to the expression of "Excitement" throughout chat data. Gregarious's people enjoy the company of others (E2), which explains the high correlation with their expressed "Amusement". The negative correlation of self-perceived competence (C1) and expressed amusement is however debatable, as there is no simple linear connection between objective ability (or subjective competence) and happiness [30]. The tendency to experience positive emotions (E6) is highly predictive of the players' expressed frustration, which does not compute at first glance, yet the facet does not exclude the experience of negative emotions per se. This correlation could still stand to reason for people that are generally more prone to experience both negative as well as positive emotions, but a connection to facets that particularly target negative emotions would have been more reasonable. However, at least with regards to background theories, we cannot justify the correlation between "Disagreement" and people who are more excitement-seeking (E5), or personalities that have a deep appreciation for art and beauty (O2). In fact, the traits that indicates the openness to accepting new ideas and other opinions (O5) is negatively correlated with "Disagreement" in these results, where the opposite would be more intuitive. Thus, we engage with our introductory research question, arguing that we delivered initial insights that individual personality differences can strongly impact affective expression in game communication, and that most of the derived connections are reasonably justifiable, barring some limitations that are discussed in the following. ## VI Limitations and Future Work Altogether, most of the predicted multi-linear correlations stand to reason, with some exceptions that are presumably caused by the highly noisy domain of individual personality. For the sake of brevity, we did not explicate on all, but only the most predictive facet combinations, and leave remaining interpretations open for the reader through Table II. Certain connections that we hypothesized to be trivially true (such as the tendency of agreeableness (A) and the expressed "Agreements", the personal desire for excitement (A5) and the expression of it, or the hostility towards Anger (N2) and its utterance) were not reflected in the prediction. Yet, we only considered the four major factors that could predict the affect expression in the end, while the former still might have had a smaller effect. In our current labeling process, we only appointed a single annotator to decide affective labels for the particular chat utterances. While this could already show a working trend of the approach that can come up with reasonable results, personal bias might have influenced the classification of the conversations, which is why we are expanding this process in the next iterative step of this work to multiple annotators and a proper assessment of the inter-rater reliability. 
An essential part of the noise that led to the inconclusive parts of the results could be overcome by incorporating a larger data set of participants, which is what we are currently working towards. Especially the highly variable personality data requires a broad range of different personality combinations in order to reach conclusions that are accurate and usable for large-scale applications. Using in-game and conversational behavior from a vast community of players of _Sky: Children of the Light_ [31], we are striving to scale our approach and investigate whether we can extract comparable or even more accurate findings. The proposed technique is obviously limited in its applicability to domains that incorporate recorded chat communication. This constrains it to multi-player environments, and only those that actively engage humans in natural language conversation. Yet, with the current rise of large language models and the increasing use of novel application cases, we are interested in investigating single-player games that embed natural language conversations with non-player characters for narrative, quest or mechanical reasons, and will determine whether there are significant differences in emotional expression when interacting with artificial agents instead of fellow human players. Finally, for this proof of concept that reasonably accurate connections between personality and affect expression through chat are derivable, we only considered a single method for the modeling process. While the outcomes of the multi-linear regression are intuitively understandable, more sophisticated machine learning approaches could have approximated this connection with even more accuracy. Thus, our future work includes the investigation of such models, while we constrain ourselves to techniques with high explainability (such as random forest regression or Bayesian belief networks) to still be able to ground and justify the underlying functions (in contrast to black-box models). Limitations with respect to the ethical component of using this and related methodologies are further discussed in Section VIII.

## VII Conclusion

Individual personality differences influence how we make decisions, take stances, display emotions and express ourselves. Video games, especially when incorporating or being based on communication, have the opportunity to engage players in conversation, control topics and insert stimuli, record context-sensitive utterances and can even benefit from assessing affective states of their players to tailor content, difficulty or experiences. Thus, this work explored how the personality of players of a multi-player alternate reality game impacted their expression of affective states when solving puzzles and coordinating with their teams. By classifying their communication into affective labels and modeling the role of their Five Factor Model facets towards that, we present initial results that identify a first differentiation between individuals and their expression of affect in text-based communication. We considered ten primary conversational affects from Plutchik's established wheel of emotions and a combination of up to four facets/domains, which often led to reasonable connections between personality and affective expression already.
Based on this, we look forward to investigating large-scale relationships between personality and expression, how to accurately model these in the context of games, and how to make use of them to tailor player experiences through difficulty, content and matchmaking.

## VIII Ethical Statement

The realized study closely followed procedure, framing and informed consent as approved by the institutional review board of the authors' affiliated university. While the proposed technology aims at improving the understanding of individual differences and could tailor game mechanics, environments or matchmaking towards inclusiveness and accessibility, it still bears certain risks and ethical implications that should be addressed. First of all, as this approach works on conversation data, which can be highly sensitive and personal, the question of data ownership comes into play. Even if companies provide game environments and services and therefore often have control over incoming and outgoing data, chat data should ideally only be leveraged with the actively confirmed approval of the particular player (i.e., _opt-in_). Ideally, echoing data transparency, players should have full insight and control over the history of their chat logs, so that unwanted entries can be permanently removed from storage and from use by the model. Moreover, even when able to control their individual input, regular users can hardly estimate the impact of their data and how it could change in-game or higher-level decisions in certain use cases. Thus, in the spirit of explainability, users should be able to clearly follow the decisions of the model, its outcomes and its implications for their experience with the product. After all, modeling relationships between chat, personality and affect and (algorithmically) deriving decisions from that should only be deployed for the benefit (e.g., improved experience) of the user, but bears the risk of being exploited to further facilitate dark patterns of (game) design, such as taking advantage of purchasing patterns or reinforcing addictive tendencies. These risks are amplified in the case of erroneous decision making by the model, which could steer the individual's experience in the wrong direction or completely spoil it. Thus, if such a model is used for tailoring or adapting any element, it should only do so if the prediction meets a reasonable confidence threshold.

## Acknowledgment

LUX was developed and the data were collected by the group of researchers and developers including Reza Habibi, Bjarke Larsen, Sai Siddartha Maram, Shweta Sisodiya, Jonatan Holmes, Zhaoqing Teng, and Jessica Wei at the University of California, Santa Cruz.
2309.04768
Influence of the curvature in the existence of solutions for a two Hardy-Sobolev critical exponents
For $N\geq 4$, we let $\Omega$ be a bounded domain of $\mathbb{R}^N$ and $\Gamma$ be a closed curve contained in $\Omega$. We study existence of positive solutions $u \in H^1_0\left(\Omega\right)$ to the equation \begin{equation}\label{Atusi} -\Delta u+hu=\lambda\rho^{-s_1}_\Gamma u^{2^*_{s_1}-1}+\rho^{-s_2}_\Gamma u^{2^*_{s_2}-1} \qquad \textrm{ in } \Omega \end{equation} where $h$ is a continuous function and $\rho_\Gamma$ is the distance function to $\Gamma$. We prove the existence of a mountain pass solution for this Euler-Lagrange equation depending on the local geometry of the curve and the potential $h$.
El Hadji Abdoulaye Thiam, Abdourahmane Diatta
2023-09-09T11:50:36Z
http://arxiv.org/abs/2309.04768v2
A nonlinear elliptic PDE involving two Hardy-Sobolev critical exponents in domains with curve singularity

###### Abstract

For \(N\geq 4\), we let \(\Omega\) be a bounded domain of \(\mathbb{R}^{N}\) and \(\Gamma\) be a closed curve contained in \(\Omega\). We study existence of positive solutions \(u\in H^{1}_{0}\left(\Omega\right)\) to the equation \[-\Delta u+hu=\lambda\rho_{\Gamma}^{-s_{1}}u^{2^{*}_{s_{1}}-1}+\rho_{\Gamma}^{-s_{2}}u^{2^{*}_{s_{2}}-1}\qquad\text{ in }\Omega \tag{0.1}\] where \(h\) is a continuous function and \(\rho_{\Gamma}\) is the distance function to \(\Gamma\). We prove the existence of a mountain pass solution for this Euler-Lagrange equation depending on the local geometry of the curve and the potential \(h\). In this paper, we also study existence, symmetry and decay estimates of the positive entire solutions of (0.1) with \(\Omega=\mathbb{R}^{N}\) and \(\Gamma\) the real line. **Key Words**: Two Hardy-Sobolev exponents; Curvature; Positive mountain pass solution; Curve singularity.

## 1. Introduction

For \(N\geq 3\), the famous Caffarelli-Kohn-Nirenberg inequality asserts that there exists a positive constant \(C=C_{N,a,b}\), only depending on \(N,a,b\), such that \[C\left(\int_{\mathbb{R}^{N}}|x|^{-bq}|u|^{q}dx\right)^{2/q}\leq\int_{\mathbb{R}^{N}}|x|^{-2a}|\nabla u|^{2}dx\qquad\forall u\in\mathcal{C}^{\infty}_{c}(\mathbb{R}^{N}), \tag{1.1}\] where \(N\geq 3\), \(-\infty<a<\frac{N-2}{2}\), \(0\leq b-a\leq 1\) and \(q=\frac{2N}{N-2+2(b-a)}\), see for instance [9]. Note that in the case \(b=a+1\) (so that \(q=2\)), (1.1) corresponds to the following Hardy inequality: \[\left(\frac{N-2}{2}\right)^{2}\int_{\mathbb{R}^{N}}|x|^{-2}|u|^{2}dx\leq\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx\qquad\forall u\in\mathcal{D}^{1,2}(\mathbb{R}^{N}), \tag{1.2}\] where \(\mathcal{D}^{1,2}(\mathbb{R}^{N})\) denotes the completion of \(\mathcal{C}^{\infty}_{c}(\mathbb{R}^{N})\) with respect to the norm \[u\longmapsto\sqrt{\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx}.\] The constant \(\left(\frac{N-2}{2}\right)^{2}\) is sharp and never achieved in \(\mathcal{D}^{1,2}(\mathbb{R}^{N})\). The case \(a=b=0\) (so that \(q=\frac{2N}{N-2}\)) corresponds to the famous Sobolev inequality: \[S_{N,0}\left(\int_{\mathbb{R}^{N}}|u|^{2^{*}}dx\right)^{2/2^{*}}\leq\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx\qquad\forall u\in\mathcal{D}^{1,2}(\mathbb{R}^{N}), \tag{1.3}\] where the best constant \[S_{N,0}=\frac{N(N-2)}{4}\omega_{N}^{2/N}\] is achieved in \(\mathcal{D}^{1,2}(\mathbb{R}^{N})\). Here \(\omega_{N}=|S^{N-1}|\) is the volume of the unit sphere \(S^{N-1}\) and \(2^{*}:=2^{*}(0)=\frac{2N}{N-2}\) is the critical Sobolev exponent. By Hölder's inequality, we get the interpolation between the Hardy and the Sobolev inequalities, called the Hardy-Sobolev inequality, given by \[S_{N,s}\left(\int_{\mathbb{R}^{N}}|x|^{-s}|u|^{2^{*}(s)}dx\right)^{2/2^{*}(s)}\leq\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx\qquad\forall u\in\mathcal{D}^{1,2}(\mathbb{R}^{N}), \tag{1.4}\] where, for \(s\in[0,2]\), \(2^{*}(s)=\frac{2(N-s)}{N-2}\) is the critical Hardy-Sobolev exponent. We refer to [15] for more details about the Hardy-Sobolev inequality. The value of the best constant is \[S_{N,s}:=(N-2)(N-s)\left[\frac{\omega_{N-1}}{2-s}\frac{\Gamma^{2}\left(\frac{N-s}{2-s}\right)}{\Gamma\left(\frac{2(N-s)}{2-s}\right)}\right]^{\frac{2-s}{N-s}},\] where \(\Gamma\) is the Euler Gamma function. It was computed by Lieb [27] when \(s\in(0,2)\). The ground state solution is given, up to dilation, by \[w(x)=C_{N,s}(1+\left|x\right|^{2-s})^{\frac{2-N}{2-s}},\] for some positive known constant \(C_{N,s}\).
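As a quick illustration of the notation (a side remark added for concreteness, not part of the original argument): since \(2^{*}(s)=\frac{2(N-s)}{N-2}\) is decreasing in \(s\), it interpolates between the Sobolev and Hardy endpoints,
\[2^{*}(0)=\frac{2N}{N-2}=2^{*},\qquad 2^{*}(2)=2,\qquad\text{e.g. }2^{*}(1)=\frac{2(4-1)}{4-2}=3\ \text{ for }N=4,\]
so that for \(0<s_{2}<s_{1}<2\) one always has \(2<2^{*}_{s_{1}}<2^{*}_{s_{2}}<2^{*}\), an ordering that is used repeatedly below.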
The Caffarelli-Kohn-Nirenberg's inequality on domains and related problems have been studied these last years. For instance, we let \(\Omega\) be a domain of \(\mathbb{R}^{N}\) and consider the equation \[\begin{cases}-\mathrm{div}(\left|x\right|^{-2a}\nabla u)=\left|x\right|^{-bq} u^{q-1},\quad u>0&\text{in }\Omega\\ u=0&\text{on }\partial\Omega.\end{cases} \tag{1.5}\] To study (1.5), one could let \(w(x)=\left|x\right|^{-a}u(x)\). Direct computations show that \[\int_{\Omega}\left|x\right|^{-2a}\left|\nabla u\right|^{2}dx=\int_{\Omega} \left|\nabla w\right|^{2}dx-a(n-2-a)\int_{\Omega}\left|x\right|^{-2}w^{2}dx.\] Then solutions of (1.5) can be obtained by minimizing the following quotient \[S_{a,b}^{N}(\Omega):=\inf_{u\in\mathcal{D}_{a}^{1,2}(\Omega)\setminus\{0\}} \frac{\int_{\Omega}\left|\nabla w\right|^{2}dx-a(n-2-a)\int_{\Omega}\left|x \right|^{-2}w^{2}dx}{\left(\int_{\Omega}\left|x\right|^{-bq}\left|u\right|^{q }dx\right)^{2/q}}, \tag{1.6}\] where \(\mathcal{D}_{a}^{1,2}(\Omega)\) be the completion of \(\mathcal{C}_{c}^{\infty}(\Omega)\) with respect to the norm \[u\longmapsto\sqrt{\int_{\Omega}\left|x\right|^{-2a}\left|\nabla u\right|^{2}dx}.\] The question related to the attainability of the best constant \(S_{a,b}^{N}(\Omega)\) in (1.6) is studied by many authors. For more developments related to that, we refer the readers to [4, 5, 8, 10, 11, 13, 15, 16, 23, 27, 29, 30] and references therein. When \(0\in\partial\Omega\), the existence of minimizers for \(S_{a,b}^{N}(\Omega)\) was first studied by Ghoussoub-Kang [15] and Ghoussoub-Robert [16]. Later Chern and Lin [10] proved the existence of minimizer provided the mean curvature of the boundary at the origin is negative and (\(a<b<a+1\) and \(N\geq 3\)) or (\(b=a>0\) and \(N\geq 4\)). The case \(a=0\) and \(0<b<1\) was first studied by [16] before the generalization in [10]. More generally questions related to Partial Differential Equations involving multiples Hardy-Sobolev critical exponents have been investigated these last decades. In particular, we let \(\Omega\) be a domain of \(\mathbb{R}^{N}\) such that \(0\in\partial\Omega\) and consider the equation \[\begin{cases}-\Delta u=\lambda\frac{u^{2^{*}_{1}-1}(x)}{\left|x\right|^{s_{1}} }+\frac{u^{2^{*}_{2}-1}}{\left|x\right|^{s_{2}}}&\text{in }\Omega\\ u(x)>0&\text{in }\Omega,\end{cases} \tag{1.7}\] where \(0\leq s_{2}<s_{1}<2\), \(\lambda\in\mathbb{R}\) and for \(i=1,2\), \(2^{*}_{s_{1}}:=\frac{2(N-s_{i})}{N-2}\) it the critical Hard-Sobolev exponent. When \(s_{2}=0\) and \(\lambda<0\), then equation (1.7) has no nontrivial solution. For \(\lambda>0\), \(0<s_{1}<2\) and \(s_{2}=0\), then using variational methods, Hsia Lin and Wadade [25] proved existence of solutions provided \(N\geq 4\) and the mean curvature at the origin is negative. For the case \(N=3\), \(\lambda\in\mathbb{R}\) and \(0<s_{2}<s_{1}<2\), the equation (1.7) has a least-energy solution provided the mean curvature at the origin is negative, see [28]. Concerning the existence and non-existence of solution related to equation (1.7) in the half-space \(\Omega=\mathbb{R}^{N}_{+}\), we refer to Bartsch-Peng and Zhang [4] for the case \(0<s_{2}<s_{1}=2\) and \(\lambda<\left(\frac{N-2}{2}\right)^{2}\); to Musina [31] when \(N\geq 4\), \(s_{2}=0\), \(s_{1}=2\) and \(0<\lambda<\left(\frac{N-2}{2}\right)^{2}\) and to Hsia, Lin and Wadade [25] when \(s_{2}=0\), \(0<s_{1}<2\) and \(\lambda>0\). 
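For the reader's convenience, here is a sketch of the computation behind the identity stated after (1.5), under the implicit assumption that the boundary terms vanish (e.g. for \(w\in\mathcal{C}^{\infty}_{c}(\Omega\setminus\{0\})\) and then by density). Writing \(u=|x|^{a}w\), one has
\[|x|^{-2a}|\nabla u|^{2}=|\nabla w|^{2}+2a|x|^{-2}w\,x\cdot\nabla w+a^{2}|x|^{-2}w^{2},\]
and since \(\operatorname{div}\left(|x|^{-2}x\right)=(N-2)|x|^{-2}\), an integration by parts gives
\[\int_{\Omega}2a|x|^{-2}w\,x\cdot\nabla w\,dx=a\int_{\Omega}|x|^{-2}x\cdot\nabla(w^{2})\,dx=-a(N-2)\int_{\Omega}|x|^{-2}w^{2}\,dx,\]
which indeed yields
\[\int_{\Omega}|x|^{-2a}|\nabla u|^{2}dx=\int_{\Omega}|\nabla w|^{2}dx-a(N-2-a)\int_{\Omega}|x|^{-2}w^{2}dx.\]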
In this paper, we are concerned with the effect of the local geometry of the singularity \(\Gamma\) in the existence of solutions of the following non-linear partial differential equation involving two Hardy-Sobolev critical exponents. More precisely, letting \(h\) be a continuous function and \(\lambda\) be a real parameter, we consider \[\begin{cases}-\Delta u(x)+hu(x)=\lambda\frac{u^{2^{*}_{s_{1}}-1}(x)}{\rho_{ \Gamma}^{s_{1}}(x)}+\frac{u^{2^{*}_{s_{2}}-1}(x)}{\rho_{\Gamma}^{s_{2}}(x)}& \text{in }\Omega\\ \\ u(x)>0\qquad\text{and}\qquad u(x)=0&\text{on }\partial\Omega,\end{cases} \tag{1.8}\] where \(\rho_{\Gamma}(x):=\inf_{y\in\Gamma}|y-x|\) is the distance function to the curve \(\Gamma\), \(0<s_{2}<s_{1}<2\), \(2^{*}_{s_{1}}:=\frac{2(N-s_{1})}{N-2}\) and \(2^{*}_{s_{2}}:=\frac{2(N-s_{2})}{N-2}\) are two critical Hardy-Sobolev exponents. To study the equation (1.8), we consider the following non-linear functional \(\Psi:H^{1}_{0}(\Omega)\to\mathbb{R}\) defined by: \[\Psi(u):=\frac{1}{2}\int_{\Omega}|\nabla u|^{2}dx+\frac{1}{2}\int_{\Omega}h(x) u^{2}dx-\frac{\lambda}{2^{*}_{s_{1}}}\int_{\Omega}\frac{|u|^{2^{*}_{s_{1}}}}{ \rho_{\Gamma}^{s_{1}}(x)}dx-\frac{1}{2^{*}_{s_{2}}}\int_{\Omega}\frac{|u|^{2^{ *}_{s_{2}}}}{\rho_{\Gamma}^{s_{2}}(x)}dx. \tag{1.9}\] It is easy to verify that there exists a positive constant \(r>0\) and \(u_{0}\in H^{1}_{0}(\Omega)\) such that \(\|u_{0}\|_{H^{1}_{0}(\Omega)}>r\) and \[\inf_{\|u\|_{H^{1}_{0}(\Omega)}=r}\Psi(u)>\Psi(0)\geq\Phi(u_{0}),\] see for instance Lemma 4.5 below. Then the point \((0,\Psi(0))\) is separated from the point \((u_{0},\Psi(u_{0}))\) by a ring of mountains. Set \[c^{*}:=\inf_{P\in\mathcal{P}}\max_{v\in\mathcal{P}}\Psi(v), \tag{1.10}\] where \(\mathcal{P}\) is the class of continuous paths in \(H^{1}_{0}(\Omega)\) connecting \(0\) to \(u_{0}\). Since \(2^{*}_{s_{2}}>2^{*}_{s_{1}}\), the function \(t\longmapsto\Psi(tv)\) has the unique maximum for \(t\geq 0\). Furthermore, we have \[c^{*}:=\inf_{u\in H^{1}_{0}(\Omega),u\geq 0,\,u\neq 0}\max_{t\geq 0}\Psi(tu).\] Due to the fact that the embedding of \(H^{1}_{0}(\Omega)\) into the weighted Lebesgue spaces \(L^{2^{*}_{s_{i}}}(\rho_{\Gamma}^{-s_{i}}dx)\) is not compact, the functional \(\Psi\) does not satisfy the Palais-Smale condition. Therefore, in general \(c^{*}\) might not be a critical value for \(\Psi\). To recover compactness, we study the following non-linear problem: let \(x=(y,z)\in\mathbb{R}\times\mathbb{R}^{N-1}\) and consider \[\left\{\begin{aligned} -\Delta u&=\lambda\frac{u^{2^{*}_{s_ {1}}-1}(x)}{|z|^{s_{1}}}+\frac{u^{2^{*}_{s_{2}}-1}}{|z|^{s_{2}}}& \text{in }\mathbb{R}^{N}\\ u(x)&>0&\text{in }\mathbb{R}^{N}.\end{aligned}\right. \tag{1.11}\] To obtain solutions of (1.11), we consider the functional \(\Phi:\mathcal{D}^{1,2}(\mathbb{R}^{N})\) defined by \[\Phi(u):=\frac{1}{2}\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx-\frac{\lambda}{2^{*} _{s_{1}}}\int_{\mathbb{R}^{N}}|z|^{-s_{1}}|u|^{2^{*}_{s_{1}}}dx-\frac{1}{2^{*} _{s_{2}}}\int_{\mathbb{R}^{N}}|z|^{-s_{2}}|u|^{2^{*}_{s_{2}}}dx.\] Next, we define \[\beta^{*}:=\inf_{u\in D^{1,2}(\mathbb{R}^{N}),u\geq 0,u\neq 0}\max_{t\geq 0} \Phi(tu).\] Then we get compactness provided \[c^{*}<\beta^{*},\] see Proposition 4.3 below. So it is important to study existence, symmetry and decay estimates of non-trivial solution \(w\in\mathcal{D}^{1,2}(\mathbb{R}^{N})\) of (1.11). Then we have the following results. **Theorem 1.1**.: _Let \(N\geq 3\), \(0\leq s_{2}<s_{1}<2\), \(\lambda\in\mathbb{R}\). 
Then equation_ \[\left\{\begin{aligned} -\Delta u&=\lambda\frac{u^{2^{*}_{s_{1}}-1}(x)}{|z|^{s_{1}}}+\frac{u^{2^{*}_{s_{2}}-1}}{|z|^{s_{2}}}&\text{in }\mathbb{R}^{N}\\ u(x)&>0&\text{in }\mathbb{R}^{N}\end{aligned}\right. \tag{1.12}\] _has a positive ground state solution \(w\in\mathcal{D}^{1,2}(\mathbb{R}^{N})\). Moreover \(w\) depends only on \(|y|\) and \(|z|\). In other words, there exists a function \(\theta:\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that_ \[w(x)=\theta(|y|,|z|).\] Next we have the following decay estimates of the solution \(w\) and its higher order derivatives. **Theorem 1.2**.: _Let \(w\) be a solution of the Euler-Lagrange equation (1.12). Then_ * _there exist two positive constants_ \(c_{1}<c_{2}\) _such that:_ \[\frac{c_{1}}{1+|x|^{N-2}}\leq w(x)\leq\frac{c_{2}}{1+|x|^{N-2}},\qquad\forall x\in\mathbb{R}^{N}.\] * _For_ \(|x|=|(t,z)|\leq 1\)__ \[|\nabla w(x)|+|x||D^{2}w(x)|\leq C_{2}|z|^{1-s_{1}}\] * _For_ \(|x|=|(t,z)|\geq 1\)__ \[|\nabla w(x)|+|x||D^{2}w(x)|\leq C_{2}\max(1,|z|^{-s_{1}})|x|^{1-N}.\] These two theorems will play a crucial role in proving the following, which is our main result. **Theorem 1.3**.: _Let \(N\geq 4\), \(0\leq s_{2}<s_{1}<2\) and \(\Omega\) be a bounded domain of \(\mathbb{R}^{N}\). Consider \(\Gamma\) a smooth closed curve contained in \(\Omega\). Let \(h\) be a continuous function such that the linear operator \(-\Delta+h\) is coercive. Then there exists a positive constant \(A^{N}_{s_{1},s_{2}}\), only depending on \(N\), \(s_{1}\) and \(s_{2}\), with the property that if there exists \(y_{0}\in\Gamma\) such that_ \[A^{N}_{s_{1},s_{2}}|\kappa(y_{0})|^{2}+h(y_{0})<0\qquad\text{ for }N\geq 4, \tag{1.13}\] _then \(c^{*}<\beta^{*}\), where \(\kappa:\Gamma\to\mathbb{R}^{N}\) is the curvature vector of \(\Gamma\). Moreover there exists \(u\in H^{1}_{0}(\Omega)\setminus\{0\}\) a non-negative solution of_ \[-\Delta u(x)+hu(x)=\lambda\frac{u^{2^{*}_{s_{1}}-1}(x)}{\rho^{s_{1}}_{\Gamma}(x)}+\frac{u^{2^{*}_{s_{2}}-1}(x)}{\rho^{s_{2}}_{\Gamma}(x)}\qquad\text{ in }\Omega.\] The effect of curvature in the study of Hardy-Sobolev inequalities has been intensively studied in recent years. In each of these works, the sign of the curvature at the point of singularity plays an important role for the existence of a solution. The first paper, to our knowledge, is the one of Ghoussoub and Kang [15], who considered the Hardy-Sobolev inequality with singularity at the boundary. For more results in this direction, see the works of Ghoussoub and Robert in [17, 18, 19, 20], Demyanov and Nazarov [12], Chern and Lin [10], Lin and Li [28], the author, Fall and Minlend in [14] and the references therein. The Hardy-Sobolev inequality with interior singularity on Riemannian manifolds has been studied by Jaber [26] and the author [34]. Here also the impact of the scalar curvature at the point of singularity plays an important role for the existence of minimizers in higher dimensions \(N\geq 4\). The paper [26] also contains an existence result under a positive mass condition for \(N=3\). We point out that the \(3\)-dimensional version of this paper is presented in [36]. In that case, the existence of a solution does not depend on the local geometry of the singularity but on the regular part of the Green function of the operator \(-\Delta+h\). The proof of Theorem 1.3 relies on test function methods: namely, we build appropriate test functions allowing us to compare \(c^{*}\) and \(\beta^{*}\).
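As a simple illustration of hypothesis (1.13) (an example we add for concreteness, not contained in the original statement): if \(\Gamma\) is a round circle of radius \(R\), then \(|\kappa|\equiv 1/R\), and (1.13) reads
\[\frac{A^{N}_{s_{1},s_{2}}}{R^{2}}+h(y_{0})<0\qquad\text{for some }y_{0}\in\Gamma,\]
so Theorem 1.3 applies as soon as \(\min_{\Gamma}h<-A^{N}_{s_{1},s_{2}}/R^{2}\). This requirement is compatible with the coercivity of \(-\Delta+h\) when, for instance, \(R\) is large enough.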
While it always holds that \(c^{*}\leq\beta^{*}\), our main task is to find a function for which \(c^{*}<\beta^{*}\), see Section 5. This then allows us to recover compactness, and thus every minimizing sequence for \(c^{*}\) converges, up to a subsequence, to a minimizer. Building these approximate solutions requires sharp decay estimates of a minimizer \(w\) for \(\beta^{*}\), see Section 2. Section 3 is devoted to the local parametrization and computation of the local metric.

## 2. Proof of Theorem 1.1 and Theorem 1.2

**Theorem 2.1**.: _Let \(N\geq 3\), \(x:=(y,z)\in\mathbb{R}\times\mathbb{R}^{N-1}\), \(0<s_{2}<s_{1}<2\) and \(\lambda\in\mathbb{R}\). Then there exists \(w\in\mathcal{D}^{1,2}(\mathbb{R}^{N})\) positive, satisfying_ \[-\Delta w=\lambda\frac{w^{2^{*}_{s_{1}}-1}}{|z|^{s_{1}}}+\frac{w^{2^{*}_{s_{2}}-1}}{|z|^{s_{2}}}\qquad\text{in }\mathbb{R}^{N}.\] Proof.: By Ekeland's variational principle there exists a minimizing sequence \((w_{n})_{n}\) for \(\beta^{*}\) such that \[\beta^{*}=\frac{1}{2}\int_{\mathbb{R}^{N}}|\nabla w_{n}|^{2}dx-\frac{\lambda}{2^{*}_{s_{1}}}\int_{\mathbb{R}^{N}}|z|^{-s_{1}}|w_{n}|^{2^{*}_{s_{1}}}dx-\frac{1}{2^{*}_{s_{2}}}\int_{\mathbb{R}^{N}}|z|^{-s_{2}}|w_{n}|^{2^{*}_{s_{2}}}dx+o(1) \tag{2.1}\] and, for \(\eta\in\mathcal{C}^{\infty}_{c}(\mathbb{R}^{N})\), \[\int_{\mathbb{R}^{N}}\nabla w_{n}\cdot\nabla\eta dx-\lambda\int_{\mathbb{R}^{N}}|z|^{-s_{1}}|w_{n}|^{2^{*}_{s_{1}}-2}w_{n}\eta dx-\int_{\mathbb{R}^{N}}|z|^{-s_{2}}|w_{n}|^{2^{*}_{s_{2}}-2}w_{n}\eta dx=o(1). \tag{2.2}\] By (2.1) and (2.2), we have \[\beta^{*}=\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{1}}}\right)\int_{\mathbb{R}^{N}}|\nabla w_{n}|^{2}dx+\left(\frac{1}{2^{*}_{s_{1}}}-\frac{1}{2^{*}_{s_{2}}}\right)\int_{\mathbb{R}^{N}}|z|^{-s_{2}}|w_{n}|^{2^{*}_{s_{2}}}dx+o(1).\] By continuity, there exists \(r_{n}>0\) such that \[\frac{\beta^{*}}{2}:=\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{1}}}\right)\int_{B_{r_{n}}}|\nabla w_{n}|^{2}dx+\left(\frac{1}{2^{*}_{s_{1}}}-\frac{1}{2^{*}_{s_{2}}}\right)\int_{B_{r_{n}}}|z|^{-s_{2}}|w_{n}|^{2^{*}_{s_{2}}}dx+o(1). \tag{2.3}\] Define \[v_{n}(x)=r_{n}^{\frac{N-2}{2}}w_{n}(r_{n}x).\] Then by the change of variable formula \(\tilde{x}=r_{n}x\), it easily follows that \[\int_{\mathbb{R}^{N}}|\nabla w_{n}|^{2}dx=\int_{\mathbb{R}^{N}}|\nabla v_{n}|^{2}dx;\quad\int_{\mathbb{R}^{N}}|z|^{-s_{i}}|w_{n}|^{2^{*}_{s_{i}}}dx=\int_{\mathbb{R}^{N}}|z|^{-s_{i}}|v_{n}|^{2^{*}_{s_{i}}}dx,\] for all \(i=1,2\). Moreover we have \[\int_{B_{r_{n}}}|\nabla w_{n}|^{2}dx=\int_{B_{1}}|\nabla v_{n}|^{2}dx\qquad\text{and}\qquad\int_{B_{r_{n}}}|z|^{-s_{2}}|w_{n}|^{2^{*}_{s_{2}}}=\int_{B_{1}}|z|^{-s_{2}}|v_{n}|^{2^{*}_{s_{2}}}. \tag{2.4}\] Therefore \((v_{n})_{n}\) is also a minimizing sequence. In particular \(v_{n}\rightharpoonup v\) for some \(v\) in \(\mathcal{D}^{1,2}(\mathbb{R}^{N})\). We wish to show that \(v\neq 0\). If not, then \(v_{n}\to 0\) in \(L^{2}_{loc}(\mathbb{R}^{N})\). Let \(\varphi\in C^{\infty}_{c}(B_{1})\) such that \(\varphi\equiv 1\) on \(B_{\frac{1}{2}}\).
Using \(\varphi^{2^{*}_{s_{1}}}v_{n}\) as test function in (2.2) and using integration by parts, we obtain \[\beta^{*}-\frac{1}{2} \int_{\mathbb{R}^{N}}|\nabla(\varphi v_{n})|^{2}dx+\frac{1}{2^{*}_ {s_{2}}}\int_{\mathbb{R}^{N}}|z|^{-s_{2}}|\varphi v_{n}|^{2^{*}_{s_{2}}}dx\] \[\leq-\frac{1}{2^{*}_{s_{1}}}\int_{\mathbb{R}^{N}}\nabla v_{n} \cdot\nabla(v_{n}\varphi^{2^{*}_{s_{1}}})dx+\frac{1}{2^{*}_{s_{1}}}\int_{ \mathbb{R}^{N}}|z|^{-s_{2}}|v_{n}|^{2^{*}_{s_{2}}-2}v_{n}^{2}\varphi^{2^{*}_{s _{1}}}dx+o(1)\] \[=-\frac{1}{2^{*}_{s_{1}}}\int_{\mathbb{R}^{N}}|\nabla(\varphi v_{n })|dx+\frac{1}{2^{*}_{s_{1}}}\int_{\mathbb{R}^{N}}|z|^{-s_{2}}|v_{n}|^{2^{*}_{s _{2}}-2}v_{n}^{2}\varphi^{2^{*}_{s_{1}}}dx+o(1)\] \[\leq-\frac{1}{2^{*}_{s_{1}}}\int_{\mathbb{R}^{N}}|\nabla(\varphi v _{n})|dx+\frac{1}{2^{*}_{s_{1}}}\int_{\mathbb{R}^{N}}|z|^{-s_{2}}|\varphi v_{n}| ^{2^{*}_{s_{2}}}dx+o(1).\] Therefore \[\beta^{*}\leq\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{1}}}\right)\int_{\mathbb{R}^{ N}}|\nabla(\varphi v_{n})|dx+\left(\frac{1}{2^{*}_{s_{1}}}-\frac{1}{2^{*}_{s_{2}}} \right)\int_{\mathbb{R}^{N}}|z|^{-s_{2}}|\varphi v_{n}|^{2^{*}_{s_{2}}}dx+o(1). \tag{2.5}\] Moreover by (2.3) and (2.4), we have \[\frac{\beta^{*}}{2}:=\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{1}}}\right)\int_{B_{1 }}|\nabla w_{n}|^{2}dx+\left(\frac{1}{2^{*}_{s_{1}}}-\frac{1}{2^{*}_{s_{2}}} \right)\int_{B_{1}}|z|^{-s_{2}}|w_{n}|^{2^{*}_{s_{2}}}dx+o(1). \tag{2.6}\] Hence combining (2.5) and (2.6), we obtain \[\beta^{*} \leq\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{1}}}\right)\int_{\mathbb{R }^{N}}|\nabla(\varphi v_{n})|^{2}dx+\left(\frac{1}{2^{*}_{s_{2}}}-\frac{1}{2^{* }_{s_{1}}}\right)\int_{\mathbb{R}^{N}}|z|^{-s_{2}}|\varphi v_{n}|^{2^{*}_{s_{2} }}dx+o(1)\] \[=\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{1}}}\right)\int_{B_{1}}| \nabla(\varphi v_{n})|^{2}dx+\left(\frac{1}{2^{*}_{s_{2}}}-\frac{1}{2^{*}_{s_{ 1}}}\right)\int_{B_{1}}|z|^{-s_{2}}|\varphi v_{n}|^{2^{*}_{s_{2}}}dx+o(1)=\frac {\beta^{*}}{2}+o(1).\] Then taking the limit as \(n\to\infty\), we obtain \[0<\beta^{*}\leq\frac{\beta^{*}}{2}.\] which is false. Therefore \(v\neq 0\) is a minimizer. Standard arguments show that \(v^{+}=\max(v,0)\) is also a minimizer and the proof is completed. Next we will establish symmetry and decay estimates properties of positive solutions \(u\in\mathcal{D}^{1,2}(\mathbb{R}^{N})\) of the following Euler-Lagrange equations \[-\Delta u=\lambda\frac{u^{2^{*}_{s_{1}}-1}}{|z|^{s_{1}}}+\frac{u^{2^{*}_{s_{2} }-1}}{|z|^{s_{2}}}\quad\text{ in }\mathbb{R}^{N} \tag{2.7}\] where for \(N\geq 3\), we have \(x:=(y,z)\in\mathbb{R}\times\mathbb{R}^{N-1}\), \(0<s_{2}<s_{1}<2\) and \(2^{*}_{s_{i}}:=\frac{2(N-s_{i})}{N-2}\quad(i=1,2)\) are two Hardy-Sobolev critical exponents. Next, rewrite equation (2.7) as follows \[-\Delta u=\frac{f(x)}{|z|^{s_{1}}}u+\frac{g(x)}{|z|^{s_{1}}},\] where \(f,g\in L^{P}_{loc}(\mathbb{R}^{N})\) for some \(p>\frac{N}{2-s_{1}}\). Then the following result follows from [[21], Lemma 3.2 and Lemma 3.3]. **Proposition 2.2**.: _Let \(u\) is a solution of the Euler-Lagrange equation (2.7). We assume that_ \[\begin{cases}s_{1}<1+\frac{1}{N}&\text{ if }N\geq 4\\ \\ s_{1}<\frac{3}{2}&\text{ if }N=3.\end{cases}\] _Then \(u\in\mathcal{C}^{\infty}\) in the \(z\) variable while, in the \(y\) variable, it is \(\mathcal{C}^{1,\alpha}\) for all \(\alpha<1-s_{1}\) if \(s_{1}<1\) and \(\mathcal{C}^{0,\alpha}(\mathbb{R}^{N})\) for all \(\alpha<2-s_{1}\) if \(1\leq s_{1}<2\)._ This then allows to prove the following symmetry and decay estimates result. 
**Proposition 2.3**.: _Let \(u\) be a solution of the Euler-Lagrange equation (2.7). Then_ * _the function_ \(u\) _depends only on_ \(|y|\) _and_ \(|z|\)__ * _there exist two constants_ \(0<c_{1}<c_{2}\) _such that:_ \[\frac{c_{1}}{1+|x|^{N-2}}\leq u(x)\leq\frac{c_{2}}{1+|x|^{N-2}},\qquad\forall x\in\mathbb{R}^{N}.\] (2.8) Proof.: The proof of the symmetry is based on the moving plane method, see for instance [2, 3, 6, 7, 33] and references therein. We let \(x=(y,z)\in\mathbb{R}\times\mathbb{R}^{N-1}\). For \(\mu>0\), we define \[\Omega_{\mu}=\{x=(y,z)\in\mathbb{R}\times\mathbb{R}^{N-1}:y>\mu\}\] and for all \(x\in\Omega_{\mu}\), we set \(x_{\mu}=(2\mu-y,z).\) Next, we set \[w_{\mu}(x):=u_{\mu}(x)-u(x)=u(x_{\mu})-u(x)\qquad\text{ in }\Omega_{\mu}.\] Then \(w_{\mu}\in H^{1}_{0}(\Omega_{\mu})\). **Step 1**: We first prove that: \[w_{\mu}\geq 0\qquad\text{ in }\Omega_{\mu} \tag{2.9}\] for \(\mu\) large enough. Thanks to (2.7), we have \[-\Delta w_{\mu}(x)=\lambda\left(\frac{u^{2^{*}_{s_{1}}-1}(x)}{|z|^{s_{1}}}-\frac{u^{2^{*}_{s_{1}}-1}_{\mu}(x)}{|z|^{s_{1}}}\right)+\left(\frac{u^{2^{*}_{s_{2}}-1}(x)}{|z|^{s_{2}}}-\frac{u^{2^{*}_{s_{2}}-1}_{\mu}(x)}{|z|^{s_{2}}}\right)\qquad\text{ in }\Omega_{\mu}. \tag{2.10}\] We multiply (2.10) by \(w_{\mu}^{-}:=\min\{w_{\mu},0\}\) and we integrate by parts to get \[\int_{\Omega_{\mu}}|\nabla w_{\mu}^{-}|^{2}dx=\lambda\int_{\Omega_{\mu}}w_{\mu}^{-}(x)\left(\frac{u^{2_{s_{1}}^{*}-1}(x)}{|z|^{s_{1}}}-\frac{u_{\mu}^{2_{s_{1}}^{*}-1}(x)}{|z|^{s_{1}}}\right)dx+\int_{\Omega_{\mu}}w_{\mu}^{-}(x)\left(\frac{u^{2_{s_{2}}^{*}-1}(x)}{|z|^{s_{2}}}-\frac{u_{\mu}^{2_{s_{2}}^{*}-1}(x)}{|z|^{s_{2}}}\right)dx\] \[\leq|\lambda|\int_{\Omega_{\mu}}\frac{w_{\mu}^{-}(x)}{|z|^{s_{1}}}\left(u^{2_{s_{1}}^{*}-1}(x)-u_{\mu}^{2_{s_{1}}^{*}-1}(x)\right)dx+\int_{\Omega_{\mu}}\frac{w_{\mu}^{-}(x)}{|z|^{s_{2}}}\left(u^{2_{s_{2}}^{*}-1}(x)-u_{\mu}^{2_{s_{2}}^{*}-1}(x)\right)dx.\] We have \(u_{\mu}(x)\leq u(x)\) in \(\Omega_{\mu}\). Then using the convexity of the function \(t\longmapsto t^{2_{s_{i}}^{*}}\) (\(i=1,2\)) \[u_{\mu}^{2_{s_{i}}^{*}-1}(x)-u^{2_{s_{i}}^{*}-1}(x)\leq(2_{s_{i}}^{*}-1)u^{2_{s_{i}}^{*}-2}(x)\left(u(x)-u_{\mu}(x)\right)=(1-2_{s_{i}}^{*})u^{2_{s_{i}}^{*}-2}(x)w_{\mu}^{-}(x)\qquad\text{in }\Omega_{\mu}.\] Therefore \[\int_{\Omega_{\mu}}|\nabla w_{\mu}^{-}|^{2}dx\leq|\lambda|(2_{s_{1}}^{*}-1)\int_{\Omega_{\mu}}\frac{|w_{\mu}^{-}(x)|^{2}}{|z|^{s_{1}}}|u(x)|^{2_{s_{1}}^{*}-2}dx+(2_{s_{2}}^{*}-1)\int_{\Omega_{\mu}}\frac{|w_{\mu}^{-}(x)|^{2}}{|z|^{s_{2}}}|u(x)|^{2_{s_{2}}^{*}-2}dx.\] Next, by Hölder's inequality, we have \[\int_{\Omega_{\mu}}\frac{|w_{\mu}^{-}(x)|^{2}}{|z|^{s_{i}}}|u(x)|^{2_{s_{i}}^{*}-2}dx\leq\left(\int_{\Omega_{\mu}}\frac{|w_{\mu}^{-}(x)|^{2_{s_{i}}^{*}}}{|z|^{s_{i}}}dx\right)^{2/2_{s_{i}}^{*}}\left(\int_{M_{\mu}\cap\Omega_{\mu}}\frac{|u(x)|^{2_{s_{i}}^{*}}}{|z|^{s_{i}}}dx\right)^{\frac{2_{s_{i}}^{*}-2}{2_{s_{i}}^{*}}},\] where \(i=1,2\) and \[M_{\mu}:=\{x\in\Omega_{\mu}:\quad u(x)>u_{\mu}(x)\}.\] Since \[\lim_{\mu\to\infty}\int_{M_{\mu}\cap\Omega_{\mu}}\frac{|u(x)|^{2_{s_{1}}^{*}}}{|z|^{s_{1}}}dx=\lim_{\mu\to\infty}\int_{M_{\mu}\cap\Omega_{\mu}}\frac{|u(x)|^{2_{s_{2}}^{*}}}{|z|^{s_{2}}}dx=0,\] we deduce that \[\int_{\Omega_{\mu}}|\nabla w_{\mu}^{-}|^{2}dx<S_{N,0}\left(\int_{\Omega_{\mu}}|w_{\mu}^{-}|^{\frac{2N}{N-2}}dx\right)^{\frac{N-2}{N}},\] where \(S_{N,0}\) is the Sobolev best constant.
As a consequence, for \(\mu\) large enough, we have \(w_{\mu}^{-}=0\) and hence \[u_{\mu}(x)\geq u(x).\] Next, we let \[\mu^{*}:=\inf\{\mu>0:\quad u(x)\leq u_{\mu^{\prime}}(x)\quad\text{ for all }x\in\Omega_{\mu^{\prime}}\text{ and all }\mu^{\prime}>\mu\}<\infty.\] **Step 2**: Then we will prove that \(\mu^{*}=0\). By contradiction, we assume that \(\mu^{*}>0\). Then \[-\Delta w_{\mu^{*}}(x)=c_{\mu^{*}}(x)w_{\mu^{*}}(x)\qquad\text{ in }\Omega_{\mu^{*}},\] where \[c_{\mu^{*}}:=\left\{\begin{aligned} &\left(\lambda\frac{u_{\mu^{*}}^{2_{s_{1}}^{*}-1}(x)-u^{2_{s_{1}}^{*}-1}(x)}{|z|^{s_{1}}}+\frac{u_{\mu^{*}}^{2_{s_{2}}^{*}-1}(x)-u^{2_{s_{2}}^{*}-1}(x)}{|z|^{s_{2}}}\right)w_{\mu^{*}}^{-1}&&\text{if }w_{\mu^{*}}\neq 0\\ & 0&&\text{if }w_{\mu^{*}}=0.\end{aligned}\right.\] Clearly \(c_{\mu^{*}}\in L^{\infty}(\Omega_{\mu^{*}})\). Moreover \(w_{\mu^{*}}\geq 0\quad\text{on }\partial\Omega_{\mu^{*}}.\) Then applying the maximum principle, we have \[w_{\mu^{*}}>0\quad\text{in }\Omega_{\mu^{*}}.\] Let \(D\) be any smooth compact set in \(\Omega_{\mu^{*}}\) such that \(|\Omega_{\mu}\setminus D|\) is sufficiently small for any \(\mu\) near \(\mu^{*}\). Since \[w_{\mu^{*}}(x)\geq\delta>0\qquad\text{ in }D,\] we have, by continuity, \[w_{\mu}(x)\geq 0\qquad\text{ in }D,\] for all \(\mu\) near \(\mu^{*}\). In particular \[w_{\mu}(x)\geq 0\qquad\text{ on }\partial(\Omega_{\mu}\setminus D).\] Using again the maximum principle, we have \[w_{\mu}(x)\geq 0\qquad\text{ in }\Omega_{\mu}\setminus D\] and thus \(w_{\mu}(x)\geq 0\) in \(\Omega_{\mu}\), contrary to the definition of \(\mu^{*}\). We therefore conclude that \(\mu^{*}=0.\) Consequently \[u(-y,z)\geq u(y,z)\qquad\forall y>0.\] Applying the same arguments as before to the function \(v(y,z)=u(-y,z)\) in \(\mathbb{R}^{N}\) leads to \[u(-y,z)\leq u(y,z).\] Therefore \[u(-y,z)=u(y,z)\qquad\text{ in }\mathbb{R}^{N}.\] Hence the solution \(u\) of (2.7) is symmetric with respect to \(y\). **Step 3:** Repeating the same arguments as before for the function \[x\longmapsto u(y,\mathcal{R}_{N-1}z),\] where \(\mathcal{R}_{N-1}\in O(N-1)\) is an \((N-1)\)-dimensional rotation, we conclude that \(u\) only depends on \(|y|\) and \(|z|\), and \(u\) is strictly decreasing in \(|y|\). This then ends the proof of (i). For the decay estimate, we write the Euler-Lagrange equation (2.7) as follows \[-\Delta w(x)=A(x)w\qquad\text{in }\mathbb{R}^{N},\] where \[A(x)=\lambda\frac{w^{2^{*}_{s_{1}}-2}(x)}{|z|^{s_{1}}}+\frac{w^{2^{*}_{s_{2}}-2}(x)}{|z|^{s_{2}}}.\] For \(x\neq 0\), we let \(v(x)=|x|^{2-N}w(x|x|^{-2})\), the Kelvin transformation of \(w\). It also satisfies (2.7). Therefore, using the fact that a solution of (2.7) is bounded, we can find two constants \(0<c_{1}<c_{2}\) such that \[\frac{c_{1}}{1+|x|^{N-2}}\leq w(x)\leq\frac{c_{2}}{1+|x|^{N-2}},\qquad\forall x\in\mathbb{R}^{N}.\] This then ends the proof. We close this section by proving the following decay properties of \(w\) involving its higher derivatives. **Proposition 2.4**.: _Let \(w\) be a ground state solution of (2.7). Then there exists a positive constant \(C_{2}\), only depending on \(N\), \(s_{1}\) and \(s_{2}\), such that_ 1. _For_ \(|x|=|(t,z)|\leq 1\)__ \[|\nabla w(x)|+|x||D^{2}w(x)|\leq C_{2}|z|^{1-s_{1}}\] 2.
_For_ \(|x|=|(t,z)|\geq 1\)__ \[|\nabla w(x)|+|x||D^{2}w(x)|\leq C_{2}\max(1,|z|^{-s_{1}})|x|^{1-N}.\] Proof.: Let \(\theta:\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}_{+}\) be a function such that \[w(x)=w(y,z)=\theta(|y|,|z|).\] Using polar coordinates, the function \(\theta=\theta(t,\rho)\) verifies \[\rho^{2-N}(\rho^{N-2}\theta_{2})_{2}+\theta_{11}=\lambda\rho^{-s_{1}}\theta^{2^{*}_{s_{1}}-1}+\rho^{-s_{2}}\theta^{2^{*}_{s_{2}}-1}\qquad\text{ for }t,\rho\in\mathbb{R}_{+}, \tag{2.11}\] where \(\theta_{1}\) and \(\theta_{2}\) are respectively the derivatives of \(\theta\) with respect to the first and the second variables. Then integrating this identity in the \(\rho\) variable, we therefore get, for every \(\rho>0\), \[\theta_{2}(t,\rho)=-\frac{1}{\rho^{N-2}}\int_{0}^{\rho}r^{N-2}\theta_{11}(t,r)dr+\frac{\lambda}{\rho^{N-2}}\int_{0}^{\rho}r^{N-2}r^{-s_{1}}\theta^{2^{*}_{s_{1}}-1}(t,r)dr\] \[+\frac{1}{\rho^{N-2}}\int_{0}^{\rho}r^{N-2}r^{-s_{2}}\theta^{2^{*}_{s_{2}}-1}(t,r)dr.\] Next differentiating with respect to the first variable, we get \[\theta_{12}(t,\rho)=\frac{-1}{\rho^{N-2}}\int_{0}^{\rho}r^{N-2}\theta_{111}(t,r)dr+\frac{\lambda}{\rho^{N-2}}\int_{0}^{\rho}r^{N-2}r^{-s_{1}}\theta_{1}(t,r)\theta^{2^{*}_{s_{1}}-2}(t,r)dr\] \[+\frac{1}{\rho^{N-2}}\int_{0}^{\rho}r^{N-2}r^{-s_{2}}\theta_{1}(t,r)\theta^{2^{*}_{s_{2}}-2}(t,r)dr.\] By Proposition 2.2 and the fact that \(2^{*}_{s_{2}}>2^{*}_{s_{1}}\geq 2\), we obtain \[|\theta_{2}(t,\rho)|+|\theta_{12}(t,\rho)|\leq C\left(\rho+\rho^{1-s_{1}}+\rho^{1-s_{2}}\right)\leq C\rho^{1-s_{1}}\qquad\text{for }|(t,\rho)|\leq 1. \tag{2.12}\] Now using this in (2.11), we get \[|\theta_{22}|\leq C\rho^{-s_{1}},\quad\text{ for }|(t,\rho)|\leq 1. \tag{2.13}\] By (2.12) and (2.13), we obtain \[|\theta_{2}(t,\rho)|+|\theta_{12}(t,\rho)|+\rho|\theta_{22}|\leq C\rho^{1-s_{1}}.\] Therefore, it easily follows that \[|\nabla w(x)|+|x||D^{2}w(x)|\leq C_{2}|z|^{1-s_{1}},\qquad\text{ for all }|x|=|(t,z)|\leq 1\] and for \(|x|=|(t,z)|\geq 1\) that \[|\nabla w(x)|+|x||D^{2}w(x)|\leq C_{2}\max(1,|z|^{-s_{1}})|x|^{1-N}.\] This then completes the proof.

## 3. Local Parametrization and metric

Let \(\Gamma\subset\mathbb{R}^{N}\) be a smooth closed curve. Let \((E_{1};\ldots;E_{N})\) be an orthonormal basis of \(\mathbb{R}^{N}\). For \(y_{0}\in\Gamma\) and \(r>0\) small, we consider the curve \(\gamma:(-r,r)\to\Gamma\), parameterized by arclength such that \(\gamma(0)=y_{0}\). Up to a translation and a rotation, we may assume that \(\gamma^{\prime}(0)=E_{1}\). We choose a smooth orthonormal frame field \((E_{2}(y);...;E_{N}(y))\) on the normal bundle of \(\Gamma\) such that \((\gamma^{\prime}(y);E_{2}(y);...;E_{N}(y))\) is an oriented basis of \(\mathbb{R}^{N}\) for every \(y\in(-r,r)\), with \(E_{i}(0)=E_{i}\). We fix the following notation, which will be used throughout the paper, \[Q_{r}:=(-r,r)\times B_{\mathbb{R}^{N-1}}(0,r),\] where \(B_{\mathbb{R}^{k}}(0,r)\) denotes the ball in \(\mathbb{R}^{k}\) with radius \(r\) centered at the origin. Provided \(r>0\) small, the map \(F_{y_{0}}:Q_{r}\to\Omega\), given by \[(y,z)\mapsto F_{y_{0}}(y,z):=\gamma(y)+\sum_{i=2}^{N}z_{i}E_{i}(y),\] is smooth and parameterizes a neighborhood of \(y_{0}=F_{y_{0}}(0,0)\). We consider \(\rho_{\Gamma}:\mathbb{R}^{N}\to\mathbb{R}\) the distance function to the curve, given by \[\rho_{\Gamma}(y)=\min_{\overline{y}\in\Gamma}|y-\overline{y}|.\] In the above coordinates, we have \[\rho_{\Gamma}\left(F_{y_{0}}(x)\right)=|z|\qquad\text{for every }x=(y,z)\in Q_{r}.
\tag{3.1}\] Clearly, for every \(t\in(-r,r)\) and \(i=2,\ldots N\), there are real numbers \(\kappa_{i}(y)\) and \(\tau_{i}^{j}(y)\) such that \[E_{i}^{\prime}(y)=\kappa_{i}(y)\gamma^{\prime}(y)+\sum_{j=2}^{N}\tau_{i}^{j}(y)E _{j}(y). \tag{3.2}\] The quantity \(\kappa_{i}(y)\) is the curvature in the \(E_{i}(y)\)-direction while \(\tau_{i}^{j}(y)\) is the torsion from the osculating plane spanned by \(\{\gamma^{\prime}(y);E_{j}(y)\}\) in the direction \(E_{i}\). We note that provided \(r>0\) small, \(\kappa_{i}\) and \(\tau_{i}^{j}\) are smooth functions on \((-r,r)\). Moreover, it is easy to see that \[\tau_{i}^{j}(y)=-\tau_{j}^{i}(y)\qquad\text{ for }i,j=2,\ldots,N. \tag{3.3}\] The curvature vector is \(\kappa:\Gamma\to\mathbb{R}^{N}\) is defined as \(\kappa(\gamma(y)):=\sum_{i=2}^{N}\kappa_{i}(y)E_{i}(y)\) and its norm is given by \(|\kappa\gamma(y)|:=\sqrt{\sum_{i=2}^{N}\kappa_{i}^{2}(y)}\). Next, we derive the expansion of the metric induced by the parameterization \(F_{y_{0}}\) defined above. For \(x=(y,z)\in Q_{r}\), we define \[g_{11}(x)=\partial_{y}F_{y_{0}}(x)\cdot\partial_{y}F_{y_{0}}(x),\quad g_{1i}( x)=\partial_{y}F_{y_{0}}(x)\cdot\partial_{z_{i}}F_{y_{0}}(x),\quad g_{ij}(x)= \partial_{z_{j}}F_{y_{0}}(x)\cdot\partial_{z_{i}}F_{y_{0}}(x).\] We have the following result. **Lemma 3.1**.: _There exits \(r>0\), only depending on \(\Gamma\) and \(N\), such that for ever \(x=(t,z)\in Q_{r}\)_ \[\left\{\begin{aligned} g_{11}(x)&=1+2\sum_{i=2}^{N}z_{i }\kappa_{i}(0)+2y\sum_{i=2}^{N}z_{i}\kappa_{i}^{\prime}(0)+\sum_{ij=2}^{N}z_{ i}z_{j}\kappa_{i}(0)\kappa_{j}(0)+\sum_{ij=2}^{N}z_{i}z_{j}\beta_{ij}(0)+O \left(|x|^{3}\right)\\ g_{1i}(x)&=\sum_{j=2}^{N}z_{j}\tau_{j}^{i}(0)+y\sum_{ j=2}^{N}z_{j}\left(\tau_{j}^{i}\right)^{{}^{\prime}}(0)+O\left(|x|^{3} \right)\\ g_{ij}(x)&=\delta_{ij},\end{aligned}\right. \tag{3.4}\] _where \(\beta_{ij}(y):=\sum_{l=2}^{N}\tau_{i}^{l}(y)\tau_{j}^{l}(y).\)_ As a consequence we have the following result. **Lemma 3.2**.: _There exists \(r>0\) only depending on \(\Gamma\) and \(N\), such that for every \(x\in Q_{r}\), we have_ \[\sqrt{|g|}(x)=1+\sum_{i=2}^{N}z_{i}\kappa_{i}(0)+y\sum_{i=2}^{N}z_{i}\kappa_{ i}^{\prime}(0)+\frac{1}{2}\sum_{ij=2}^{N}z_{i}z_{j}\kappa_{i}(0)\kappa_{j}(0)+O \left(|x|^{3}\right), \tag{3.5}\] _where \(|g|\) stands for the determinant of \(g\). Moreover \(g^{-1}(x)\), the matrix inverse of \(g(x)\), has components given by_ \[\left\{\begin{aligned} g^{11}(x)&=1-2\sum_{i=2}^{N}z_{ i}\kappa_{i}(0)-2y\sum_{i=2}^{N}z_{i}\kappa_{i}^{\prime}(0)+3\sum_{ij=2}^{N}z_{i}z_{j} \kappa_{i}(0)\kappa_{j}(0)+O\left(|x|^{3}\right)\\ g^{i1}(x)&=-\sum_{j=2}^{N}z_{j}\tau_{j}^{i}(0)-y\sum_{ j=2}^{N}z_{j}\left(\tau_{j}^{i}\right)^{{}^{\prime}}(0)+2\sum_{j=2}^{N}z_{i}z_{j} \kappa_{l}(0)\tau_{j}^{i}(0)+O\left(|x|^{3}\right)\\ g^{ij}(x)&=\delta_{ij}+\sum_{lm=2}^{N}z_{l}z_{m}\tau_ {l}^{j}(0)\tau_{m}^{i}(0)+O\left(|x|^{3}\right).\end{aligned}\right. \tag{3.6}\] We will also need the following estimates result. **Lemma 3.3**.: _Let \(v\in\mathcal{D}^{1,2}(\mathbb{R}^{N})\), \(N\geq 3,\) satisfy \(v(y,z)=\overline{\theta}(|y|,|z|)\), for some some function \(\overline{\theta}:\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}\). 
Then for \(0<r<R\), we have_ \[\int_{Q_{R}\setminus Q_{r}}|\nabla v|_{g}^{2}\sqrt{|g|}dx =\int_{Q_{R}\setminus Q_{r}}|\nabla v|^{2}dx+\frac{|\kappa(x_{0})|^{2}}{N-1}\int_{Q_{R}\setminus Q_{r}}|z|^{2}\left|\partial_{y}v\right|^{2}dx\] \[+\frac{|\kappa(x_{0})|^{2}}{2(N-1)}\int_{Q_{R}\setminus Q_{r}}|z|^{2}|\nabla v|^{2}dx+O\left(\int_{Q_{R}\setminus Q_{r}}|x|^{3}|\nabla v|^{2}dx\right).\] For the proofs of Lemma 3.1, Lemma 3.2 and Lemma 3.3, we refer to the paper of the author and Fall [22]. See also [35] for a generalization.

## 4. Existence Result in domains

The aim of this section is to prove the following result. **Proposition 4.1**.: _Let \(N\geq 4\), \(0\leq s_{2}<s_{1}<2\) and \(\Omega\) be a bounded domain of \(\mathbb{R}^{N}\). Consider \(\Gamma\) a smooth closed curve contained in \(\Omega\). Let \(h\) be a continuous function such that the linear operator \(-\Delta+h\) is coercive. We assume that_ \[c^{*}<\beta^{*}, \tag{4.1}\] _where \(c^{*}\) is defined in (1.10). Then there exists a positive function \(u\in H^{1}_{0}(\Omega)\) solution of the Euler-Lagrange equation_ \[-\Delta u+hu=\lambda\rho_{\Gamma}^{-s_{1}}u^{2^{*}_{s_{1}}-1}+\rho_{\Gamma}^{-s_{2}}u^{2^{*}_{s_{2}}-1}\qquad\text{in }\Omega. \tag{4.2}\] The proof of Proposition 4.1 is divided into various preliminary results. We start with the following. **Lemma 4.2**.: _For \(N\geq 3\), we let \(\Omega\) be an open subset of \(\mathbb{R}^{N}\) and let \(\Gamma\subset\Omega\) be a smooth closed curve contained in \(\Omega\). Then for every \(r>0\), there exists \(c_{r}>0\), only depending on \(\Omega,\Gamma,N,s_{1},s_{2}\) and \(r\), such that for every \(u\in H^{1}_{0}(\Omega)\)_ \[\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{1}}}\right)\int_{\Omega}|\nabla u|^{2}dx+\left(\frac{1}{2^{*}_{s_{1}}}-\frac{1}{2^{*}_{s_{2}}}\right)\int_{\Omega}\rho_{\Gamma}^{-s_{2}}|u|^{2^{*}_{s_{2}}}dx+c_{r}\int_{\Omega}|u|^{2}dy\geq\beta^{*},\] _where, for \(0\leq s_{2}<s_{1}<2\), \(2^{*}_{s_{1}}=\frac{2(N-s_{1})}{N-2}\) and \(2^{*}_{s_{2}}=\frac{2(N-s_{2})}{N-2}\)._ Proof.: Throughout the proof, \(\sigma\) denotes a generic exponent in \(\{s_{1},s_{2}\}\). We let \(r>0\) small. We can cover a tubular neighborhood of \(\Gamma\) by a finite number of sets \(\left(T_{r}^{y_{i}}\right)_{1\leq i\leq m}\) given by \[T_{r}^{y_{i}}:=F_{y_{i}}\left(Q_{r}\right),\qquad\text{ with }y_{i}\in\Gamma.\] We refer to Section 3 for the parameterization \(F_{y_{i}}:Q_{r}\to\Omega\). Let \(\left(\varphi_{i}\right)_{1\leq i\leq m}\) be a partition of unity subordinated to this covering such that \[\sum_{i=1}^{m}\varphi_{i}=1\qquad\text{and}\qquad|\nabla\varphi_{i}^{\frac{1}{2^{*}_{\sigma}}}|\leq C\qquad\text{ in }U:=\cup_{i=1}^{m}T_{r}^{y_{i}}, \tag{4.3}\] for some positive constant \(C\). We define \[\psi_{i}(y):=\varphi_{i}^{\frac{1}{2^{*}_{\sigma}}}(y)u(y)\qquad\text{ and}\qquad\widetilde{\psi}_{i}(x)=\psi_{i}(F_{y_{i}}(x)). \tag{4.4}\] Then, we have \[\int_{\Omega}\rho_{\Gamma}^{-\sigma}|u|^{2^{*}_{\sigma}}dy\geq\int_{U}\rho_{\Gamma}^{-\sigma}\,|u|^{2^{*}_{\sigma}}\,dy=\sum_{i=1}^{m}\int_{T_{r}^{y_{i}}}\rho_{\Gamma}^{-\sigma}\,|\psi_{i}|^{2^{*}_{\sigma}}dy. \tag{4.5}\] By change of variables and Lemma 3.2, we have \[\int_{T_{r}^{y_{i}}}\rho_{\Gamma}^{-\sigma}|\psi_{i}|^{2^{*}_{\sigma}}dy=\int_{Q_{r}}|z|^{-\sigma}|\widetilde{\psi}_{i}|^{2^{*}_{\sigma}}\sqrt{|g|}(x)dx\geq(1-cr)\int_{Q_{r}}|z|^{-\sigma}|\widetilde{\psi}_{i}|^{2^{*}_{\sigma}}dx, \tag{4.6}\] for some positive constant \(c\).
By (4.5) and (4.6) and the summing over \(i=1,\cdots,m\), we obtain \[\int_{\Omega}\rho_{\Gamma}^{-\sigma}|u|^{2^{*}_{\sigma}}dy\geq(1-cr)\sum_{i=1 }^{m}\int_{Q_{r}}|z|^{-\sigma}|\widetilde{\psi}_{i}|^{2^{*}_{\sigma}}dx=(1-cr) \int_{U}|z|^{-\sigma}|\tilde{u}(x)|^{2^{*}_{\sigma}}dx, \tag{4.7}\] with \(\tilde{u}:=u(F_{y_{i}}(x))\). Next, we have \[\int_{\Omega}|\nabla u|^{2}dx\geq\int_{U}|\nabla u|^{2}dy=\sum_{i}^{m}\int_{T _{r}^{y_{i}}}|\nabla\psi_{i}|^{2}dy. \tag{4.8}\] By change of variables, Lemma 3.2, (4.3) and (4.4), we have \[\int_{T_{r}^{y_{i}}}|\nabla\psi_{i}|^{2}dy =\int_{Q_{r}}|\nabla\widetilde{\psi}_{i}|^{2}\sqrt{|g|}(x)dx\geq(1 -cr)\int_{Q_{r}}|\nabla\widetilde{\psi}_{i}|^{2}dx\] \[\geq\left(1-c^{\prime}r\right)\int_{T_{r}^{y_{i}}}|\nabla(\varphi _{i}^{\frac{1}{2^{*}_{\sigma}}}u)|^{2}dy=\int_{T_{r}^{y_{i}}}\varphi_{i}^{ \frac{2^{*}_{\sigma}}{2^{*}}}|\nabla\tilde{u}|^{2}dy-c_{r}\int_{\Omega}|u|^{2}dy,\] for some positive constants \(c\) and \(c_{r}\). Therefore \[\int_{\Omega}\nabla u_{n}\nabla\varphi dx+\int_{\Omega}hu_{n}\varphi dx-\lambda \int_{\Omega}\rho_{\Gamma}^{-s_{1}}|u_{n}|^{2^{*}_{s_{1}}-2}u_{n}\varphi dx- \frac{1}{2^{*}_{s_{2}}}\int_{\Omega}\rho_{\Gamma}^{-s_{2}}|u_{n}|^{2^{*}_{s_{2 }}-2}u_{n}\varphi dx+o(1), \tag{4.12}\] for all \(\varphi\in H^{1}_{0}(\Omega)\) as \(n\to\infty\). Combining (4.11) and (4.12), we obtain \[\alpha=\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{1}}}\right)\int_{\Omega}|\nabla u _{n}|^{2}dx+\left(\frac{1}{2^{*}_{s_{1}}}-\frac{1}{2^{*}_{s_{2}}}\right)\int_ {\Omega}hu_{n}^{2}dx+\left(\frac{1}{2^{*}_{s_{1}}}-\frac{1}{2^{*}_{s_{2}}} \right)\int_{\Omega}\rho_{\Gamma}^{-s_{2}}|u_{n}|^{2^{*}_{s_{2}}}dx+o(1). \tag{4.13}\] Now we use the fact that \(\frac{1}{2^{*}_{s_{1}}}-\frac{1}{2^{*}_{s_{2}}}\) and \(\frac{1}{2}-\frac{1}{2^{*}_{s_{2}}}\) are positive and the coercivity of the linear operator \(-\Delta+h\), we obtain \[\frac{\alpha}{\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{1}}}\right)}+o(1)\geq\int_ {\Omega}|\nabla u_{n}|^{2}dx+\int_{\Omega}hu_{n}^{2}dx\geq\|u_{n}\|_{H^{1}( \Omega)}.\] Consequently, up to a subsequence, there exists \(u\in H^{1}_{0}(\Omega)\) such that \(u_{n}\) converges weakly to \(u\) in \(H^{1}_{0}(\Omega)\) and strongly to \(L^{p}(\Omega)\) for all \(2\leq p<2^{*}_{0}\). We assume by contradiction that \(u=0\). Therefore, by (4.13), we obatin \[\alpha=\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{1}}}\right)\int_{\Omega}|\nabla u _{n}|^{2}dx+\left(\frac{1}{2^{*}_{s_{1}}}-\frac{1}{2^{*}_{s_{2}}}\right)\int_ {\Omega}\rho_{\Gamma}^{-s_{2}}|u_{n}|^{2^{*}_{s_{2}}}dx+o(1). \tag{4.14}\] Moreover by Lemma 4.2, we get \[\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{1}}}\right)\int_{\Omega}|\nabla u_{n}|^{ 2}dx+\left(\frac{1}{2^{*}_{s_{1}}}-\frac{1}{2^{*}_{s_{2}}}\right)\int_{\Omega }\rho_{\Gamma}^{-s_{2}}|u_{n}|^{2^{*}_{s_{2}}}dx+o(1)\geq\beta^{*}. \tag{4.15}\] Hence by (4.14) and (4.15), we obtain \[\alpha\geq\beta^{*},\] which contradicts the fact that \(\alpha<\beta^{*}\). Then \(u\neq 0\) and \[u_{n}\to u\qquad\text{in }H^{1}_{0}(\Omega).\] This then ends the proof. Next, we will need the following so-called mountain pass lemma due to Ambrosetti and Robinowitz, see [1]. Then we have **Lemma 4.4**.: _(Mountain Pass Lemma) Let \((X,\|\cdot\|_{X})\) be a Banach space and \(\Psi:X\to\mathbb{R}\) a functional of class \(\mathcal{C}^{1}\). Wa assume that_ 1. \(\Psi(0)=0\)_;_ 2. _there exist_ \(\lambda,r>0\) _such that_ \(\Psi(u)\geq\lambda\) _for all_ \(u\in X\)_, we have_ \(\|u\|_{X}=r\)_;_ 3. 
_there exists_ \(u_{0}\in X\) _such that_ \[\limsup_{t\to+\infty}\Psi(tu_{0})<0.\] _Consider \(t_{0}>0\) sufficiently large such that \(\|t_{0}u_{0}\|_{X}>r\) and \(\Psi(t_{0}u_{0})<0\). Define \(\beta\geq\lambda\) as_ \[\beta:=\inf_{\gamma\in\mathcal{P}}\sup_{t\in[0,1]}\Psi(\gamma(t)),\] _where_ \[\mathcal{P}=\{\gamma\in\mathcal{C}^{0}([0,1];X)\text{ such that }\gamma(0)=0\text{ and }\gamma(1)=t_{0}u_{0}\}.\] _Then, there exists a sequence \((u_{n})_{n}\subset X\) such that \(\Psi(u_{n})\to\beta\) and \(\Psi^{\prime}(u_{n})\to 0\) strongly in \(X^{\prime}\). Moreover, we have that_ \[\beta\leq\sup_{t\geq 0}\Psi(tu_{0}).\] **Lemma 4.5**.: _Let \(\Omega\) be a bounded domain of \(\mathbb{R}^{N}\), \(\Gamma\) be a closed curve included in \(\Omega\) and \(h\) be a continuous function such that the linear operator \(-\Delta+h\) is coercive. Let \(u_{0}\in H^{1}_{0}(\Omega)\setminus\{0\}\). Then there exist a positive constant \(c_{0}\) depending on \(u_{0}\) and a Palais-Smale sequence \((u_{n})_{n}\subset H^{1}_{0}(\Omega)\) for \(\Psi\) at level \(c_{0}\). Moreover_ \[c_{0}\leq\sup_{t\geq 0}\Psi(tu_{0}).\] Proof.: We let \(t\in\mathbb{R}\). Recall that for all \(u\in H^{1}_{0}(\Omega)\), we have \[\Psi(tu):=\frac{t^{2}}{2}\int_{\Omega}(|\nabla u|^{2}+hu^{2})dx-\lambda\frac{|t|^{2^{*}_{s_{1}}}}{2^{*}_{s_{1}}}\int_{\Omega}\rho_{\Gamma}^{-s_{1}}|u|^{2^{*}_{s_{1}}}dx-\frac{|t|^{2^{*}_{s_{2}}}}{2^{*}_{s_{2}}}\int_{\Omega}\rho_{\Gamma}^{-s_{2}}|u|^{2^{*}_{s_{2}}}dx.\] Then \(\Psi\in\mathcal{C}^{1}(H^{1}_{0}(\Omega),\mathbb{R})\). Since \(0<s_{2}<s_{1}<2\) and the function \(s\longmapsto 2^{*}_{s}:=\frac{2(N-s)}{N-2}\) is decreasing, we have \[\lim_{t\to\infty}\Psi(tu)=-\infty. \tag{4.16}\] Moreover, using the fact that \(2^{*}_{s_{1}},2^{*}_{s_{2}}>2\), there exist positive numbers \(\lambda,r\) such that \[\inf_{\|u\|=r}\Psi(u)\geq\lambda.\] Therefore, by the Mountain Pass Lemma (Lemma 4.4), we get the desired result. Proof.: **of Proposition 4.1**. Let \(u_{0}\in H^{1}_{0}(\Omega)\) be a non-negative, non-vanishing function such that \[\sup_{t\geq 0}\Psi(tu_{0})<\beta^{*}.\] Then by Lemma 4.5, there exists \(c_{0}>0\) depending on \(u_{0}\) and a Palais-Smale sequence \((u_{n})_{n}\subset H^{1}_{0}(\Omega)\) for \(\Psi\) at level \(c_{0}\) such that \[c_{0}\leq\sup_{t\geq 0}\Psi(tu_{0})<\beta^{*}.\] By Lemma 4.3, there exists \(u\in H^{1}_{0}(\Omega)\setminus\{0\}\) such that, up to a subsequence, \[u_{n}\to u\quad\text{strongly in $H^{1}_{0}(\Omega)$ as $n\to\infty$ and $\Psi^{\prime}(u)=0$}.\] The last equality corresponds exactly to the Euler-Lagrange equation (4.2). This then ends the proof. ## 5.
Existence of solution in domains: Proof of Theorem 1.3 Next, we let \(w\in\mathcal{D}^{1,2}(\mathbb{R}^{N})\) be a positive ground state solution of \[-\Delta w=\lambda|z|^{-s_{1}}w^{2^{*}_{s_{1}}-1}+|z|^{-s_{2}}w^{2^{*}_{s_{2}}-1 }\qquad\text{ in $\mathbb{R}^{N}$} \tag{5.1}\] and \[\beta^{*}=\frac{1}{2}\int_{\mathbb{R}^{N}}|\nabla w|^{2}dx-\frac{\lambda}{2^{* }_{s_{1}}}\int_{\mathbb{R}^{N}}|z|^{-s_{1}}|w|^{2^{*}_{s_{1}}}\,dx-\frac{1}{2^ {*}_{s_{2}}}\int_{\mathbb{R}^{N}}|z|^{-s_{2}}|w|^{2^{*}_{s_{2}}}\,dx.\] In what follows, we define \[A_{N,s_{1},s_{2}}:=\frac{\int_{\mathbb{R}^{N}}|z|^{2}|\partial_{y}w|^{2}dx+ \int_{\mathbb{R}^{N}}|z|^{2}|\nabla w|^{2}dx-\frac{\lambda}{2^{*}_{s_{1}}}\int _{\mathbb{R}^{N}}|z|^{2-s_{1}}|w|^{2^{*}_{s_{1}}}\,dx-\frac{1}{2^{*}_{s_{2}}} \int_{\mathbb{R}^{N}}|z|^{2-s_{2}}|w|^{2^{*}_{s_{2}}}\,dx}{2(N-1)\int_{\mathbb{ R}^{N}}w^{2}dx},\] for \(N\geq 5\) and \(A_{4,s_{1},s_{2}}:=3/2.\) Then we have the following result. **Proposition 5.1**.: _For \(N\geq 4\), we let \(\Omega\) be a bounded domain of \(\mathbb{R}^{N}\). We assume that_ \[A_{s_{1},s_{2}}^{N}|\kappa(y_{0})|^{2}+h(y_{0})<0, \tag{5.2}\] _for some positive constant. Then there exists \(u\in H^{1}_{0}(\Omega)\setminus\{0\}\) such that_ \[c^{*}:=\max_{t\geq 0}\Psi(tu)<\beta^{*}\,.\] Let \(\Omega\) a bounded domain of \(\mathbb{R}^{N}\) and \(\Gamma\subset\Omega\) be a smooth closed curve. We let \(\eta\in\mathcal{C}^{\infty}_{c}\left(F_{y_{0}}\left(Q_{2r}\right)\right)\) be such that \[0\leq\eta\leq 1\qquad\text{ and }\qquad\eta\equiv 1\quad\text{in $Q_{r}$}.\] For \(\varepsilon>0\), we consider the test function \(u_{\varepsilon}:\Omega\to\mathbb{R}\) given by \[u_{\varepsilon}(y):=\varepsilon^{\frac{2-N}{2}}\eta(F_{y_{0}}^{-1}(y))w\left( \varepsilon^{-1}F_{y_{0}}^{-1}(y)\right). \tag{5.3}\] In particular, for every \(x=(t,z)\in\mathbb{R}\times\mathbb{R}^{N-1}\), we have \[u_{\varepsilon}\left(F_{y_{0}}(x)\right):=\varepsilon^{\frac{2-N}{2}}\eta \left(x\right)\theta\left(|t|/\varepsilon,|z|/\varepsilon\right). \tag{5.4}\] It is clear that \(u_{\varepsilon}\in H^{1}_{0}(\Omega).\) Moreover, for \(t\geq 0\), we have \[\Psi(tu_{\varepsilon})=\frac{t^{2}}{2}\int_{\Omega}|\nabla u_{\varepsilon}|^ {2}+h(x)u_{\varepsilon}^{2}dx-\lambda\frac{t^{2^{*}_{s_{1}}}}{2^{*}_{s_{1}}} \int_{\Omega}\rho_{\Gamma}^{-s_{1}}|u_{\varepsilon}|^{2^{*}_{s_{1}}}\,dx- \frac{t^{2^{*}_{s_{2}}}}{2^{*}_{s_{2}}}\int_{\Omega}\rho_{\Gamma}^{-s_{2}}|u _{\varepsilon}|^{2^{*}_{s_{2}}}\,dx. \tag{5.5}\] To simplify the notations, we will write \(F\) in the place of \(F_{y_{0}}\). Recalling (5.3), we write \[u_{\varepsilon}(y)=\varepsilon^{\frac{2-N}{2}}\eta(F^{-1}(y))W_{\varepsilon}( y),\] where \(W_{\varepsilon}(y)=w\left(\frac{F^{-1}(y)}{\varepsilon}\right)\). 
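Before expanding the energy of \(u_{\varepsilon}\), it is worth recording the elementary scaling computation that motivates (5.3) (a side remark added here for clarity, not part of the original proof): setting \(w_{\varepsilon}(x):=\varepsilon^{\frac{2-N}{2}}w(x/\varepsilon)\) and changing variables \(x=\varepsilon\tilde{x}\), one finds
\[\int_{\mathbb{R}^{N}}|\nabla w_{\varepsilon}|^{2}dx=\int_{\mathbb{R}^{N}}|\nabla w|^{2}dx,\qquad\int_{\mathbb{R}^{N}}|z|^{-s_{i}}|w_{\varepsilon}|^{2^{*}_{s_{i}}}dx=\int_{\mathbb{R}^{N}}|z|^{-s_{i}}|w|^{2^{*}_{s_{i}}}dx\quad(i=1,2),\]
since \(\frac{2-N}{2}\,2^{*}_{s_{i}}-s_{i}+N=0\). Hence the cut-off test function \(u_{\varepsilon}\) carries the same leading-order energy as \(w\), and the curvature of \(\Gamma\) and the potential \(h\) only enter at order \(\varepsilon^{2}\), which is precisely what Lemma 5.2 and Lemma 5.3 below quantify.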
**Lemma 5.2**.: _As \(\varepsilon\to 0\), we have_ \[\int_{\Omega} |\nabla u_{\varepsilon}|^{2}dy+\int_{\Omega}h(x)u_{\varepsilon}^{2}(x)dx=\int_{\mathbb{R}^{N}}|\nabla w|^{2}dx+\varepsilon^{2}\frac{|\kappa(y_{0})|^{2}}{N-1}\int_{\mathbb{R}^{N}}|z|^{2}\left|\partial_{t}w\right|^{2}dx\] \[+\varepsilon^{2}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\int_{\mathbb{R}^{N}}|z|^{2}|\nabla w|^{2}dx+\varepsilon^{2}h(y_{0})\int_{\mathbb{R}^{N}}w^{2}(x)dx+O\left(\varepsilon^{N-2}\right)\qquad\text{ for }N\geq 5.\] _For \(N=4\), there exists \(C>0\) such that_ \[\int_{\Omega} |\nabla u_{\varepsilon}|^{2}dy+\int_{\Omega}h(x)u_{\varepsilon}^{2}(x)dx\leq\int_{\mathbb{R}^{N}}|\nabla w|^{2}dx+C\varepsilon^{2}\left(\frac{3}{2}|\kappa(y_{0})|^{2}+h(y_{0})\right)|\ln(\varepsilon)|+O(\varepsilon^{2}).\] Proof.: We have \[|\nabla u_{\varepsilon}|^{2}=\varepsilon^{2-N}\left(W_{\varepsilon}^{2}|\nabla\eta|^{2}+\eta^{2}|\nabla W_{\varepsilon}|^{2}+\frac{1}{2}\nabla W_{\varepsilon}^{2}\cdot\nabla\eta^{2}\right).\] Then integrating by parts, we get \[\int_{\Omega}|\nabla u_{\varepsilon}|^{2}dy =\varepsilon^{2-N}\int_{F(Q_{2r})}\eta^{2}|\nabla W_{\varepsilon}|^{2}dy+\varepsilon^{2-N}\int_{F(Q_{2r})\setminus F(Q_{r})}W_{\varepsilon}^{2}\left(|\nabla\eta|^{2}-\frac{1}{2}\Delta\eta^{2}\right)dy\] \[=\varepsilon^{2-N}\int_{F(Q_{2r})}\eta^{2}|\nabla W_{\varepsilon}|^{2}dy-\varepsilon^{2-N}\int_{F(Q_{2r})\setminus F(Q_{r})}W_{\varepsilon}^{2}\eta\Delta\eta dy\] \[=\varepsilon^{2-N}\int_{F(Q_{2r})}\eta^{2}|\nabla W_{\varepsilon}|^{2}dy+O\left(\varepsilon^{2-N}\int_{F(Q_{2r})\setminus F(Q_{r})}W_{\varepsilon}^{2}dy\right). \tag{5.6}\] By the change of variable \(y=F(\varepsilon x)\) and (5.4), we can apply Lemma 3.3 to get \[\int_{\Omega}|\nabla u_{\varepsilon}|^{2}dy=\int_{Q_{r/\varepsilon}}|\nabla w|_{g_{\varepsilon}}^{2}\sqrt{|g_{\varepsilon}|}dx+O\left(\varepsilon^{2}\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}w^{2}dx+\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}|\nabla w|^{2}dx\right)\] \[=\int_{\mathbb{R}^{N}}|\nabla w|^{2}dx+\varepsilon^{2}\frac{|\kappa(y_{0})|^{2}}{N-1}\int_{Q_{r/\varepsilon}}|z|^{2}\left|\partial_{t}w\right|^{2}dx+\varepsilon^{2}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\int_{Q_{r/\varepsilon}}|z|^{2}|\nabla w|^{2}dx\] \[+O\left(\varepsilon^{3}\int_{Q_{r/\varepsilon}}|x|^{3}|\nabla w|^{2}dx+\varepsilon^{2}\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}|w|^{2}dx+\int_{\mathbb{R}^{N}\setminus Q_{r/\varepsilon}}|\nabla w|^{2}dx+\varepsilon^{2}\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}|z|^{2}|\nabla w|^{2}dx\right).\] By Proposition 2.4, we have, for \(N\geq 4\), that \[\varepsilon^{3}\int_{Q_{r/\varepsilon}}|x|^{3}|\nabla w|^{2}dx+\varepsilon^{2}\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}|w|^{2}dx+\int_{\mathbb{R}^{N}\setminus Q_{r/\varepsilon}}|\nabla w|^{2}dx\] \[+\varepsilon^{2}\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}|z|^{2}|\nabla w|^{2}dx=O(\varepsilon^{N-2})\] and \[\int_{\mathbb{R}^{N}\setminus Q_{r/\varepsilon}}w^{2}dx+\int_{\mathbb{R}^{N}\setminus Q_{r/\varepsilon}}|z|^{2}\left|\partial_{t}w\right|^{2}dx+\int_{\mathbb{R}^{N}\setminus Q_{r/\varepsilon}}|z|^{2}|\nabla w|^{2}dx=O(\varepsilon^{N-4})\qquad\forall N\geq 5.\] Therefore if \(N\geq 5\), we have \[\int_{\Omega}|\nabla u_{\varepsilon}|^{2}dy=\int_{\mathbb{R}^{N}}|\nabla w|^{2}dx+\varepsilon^{2}\frac{|\kappa(y_{0})|^{2}}{N-1}\int_{\mathbb{R}^{N}}|z|^{2}\left|\partial_{t}w\right|^{2}dx\]
\[+\varepsilon^{2}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\int_{\mathbb{R}^{N}}|z|^{2}|\nabla w|^{2}dx+O\left(\varepsilon^{N-2}\right). \tag{5.7}\] For \(N=4\), we have \[\int_{\Omega}|\nabla u_{\varepsilon}|^{2}dy\leq\int_{\mathbb{R}^{N}}|\nabla w|^{2}dx+\varepsilon^{2}\frac{|\kappa(y_{0})|^{2}}{2}\int_{Q_{r/\varepsilon}}|z|^{2}\left|\nabla w\right|^{2}dx+O\left(\varepsilon^{2}\right). \tag{5.8}\] Next, by the change of variable formula \(y=F(\varepsilon x)\), (5.4) and the continuity of the function \(h\), we have \[\int_{\Omega}h(x)u_{\varepsilon}^{2}(x)dx=\varepsilon^{2}h(y_{0})\int_{Q_{r/\varepsilon}}w^{2}(x)dx+\varepsilon^{2}\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}w^{2}(x)dx.\] Using again Proposition 2.3, we get \[\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}w^{2}(x)dx=O\left(\varepsilon^{N-4}\right).\] Moreover for \(N\geq 5\), we have \[\int_{\mathbb{R}^{N}\setminus Q_{r/\varepsilon}}w^{2}(x)dx=O\left(\varepsilon^{N-4}\right).\] Therefore \[\int_{\Omega}h(x)u_{\varepsilon}^{2}(x)dx=\varepsilon^{2}h(y_{0})\int_{\mathbb{R}^{N}}w^{2}(x)dx+o\left(\varepsilon^{2}\right). \tag{5.9}\] If \(N=4\), we have \[\int_{\Omega}h(x)u_{\varepsilon}^{2}(x)dx=\varepsilon^{2}h(y_{0})\int_{Q_{r/\varepsilon}}w^{2}(x)dx+O\left(\varepsilon^{2}\right). \tag{5.10}\] Next, we assume that \(N=4\) and we let \(\eta_{\varepsilon}(x)=\eta(\varepsilon x)\). We multiply (5.1) by \(|z|^{2}\eta_{\varepsilon}w\) and integrate by parts to get \[\lambda\int_{Q_{2r/\varepsilon}}\eta_{\varepsilon}|z|^{2-s_{1}}w^{2^{*}_{s_{1}}}dx+\int_{Q_{2r/\varepsilon}}\eta_{\varepsilon}|z|^{2-s_{2}}w^{2^{*}_{s_{2}}}dx=\int_{Q_{2r/\varepsilon}}\nabla w\cdot\nabla\left(\eta_{\varepsilon}|z|^{2}w\right)dx\] \[=\int_{Q_{2r/\varepsilon}}\eta_{\varepsilon}|z|^{2}|\nabla w|^{2}dx+\frac{1}{2}\int_{Q_{2r/\varepsilon}}\nabla w^{2}\cdot\nabla\left(|z|^{2}\eta_{\varepsilon}\right)dx=\int_{Q_{2r/\varepsilon}}\eta_{\varepsilon}|z|^{2}|\nabla w|^{2}dx-\frac{1}{2}\int_{Q_{2r/\varepsilon}}w^{2}\Delta\left(|z|^{2}\eta_{\varepsilon}\right)dx\] \[=\int_{Q_{2r/\varepsilon}}\eta_{\varepsilon}|z|^{2}|\nabla w|^{2}dx-3\int_{Q_{2r/\varepsilon}}w^{2}\eta_{\varepsilon}dx-\frac{1}{2}\int_{Q_{2r/\varepsilon}\backslash Q_{r/\varepsilon}}w^{2}(|z|^{2}\Delta\eta_{\varepsilon}+4\nabla\eta_{\varepsilon}\cdot z)dx.\] We then deduce that \[\lambda\int_{Q_{2r/\varepsilon}}|z|^{2-s_{1}}w^{2^{*}_{s_{1}}}dx+\int_{Q_{2r/\varepsilon}}|z|^{2-s_{2}}w^{2^{*}_{s_{2}}}dx=\int_{Q_{r/\varepsilon}}|z|^{2}|\nabla w|^{2}dx-(N-1)\int_{Q_{r/\varepsilon}}w^{2}dx\] \[+O\left(\int_{Q_{2r/\varepsilon}\backslash Q_{r/\varepsilon}}|z|^{2-\sigma}w^{2^{*}_{\sigma}}dx+\int_{Q_{2r/\varepsilon}\backslash Q_{r/\varepsilon}}|z|^{2}|\nabla w|^{2}dx+\int_{Q_{2r/\varepsilon}\backslash Q_{r/\varepsilon}}w^{2}dx\right)\] \[+O\left(\varepsilon\int_{Q_{2r/\varepsilon}\backslash Q_{r/\varepsilon}}|z||\nabla w|dx+\varepsilon^{2}\int_{Q_{2r/\varepsilon}\backslash Q_{r/\varepsilon}}|z|^{2}w^{2}dx\right).\] By Proposition 2.3, we have \[\lambda\int_{Q_{2r/\varepsilon}}|z|^{2-s_{1}}w^{2^{*}_{s_{1}}}dx+\int_{Q_{2r/\varepsilon}}|z|^{2-s_{2}}w^{2^{*}_{s_{2}}}dx=O(1)\] and \[\int_{Q_{2r/\varepsilon}\backslash Q_{r/\varepsilon}}|z|^{2-\sigma}w^{2^{*}_{\sigma}}dx+\int_{Q_{2r/\varepsilon}\backslash Q_{r/\varepsilon}}|z|^{2}|\nabla w|^{2}dx+\int_{Q_{2r/\varepsilon}\backslash Q_{r/\varepsilon}}w^{2}dx\] \[+\varepsilon\int_{Q_{2r/\varepsilon}\backslash Q_{r/\varepsilon}}|z||\nabla w|dx+\varepsilon^{2}\int_{Q_{2r/\varepsilon}\backslash Q_{r/\varepsilon}}|z|^{2}w^{2}dx=O(\varepsilon^{2}).\] Therefore \[\int_{Q_{r/\varepsilon}}|z|^{2}|\nabla w|^{2}dx=3\int_{Q_{r/\varepsilon}}w^{2}dx+O(1).
\tag{5.11}\]
To finish, we use Proposition 2.3 to get
\[\int_{Q_{r/\varepsilon}}w^{2}dx\leq C\int_{Q_{r/\varepsilon}}\frac{dx}{1+|x|^{4}}=C|S^{3}|\int_{0}^{r/\varepsilon}\frac{t^{3}dt}{1+t^{4}}\leq C(1+|\ln(\varepsilon)|), \tag{5.12}\]
where \(C\) is a positive constant that may change from one inequality to another. Thus the result follows immediately from (5.7), (5.8), (5.9), (5.10), (5.11) and (5.12). This then ends the proof.

**Lemma 5.3**.: _Let \(s\in(0,2)\). Then we have_
\[\int_{\Omega}\rho_{\Gamma}^{-s}|u_{\varepsilon}|^{2^{*}_{s}}dx=\int_{\mathbb{R}^{N}}|z|^{-s}w^{2^{*}_{s}}dx+\varepsilon^{2}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\int_{\mathbb{R}^{N}}|z|^{2-s}w^{2^{*}_{s}}dx+O\left(\varepsilon^{N-s}\right).\]
Proof.: Let \(s\in[0,2)\). Then by the change of variable \(y=\frac{F(x)}{\varepsilon}\), (3.1) and (3.5), we get
\[\int_{\Omega}\rho_{\Gamma}^{-s}|u_{\varepsilon}|^{2^{*}_{s}}dy=\int_{Q_{r/\varepsilon}}|z|^{-s}w^{2^{*}_{s}}\sqrt{|g_{\varepsilon}|}dx+O\left(\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}|z|^{-s}(\eta(\varepsilon x)w)^{2^{*}_{s}}dx\right)\\
=\int_{Q_{r/\varepsilon}}|z|^{-s}w^{2^{*}_{s}}dx+\varepsilon^{2}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\int_{Q_{r/\varepsilon}}|z|^{2-s}w^{2^{*}_{s}}dx\\
+O\left(\varepsilon^{3}\int_{Q_{r/\varepsilon}}|x|^{3}|z|^{-s}w^{2^{*}_{s}}dx+\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}|z|^{-s}w^{2^{*}_{s}}dx\right)\\
=\int_{\mathbb{R}^{N}}|z|^{-s}w^{2^{*}_{s}}dx+\varepsilon^{2}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\int_{Q_{r/\varepsilon}}|z|^{2-s}w^{2^{*}_{s}}dx\\
+O\left(\varepsilon^{3}\int_{Q_{r/\varepsilon}}|x|^{3}|z|^{-s}w^{2^{*}_{s}}dx+\int_{\mathbb{R}^{N}\setminus Q_{r/\varepsilon}}|z|^{-s}w^{2^{*}_{s}}dx+\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}|z|^{-s}w^{2^{*}_{s}}dx\right).\]
By Proposition 2.3, we have
\[\varepsilon^{3}\int_{Q_{r/\varepsilon}}|x|^{3}|z|^{-s}w^{2^{*}_{s}}dx+\int_{\mathbb{R}^{N}\setminus Q_{r/\varepsilon}}|z|^{-s}w^{2^{*}_{s}}dx+\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}|z|^{-s}w^{2^{*}_{s}}dx=O\left(\varepsilon^{N-s}\right)\]
and
\[\int_{\mathbb{R}^{N}\setminus Q_{r/\varepsilon}}|z|^{2-s}w^{2^{*}_{s}}dx=O\left(\varepsilon^{N-2-s}\right)\qquad\forall N\geq 4. \tag{5.13}\]
Therefore
\[\int_{\Omega}\rho_{\Gamma}^{-s}|u_{\varepsilon}|^{2^{*}_{s}}dx=\int_{\mathbb{R}^{N}}|z|^{-s}w^{2^{*}_{s}}dx+\varepsilon^{2}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\int_{\mathbb{R}^{N}}|z|^{2-s}w^{2^{*}_{s}}dx+O\left(\varepsilon^{N-s}\right), \tag{5.14}\]
as \(\varepsilon\to 0\). This then ends the proof.

Now we are in position to prove Proposition 5.1.
Proof.: **of Proposition 5.1** Recall that, for all \(t\geq 0\) and all \(u\in H^{1}_{0}(\Omega)\), we have
\[\Psi(tu):=\frac{t^{2}}{2}\int_{\Omega}|\nabla u|^{2}dx+\frac{1}{2}\int_{\Omega}h(x)u^{2}dx-t^{2^{*}_{s_{1}}}\frac{\lambda}{2^{*}_{s_{1}}}\int_{\Omega}\frac{|u|^{2^{*}_{s_{1}}}}{\rho_{\Gamma}^{s_{1}}(x)}dx-t^{2^{*}_{s_{2}}}\frac{1}{2^{*}_{s_{2}}}\int_{\Omega}\frac{|u|^{2^{*}_{s_{2}}}}{\rho_{\Gamma}^{s_{2}}(x)}dx.\]
Then by Lemma 5.2 and Lemma 5.3, we have
\[J\left(tu_{\varepsilon}\right)=\Psi(tw)+\varepsilon^{2}t^{2}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\left(\int_{Q_{r/\varepsilon}}|z|^{2}\left|\partial_{t}w\right|^{2}dx+\int_{Q_{r/\varepsilon}}|z|^{2}|\nabla w|^{2}dx\right)\]
\[+\varepsilon^{2}t^{2}h(y_{0})\int_{Q_{r/\varepsilon}}w^{2}dx+\varepsilon^{2}\lambda\frac{t^{2^{*}_{s_{1}}}}{2^{*}_{s_{1}}}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\int_{Q_{r/\varepsilon}}|z|^{2-s_{1}}w^{2^{*}_{s_{1}}}dx\]
\[+\varepsilon^{2}\frac{t^{2^{*}_{s_{2}}}}{2^{*}_{s_{2}}}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\int_{Q_{r/\varepsilon}}|z|^{2-s_{2}}w^{2^{*}_{s_{2}}}dx+O\left(\varepsilon^{N-2}\right)\qquad\text{for }N\geq 5.\]
For \(N=4\), there exists \(C>0\) such that
\[J(tu_{\varepsilon})\leq\Psi(tw)+C\varepsilon^{2}t^{2}\left(\frac{3}{2}|\kappa(y_{0})|^{2}+h(y_{0})\right)|\ln(\varepsilon)|+O(\varepsilon^{2}).\]
Since \(2^{*}_{s_{2}}>2^{*}_{s_{1}}\), the map \(t\mapsto J(tu_{\varepsilon})\) has a unique maximum, and we have
\[\max_{t\geq 0}\Psi(tw)=\Psi(w)=\beta^{*}.\]
Therefore, the maximum of \(J(tu_{\varepsilon})\) occurs at \(t_{\varepsilon}:=1+o_{\varepsilon}(1)\). Next, we set
\[\mathcal{G}(tw):=\varepsilon^{2}t^{2}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\left(\int_{\mathbb{R}^{N}}|z|^{2}\left|\partial_{t}w\right|^{2}dx+\int_{\mathbb{R}^{N}}|z|^{2}|\nabla w|^{2}dx\right)\]
\[+\varepsilon^{2}t^{2}h(y_{0})\int_{\mathbb{R}^{N}}w^{2}dx+\varepsilon^{2}\lambda\frac{t^{2^{*}_{s_{1}}}}{2^{*}_{s_{1}}}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\int_{\mathbb{R}^{N}}|z|^{2-s_{1}}w^{2^{*}_{s_{1}}}dx\]
\[+\varepsilon^{2}\frac{t^{2^{*}_{s_{2}}}}{2^{*}_{s_{2}}}\frac{|\kappa(y_{0})|^{2}}{2(N-1)}\int_{\mathbb{R}^{N}}|z|^{2-s_{2}}w^{2^{*}_{s_{2}}}dx+o(\varepsilon^{2})\quad\text{ for }N\geq 5,\]
and
\[\mathcal{G}(tw)=C\varepsilon^{2}|\ln(\varepsilon)|t^{2}\left(\frac{3}{2}|\kappa(y_{0})|^{2}+h(y_{0})\right)+O(\varepsilon^{2})\qquad\text{ for }N=4.\]
Thanks to assumption (5.2), we have
\[\mathcal{G}(w)<0.\]
Therefore
\[\max_{t\geq 0}J(tu_{\varepsilon}):=J(t_{\varepsilon}u_{\varepsilon})\leq\Psi(t_{\varepsilon}w)+\mathcal{G}(t_{\varepsilon}w)<\Psi(t_{\varepsilon}w)\leq\Psi(w)=\beta^{*}.\]
We thus get the desired result.

Proof.: **of Theorem 1.3** The proof of Theorem 1.3 is a direct consequence of Proposition 4.1 and Proposition 5.1.
2309.08152
DA-RAW: Domain Adaptive Object Detection for Real-World Adverse Weather Conditions
Despite the success of deep learning-based object detection methods in recent years, it is still challenging to make the object detector reliable in adverse weather conditions such as rain and snow. For the robust performance of object detectors, unsupervised domain adaptation has been utilized to adapt the detection network trained on clear weather images to adverse weather images. While previous methods do not explicitly address weather corruption during adaptation, the domain gap between clear and adverse weather can be decomposed into two factors with distinct characteristics: a style gap and a weather gap. In this paper, we present an unsupervised domain adaptation framework for object detection that can more effectively adapt to real-world environments with adverse weather conditions by addressing these two gaps separately. Our method resolves the style gap by concentrating on style-related information of high-level features using an attention module. Using self-supervised contrastive learning, our framework then reduces the weather gap and acquires instance features that are robust to weather corruption. Extensive experiments demonstrate that our method outperforms other methods for object detection in adverse weather conditions.
Minsik Jeon, Junwon Seo, Jihong Min
2023-09-15T04:37:28Z
http://arxiv.org/abs/2309.08152v2
# DA-RAW: Domain Adaptive Object Detection for Real-World ###### Abstract Despite the success of deep learning-based object detection methods in recent years, it is still challenging to make the object detector reliable in adverse weather conditions such as rain and snow. For the robust performance of object detectors, unsupervised domain adaptation has been utilized to adapt the detection network trained on clear weather images to adverse weather images. While previous methods do not explicitly address weather corruption during adaptation, the domain gap between clear and adverse weather can be decomposed into two factors with distinct characteristics: a style gap and a weather gap. In this paper, we present an unsupervised domain adaptation framework for object detection that can more effectively adapt to real-world environments with adverse weather conditions by addressing these two gaps separately. Our method resolves the style gap by concentrating on style-related information of high-level features using an attention module. Using self-supervised contrastive learning, our framework then reduces the weather gap and acquires instance features that are robust to weather corruption. Extensive experiments demonstrate that our method outperforms other methods for object detection in adverse weather conditions. ## I Introduction Object detection plays a crucial role in enabling machines, such as autonomous vehicles and surveillance systems, to perceive and comprehend their surrounding environment. While deep learning has significantly improved object detection capabilities, ensuring the accuracy of these systems under adverse weather conditions like rain and snow remains an ongoing challenge. To ensure the detector's dependability, it is necessary to develop a learning method that can adapt object detectors to adverse weather conditions. Due to the laborious process of obtaining labeled data for real-world adverse weather conditions, various methods have utilized synthetic datasets to improve detection performance. By generating synthetic weather effects on clear weather images without degradation, fully annotated images of adverse weather are obtained. These images are utilized to train the robust model in a supervised manner [1, 2, 3], or the removal network can be trained to restore a clear image from adverse weather images [4, 5, 6, 7]. However, prior knowledge of weather conditions cannot effectively capture the intricate characteristics of adverse weather conditions in the real world, which have diverse and complex effects on images. Therefore, relying on synthetic datasets does not significantly enhance the model's performance when applied to real-world environments. Recent works also suggest that separately trained removal networks do not help downstream tasks [8, 9], implying the need for methods that improve the performance of downstream tasks in adverse weather conditions. Recent studies have focused on Unsupervised Domain Adaptation (UDA) to enhance the robustness of object detectors in adverse weather conditions [10, 11, 12, 13, 14, 15, 16, 17]. These methods adapt the model trained in the source domain of clear weather to the target domain of adverse weather by considering adverse weather as a factor contributing to the domain gap [18, 19]. Without requiring the ground truth labels of target domain images, most UDA methods align the feature distributions of the two domains globally in an adversarial manner [10, 11, 12, 13, 20]. 
While most UDA methods regard the domain gap between clear and adverse weather data similarly to conventional domain adaptation settings, the gap can be broken down into two distinct factors: the _style gap_ and the _weather gap_[21]. Style gaps are caused by variations in the operating environment (e.g., background, color, texture), whereas weather gaps result from weather-induced corruption (e.g., rain stains, snowflakes). Unlike style gaps, which are caused by global and semantic factors, weather corruption produces arbitrary Fig. 1: We propose a novel unsupervised domain adaptation method capable of adapting an object detector from clear weather to real-world adverse weather conditions with a significant domain gap. This gap can be divided into two distinct factors: the _Style Gap_ and the _Weather Gap_. The _Style Gap_ stems from environmental changes such as the image’s background or color, whereas the _Weather Gap_ is caused by weather corruptions like rain stains, which introduce random and localized image degradation. Due to the distinct characteristics of the two gaps, we employ separate modules to address each of them independently. and localized image degradation that is hard to characterize using prior knowledge [9]. Existing UDA methods consider weather corruption as part of the image's style and align the source and target distributions globally. As some features are arbitrarily and severely distorted by weather corruption, these methods frequently lead to suboptimal alignment under adverse weather conditions. Consequently, they are only effective on synthetic datasets with minor domain gaps [10], whereas their performance degrades when applied to real-world datasets with a large style gap and complex weather corruption [21]. Separately addressing the two aspects of the domain gap improves domain alignment and enables robust object detection in real-world adverse weather conditions. In this paper, we propose an unsupervised domain adaptation method to enhance the robustness of object detection in real-world adverse weather conditions. Specifically, we resolve the style and weather gaps separately to achieve optimal feature alignment. To bridge the style gap, our method aligns high-level style-related features using an attention module. Moreover, self-supervised contrastive learning is employed to resolve the weather gap. Based on the assumption that each instance consists of an object and random weather corruption, our model encourages the similarity between instance features within the same class, resulting in a robust representation against corruption. To demonstrate the efficacy of our method in a variety of real-world scenarios, we collect actual driving data in a wide range of environments and weather conditions. Through extensive experiments, we demonstrate that our method effectively adapts to various real-world datasets. ## II Related Works ### _Object Detection in Adverse Weather Condition_ For the reliable perception of environments, numerous methods attempt to train robust object detectors under adverse weather [22, 23, 18, 24]. The most intuitive approach is to utilize annotated datasets of adverse weather conditions [9, 25, 26]. Due to the difficulty of acquiring labeled data for real-world adverse weather, synthetic weather effects are generated on labeled clear images using prior knowledge of image formulation under adverse weather condition [1, 2, 3]. The object detector is then trained in a supervised manner using this synthetic dataset. 
Other methods train removal networks to restore clear images from adverse weather images using paired data of clear and synthetic weather images with the same background [27, 4, 28], or using unpaired data [29, 7, 30]. To acquire more realistic synthetic data, some methods jointly train a synthetic data generation model and its removal network [5, 6]. In real-world environments, however, the efficacy of methods that utilize synthetic data decreases due to the complexity and diversity of real-world weather corruptions [9]. In addition, removal networks are computationally intensive to be attached to the front of the detection network, and they are trained independently to downstream tasks, which provides insufficient performance improvement for these tasks on real-world images [8, 9]. While some methods have attempted to jointly train the removal network and downstream tasks [31, 32, 33, 34], they still rely on synthetic data or impose a computational burden. ### _Unsupervised Domain Adaptation for Object Detection_ Unsupervised domain adaptation can be used to directly adapt a detector trained on the source domain of clear weather to the target domain of adverse weather [14, 15, 16, 35, 36]. Most UDA methods jointly train a domain classifier and a detector so that the classifier distinguishes between the source and target features, while the detector is optimized to confuse the classifier and align the feature distributions globally [10, 37]. Alignment can be performed on image-level features from various backbones, such as ResNet [11] or Feature Pyramid Network (FPN) [20]. In addition, aligning instance-level features extracted from Region-of-Interest (RoI) can improve domain alignment [38, 39, 13]. Diverging from conventional UDA approaches, some methods employ mathematical formulations of adverse weather conditions to enhance the alignments of features [12, 17]. However, these methods consider the weather corruption as a part of an image's style and do not distinguish between the style and the weather gap [40, 41], despite their distinct characteristics. This oversight results in suboptimal adaptation performance in real adverse weather conditions with both significant style and weather gaps [21]. ## III Methods ### _Preliminaries_ Given labeled images of clear weather conditions from the source domain \(\mathcal{S}\) and unlabeled images of adverse weather conditions from the target domain \(\mathcal{T}\), each minibatch consists of the same number of source and target data. Note that the source and target data are distinct in terms of both weather conditions and the surrounding environment. We utilize the Faster R-CNN [42] pipeline with an FPN [43] backbone during training. The FPN backbone employs pyramid architecture to generate multi-scale feature maps \((P_{2},P_{3},P_{4},P_{5})\) from an image, allowing the efficient detection of objects of varying scales. The Region Proposal Network (RPN) proposes RoI on these features and extracts instance features from each RoI, following the Region Classification Network (RCN) which makes final class and bounding box predictions. The supervised loss \(\mathcal{L}_{\text{sup}}\) obtained from RPN and RCN is applied only to the source data. The overall architecture of our method is depicted in Fig 2. We aim to train a robust object detector that performs well in real-world environments with adverse weather. 
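For concreteness, the multi-scale backbone features that the two alignment modules described next operate on can be inspected with a few lines of code. The sketch below is ours, not the authors' implementation: it uses torchvision's Faster R-CNN with a ResNet-50 FPN backbone as a stand-in for the detection pipeline, and the number of classes and input resolution are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): grab the FPN pyramid features that the
# style/weather alignment modules operate on, using torchvision's detector.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# num_classes is an assumption (e.g., 8 Cityscapes categories + background)
model = fasterrcnn_resnet50_fpn(num_classes=9)
model.eval()

images = torch.randn(2, 3, 800, 1333)       # dummy batch standing in for source/target images
with torch.no_grad():
    feats = model.backbone(images)          # OrderedDict of pyramid feature maps
for name, f in feats.items():
    print(name, tuple(f.shape))             # keys '0'-'3' and 'pool', roughly the P2-P5 levels
```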
Based on the architecture of the FPN-based Faster RCNN framework, we propose two components for domain adaptation to handle both style and weather gaps. First, an image-level style alignment is used to reduce the style gap through adversarial training. An instance-level weather alignment is then utilized to reduce the weather gap and learn the corruption-invariant features. The entire model is trained simultaneously in an end-to-end manner. ### _Image-level Style Alignment_ The distinct styles of the source and target domains' environments result in different feature distributions. This style gap between domains is resolved through the alignment of image-level features. Similar to [20], a domain classifier is attached to each layer of the FPN backbone to distinguish between domains. The backbone is then trained to generate the domain-invariant feature by confusing the domain classifier through adversarial training. These objectives are accomplished in a single backpropagation step by a Gradient Reversal Layer (GRL) with a weight coefficient \(\lambda\). We intend to perform image-level feature adaptation by focusing solely on the style properties of images. However, some features are severely degraded due to weather corruption, making it difficult to concentrate on the style differences. To enable the network to emphasize style-related features during alignment, the Convolutional Block Attention Module (CBAM) [44] is employed to emphasize features essential for domain alignment. CBAM is attached to each feature map and applies channel and spatial attention modules to acquire refined features \(\mathbf{x}^{p}\) at feature level \(p\). The attended feature is then fed into the discriminator \(\mathcal{D}_{p}\) that predicts the domain of a feature, leading it to align features through the GRL by concentrating on essential information. Since low-level features with fine-grained details are more susceptible to weather corruption, only high-level features are used for alignment. Therefore, image-level style alignment is performed on the \(P_{4}\) and \(P_{5}\) layers of the FPN backbone. The loss for image-level alignment, \(\mathcal{L}_{\text{img}}\), is given by the following equation: \[\begin{split}\mathcal{L}_{\text{img}}=-\sum_{p}\sum_{\mathbf{x} _{i}^{p}}&\left[y_{i}\log\mathcal{D}_{p}\left(\mathbf{x}_{i}^{p} \right)\right.\\ &\left.+\left(1-y_{i}\right)\log\left(1-\mathcal{D}_{p}\left( \mathbf{x}_{i}^{p}\right)\right)\right],\end{split} \tag{1}\] where \(p\in\{P_{4},P_{5}\}\) represents feature level, \(\mathbf{x}_{i}^{p}\) represents each feature of level \(p\) at location \(i\) after CBAM layers, and \(y_{i}\in\{0,1\}\) indicates the domain label of each feature at location \(i\). ### _Instance-level Weather Alignment_ The weather corruption has a local effect on the image and substantially degrades the instance features that are essential for object detection. We aim to obtain corruption-invariant features and reduce the weather gap through prototype-based contrastive learning. In particular, we assume that an instance feature of the target domain image is composed of an object and an arbitrary pattern of weather corruption. Then, the similarity between instance features of object proposals within the same category is encouraged, resulting in instance features invariant to weather corruption. 
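The gradient-reversal mechanism and per-location domain classifier described above can be sketched compactly. The following is an illustrative PyTorch re-implementation by us, not the paper's code: only \(\lambda=0.01\) and the per-location binary cross-entropy of Eq. (1) follow the text, while module widths and the loss wiring are assumptions.

```python
# Hedged sketch of the image-level style alignment: a gradient reversal layer
# (GRL) with coefficient lambda, followed by a per-location domain classifier
# D_p trained with the binary cross-entropy of Eq. (1).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        # reverse (and scale) gradients flowing back into the backbone/CBAM
        return -ctx.lam * grad_out, None

def grad_reverse(x, lam=0.01):
    return GradReverse.apply(x, lam)

class DomainClassifier(nn.Module):
    """Predicts, per spatial location, whether a feature comes from the
    source (label 0) or target (label 1) domain."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, 1),
        )

    def forward(self, feat, domain_label, lam=0.01):
        logits = self.net(grad_reverse(feat, lam))            # (B, 1, H, W)
        target = torch.full_like(logits, float(domain_label))
        return F.binary_cross_entropy_with_logits(logits, target)

# usage sketch: L_img accumulates over the attended P4/P5 features of both domains,
# e.g. loss = clf_p4(p4_src_attended, 0) + clf_p4(p4_tgt_attended, 1) + ...
```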
From the source and target domain training images, instance features are obtained, and each of them is pseudo-labeled as class \(\hat{c_{i}}\) using the classwise score of each instance provided by the RCN. To be utilized for contrastive learning, instance features are forwarded to an MLP head to generate instance embeddings \(\mathbf{Z}=\{\mathbf{z}_{i}\}_{i=1}^{N}\) with dimension \(D\), where \(N\) is the total number of instances. Note that MLPs exist independently for each level of features without sharing weights, and only instance features with scores over a threshold \(\delta\) from the low-level image feature are utilized to align fine-grained features. To maximize the similarity between instance embeddings with the same pseudo-labels, prototype-anchored metric learning is used to design the contrastive loss [45]. Using learnable prototypes as representatives of each class, each instance embedding is assigned to prototypes, and a network is learned to increase their similarity. \(K\) learnable prototypes are used for each class \(c\) as \(\mathbf{P}_{c}\in\mathbb{R}^{D\times K}\) to account for the intra-class variation of instance features, and each prototype Fig. 2: Overall pipeline of the proposed method. Faster-RCNN with an FPN backbone is adopted for a detection network. _Image-level style alignment_ reduces the style gap by aligning the FPN’s high-level features. During alignment, they focus on style-related features by incorporating CBAM and highlighting important spatial and channel details. _Instance-level weather alignment_ uses instance embedding and its corresponding pseudo-label from RCN to establish a soft assignment for each feature to learnable class prototypes. Using multi-prototype-based contrastive learning, it resolves the weather gap and constructs a weather-resistant feature representation by increasing the similarity between an instance embedding and its assigned prototypes. \(\mathbf{p}_{c,k}\in\mathbb{R}^{D}\) serves as the \(k^{th}\) cluster center of a class \(c\). Also, to further boost the performance of contrastive learning, instance embeddings with the background class also adopt the same number of learnable prototypes, which can be used as negative samples for other instances. Each instance with a pseudo-label \(c\) is assigned to prototypes of the same class by computing the soft assignment matrix for class \(c\), \(\mathbf{L}_{c}\in\mathbb{R}_{+}^{K\times n_{c}}\). The soft assignment matrix satisfies the condition that the sum of soft assignment probabilities for each instance is one, _i.e._, \(\mathbf{L}_{c}^{\top}\cdot\mathbf{1}^{K}=\mathbf{1}^{n_{c}}\), where \(\mathbf{1}^{K}\) and \(\mathbf{1}^{n_{c}}\) denotes the vector of all ones with dimensions \(K\) and \(n_{c}\), respectively. The assignment matrix can be obtained by maximizing the similarity between instance embeddings and the class prototypes, \(\mathbf{Q}_{c}=\mathbf{P}_{c}^{\top}\mathbf{Z}_{c}\in\mathbb{R}^{K\times n_{c}}\), where \(\mathbf{Z}_{c}\in\mathbb{R}^{D\times n_{c}}\) and \(n_{c}\) represent instance embeddings and the number of instances pseudo-labeled as class \(c\), respectively. To avoid a trivial solution in which all the instance embeddings are assigned to a single prototype, an equipartition constraint is added to ensure that instances are equally distributed among prototypes within a class. 
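In practice, this equipartition-constrained assignment is obtained with a few Sinkhorn-Knopp scalings of \(\exp(\mathbf{Q}_{c}/\kappa)\), as formalized in Eqs. (2)-(3) just below. A small illustrative sketch (ours; \(K=5\), \(\kappa=0.05\) and \(D=128\) are taken from the implementation details, everything else is dummy data):

```python
# Hedged sketch of the soft-assignment step: columns (instances) sum to 1,
# rows (prototypes) sum to n_c / K, enforced by Sinkhorn-Knopp scalings.
import torch

def sinkhorn_assign(Q, kappa=0.05, n_iters=3):
    """Q: (K, n_c) similarity matrix P_c^T Z_c. Returns soft assignments L_c."""
    K, n_c = Q.shape
    L = torch.exp(Q / kappa)
    for _ in range(n_iters):
        L = L * (n_c / K) / L.sum(dim=1, keepdim=True)   # equipartition over prototypes
        L = L / L.sum(dim=0, keepdim=True)               # each instance's assignment sums to 1
    return L

# toy usage with random embeddings/prototypes
D, K, n_c = 128, 5, 32
Z = torch.nn.functional.normalize(torch.randn(D, n_c), dim=0)
P = torch.nn.functional.normalize(torch.randn(D, K), dim=0)
L = sinkhorn_assign(P.t() @ Z)
print(L.sum(0))   # = 1 per instance
print(L.sum(1))   # ~ n_c / K per prototype (up to the final column scaling)
```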
By adding the entropy regularization term [46] with a parameter \(\kappa\) that controls the smoothness of assignment, the objective for obtaining the assignment matrix for class \(c\) is as follows: \[\max_{\mathbf{L}_{c}}\text{Tr}\left(\mathbf{L}_{c}^{\top}\mathbf{Q}_{c} \right)+\kappa\ \mathcal{H}(\mathbf{L}_{c}),\quad\textit{s.t.}\quad\mathbf{L}_{c}\cdot \mathbf{1}^{n_{c}}=\frac{n_{c}}{K}\cdot\mathbf{1}^{K}, \tag{2}\] which turns into an optimal transport problem. The solution can be computed by a few iterations of the _Sinkhorn-Knopp_ algorithm [46], which outputs the re-normalization vectors \(\mathbf{u}\in\mathbb{R}^{K}\) and \(\mathbf{v}\in\mathbb{R}^{n_{c}}\): \[\mathbf{L}_{c}=\mathrm{diag}(\mathbf{u})\exp\left(\frac{\mathbf{Q}_{c}}{ \kappa}\right)\mathrm{diag}(\mathbf{v}). \tag{3}\] After obtaining the soft assignment matrix, the network is trained so that similarities between prototypes and instance embeddings correspond to the soft assignment matrix. The prototypes and instance embeddings are simultaneously optimized by minimizing the following cross-entropy loss between the similarity and assignment matrix: \[\mathcal{L}_{inst}=-\frac{1}{N\cdot K}\sum_{i=1}^{N}\sum_{j=1}^{K}\mathbf{L}_{ c_{i}}^{i,j}\cdot\log\frac{\exp\left(\mathbf{z}_{i}\cdot\mathbf{p}_{\hat{c},j}/ \tau\right)}{\sum_{c}^{C}\sum_{k}^{K}\exp\left(\mathbf{z}_{i}\cdot\mathbf{p}_{ c,k}/\tau\right)}, \tag{4}\] where \(\hat{c}_{i}\) is the pseudo-label for the \(i^{th}\) instance, and \(\mathbf{L}_{c_{i}}^{i,j}\) is a soft assignment of the instance to the \(j^{th}\) prototype of the pseudo-labeled class. Also, \(\tau\) is a temperature parameter, and \(C\) denotes the number of classes, including the background class. For each instance \(\mathbf{z}_{i}\), minimizing \(\mathcal{L}_{inst}\) increases its similarity with assigned prototypes \(\mathbf{p}_{\hat{c}_{i},j}\), and decreases its similarity with all the others. Note that the loss is computed on both the source and target domain features using the same prototypes to reduce the domain gap. As a result, instance embeddings are grouped around their assigned prototypes. This produces corruption-resistant instance features by promoting instance embeddings with similar semantics and variable weather corruption to become closer. The final objective of our method is as follows: \[\mathcal{L}=\mathcal{L}_{sup}+\alpha\mathcal{L}_{img}+\beta\mathcal{L}_{inst}. \tag{5}\] ## IV Experiments In this section, we validate that our unsupervised domain adaptation method can effectively enhance object detection performance in real-world environments with adverse weather. Using publicly available datasets and our own datasets, the results of our method are quantitatively and qualitatively compared to those of other methods for object detection under adverse weather conditions. In addition, ablation studies are conducted to assess the validity of each component of our methodology. ### _Datasets_ Our source domain dataset is _Cityscapes_[48], which consists of real-world urban driving images captured under clear weather conditions. For the target domain dataset, multiple datasets are used to validate the efficacy of our method in various environments with adverse weather conditions. Using two synthetic weather datasets, we investigate the efficacy of other methods employing synthetic data or UDA in synthetic weather contexts. 
On clear images of the _Cityscapes_, the _Rain Rendering_[1] generates synthetic rain images with a physical particle simulator for each, and _RainCityscapes_[47] generates rain and fog effects subject to scene depth. To validate the efficacy of methods in real-world environments, _BDD 100K_[26], a real-world driving dataset captured in various weather conditions, is employed. The rainy and snowy subsets are used as our target domain to evaluate the efficacy of methods under diverse weather conditions. To further validate our model across a wider range of environments, we collect _Our Dataset_ in adverse weather conditions using our platform, which is equipped with an external vehicle RGB camera [49, 50]. In comparison to the _BDD 100K_, which uses a camera mounted inside the windshield of the vehicle, our dataset utilizes an external camera that is consistent with the source domain dataset. In addition, raindrops and snowfalls on the lens result in much more severe blurring of the images. Furthermore, unlike other datasets primarily focusing on urban scenes, our datasets include data collected from rural and mountainous environments, providing a broader range of background environments and significant domain gaps. Our dataset is divided into _Rainy_ and _Snowy_ subsets. The rainy subset consists of 2845 training images and 677 validation images, and the snowy subset consists of 1656 training images and 598 validation images. Our dataset comprises three classes: _person_, _car_, and _motorcycle_. For alignment with _Cityscapes_ dataset categories, _Cityscapes_ classes are assigned to our dataset during the experiment as follows: 1) _person_, _rider_ to _person_, 2) _car_, _truck_, _bus_ to _car_, and 3) _motorcycle_, _bicycle_ to _motorcycle_. ### _Experimental Setup_ **Implementation Details.** We adopt Faster R-CNN with ImageNet-pretrained ResNet-50 [51] and FPN as our object detection network. Initially, the model is trained with only the \(\mathcal{L}_{\text{sup}}\) using source data in order to obtain pseudo-labels, and \(\mathcal{L}_{\text{img}}\) and \(\mathcal{L}_{\text{inst}}\) are applied after 7.5k iterations with \(\alpha=1.0\) \(\beta=1.0\). The image-level style alignment is performed on the FPN features at levels \(P_{4}\) and \(P_{5}\), with \(\lambda\) of GRL set as \(0.01\). The instance-level weather alignment is performed on features at levels \(P_{2}\) and \(P_{3}\). We execute three iterations of the Sinkhorn-Knopp algorithm with smoothness parameter \(\kappa=0.05\). The number of prototypes \(K\) for each class is set to \(5\), with other hyperparameters empirically set to \(\tau=0.05\), \(\delta=0.8\), \(D=128\). During training, stochastic gradient descent (SGD) is used as an optimizer with a weight decay of \(5e^{-4}\) and momentum of \(0.9\). Each batch consists of 16 images, eight from the source domain and eight from the target domain. We resized all the input images so that the shorter side has a length of 800 pixels and applied random horizontal flipping with a probability of 0.5. The entire model is trained with an initial learning rate of \(2.5e^{-3}\) for \(9.5k\) iterations and then reduced to \(2.5e^{-4}\) for another \(2k\) iterations. **Comparison Methods.** To demonstrate the efficacy of our method under real-world adverse weather conditions, our method is compared to other detection methods designed for such conditions. 
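For reference, the IoU-based matching criterion used in this evaluation can be written in a few lines; the helper below is illustrative only and not the authors' evaluation code.

```python
# Illustrative IoU helper for the 0.5 matching threshold described above.
def box_iou(a, b):
    """a, b: boxes as [x1, y1, x2, y2]; returns intersection-over-union."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

pred, gt = [10, 10, 60, 60], [20, 15, 70, 65]
print(box_iou(pred, gt) >= 0.5)   # True: counts as a correct detection at IoU 0.5
```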
For comparison with the method utilizing synthetic weather data, _Physics-based_[1] is adopted, which trains the model in a supervised manner using synthetic rainy images. Note that _Physics-based_ and _Rain Rendering_ dataset uses the same method to synthesize rain. The rain removal network, _MPRNet_[28], is also utilized. The removal network is trained using synthetic paired images of _Cityscapes_ and _Rain Rendering_, and evaluation is conducted on restored images from the network using a detector trained only with clear source domain data. We also include results from several UDA methods. _SADA_[20] directly aligns source and target features at both the image-level and instance-level through adversarial training, whereas _SWDA_[11] focuses solely on aligning image-level features. However, none of the aforementioned methods address the style and weather gaps separately. **Evaluation Metric.** The mean Average Precision (mAP) of all categories is used for evaluation with an Intersection over Union (IoU) threshold of \(0.5\) to compute the Average Precision (AP). Due to class imbalance issues in our dataset, class-agnostic AP is calculated for all the bounding boxes with an IoU threshold of \(0.5\) for a fair comparison. ### _Experimental Results_ **Comparisons with Other Methods.** The quantitative and qualitative results are summarized in Table I and Fig 3. In both rainy and snowy conditions, our approach outperforms other methods when applied to real-world datasets. Existing \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Source Data**} & \multirow{2}{*}{**Synthetic Data**} & \multirow{2}{*}{**Target Data**} & \multicolumn{3}{c}{**Rain**} & \multicolumn{3}{c}{**Snowy**} \\ \cline{5-10} & & & & & \multicolumn{2}{c}{**Synthetic**} & \multicolumn{2}{c}{**Real-World**} & \multicolumn{2}{c}{**Real-World**} \\ \cline{5-10} & & & & & _Rain_Cityscapes_[47] & _Rain Rendering_[1] & _BDD 100K_[26] & _Our Dataset_ & _BDD 100K_[26] & _Our Dataset_ \\ \hline \hline _Source Only_ & ✓ & ✗ & ✗ & 35.0 & 31.4 & 31.6 & 49.4 & 27.9 & 57.8 \\ _Physics-based_[1] & ✓ & ✓ & ✗ & **40.5** & 41.8 & 22.1 & 35.1 & 18.1 & 42.0 \\ _MPRNet_[28] & ✓ & ✓ & ✗ & 37.7 & **46.9** & 12.8 & 38.6 & 12.3 & 41.8 \\ _S_ADA[20] & ✓ & ✗ & ✓ & 38.7 & 40.1 & 29.1 & 48.2 & 27.6 & 53.5 \\ _SWDA_[11] & ✓ & ✗ & ✓ & 37.7 & 36.7 & 31.1 & 49.3 & 28.4 & 58.4 \\ _Ours_ & ✓ & ✗ & ✓ & 37.7 & 35.6 & **34.5** & **51.2** & **30.3** & **62.6** \\ \hline \hline \end{tabular} \end{table} TABLE I: Quantitative results on both synthetic and real-world datasets. mAP (%) is used as an evaluation metric. **Synthetic Data** column indicates whether synthetically generated adverse weather data are utilized during training, and the **Target Data** column indicates whether unlabeled data from the target domain is incorporated during training. Our method outperforms other methods when applied to datasets with real-world adverse weather conditions. Fig. 3: Qualitative results on real-world target datasets. Compared to other methods, _Ours_ successfully detects the objects even in the presence of severe weather corruption and style variations. Even though _MPRNet_ removes raindrops in the first two images, the detector performance remains low, indicating images generated by the removal network do not consistently help object detection. 
In the remaining images, the removal network fails to remove corruptions and instead creates some artifacts due to the disparity between real weather data and synthetic weather data, which _MPRNet_ was trained on. While _SWDA_ directly adapts the network to the target domain, it fails to detect objects under severe weather corruption and environmental differences. More qualitative results are available in our multimedia material. [link] UDA methods such as _SADA_ and _SWDA_ show a significant improvement in performance on synthetic datasets, but their performance on real-world datasets is only marginally improved or even decreases. This implies that existing methods that globally align distributions are ineffective when adapting to real-world datasets with a large style gap and severe weather corruption, in contrast to synthetic datasets with a small domain gap from synthetic weather. In addition, _SWDA_ outperforms _SADA_, despite the fact that _SADA_ incorporates instance-level alignment while _SWDA_ focuses solely on image-level alignment. This suggests that directly aligning the instance-level features that are severely contaminated in real adverse weather conditions reduces performance, necessitating the use of alternative alignment methods. Using image-level style alignment and instance-level weather alignment, our method optimizes feature alignment by resolving both style and weather gaps, thereby improving detection performance. **Efficacy of Synthetic Weather Dataset.** Methods that utilize synthetic weather images during training perform well when evaluated on synthetic data. However, their performance decreases when evaluated on real-world data, indicating that synthetic weather fails to accurately represent the complexities of real weather. The use of removal networks on real-world datasets also has a negative effect on performance, despite requiring more computation. As shown in Fig 3, the removal network trained on synthetic data has difficulty restoring a clear image from real-world images under both rainy and snowy conditions, showing its inability to remove complex real rain and generalize to other weather conditions. In addition, detection performance decreases even in visually restored areas. This suggests that the features obtained through the removal network do not contribute to improving the detection performance. By directly adapting to downstream tasks, our method achieves superior performance on real-world datasets without relying on synthetic priors. **Ablation Studies on Each Component.** To validate the efficacy of each component, we conducted an ablation analysis on real-world datasets. The results are shown in Table II. Image-level style alignment increases performance, indicating that high-level feature alignment bridges the domain gap between image-level features effectively. Particularly, there is a significant performance increase when CBAM is present, implying that CBAM improves alignment by focusing on essential features. Incorporating instance-level weather alignment further enhances performance. This suggests that instance features obtained by self-supervised contrastive learning are robust to weather corruption and domain-invariant. Overall, combining both components achieves the highest performance, which effectively addresses both gaps. 
**Efficacy of Instance-level Weather Alignment.** To evaluate the impact of the instance-level weather alignment on feature embeddings, we visualize proposals whose instance embeddings are highly similar to each of the car class prototypes. As shown in Fig 4, proposals with similar object shapes but varying degrees of corruption are assigned to the same prototype. This demonstrates that our weather align module contributed to extracting semantically meaningful features that are resilient to corruption. Moreover, the fact that objects with similar shapes are gathered together in the same prototype demonstrates the efficacy of employing multiple prototypes to address intra-class variation. ## V Conclusion This paper presents a novel framework for domain adaptive object detection that improves robustness under real-world adverse weather conditions. The proposed method effectively addresses two distinct aspects of the domain gap, the style gap and the weather gap, by using image-level style alignment and instance-level weather alignment, respectively. Diverging from previous approaches that were mostly evaluated in synthetic datasets, our method shows robust performance on real-world datasets which have been validated through extensive experiments. We believe that our method can expand the range of applications for machines as it can detect objects in a variety of real adverse weather conditions. To make our method more applicable to real-world applications, we are investigating techniques that can adapt to dynamic adverse weather conditions during inference time. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Module**} & \multicolumn{2}{c}{**Rainy**} & \multicolumn{2}{c}{**Snowy**} \\ \cline{2-7} & **Style** & **Wearther** & _BDD 100K_ & _Our Dataset_ & _BDD 100K_ & _Our Dataset_ \\ \hline \multirow{5}{*}{_\(\check{\mathcal{J}}\)_ (w.c. CBM)_} & \(\check{\mathcal{K}}\) & 31.6 & 49.4 & 27.9 & 57.8 \\ & \(\check{\mathcal{K}}\) & 31.5 & 50.2 & 27.1 & 59.2 \\ & \(\check{\mathcal{K}}\) & \(\check{\mathcal{K}}\) & 32.4 & 50.7 & 28.7 & 59.6 \\ & \(\check{\mathcal{K}}\) & \(\check{\mathcal{J}}\) & 33.7 & 51.0 & 29.3 & 61.5 \\ & \(\check{\mathcal{J}}\) & **34.5** & **51.2** & **30.3** & **62.6** \\ \hline \hline \end{tabular} \end{table} TABLE II: Results of the ablation studies. mAP (%) is used as an evaluation metric. Incorporating each module improves detection performance. Fig. 4: Visualization of proposals assigned to each prototype in our rainy dataset. Each row displays the proposals whose instance embedding is highly similar to each car class prototype. Similar-shaped objects with diverse corruption and styles are assigned to identical prototypes, indicating the effectiveness of prototype-based contrastive learning. For example, the first row contains car proposals captured from a rear-view perspective and showing varying degrees of corruption.
2301.13436
Closed Form Expressions for Certain Improper Integrals of Mathematical Physics
We present new closed-form expressions for certain improper integrals of Mathematical Physics such as certain Ising, Box, and Associated integrals. The techniques we employ here include (a) the Method of Brackets and its modifications and suitable extensions to obtain the Mellin-Barnes representation. (b) The evaluation of the resulting Mellin-Barnes representations via the recently discovered Conic Hull method via the automated package $\textit{MBConichulls.wl}$. Finally, the analytic continuations of these series solutions are then produced using the automated package \texttt{Olsson.wl}, based on the method of Olsson. Thus, combining all these recent advances allows for closed-form evaluation of the hitherto unknown $B_3(s)$, $B_4(s)$, and related integrals in terms of multivariable hypergeometric functions. Along the way, we also discuss certain complications while using the Original Method of Brackets for these evaluations and how to rectify them. The interesting cases of $C_{5,k}$ are also studied. It is not yet fully resolved for the reasons we discuss in this paper.
B. Ananthanarayan, Tanay Pathak, Kartik Sharma
2023-01-31T06:24:22Z
http://arxiv.org/abs/2301.13436v2
# Closed Form Expressions for Certain Improper Integrals of Mathematical Physics ###### Abstract We present new closed-form expressions for certain improper integrals of Mathematical Physics such as Ising, Box, and Associated integrals. The techniques we employ here include (a) the Method of Brackets and its modifications and suitable extensions and (b) the evaluation of the resulting Mellin-Barnes representations via the recently discovered Conic Hull method. Analytic continuations of these series solutions are then produced using the automated method of Olsson. Thus, combining all the recent advances allows for closed-form solutions for the hitherto unknown \(B_{3}(s)\) and related integrals in terms of multivariable hypergeometric functions. Along the way, we also discuss certain complications while using the Original Method of Brackets for these evaluations and how to rectify them. The interesting cases of \(C_{5,h}\) is also studied. It is not yet fully resolved for the reasons we discuss in this paper. ## 1 Introduction In studies of theoretical physics and mathematics, various integrals appear whose symbolic evaluation is sought after. Gradshteyn and Ryzik [1] compiled a long list of such integrals. Recently there have been attempts to provide a derivation of a large number of these integrals, specifically the improper integral with limits from \(0\) to \(\infty\) using the Original Method of Brackets (OMOB) [2, 3, 4, 5, 6, 7]. Apart from this, some of the present authors have also evaluated the integral of quadratic and quartic types and their generalization using the OMOB, which has been reported in [8]. In the present investigation, we turn to other interesting improper integrals that appear in Mathematical Physics, such as the Ising integrals and the Box integrals. Our work is motivated by the need to express them in terms of elegant closed-form expression or in terms of known functions of mathematical physics, especially the hypergeometric functions [9, 10]. In the recent past, several tools have also been developed to facilitate tasks of symbolic evaluation of these integrals. Our results here have been facilitated by the recent development of tools and advances in various theoretical treatments. Note for instance, the recently proposed solution to the problem of finding the series solution of the \(N\)-dimensional Mellin-Barnes (MB) representation [11, 12, 13], using what has been termed as the Conic Hull Mellin Barnes (CHMB) method. This has also been automated as the _MATHEMATICA_ package MBConichulls.wl [14, 15]. The series representation hence obtained, in general, can be written as hypergeometric functions or their derivatives. Independently, the issue of finding the analytic continuations (ACs) of the multivariable hypergeometric function using the method of Olsson [16, 17], which has also been automated as a _MATHEMATICA_ package Olsson.wl [18] have been addressed recently. In this work, we show how these tools together, which were primarily directed at solving Feynman integrals, are of sufficient generality to find their use in the evaluation of the integrals considered here. We will consider the Ising integrals which have been studied in the Ising model [19, 20, 21, 22] and also have been in the context of OMOB [3]. Apart from the evaluation with these newly developed tools, we will also consider certain complications while doing similar evaluations with the OMOB [23]. One of them is the use of regulators for the evaluation of the Ising integrals. 
This arises in the case of Ising integrals \(C_{3,1}\) and \(C_{4,1}\). For the case of \(C_{4,1}\), it is further complicated due to the use of two regulators, which, when the proper limiting procedure is applied, will give the final result. However, we point out that such a procedure is complicated and thus use the Modified Method of Brackets (MMOB) [24] to get the MB-integral. This MB integral can then be evaluated without any introduction of such regulators and thus provides an efficient way to deal with these integrals. Using a similar procedure, we attempt to evaluate the elusive \(C_{5,k}\) integral. However, we hit a roadblock for the same, as the resulting series does not converge and would require a proper analytic continuation procedure. At present, we find this task beyond the reach of the tools at hand, though we provide a possible way to achieve the same. Yet such results still shed some light on the form that these integrals can be evaluated to. All the results are provided in the ancillary _MATHEMATICA_ file Ising.nb. Box integrals [25, 26, 27, 28] are another interesting integrals where such techniques can be applied to get new results. They do carry a physical meaning in the sense that they provide the expected distance between two randomly chosen points over the unit \(n\)-cube. We consider the two special cases of them, namely the \(B_{n}(s)\) and the \(\Delta_{n}(s)\). We use the same techniques and derive the closed form results for already known \(B_{1}(s)\) and \(B_{2}(s)\) and new evaluation for \(B_{3}(s)\) and \(B_{4}(s)\) for general values of \(s\). The results are in terms of multi-variable hypergeometric function. These evaluations further require the use of an analytic continuation procedure which has been done using Olsson.wl. All the results are provided in the ancillary _MATHEMATICA_ file Box.nb. These results for box integrals can then be further used to evaluate the Jellium potential \(J_{n}\), which can be related to box integral \(B_{n}(s)\)[26, 29]. Finally, we give a general MB integral for \(B_{n}(s)\), which can be used to find the closed form result for all values of \(n\) and \(s\) using MbConicHull.wl. With all this, we find new connections between the Box integrals and the multivariable hypergeometric functions. All our calculations rely heavily on _MATHEMATICA_ as we try to achieve the symbolic results for all the problems. The paper is structured as follows: In section (2) using an example given in [4], we point out the problem in the OMOB and discuss the alternative to surpass this problem. We then, in section (3), proceed to the evaluation of Ising integrals up to \(n=4\) while contrasting our method with the method used before to achieve the same in [3]. In section (4) we attempt to solve the \(C_{5,k}\) integral and point out a general integral \(C_{5,k}(\alpha,\beta)\) which gives \(C_{5,k}\) as a special case. Though we point out that it is not the final result, a proper analytic continuation procedure is required to get \(C_{5,k}\) from it. We then evaluate box integral \(B_{n}(s)\) for \(n=3,4\) in section (5). The new results for \(\Delta_{n}(s)\) and \(J_{n}\) with the above new results are also provided. Finally, we conclude the paper with some conclusions and possible future directions in section (6). In appendix C, we provide the table for all the _MATHEMATICA_ files that we give and the packages required. 
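Since the box integrals discussed above have a direct probabilistic meaning, the closed forms derived later can be cross-checked by simple Monte-Carlo sampling. The sketch below assumes the standard definitions from the box-integral literature cited above, namely \(B_{n}(s)=\mathbb{E}\,|X|^{s}\) and \(\Delta_{n}(s)=\mathbb{E}\,|X-Y|^{s}\) for \(X,Y\) drawn uniformly from the unit \(n\)-cube.

```python
# Monte-Carlo sketch of the box integrals, assuming the standard definitions
# B_n(s) = E|X|^s and Delta_n(s) = E|X - Y|^s with X, Y uniform on [0,1]^n.
import numpy as np

rng = np.random.default_rng(0)

def B(n, s, samples=2_000_000):
    x = rng.random((samples, n))
    return np.mean(np.linalg.norm(x, axis=1) ** s)

def Delta(n, s, samples=2_000_000):
    x, y = rng.random((samples, n)), rng.random((samples, n))
    return np.mean(np.linalg.norm(x - y, axis=1) ** s)

print(B(1, 1), 0.5)       # B_1(1) = 1/2 exactly
print(Delta(1, 1), 1/3)   # Delta_1(1) = 1/3 exactly
print(B(3, 1))            # compare with the closed form for B_3(s) at s = 1
```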
## 2 Method of Brackets revisited We will first illustrate the OMOB using a simple example of integral evaluation as given in [4]. We will first evaluate the integral by directly using the OMOB, then briefly propose a possible resolution while doing such evaluations, and then illustrate the alternative method to do the same. We consider the following integral \[H_{1}(a,b)=\int_{0}^{\infty}K_{0}(ax)K_{0}(bx) \tag{1}\] The integral is introduced to facilitate the evaluation of another integral, which is given by putting \(a=b\) \[H(a)=\int_{0}^{\infty}K_{0}^{2}(ax)dx \tag{2}\] We can express \(K_{0}(x)\) using the following series expansion: \[K_{0}(ax)=\sum_{n_{1}}\phi_{n_{1}}\frac{a^{2n_{1}}\Gamma(-n_{1})}{2^{2n_{1}+ 1}}x^{2n_{1}} \tag{3}\] where \(\phi_{n}=\frac{(-1)^{n}}{\Gamma(n+1)}\). This expansion uses a divergent series, and we can express the result in the form of an integral representation as \[K_{0}(bx)=\frac{1}{2}\int_{0}^{\infty}\exp\left(-t-\frac{b^{2}x^{2}}{4t} \right)\frac{\mathrm{d}t}{t} \tag{4}\] Using the OMOB, we get: \[K_{0}(bx)=\sum_{n_{2},n_{3}}\phi_{n_{2},n_{3}}\frac{b^{2n_{3}}x^{2n_{3}}}{2^{2 n_{3}+1}}\langle n_{2}-n_{3}\rangle \tag{5}\] Substituting the bracket series in Eq.(1), we get \[H_{1}(a,b)=\sum_{n_{1},n_{2},n_{3}}\phi_{n_{1},n_{2},n_{3}}\frac{a^{2n_{1}}b^{ 2n_{3}}\Gamma(-n_{1})}{2^{2n_{1}+2n_{3}+2}}\langle n_{2}-n_{3}\rangle\langle 2n_{1} +2n_{3}+1\rangle \tag{6}\] Now, we need to solve the bracket equations, which involve 2 equations but 3 variables. Evaluating this we get following 3 series, \(T_{i}\) where \(n_{i}\) is the free variable: \[T_{1} =\frac{1}{4a}\sum_{n}\phi_{n}\Gamma(-n)\Gamma^{2}\bigg{(}n+\frac{1 }{2}\bigg{)}\bigg{(}\frac{b}{a}\bigg{)}^{2n}\] \[T_{2} =\frac{1}{4a}\sum_{n}\phi_{n}\Gamma(-n)\Gamma^{2}\bigg{(}n+\frac{ 1}{2}\bigg{)}\bigg{(}\frac{b}{a}\bigg{)}^{2n}\] \[T_{3} =\frac{1}{4a}\sum_{n}\phi_{n}\Gamma(-n)\Gamma^{2}\bigg{(}n+\frac{ 1}{2}\bigg{)}\bigg{(}\frac{b}{a}\bigg{)}^{2n} \tag{7}\] Using the rules of the OMOB, all the 3 series of Eq.(2) have to be discarded as they are divergent. A solution to such a problem, as implemented in [4], is to regularize the singularity. This amounts to modifying the bracket \(\langle n_{2}-n_{3}\rangle\rightarrow\langle n_{2}-n_{3}+\epsilon\rangle\). With this modification, when \(n_{1}\) is a free variable, one gets the series that contains \(\Gamma(-n)\), which is diverging and is thus discarded. While for the other cases, one gets two series with \(\epsilon\) parameter (in the form of \(\Gamma(-n+\epsilon)\) and \(\Gamma(-n-\epsilon)\)). In these series, when the proper limiting procedure is done, along with the condition \(a=b\) to ease the calculation, they give the result for the integral of Eq.(2). Thus, the original integral of Eq.(1) we started with still remains elusive, as the calculation is much more involved (the limiting procedure) within this present framework. An alternative to the above evaluation, free from choosing the regulator and doing the tedious limiting procedure, is to use the MB representation derived using the MMOB [24]. 
Using it, we get the following MB representation for the integral given by Eq.(1) \[H_{1}(a,b)=\frac{1}{4}\int\limits_{c-i\infty}^{c+i\infty}\frac{\mathrm{d}z}{2 ni}\,a^{-2z-1}b^{2z}\Gamma(-z)^{2}\Gamma\bigg{(}\frac{1}{2}(2z+1)\bigg{)}^{2} \tag{8}\] The above MB integral can be readily evaluated in _MATHEMATICA_ to give the following result \[H_{1}(a,b)=\frac{\pi\sqrt{\frac{a^{2}}{b^{2}}}K\bigg{(}1-\frac{a^{2}}{b^{2}} \bigg{)}}{2a} \tag{9}\] where \(K(x)\) is the complete elliptic integral of the first kind. Thus we get the value of the original integrals, Eq.(1) we started with. For the special case of \(a=b\), using \(K(0)=\frac{\pi}{2}\) we get \[H_{1}(a,a)=H(a)=\frac{\pi^{2}}{4a} \tag{10}\] So we see that for the simple cases, too, using the MB representation to evaluate these integrals provides an efficient way to evaluate these integrals. ## 3 Ising integrals In this section, we will analyze the integrals of the "Ising class". Ising models are extensively used to study the statistical nature of ferromagnets [30, 31, 32]. The model accounts for the magnetic dipole moments of the spins. The \(n\) - dimensional integrals are denoted by \(C_{n}\),\(D_{n}\),\(E_{n}\), where \(D_{n}\) is found in the magnetic susceptibility integrals essential to the Ising calculations. \[D_{n}=\frac{4}{n!}\int_{0}^{\infty}\cdots\int_{0}^{\infty}\frac{\prod_{i<j} \bigg{(}\frac{u_{i}-u_{j}}{u_{i}+u_{j}}\bigg{)}^{2}}{(\sum_{j=1}^{n}(u_{j}+1/u_ {j}))^{2}}\frac{\mathrm{d}u_{1}}{u_{1}}\cdots\frac{\mathrm{d}u_{n}}{u_{n}} \tag{11}\] The integral \(D_{n}\) provides great insights into the symmetry breaking at low-temperature phase and finds great use in Quantum Field Theories and condensed matter physics. However, it is difficult to evaluate these integrals computationally and analytically. On the other hand, the \(C_{n}\) (\(C_{n}=C_{n,1}\)) class integrals which are closely related to the \(D_{n}\) class, are easier to tackle and can produce closed-form expressions. The general Ising integrals \(C_{n,k}\) is defined as \[C_{n,k}=\frac{4}{n!}\int_{0}^{\infty}\!\!\cdots\int_{0}^{\infty}\frac{1}{( \sum_{j=1}^{n}(u_{j}+1/u_{j}))^{k+1}}\frac{\mathrm{d}u_{1}}{u_{1}}\cdots\frac {\mathrm{d}u_{n}}{u_{n}} \tag{12}\] The above expression can also be expressed as the moments of power of Bessel Function \(K_{0}\) as \[C_{n,k}=\frac{2^{n-k+1}}{n!\,k!}c_{n,k}:=\frac{2^{n-k+1}}{n!\,k!}\int_{0}^{ \infty}t^{k}K_{0}^{n}(t)\mathrm{d}t \tag{13}\] We will now analyze the special case of the \(C_{n,k}\) family with \(k=1\) using the Method of Brackets [20, 3, 2] and Mellin-Barnes representations. After this, each general integral with \(C_{n,k}\) will be treated using the same procedure. The \(C_{1,k}\) and \(C_{2,k}\) integrals are easily tractable, and the results for them have been given just for completeness' sake. The problem occurs when one considers \(C_{n,k}\) for \(n\geq 3\). Below we use the MMOB [24] and show that for the evaluation of the integrals requiring the use of regulators, it is better to use the MMOB and solve the corresponding integral using the CHMB method. The main utility of the method is that the limiting procedure is automatically taken care of while finding the residue in the case of CHMB, which is at times difficult, especially when there is more than 1 regulator, as in the case of \(C_{4,k}\). 
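The closed forms (9) and (10) are easy to confirm numerically. The following sketch (in Python with mpmath, rather than the MATHEMATICA used in the text) reads \(K(1-a^{2}/b^{2})\) in the parameter convention of Mathematica's EllipticK, which is also the convention of mpmath's ellipk.

```python
# Numerical cross-check of Eqs. (9)-(10): direct quadrature vs. closed form.
from mpmath import mp, besselk, quad, ellipk, pi, sqrt, inf

mp.dps = 25
a, b = mp.mpf(2), mp.mpf(3)

direct = quad(lambda x: besselk(0, a*x) * besselk(0, b*x), [0, 1, inf])
closed = pi * sqrt(a**2 / b**2) * ellipk(1 - a**2 / b**2) / (2*a)
print(direct, closed)                    # Eq. (9): the two values agree

equal_args = quad(lambda x: besselk(0, a*x)**2, [0, 1, inf])
print(equal_args, pi**2 / (4*a))         # Eq. (10): H(a) = pi^2 / (4 a)
```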
### \(C_{1,k}\) For \(n=1\), we have \[C_{1,k}=\frac{4}{1!}\int_{0}^{\infty}\frac{1}{(u_{1}+1\!/\!u_{1})^{k+1}}\frac {\mathrm{d}u_{1}}{u_{1}} \tag{14}\] The integral can simply be evaluated to give the general closed form: \[C_{1,k}=\frac{\sqrt{\pi}2^{1-k}\Gamma\left[\frac{k+1}{2}\right]}{\Gamma\left( \frac{k}{2}+1\right)} \tag{15}\] ### \(C_{2,k}\) For \(k=1\), we get: \[C_{2,k}=\frac{4}{2!}\int_{0}^{\infty}\int_{0}^{\infty}\frac{1}{(u_{1}+1\!/\!u_ {1}+u_{2}+1\!/\!u_{2})^{k+1}}\frac{\mathrm{d}u_{1}}{u_{1}}\frac{\mathrm{d}u_{2 }}{u_{2}} \tag{16}\] This evaluation using the MOB, for \(k=1\), gives: \[C_{2,1}=1 \tag{17}\] The integral for the general value of \(k\) can also be evaluated to give the following closed form: \[C_{2,k}=\frac{\Gamma\left(\frac{k}{2}+\frac{1}{2}\right)^{4}}{\Gamma(k+1)^{2}} \tag{18}\] ### \(C_{3,k}\) and \(C_{3,k}(\alpha,\beta,\gamma)\) For \(k=1\), we get: \[C_{3,1}=\frac{4}{3!}\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}\frac{ 1}{(u_{1}+1\!/\!u_{1}+u_{2}+1\!/\!u_{2}+u_{3}+1\!/\!u_{3})^{2}}\frac{\mathrm{d }u_{1}}{u_{1}}\frac{\mathrm{d}u_{2}}{u_{2}}\frac{\mathrm{d}u_{3}}{u_{3}} \tag{19}\] We will illustrate the problem encountered in OMOB by writing the bracket series for the generalized case \(C_{3,k}\). Taking \(k=1\) will give us the result for \(C_{3,1}\). The following form of the integrand is motivated to maximize the number of brackets series in the expansion, which in turn reduces the number of variables: \[C_{3,k}=\frac{2}{3}\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}\frac{ (u_{1}u_{2}u_{3})^{k}}{(u_{1}u_{2}u_{3}(u_{1}+u_{2})+u_{3}(u_{1}+u_{2})+u_{1}u _{2}u_{3}^{2}+u_{1}u_{2})^{k+1}}\mathrm{d}u_{1}\mathrm{d}u_{2}\mathrm{d}u_{3} \tag{20}\] Expanding the denominator using the rules of MOB, \[\sum_{\{n\}}\phi_{[n]}(u_{1}u_{2})^{n_{1}+n_{3}+n_{4}}z^{n_{1}+n_{2}+2n_{3}}(u _{1}+u_{2})^{n_{1}+n_{2}}\frac{\langle k+1+n_{1}+n_{2}+n_{3}+n_{4}\rangle}{ \Gamma(k+1)} \tag{21}\] Now, \((u_{1}+u_{2})^{n_{1}+n_{2}}\) has to be further expanded as: \[(u_{1}+u_{2})^{n_{1}+n_{2}}=\sum_{h_{5},n_{6}}\phi_{n_{5},n_{6}}u_{1}^{n_{5}}u_ {2}^{n_{6}}\frac{\langle-n_{1}-n_{2}+n_{5}+n_{6}\rangle}{\Gamma(-n_{1}-n_{2})} \tag{22}\] Combining the expansions, the \(C_{3,k}\) integral takes the form: \[C_{3,k} =\frac{2}{3\Gamma(k+1)}\sum_{[n]}\phi_{[n]}\frac{\langle-n_{1}-n_{2 }+n_{5}+n_{6}\rangle}{\Gamma(-n_{1}-n_{2})} \tag{23}\] \[\times\langle k+1+n_{1}+n_{3}+n_{4}+n_{5}\rangle\langle k+1+n_{1} +n_{3}+n_{4}+n_{6}\rangle\] \[\times\langle k+1+n_{1}+n_{2}+2n_{3}\rangle\langle k+1+n_{1}+n_{2 }+n_{3}+n_{4}\rangle\] Now, the rules of MOB demand that we solve the linear equations of the brackets, but that poses the problem of giving rise to divergent terms like \(\Gamma(-n)\) and renders the whole procedure useless. To solve the issue, it is suggested to introduce regulators. For the case of \(C_{3,k}\), one regulator is enough. In particular, \(\epsilon(\to 0)\) is introduced in the bracket as \(\langle k+1+n_{1}+n_{2}+2n_{3}\rangle\sim\langle k+1+n_{1}+n_{2}+2n_{3}+\epsilon\rangle\) which mimics the effect of introducing a factor of \(u_{3}^{c}\) in the integrand. Now, with this "new" bracket series, the divergent terms take the form of \(\Gamma(-n-\epsilon)\) and are easier to work with. In the regime of OMOB, one requires the expansion of \(\Gamma(x)\) around integers to deal with the problem, which increases the complexity of the task. As \(n\) increases, the number of regulators increases monotonically and complicates the limiting procedure. 
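Before turning to the MMOB treatment of \(C_{3,k}\), note that the elementary closed forms (15) and (18) provide a quick consistency check of the Bessel-moment representation (13). A minimal numerical sketch (ours, not part of the ancillary notebooks):

```python
# Check Eqs. (15) and (18) against the moment representation (13),
# C_{n,k} = 2^(n-k+1) / (n! k!) * int_0^inf t^k K_0(t)^n dt.
from mpmath import mp, besselk, quad, gamma, sqrt, pi, inf, factorial

mp.dps = 20

def C_moment(n, k):
    c = quad(lambda t: t**k * besselk(0, t)**n, [0, 1, inf])
    return 2**(n - k + 1) / (factorial(n) * factorial(k)) * c

for k in (1, 2, 3):
    closed_1 = sqrt(pi) * 2**(1 - k) * gamma((k + 1) / mp.mpf(2)) / gamma(k / mp.mpf(2) + 1)  # Eq. (15)
    closed_2 = gamma((k + 1) / mp.mpf(2))**4 / gamma(k + 1)**2                                # Eq. (18)
    print(k, C_moment(1, k), closed_1, C_moment(2, k), closed_2)
# k = 1 reproduces C_{1,1} = 2 and C_{2,1} = 1, the latter quoted in Eq. (17).
```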
On the other hand, MMOB doesn't call for any regulators and is very computationally friendly. Using the MMOB in the above bracket series, we get the following MB representation for the \(C_{3,1}\) \[C_{3,1}=\frac{1}{3}\int\limits_{c-i\infty}^{c+i\infty}\frac{\mathrm{d}z}{2\pi i }\,\frac{\Gamma(-z)^{4}\,\Gamma(1+z)^{2}}{\Gamma(-2z)} \tag{24}\] This evaluates to \[C_{3,1}=\frac{2}{27}\left(6i\sqrt{3}\left(\mathrm{Li}_{2}\left(\frac{1}{4}- \frac{i\sqrt{3}}{4}\right)-\mathrm{Li}_{2}\left(\frac{i\sqrt{3}}{4}+\frac{1}{4 }\right)\right)+\pi\sqrt{3}\log(4)-\psi^{(1)}\left(\frac{1}{3}\right)+\psi^{(1 )}\left(\frac{2}{3}\right)\right) \tag{25}\] where \(\psi^{(1)}\) is the polygamma function of order \(1\). The generalized integral \(C_{3,k}\) can be similarly obtained using the MMOB to give the following MB representation: \[C_{3,k}=\frac{1}{3\Gamma(k+1)}\int\limits_{c-i\infty}^{c+i\infty}\frac{ \mathrm{d}z}{2\pi i}\frac{\Gamma(-z)^{4}\,\Gamma\left(\frac{1}{2}(k+2z+1) \right)^{2}}{\Gamma(-2z)} \tag{26}\] The above integral can be evaluated to give \[C_{3,k}=\frac{2}{3\,k!}\sqrt{\pi}G_{3,3}^{2,3}\left[\frac{1}{4}\left|\begin{array} []{c}1,1,1\\ \frac{k+1}{2},\frac{k+1}{2},\frac{1}{2}\end{array}\right.\right] \tag{27}\] where \(G\) is the Meijer-G function. A further generalization of \(C_{3,k}\) integral namely \(C_{3,k}(\alpha,\beta,\gamma)\) is given in [3] where the following integral is considered \[C_{3,k}(\alpha,\beta,\gamma)=\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{ \infty}\frac{x^{\alpha-1}y^{\beta-1}z^{\gamma-1}}{(x+1/x+y+1/y+z+1/z)^{k+1}} \mathrm{d}x\mathrm{d}y\mathrm{d}z \tag{28}\] Using the MMOB, we get the following MB representation \[C_{3,k}(\alpha,\beta,\gamma)=\frac{1}{3\Gamma(k+1)}\int\limits_{c-i\infty}^{c +i\infty}\frac{\mathrm{d}z}{2\pi i}\,\frac{\Gamma(-z)\Gamma(-z+\alpha-1)\Gamma (-z-\beta+1)\Gamma(-z+\alpha-\beta)\Gamma\left(\frac{1}{2}(k+2z-\alpha+\beta- \gamma+2)\right)\Gamma\left(\frac{1}{2}(k+2z-\alpha+\beta+\gamma)\right)}{ \Gamma(-2z+\alpha-\beta)} \tag{29}\] The result is given in the _MATHEMATICA_ file Ising.nb and is found to be : \[=-\frac{1}{3k!}\pi^{3/2}\csc(\pi\gamma)2^{-\gamma-k-1}\left(4^{ \gamma}\Gamma\left(\frac{1}{2}(k-\alpha-\beta-\gamma+4)\right)\Gamma\left( \frac{1}{2}(k+\alpha-\beta-\gamma+2)\right)\Gamma\left(\frac{1}{2}(k-\alpha+ \beta-\gamma+2)\right)\Gamma\left(\frac{1}{2}(k+\alpha+\beta-\gamma)\right) \tag{30}\] \[\times\,_{4}\bar{P}_{3}\left(\frac{1}{2}(k+\alpha+\beta-\gamma), \frac{1}{2}(k-\alpha-\beta-\gamma+4),\frac{1}{2}(k+\alpha-\beta-\gamma+2), \frac{1}{2}(k-\alpha+\beta-\gamma+2);\frac{1}{2}(k-\gamma+2),\frac{1}{2}(k- \gamma+2),\frac{1}{2}(k-\gamma+3),2-\gamma;\frac{1}{4}\right)\] \[-4\Gamma\left(\frac{1}{2}(k-\alpha-\beta+\gamma+2)\right)\Gamma \left(\frac{1}{2}(k+\alpha-\beta+\gamma)\right)\Gamma\left(\frac{1}{2}(k- \alpha+\beta+\gamma)\right)\Gamma\left(\frac{1}{2}(k+\alpha+\beta+\gamma-2)\right)\] \[\times\,_{4}\bar{P}_{3}\left(\frac{1}{2}(k-\alpha+\beta+\gamma), \frac{1}{2}(k+\alpha+\beta+\gamma-2),\frac{1}{2}(k-\alpha-\beta+\gamma+2), \frac{1}{2}(k+\alpha-\beta+\gamma);\frac{k+\gamma}{2},\frac{1}{2}(k+\gamma+1), \gamma;\frac{1}{4}\right)\right)\] ### \(C_{4,k}\) and \(C_{4,k}(\alpha,\beta,\gamma,\delta)\) For \(k=1\): \[C_{4,1}=\frac{4}{4!}\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^ {\infty}\frac{1}{(u_{1}+1/u_{1}+u_{2}+1/u_{2}+u_{3}+1/u_{3}+u_{4}+1/u_{4})^{2}} \frac{\mathrm{d}u_{1}}{u_{1}}\frac{\mathrm{d}u_{2}}{u_{2}}\frac{\mathrm{d}u_{3 }}{u_{3}}\frac{\mathrm{d}u_{4}}{u_{4}} \tag{31}\] If one proceeds with the 
OMOB as in the case of \(C_{3,1}\), one is now required to use two regulators, namely \(\epsilon\) and \(A\) [3]. The result for \(C_{4,1}\) is then obtained by taking the limits \(\epsilon\to 0\), \(A\to 1\). The use of two regulators significantly complicates the task of performing the limiting procedure. So we again proceed with the MMOB. Using the MMOB, we get the following MB representation for \(C_{4,1}\): \[C_{4,1}=\frac{1}{12}\int\limits_{c-i\infty}^{c+i\infty}\frac{\mathrm{d}z}{2\pi i}\,\frac{\Gamma(-z)^{4}\,\Gamma(1+z)^{4}}{\Gamma(-2z)\,\Gamma(2+2z)} \tag{32}\] This can be evaluated to give \[C_{4,1}=\frac{7\zeta(3)}{12} \tag{33}\] The general case for \(n=4\) can be simplified to the following MB representation: \[C_{4,k}=\frac{1}{12\Gamma(k+1)}\int\limits_{c-i\infty}^{c+i\infty}\frac{\mathrm{d}z}{2\pi i}\,\frac{\Gamma(-z)^{4}\,\Gamma\left(\frac{k+1}{2}+z\right)^{4}}{\Gamma(-2z)\,\Gamma(k+2z+1)} \tag{34}\] This can be evaluated to give the closed-form expression: \[C_{4,k}=\frac{\pi\,2^{-k-1}}{3\Gamma(k+1)}G_{4,4}^{3,3}\left(1\biggm{|}\begin{array}{c}1,1,1,\frac{k+2}{2}\\ \frac{k+1}{2},\frac{k+1}{2},\frac{k+1}{2},\frac{1}{2}\end{array}\right) \tag{35}\] The given expression is of particular interest for odd values of \(k\): when \(C_{4,k}\) is evaluated for any odd \(k\), it takes the form \(a\zeta(3)+b\), where \(a\) and \(b\) are rational numbers. Some of the values are provided for reference in Table 1. A further generalization of the \(C_{4,k}\) integral, namely \(C_{4,k}(\alpha,\beta,\gamma,\delta)\), can be considered as follows \[C_{4,k}(\alpha,\beta,\gamma,\delta)=\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}\frac{x^{\alpha-1}y^{\beta-1}z^{\gamma-1}w^{\delta-1}}{(x+1/x+y+1/y+z+1/z+w+1/w)^{k+1}}\mathrm{d}x\mathrm{d}y\mathrm{d}z\mathrm{d}w \tag{36}\] Using the MMOB, we get the following MB representation \[C_{4,k}(\alpha,\beta,\gamma,\delta)=\frac{1}{12\Gamma(k+1)}\int\limits_{c-i\infty}^{c+i\infty}\frac{\mathrm{d}z}{2\pi i}\,\frac{\Gamma(-z)\Gamma(-z+\gamma-1)\Gamma(-z-\delta+1)\Gamma(-z+\gamma-\delta)\Gamma\left(\frac{1}{2}(k+2z-\alpha-\beta-\gamma+\delta+3)\right)}{\Gamma(-2z+\gamma-\delta)\Gamma\left(\frac{1}{2}(k+2z-\alpha-\beta-\gamma+\delta+3)+\frac{1}{2}(k+2z+\alpha+\beta-\gamma+\delta-1)\right)}\] The above integral can be evaluated as before, and the solution has been provided in the accompanying _MATHEMATICA_ file Ising.nb. We end this section by noting that, given an integral, evaluating the MB representation obtained via the MMOB [24] is more efficient than evaluating the same integral using the OMOB and its rules. The regulators and the limiting procedure of the OMOB are automatically taken care of while evaluating the residues of the MB integral. Alternatively, this suggests that one can try to find a better rule for the elimination of brackets in the OMOB so that regulators are not required and the result is obtained without their use. ## 4 An attempt at \(C_{5,k}\) Using the machinery developed so far, we now attempt to evaluate the \(C_{5}\) integral in the same spirit.
Using the MMOB, we get the following MB representation for \(C_{5,k}\) \[C_{5,k}=\frac{1}{60\Gamma(k+1)}\int\limits_{c_{1}-i\infty}^{c_{1}+i\infty}\frac{\mathrm{d}z_{1}}{2\pi i}\int\limits_{c_{2}-i\infty}^{c_{2}+i\infty}\frac{\mathrm{d}z_{2}}{2\pi i}\,\frac{\Gamma(-z_{1})^{4}\Gamma(-z_{2})^{4}\Gamma\left(\frac{1}{2}(k+2z_{1}+2z_{2}+1)\right)^{2}}{\Gamma(-2z_{1})\Gamma(-2z_{2})} \tag{38}\] Evaluating the above integral directly using MBConicHulls.wl results in divergent series. A suitable way to approach the evaluation is to introduce two parameters that serve as the variables of the series that appear, and then evaluate the result in terms of these parameters. For the \(C_{5,k}\) integral we have the following evaluation \[C_{5,k}(\alpha,\beta)=\frac{1}{60\Gamma(k+1)}\int\limits_{c_{1}-i\infty}^{c_{1}+i\infty}\frac{\mathrm{d}z_{1}}{2\pi i}\int\limits_{c_{2}-i\infty}^{c_{2}+i\infty}\frac{\mathrm{d}z_{2}}{2\pi i}\,\alpha^{z_{1}}\beta^{z_{2}}\frac{\Gamma(-z_{1})^{4}\Gamma(-z_{2})^{4}\Gamma\left(\frac{1}{2}(k+2z_{1}+2z_{2}+1)\right)^{2}}{\Gamma(-2z_{1})\Gamma(-2z_{2})} \tag{39}\] We notice that the integral Eq.(39) has a more general structure than the integral Eq.(38) owing to the introduction of the two parameters. The \(C_{5,k}\) can be obtained by putting \(\alpha=\beta=1\). The evaluation of Eq.(39) has been done in the accompanying _MATHEMATICA_ file Ising.nb. We also note that though we have a result for the integral (39), the result is not convergent at the value of interest \(\alpha=\beta=1\). Proper analytic continuation techniques have to be used to achieve this goal. At present, with the form of series that we obtain, the task is not achievable using Olsson.wl. With the form of series at hand, we believe that it can be written as a derivative of 'some' hypergeometric function. Then Olsson.wl can be used to find the ACs of this hypergeometric function so that it converges for \(\alpha=\beta=1\), and then the derivative can be performed to get the final result.
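Since the analytic continuation needed for \(C_{5,k}\) remains open, it is worth confirming the lower-order closed forms numerically; the Bessel-moment representation (13) gives an independent check of Eq.(25) and Eq.(33). A minimal sketch follows (Python with mpmath is assumed; the precision setting is illustrative).

```python
from mpmath import mp, quad, besselk, zeta, polylog, psi, pi, sqrt, log, mpf, mpc, inf, re

mp.dps = 25  # illustrative precision

def bessel_moment(n, k):
    # c_{n,k} = int_0^inf t^k K_0(t)^n dt, so that C_{n,k} = 2^(n-k+1)/(n! k!) c_{n,k}, Eq.(13)
    return quad(lambda t: t**k * besselk(0, t)**n, [0, inf])

# C_{3,1}: Eq.(13) versus the closed form of Eq.(25)
C31_num = mpf(2)**3 / 6 * bessel_moment(3, 1)
w = mpc(1, sqrt(3)) / 4   # the point 1/4 + i*sqrt(3)/4 appearing in Eq.(25)
C31_closed = mpf(2)/27 * (6j*sqrt(3)*(polylog(2, w.conjugate()) - polylog(2, w))
                          + pi*sqrt(3)*log(4) - psi(1, mpf(1)/3) + psi(1, mpf(2)/3))
print(C31_num, re(C31_closed))

# C_{4,1}: Eq.(13) versus Eq.(33)
C41_num = mpf(2)**4 / 24 * bessel_moment(4, 1)
print(C41_num, 7*zeta(3)/12)
```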
\begin{table} \begin{tabular}{|c|c|} \hline \(k\) & \(C_{4,k}\) \\ \hline \(0\) & \(\frac{1}{6}\pi G_{4,4}^{3,3}\Big{(}1\Big{|}\begin{array}{c}1,1,1,1\\ \frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}\end{array}\Big{)}\) \\ \hline \(1\) & \(\frac{7\zeta(3)}{12}\) \\ \hline \(2\) & \(\frac{1}{48}\pi G_{4,4}^{3,3}\Big{(}1\Big{|}\begin{array}{c}1,1,1,2\\ \frac{3}{2},\frac{3}{2},\frac{3}{2},\frac{1}{2}\end{array}\Big{)}\) \\ \hline \(3\) & \(\frac{7\zeta(3)-6}{1152}\) \\ \hline \(4\) & \(\frac{1}{2304}\pi G_{4,4}^{3,3}\Big{(}1\Big{|}\begin{array}{c}1,1,1,3\\ \frac{5}{2},\frac{5}{2},\frac{5}{2},\frac{1}{2}\end{array}\Big{)}\) \\ \hline \(5\) & \(\frac{49\zeta(3)-54}{368640}\) \\ \hline \(6\) & \(\frac{1}{276480}\pi G_{4,4}^{3,3}\Big{(}1\Big{|}\begin{array}{c}1,1,1,4\\ \frac{7}{2},\frac{7}{2},\frac{7}{2},\frac{1}{2}\end{array}\Big{)}\) \\ \hline \(7\) & \(\frac{63\zeta(3)-74}{15482880}\) \\ \hline \end{tabular} \end{table} Table 1: Values of \(C_{4,k}\) for \(k=0,\cdots,7\) ## 5 Box Integrals For dimension \(n\), we define the box integrals with parameter \(s\) as the expected value of \(|\vec{r}|^{s}\) for a point \(\vec{r}\) chosen uniformly at random over the unit \(n\)-cube, and of \(|\vec{r}-\vec{q}|^{s}\) for two such independent random points \(\vec{r}\) and \(\vec{q}\), \[B_{n}(s)=\int_{0}^{1}\!\cdots\int_{0}^{1}\left((r_{1})^{2}+\cdots+(r_{n})^{2}\right)^{s/2}\!\mathrm{d}r_{1}\cdots\mathrm{d}r_{n} \tag{40}\] \[\Delta_{n}(s)=\int_{0}^{1}\!\cdots\int_{0}^{1}\left((r_{1}-q_{1})^{2}+\cdots+(r_{n}-q_{n})^{2}\right)^{s/2}\!\mathrm{d}r_{1}\cdots\mathrm{d}r_{n}\mathrm{d}q_{1}\cdots\mathrm{d}q_{n} \tag{41}\] For certain special values of the parameter \(s\), the above integrals have the following interpretations: 1. \(B_{n}(1)\): the expected distance from the origin of a random point of the \(n\)-cube. 2. \(\Delta_{n}(1)\): the expected distance between two random points of the \(n\)-cube. Due to the physical significance of the box integrals and their use in electrostatic potential calculations, we evaluate these integrals and give closed-form expressions using the Method of Brackets employed throughout the paper. Following the quadrature formulae valid for all complex powers [25, 26, 29, 33, 34], we introduce the functions: \[b(u)=\int_{0}^{1}e^{-u^{2}x^{2}}\mathrm{d}x=\frac{\sqrt{\pi}\operatorname{erf}(u)}{2u} \tag{42}\] \[d(u)=\int_{0}^{1}\int_{0}^{1}e^{-u^{2}(x-y)^{2}}\mathrm{d}y\,\mathrm{d}x=\frac{\sqrt{\pi}\,u\operatorname{erf}(u)+e^{-u^{2}}-1}{u^{2}} \tag{43}\] which give us the relations: \[B_{n}(s)=\frac{2}{\Gamma(-s/2)}\int_{0}^{\infty}u^{-s-1}b^{n}(u)\,\mathrm{d}u \tag{44}\] \[\Delta_{n}(s)=\frac{2}{\Gamma(-s/2)}\int_{0}^{\infty}u^{-s-1}d^{n}(u)\,\mathrm{d}u \tag{45}\] ### \(B_{n}(s)\) Now, for the method of brackets to be operational, we need integrals with limits from \(0\) to \(\infty\), so we make an Euler substitution; a quick numerical sanity check of the representation (44) at a point inside its strip of convergence is sketched next.
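The check below compares Eq.(44) (with the function \(b(u)\) of Eq.(42)), the closed form (50), and the direct definition (40) at the illustrative point \(s=-1\), \(n=2\); Python with mpmath is assumed, and the direct two-dimensional quadrature is only a rough cross-check because of the corner singularity at the origin.

```python
from mpmath import mp, quad, erf, gamma, sqrt, pi, hyp2f1, inf

mp.dps = 20  # illustrative precision

def b(u):
    # Eq.(42)
    return sqrt(pi) * erf(u) / (2*u)

s, n = -1, 2   # a point inside the strip -n < s < 0 where the u-integral converges

B_rep = 2 / gamma(-s/2) * quad(lambda u: u**(-s - 1) * b(u)**n, [0, inf])   # Eq.(44)
B_closed = 2 / (s + 2) * hyp2f1(0.5, -s/2, 1.5, -1)                        # Eq.(50)
B_direct = quad(lambda x, y: (x**2 + y**2)**(s/2), [0, 1], [0, 1])          # Eq.(40)

print(B_rep, B_closed, B_direct)
```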
The following substitution has been found to be the most efficient: \[x\rightarrow\frac{a}{1+a} \tag{46}\] which makes the integral \[b(u)=\int_{0}^{1}e^{-u^{2}x^{2}}\mathrm{d}x=\int_{0}^{\infty}e^{-u^{2}(\frac{ a}{1+a})^{2}}\frac{1}{(1+a)^{2}}\,\mathrm{d}a \tag{47}\] \[b(u)=\int_{0}^{\infty}\sum_{n=0}^{\infty}\frac{1}{n!}\left(\frac{-u^{2}a^{2}} {(1+a)^{2}}\right)^{n}\frac{1}{(1+a)^{2}}\,\mathrm{d}a \tag{48}\] Substituting this back in \(B_{n}(u)\) and applying MMOB, it is obtained that \(B_{n}(s)\) has a pole at \(s=-n\) and we finally get: \[B_{1}(s)=\frac{1}{s+1},s\neq-1 \tag{49}\] \[B_{2}(s)=\frac{2}{s+2}\,{}_{2}F_{1}\left(\frac{1}{2},-\frac{s}{2};\frac{3}{2} ;-1\right),s\neq-2 \tag{50}\] The first two cases were easy to handle. The first non-trivial evaluation is that of \(B_{3}(s)\). We found two different results for the same by using two different methods. Firstly we consider the following representation of \(B_{3}\)[26]: \[B_{3}(s)=\frac{3}{3+s}C_{2,0}(s,1)=\frac{6}{(3+s)(2+s)}\int_{0}^{\pi/4}\left( \left(1+\sec^{2}t\right)^{s/2+1}-1\right) \tag{51}\] The above can interestingly be evaluated in _MATHEMATICA_ using Integrate command. Using it, we get the following evaluation for the \(B_{3}(s)\) integral \[B_{3}(s) =\frac{6}{(s+2)(s+3)}\bigg{(}iF_{1}\bigg{(}1;\frac{1}{2},-\frac{s}{2 };2;2,-2\bigg{)}-\frac{2^{\frac{s+1}{2}}}{s+1}F_{1}\bigg{(}\frac{1}{2}(-s-1);- \frac{1}{2},-\frac{s}{2};\frac{1-s}{2};\frac{1}{2},-\frac{1}{2}\bigg{)}\] \[-i\,_{2}F_{1}\bigg{(}1,-\frac{s}{2};\frac{3}{2};-1\bigg{)}+2^{s \prime 2}\,_{2}F_{1}\bigg{(}\frac{1}{2},-\frac{s}{2};\frac{3}{2};-\frac{1}{2} \bigg{)}-\frac{\sqrt{\pi}}{4\Gamma\big{(}1-\frac{s}{2}\big{)}}\,_{2}F_{1}\bigg{(} -\frac{s}{2}-\frac{1}{2},-\frac{s}{2};1-\frac{s}{2};-1\bigg{)}\Gamma\bigg{(}- \frac{s}{2}-\frac{1}{2}\bigg{)}-\frac{\pi}{4}\bigg{)} \tag{52}\] where \(F_{1}(a;b_{1},b_{2};c;x,y)\) is the Appell \(F_{1}\) function which is defined for \(|x|<1\wedge|y|<1\) as: \[F_{1}(a;b_{1},b_{2};c;x,y)=\sum_{m,n=0}^{\infty}\frac{(a)_{m+n}(b_{1})_{m}(b_ {2})_{n}}{(c)_{m+n}m!n!}x^{m}y^{n} \tag{53}\] where \((q)_{n}\) is the Pochhammer symbol. The Eq.(52) requires the evaluation of the Appell \(F_{1}\) outside its region of convergence. Such evaluation requires the use of analytic continuation of \(F_{1}\), which has been done by Olsson [35]. Though we got the result using _MATHEMATICA_, it doesn't provide many insights so as to aid the computations of other \(B_{n}(s)\). So we proceed to a more systematic evaluation of the \(B_{3}(s)\) so that the results can be generalized to other values of \(n\). Using the MMOB [24] we get the following Mellin-Barnes integral for the \(B_{3}(s)\) \[B_{3}(s)=\int\limits_{c_{1}-i\infty}^{c_{1}+i\infty}\int\limits_{c_{2}-i \infty}^{c_{2}+i\infty}\frac{\Gamma(-z_{1})\,\Gamma(-z_{2})\,\Gamma(2z_{1}+1) \,\Gamma(2z_{2}+1)\,\Gamma(s-2z_{1}-2z_{2}+1)\,\Gamma\left(-\frac{s}{2}+z_{1} +z_{2}\right)}{\Gamma\left(-\frac{s}{2}\right)\Gamma\left(2z_{1}+2\right)\Gamma \left(2z_{2}+2\right)\Gamma\left(s-2z_{1}-2z_{2}+2\right)}\frac{\mathrm{d}z_{ 2}}{2ni}\frac{\mathrm{d}z_{1}}{2ni} \tag{54}\] We evaluate the above integral using the MBConicHulls.wl package [14]. 
The evaluation gives the following result: \[B_{3}(s) =-\frac{\pi}{2\big{(}s^{2}+5s+6\big{)}}+\frac{\sqrt{\pi}\left((s+ 2)\,_{2}F_{1}\big{(}\frac{1}{2},-\frac{s}{2}-\frac{1}{2};\frac{3}{2};-1\big{)} +_{2}F_{1}\big{(}-\frac{s}{2}-1,-\frac{s}{2}-\frac{1}{2};-\frac{s}{2};-1\big{)} \right)\Gamma\left(-\frac{s}{2}-\frac{1}{2}\right)\Gamma(s+2)}{2(s+3)\Gamma \left(-\frac{s}{2}\right)\Gamma(s+3)}+\] \[\frac{1}{1+s}F_{1:1:1}^{2:\pm 1:1}\left[\begin{array}{c}-1-s \\ \frac{1}{2}-s\end{array},\frac{1}{2}\begin{array}{c}-s\\ \vdots\end{array}\begin{array}{c}-s\\ \vdots\end{array}\begin{array}{c}-s\\ \vdots\end{array}\begin{array}{c}-s\\ \vdots\end{array}\begin{array}{c}\frac{1}{2};\frac{1}{2}\end{array}\Bigg{|} \tag{55}\] Where \(F_{1:1:1}^{2:\pm 1:}(x,y)\) is the KdF function which converges for \(|\sqrt{x}|+|\sqrt{y}|<1\). So to evaluate it at \((-1,-1)\), one needs its analytic continuations. In the _MATHEMATICA_ file Box.nb, we provide a systematic derivation of the analytic continuation for the same so that it converges at \((-1,1)\). For general \(B_{n}(s)\) we get the following MB-representation \[B_{n}(s)=\frac{1}{\Gamma\left(-\frac{s}{2}\right)}\int\limits_{c_{1}-i\infty}^ {c_{1}+i\infty}\cdots\int\limits_{c_{n-1}-i\infty}^{c_{n-1}+i\infty}\left(\prod \limits_{p=1}^{n-1}\frac{\mathrm{d}z_{p}}{2ni}\right)\frac{\left(\prod \limits_{i=1}^{n-1}\Gamma\left(2z_{i}+1\right)\right)\Gamma\left(s-2\sum \limits_{j=1}^{n-1}z_{j}+1\right)\Gamma\left(\sum\nolimits_{k=1}^{n-1}z_{k}- \frac{s}{2}\right)}{\left(\prod\limits_{l=1}^{n-1}\Gamma\left(2z_{l}+2\right) \right)\Gamma\left(s-2\sum\limits_{m=1}^{n-1}z_{m}+2\right)} \tag{56}\] Using the Eq. (56) we obtain following representation for \(B_{4}(s)\) \[B_{4}(s,\alpha,\beta,\gamma)=\frac{1}{\Gamma\left(-\frac{s}{2} \right)}\int\limits_{c_{1}-i\infty}^{c_{1}+i\infty}\int\limits_{c_{2}-i\infty}^ {c_{2}+i\infty}\int\limits_{c_{3}-i\infty}^{c_{1}-(z_{1})\,\Gamma\left(2z_{1}+1 \right)\Gamma\left(-z_{2}\right)\Gamma\left(2z_{2}+1\right)\Gamma\left(-z_{3} \right)\Gamma\left(2z_{3}+1\right)\Gamma\left(s-2z_{1}-2z_{2}-2z_{3}+1\right)} {\Gamma\left(2z_{1}+2\right)\Gamma\left(2z_{2}+2\right)\Gamma\left(2z_{3}+2 \right)\Gamma\left(s-2z_{1}-2z_{2}-2z_{3}+2\right)}\] \[\times\Gamma\left(-\frac{s}{2}+z_{1}+z_{2}+z_{3}\right)(\alpha)^{s_{1 }}(\beta)^{s_{2}}(\gamma)^{s_{2}}\frac{\mathrm{d}z_{1}\mathrm{d}z_{2}\mathrm{d}z _{3}}{(2\pi i)^{3}} \tag{57}\] The above integral can be again evaluated readily using the MbConicHull.wl package. For the case of \(B_{4}(s)\), due to the occurrence of a 3-variable hypergeometric function, the region of convergence analysis is difficult. In the OMOB all the series which converges in the same region of convergence are kept together. For 3 or more variables this analysis becomes complicated and is not always straightforward [9]. Here the CHMB method plays an important role in that it clubs the series converging in the same region of convergence together without prior knowledge of their region of convergence. The evaluation has been provided in the file Ising.nb. ### \(\Delta_{n}(s)\) We now move on to the evaluation of \(\Delta_{n}\) integrals (41). Instead of directly doing the evaluation of the \(\delta_{n}(s)\) integral, we refer to [26], to exploit the relation between \(B_{n}(s)\) and \(\Delta_{n}(s)\). 
A few instances of the same are as follows: \[\Delta_{1}(s) =\frac{2}{(s+1)(s+2)} \tag{58}\] \[\Delta_{2}(s) =8\frac{2^{\frac{s}{2}+1}(s+3)+1}{(s+2)(s+3)(s+4)}+4B_{2}(s)-\frac{4(s+4)}{s+2}B_{2}(s+2)\] (59) \[\Delta_{3}(s) =24\frac{\left((s+5)\left(2^{\frac{s}{2}+3}-3^{\frac{s}{2}+2}\right)+1\right)}{(s+2)(s+4)(s+5)(s+6)}+\frac{24}{s+2}B_{2}(s+2)-\frac{24(s+6)}{(s+2)(s+4)}B_{2}(s+4)-\frac{12(s+5)}{s+2}B_{3}(s+2)\] (60) \[\quad+\frac{4(s+6)(s+7)}{(s+2)(s+4)}B_{3}(s+4)+8B_{3}(s) \tag{61}\] where \(B_{2}(s)\) and \(B_{3}(s)\) are given by Eq.(50) and Eq.(55). The results for \(\Delta_{4}\) and \(\Delta_{5}\) are provided in Appendix B. ### Jellium Potential As one more application of the evaluations done in the previous section, we consider the Jellium potential [29]. It arises in a problem of electrostatics: finding the electrostatic potential energy of an electron (of charge \(-1\)) at the cube center, given an \(n\)-cube of uniformly charged jelly of total charge \(+1\). For this problem one usually takes the radial potential at a distance \(r\) from the electron as \(V_{n}(r)\), as follows \[V_{1}(r) :=r-1/2,\] \[V_{2}(r) :=\log(2r),\] \[V_{n}(r) :=2^{n-2}-\left(\frac{1}{r}\right)^{n-2},\quad n>2 \tag{62}\] The \(n\)-th Jellium potential is defined as \[J_{n}:=\langle V_{n}(r)\rangle_{\widetilde{r}\in[-1/2,1/2]^{n}} \tag{63}\] All the \(J_{n}\) can be written as a box integral up to an offset. The final result is \[J_{n}=2^{n-2}(1-B_{n}(2-n)),\quad n>2 \tag{64}\] Using the result for \(B_{n}\), \(J_{3}\) can be readily evaluated to: \[J_{3}=\frac{\pi}{2}+2-6\tanh^{-1}\left(\frac{1}{\sqrt{3}}\right) \tag{65}\] ## 6 Conclusion and Discussion We show that using the MMOB [24] for the evaluation of improper integrals with limits from 0 to \(\infty\), combined with tools such as MBConicHulls.wl to evaluate the resulting MB integrals, leads to a more efficient evaluation of these integrals. This approach is particularly helpful for integrals that, in the OMOB, require the use of 'regulators' and a subsequent limiting procedure. The choice of these regulators is somewhat arbitrary, and at times more than one regulator has to be used, which further complicates the process. With these tools at hand, we re-evaluate the Ising integrals, which had already been evaluated in [3], but there with regulators. We further attempt to evaluate the sought-after integral \(C_{5,k}\) with these techniques. We are able to evaluate a more general integral \(C_{5,k}(\alpha,\beta)\) which, when properly analytically continued, will give the result for \(C_{5,k}\); at present we are unable to perform this continuation with the techniques at hand, though we believe that the result can be written as a derivative of some multivariable hypergeometric function. Continuing further, we evaluate \(B_{3}(s)\) and \(B_{4}(s)\) and give a general MB representation for \(B_{n}(s)\). For the case of \(B_{3}(s)\), we use Olsson.wl to find the ACs of the hypergeometric functions that appear in the solution. For \(B_{4}(s)\), similar techniques would work. It is important to note that though the OMOB and the evaluation of the MB representation give essentially the same number of series, grouping those that share a region of convergence (ROC) is not an easy task. For the case of three or more variables, finding the ROC is still a problem yet to be solved in an efficient manner.
This problem is essentially removed when applying the CHMB method, where such grouping is automatically done without prior knowledge of the ROC. As a byproduct of these evaluations, we get results for the associated box integrals \(\Delta_{n}(s)\) and the Jellium potential \(J_{n}\). Through these evaluations we also uncover relations between these integrals and multivariable hypergeometric functions. As a future direction, it would be interesting to modify the rules of the OMOB so that the final evaluation of the bracket series does not require regulators. For the case of \(C_{5,k}(\alpha,\beta)\) evaluated in the present work, one can try to find a way to evaluate the ACs. One way in this direction is to write the final result as a derivative of a hypergeometric function and then find its ACs using Olsson.wl. After finding the ACs, the derivative can be taken to get the final result, which converges in the appropriate region. We also note that a similar process can be used to evaluate \(C_{6,k}\), which also gives a 2-fold MB integral. Finally, it would be interesting to derive the previously known results for the various box integrals \(B_{n}(s),\Delta_{n}(s)\) and the Jellium potential \(J_{n}\) from the results given here. The results of the present work match them numerically; it would still be interesting to see how they can be obtained analytically from the present work by using various reduction formulas of multivariable hypergeometric functions. ## 7 Acknowledgements TP would like to thank Souvik Bera for his help and his useful comments. ## Appendix A Ruby's formula Ruby's formula is another interesting physical problem where the OMOB can still be used. In this Appendix we evaluate a general integral, of which Ruby's formula is a special case, to highlight the application of the OMOB when regulators are not required. Ruby's formula gives the solid angle subtended at a disk source by a coaxial parallel-disk detector [36]. It is given as follows \[D=\frac{R_{d}}{R_{s}}\int_{0}^{\infty}J_{1}(kR_{d})J_{1}(kR_{s})\frac{e^{-kd}}{k}\,\mathrm{d}k \tag{66}\] where \(R_{d}\) and \(R_{s}\) are the radii of the detector and the source, respectively, \(d\) is the distance between the source and the detector, and \(J_{1}(x)\) is the order-one Bessel function of the first kind. We now consider the generalization of the integral (66), as discussed in [37]. We will use the MOB to evaluate the integral and show that it reproduces the result, along with two ACs.
\[S=\int_{0}^{\infty}k^{l}e^{-kd}\prod_{j=1}^{N}J_{a_{j}}(kR_{j})\,\mathrm{d}k \tag{67}\] we can again apply the method of brackets by using the series expansion of the functions \[J_{a_{j}}(kR_{j})=\frac{1}{2^{a_{j}}}\sum_{n_{j}=0}^{\infty}\phi_{n_{j}}\frac{ (kR_{j})^{2n_{j}+a_{j}}}{2^{2n_{j}}\Gamma(a_{j}+n_{j}+1)}\] \[e^{-kd}=\sum_{n_{p}=0}^{\infty}\phi_{n_{p}}k^{n_{p}}d^{n_{p}}\] putting the series expansion in the above integral, we get \[S=\int_{0}^{\infty}\sum_{n_{p}=0}^{\infty}\phi_{n_{p}}k^{n_{p}+l}d^{n_{p}} \prod_{j=1}^{N}\frac{1}{2^{a_{j}}}\sum_{n_{j}=0}^{\infty}\phi_{n_{j}}\frac{(kR _{j})^{2n_{j}+1}}{2^{2n_{j}}\Gamma(a_{j}+n_{j}+1)}dk \tag{68}\] we can simplify the above by noting that \[\prod_{j=1}^{N}\frac{1}{2^{a_{j}}}\sum_{n_{j}=1}^{\infty}\phi_{n_{j}} \frac{(kR_{j})^{2n_{j}+a_{j}}}{2^{2n_{j}}\Gamma(a_{j}+n_{j}+1)}=\sum_{n_{1}=0}^{ \infty}\cdots\sum_{n_{N}=0}^{\infty}\frac{\phi_{1,2,\cdots,N}k^{\sum_{j=1}^{N}(2 n_{j}+a_{j})}}{2^{\sum_{j-1}^{N}(2n_{j}+a_{j})}}\] \[\times\frac{\prod_{j=1}^{N}(R_{j})^{(2n_{j}+a_{j})}}{\prod_{j=1}^ {N}\Gamma(a_{j}+n_{j}+1)}\] putting above value in Eq.(68) gives \[\begin{split} S=&\int_{0}^{\infty}\sum_{n_{p}=0}^{ \infty}\phi_{n_{p}}k^{(n_{p}+l+\sum_{j=1}^{N}(2n_{j}+a_{j}))}d^{n_{p}}\sum_{n_{ 1}=0}^{\infty}\cdots\sum_{n_{N}=0}^{\infty}\frac{\phi_{1,2,-,N}}{2^{\sum_{j-1 }^{N}(2n_{j}+a_{j})}}\\ &\times\frac{\prod_{j=1}^{N}(R_{j})^{(2n_{j}+a_{j})}}{\prod_{j=1} ^{N}\Gamma(a_{j}+n_{j}+1)}\text{d}k\end{split} \tag{69}\] Using the method of brackets, Eq.(69) can be written as \[\begin{split} S=\sum_{n_{1}=0}^{\infty}\cdots\sum_{n_{N}=0}^{ \infty}\sum_{n_{p}=0}^{\infty}\phi_{1,2,\cdots,N,p}\left((n_{p}+l+1+\sum_{j=1} ^{N}(2n_{j}+a_{j}))\right)\frac{d^{n_{p}}}{2^{\sum_{j-1}^{N}(2n_{j}+a_{j})}}\\ \times\frac{\prod_{j=1}^{N}(R_{j})^{(2n_{j}+a_{j})}}{\prod_{j=1}^ {N}\Gamma(a_{j}+n_{j}+1)}\end{split} \tag{70}\] where \(\phi_{1,2,\cdots,N,p}=\phi_{n_{1}}\phi_{n_{2}}\cdots\phi_{n_{N}}\phi_{n_{p}}\) The solutions to Eq.(70) are determined using the solution to the linear equation. \[n_{p}+l+1+\sum_{j=1}^{N}(2n_{j}+a_{j})=0 \tag{71}\] above equation has \((N+1)\) variables. There are \((N+1)\) different ways to write solutions to the above equation, taking \(N\) free variables each time. Out of \((N+1)\) solutions, the solution with \(n_{p}\) as the dependent variable gives the Lauricella function of \(N\) variables, as we will show. The rest of other solutions give the series representation that is the analytical continuation of the earlier. Denoting the solution to Eq.(71) by \(n_{i}^{*}\) with \(n_{i}\) being the dependent variable. The solutions to equation Eq.(71) can be written as \[n_{p}^{*}=-(l+1)-\sum_{j=1}^{N}(2n_{j}+a_{j});a=1\] \[n_{i}^{*}=-\frac{(n_{p}+l+1)}{2}-\sum_{j=1,i\neq j}^{N}(n_{j})-\sum_{j=1}^{N} \Big{(}\frac{a_{j}}{2}\Big{)};a=\frac{1}{2}\] \(a\) is the coefficient of the dependent variable if the set of linear equations obtained from brackets are written in the form \(an+b=0\) where \(n\) is the dependent variable, and \(b\) includes all the free variables and the constants. Denoting the solution of Eq.(70) by \(S_{i}\) obtained by using \(n_{i}^{*}\) (\(i=1,2,\cdots,N,p\)). 
I) **With \(n_{p}\) as the dependent variable** We write the solution to Eq.(70) as \[S_{p}=\frac{1}{a}\sum_{n_{1}=0}^{\infty}...\sum_{n_{N}=0}^{\infty}\phi_{1,2,..,N }F(n_{1},n_{2},...,n_{N},n_{p}^{*})\Gamma(-n_{p}^{*}) \tag{72}\] where \(F(n_{1},n_{2},\cdots,n_{N},n_{p})=\frac{d^{\alpha_{p}}\prod_{j=1}^{N}(R_{j})^{ \lceil\alpha_{j}+\alpha_{j}\rfloor}}{2^{\sum_{j=1}^{N}(2n_{j}+\alpha_{j})}\prod _{j=1}^{N}\Gamma(\alpha_{j}+n_{j}+1)}\). Putting the values, we get \[S_{p}=\sum_{n_{1}=0}^{\infty}...\sum_{n_{N}=0}^{\infty}\phi_{1,2,..,N}\frac{d^{ -(l+1)-\sum_{j=1}^{N}(2n_{j}+\alpha_{j})}\prod_{j=1}^{N}(R_{j})^{(2n_{j}+\alpha _{j})}}{2^{\sum_{j=1}^{N}(2n_{j}+\alpha_{j})}\prod_{j=1}^{N}\Gamma(\alpha_{j}+ n_{j}+1)}\Gamma\Big{(}(l+1)+\sum_{j=1}^{N}(2n_{j}+\alpha_{j})\Big{)} \tag{73}\] Using Legendre's duplication formula \[\Gamma\Big{(}2\Big{(}\frac{l+1}{2}+\sum_{j=1}^{N}\Big{(}n_{j}+\frac{\alpha_{ j}}{2}\Big{)}\Big{)}\Big{)}=\frac{2^{\Big{(}l+\sum_{j=1}^{N}(2n_{j}+\alpha_{j}) \Big{)}}\Gamma\Big{(}\frac{l+1}{2}+\sum_{j=1}^{N}\Big{(}n_{j}+\frac{\alpha_{j} }{2}\Big{)}\Big{)}\Gamma\Big{(}\frac{l}{2}+1+\sum_{j=1}^{N}\Big{(}n_{j}+\frac {\alpha_{j}}{2}\Big{)}\Big{)}}{\sqrt{\pi}} \tag{74}\] putting above value in equation Eq.(73) and simplifying gives \[\begin{split} S_{p}=\sum_{n_{1}=0}^{\infty}...\sum_{n_{N}=0}^{ \infty}\phi_{1,2,..,N}\frac{d^{-(l+1)-\sum_{j=1}^{N}(2n_{j}+\alpha_{j})}\prod _{j=1}^{N}(R_{j})^{(2n_{j}+\alpha_{j})}}{2^{\sum_{j=1}^{N}(2n_{j}+\alpha_{j}) }\prod_{j=1}^{N}\Gamma(\alpha_{j}+n_{j}+1)}\\ \times\frac{\Gamma\Big{(}\frac{l+1}{2}+\sum_{j=1}^{N}\Big{(}n_{j} +\frac{\alpha_{j}}{2}\Big{)}\Big{)}\Gamma\Big{(}\frac{l}{2}+1+\sum_{j=1}^{N} \Big{(}n_{j}+\frac{\alpha_{j}}{2}\Big{)}\Big{)}}{\sqrt{\pi}}\end{split} \tag{75}\] this equation can be written in compact form as follow \[\begin{split} S_{p}=\frac{1}{\sqrt{\pi}}\Big{(}\frac{2}{d}\Big{)}^ {l}\Big{(}\frac{1}{d}\Big{)}\Gamma\Big{(}\sum_{j=1}^{N}\frac{\alpha_{j}}{2}+ \frac{l+1}{2}\Big{)}\Gamma\Big{(}\sum_{j=1}^{N}\frac{\alpha_{j}}{2}+\frac{l}{ 2}+1\Big{)}\prod_{j=1}^{N}\Big{(}\frac{R_{j}}{d}\Big{)}^{\alpha_{j}}\\ \times\sum_{n_{1}=0}^{\infty}...\sum_{n_{N}=0}^{\infty}\frac{(-1) \sum_{j=1}^{N}n_{j}}{\prod_{j=1}^{N}\Big{(}\frac{R_{j}}{d}\Big{)}^{2n_{j}}}\\ \times\frac{\Big{(}\sum_{j=1}^{N}\frac{\alpha_{j}}{2}+\frac{l+1}{2} \Big{)}_{(\sum_{j=1}^{N}n_{j})}\Big{(}\sum_{j=1}^{N}\frac{\alpha_{j}}{2}+ \frac{l}{2}+1\Big{)}_{(\sum_{j=1}^{N}n_{j})}}{\prod_{j=1}^{N}\Gamma(\alpha_{j }+1)}\end{split} \tag{76}\] \((a)_{m}\) is the Pochhammer symbol which exactly matches the series representation obtained in [37] with ROC \[\sum_{i=1}^{N}|R_{j}|<d\] The above series corresponds to the Lauricella function of N variables. \[\begin{split}& S_{p}=\frac{1}{\sqrt{\pi}}\Big{(}\frac{2}{d}\Big{)}^{ l}\Big{(}\frac{1}{d}\Big{)}\Big{(}\frac{1}{\prod_{j=1}^{N}\Gamma(a_{j}+1)}\Big{)} \Gamma\Big{(}\sum_{j=1}^{N}\frac{a_{j}}{2}+\frac{l+1}{2}\Big{)}\Gamma\Big{(} \sum_{j=1}^{N}\frac{a_{j}}{2}+\frac{l}{2}+1\Big{)}\prod_{j=1}^{N}\Big{(}\frac{ R_{j}}{d}\Big{)}^{a_{j}}\\ &\times F_{c}\bigg{(}\Big{(}\sum_{j=1}^{N}\frac{a_{j}}{2}+\frac{l +1}{2}\Big{)},\Big{(}\sum_{j=1}^{N}\frac{a_{j}}{2}+\frac{l}{2}+1\Big{)};(1+a_{1 }),\cdots,(1+a_{N});-\Big{(}\frac{R_{1}}{d}\Big{)}^{2},\cdots,-\Big{(}\frac{R_ {N}}{d}\Big{)}^{2}\bigg{)}\end{split} \tag{77}\] where \(F_{c}\) in the above equation is the Lauricella function for \(N\) variables. 
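The series of case I can be checked against the defining integral (67) for a small example, say \(N=2\), \(a_{1}=a_{2}=1\), \(l=-1\) (the weight appearing in Ruby's formula (66)), with \(R_{1}+R_{2}<d\) so that the stated region of convergence applies. A minimal numerical sketch follows (Python with mpmath is assumed; the particular parameter values and truncation order are illustrative choices).

```python
from mpmath import mp, quad, besselj, exp, gamma, sqrt, pi, rf, factorial, mpf, inf

mp.dps = 20  # illustrative precision

# N = 2 Bessel factors of order a1 = a2 = 1 and weight k^l with l = -1;
# R1 + R2 < d so that the series converges.
R1, R2, d, l, a1, a2 = mpf('0.6'), mpf('0.8'), mpf(2), -1, 1, 1

# direct evaluation of Eq.(67)
S_direct = quad(lambda k: k**l * exp(-k*d) * besselj(a1, k*R1) * besselj(a2, k*R2), [0, inf])

# truncated double series of Eqs.(76)-(77): a two-variable Lauricella F_C
A = (a1 + a2)/2 + (l + 1)/2
B = (a1 + a2)/2 + l/2 + 1
x, y = -(R1/d)**2, -(R2/d)**2
series = mpf(0)
for m_ in range(40):
    for n_ in range(40):
        series += (rf(A, m_ + n_) * rf(B, m_ + n_) / (rf(a1 + 1, m_) * rf(a2 + 1, n_))
                   * x**m_ * y**n_ / (factorial(m_) * factorial(n_)))

prefactor = (1/sqrt(pi)) * (2/d)**l / d * gamma(A) * gamma(B) \
            * (R1/d)**a1 * (R2/d)**a2 / (gamma(a1 + 1) * gamma(a2 + 1))
print(S_direct, prefactor * series)   # the two numbers should agree if Eq.(77) holds
```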
II) **With \(n_{i}\) as the dependent variable** We write the solution to Eq.(70) as \[S_{i}=\frac{1}{a}\sum_{n_{1}=0}^{\infty}\cdots\sum_{n_{N}=0}^{\infty}\phi_{1, 2,\cdots,(i-1),(i+1),\cdots,N,p}F(n_{1},n_{2},\cdots,n_{i}^{*},\cdots,n_{N},n_ {p})\Gamma(-n_{i}^{*}) \tag{78}\] putting the values, we get \[\begin{split} S_{i}=\frac{1}{2}\sum_{n_{1}=0}^{\infty}\cdots \sum_{n_{i-1}=0}^{\infty}\sum_{n_{i+1}=0}^{\infty}\cdots\sum_{n_{N}=0}^{\infty }\sum_{n_{p}=0}^{\infty}\phi_{1,2,\cdots,(i-1),(i+1),\cdots,N,p}\frac{d^{n_{p} }\Big{(}\prod_{j=1,j\neq i}^{N}(R_{j})^{(2n_{j}+a_{j})}\Big{)}}{\Big{[}2^{\sum_ {j=1,j\neq i}^{N}(2n_{j}+a_{j})}\Big{]}\Big{(}\prod_{j=1,j\neq i}^{N}\Gamma(a_ {j}+n_{j}+1)\Big{)}}\\ \times\bigg{(}\frac{1}{\Big{(}\Gamma(a_{i}-\frac{(n_{p}+l+1)}{2} -\sum_{n_{j=1,i\neq j}}^{N}(n_{j})-\sum_{j=1}^{N}\Big{(}\frac{a_{j}}{2}\Big{)} +1\Big{)}\Big{(}(R_{i})^{-(n_{p}+l+1)-\sum_{j=1,i\neq j}^{N}(2n_{j})-\sum_{j=1 }^{N}a_{j}+a_{i}}\Big{)}}\\ \times\bigg{(}\frac{1}{2^{-(n_{p}+l+1)-\sum_{j=1,i\neq j}^{N}(2n _{j})-\sum_{j=1}^{N}a_{j}+a_{i}}}\bigg{)}\Big{(}\Gamma\Big{(}\frac{n_{p}+l+1} {2}+\sum_{j=1,j\neq i}^{N}(n_{j})+\sum_{j=1}^{N}\Big{(}\frac{a_{j}}{2}\Big{)} \Big{)}\Big{)}\end{split} \tag{79}\] Eq.(79) gives series representation for all values of \(i=1,2,\cdots,N\) and is the most general form of all the analytically continued series. ## Appendix B \(\Delta_{n}\) Relations \(\Delta_{n}\) can be expressed in terms of \(B_{n}\) as has already been shown in the subsection (5.2). Here, the relations for \(\Delta_{4}\) and \(\Delta_{5}\) are provided: \[\Delta_{4}(s) =64\frac{\Big{(}3\cdot 2^{\frac{6}{2}+3}+2^{s+6}-3^{\frac{6}{2}+4} \Big{)}(s+7)+1}{(s+2)(s+4)(s+6)(s+7)(s+8)}+\frac{96}{(s+2)(s+4)}B_{2}(s+4)- \frac{96(s+8)}{(s+2)(s+4)(s+6)}B_{2}(s+6) \tag{80}\] \[+\frac{64}{s+2}B_{3}(s+2)-\frac{96(s+7)}{(s+2)(s+4)}B_{3}(s+4)+ \frac{32(s+8)(s+9)}{(s+2)(s+4)(s+6)}B_{3}(s+6)+16B_{4}(s)\] \[-\frac{88(s+6)}{3(s+2)}B_{4}(s+2)+\frac{8(s+8)(6s+43)}{3(s+2)(s+4 )}B_{4}(s+4)-\frac{8(s+8)(s+9)(s+10)}{3(s+2)(s+4)(s+6)}B_{4}(s+6)\] \[\Delta_{5}(s) =160\frac{1+(9+s)\big{(}2^{6s+2}+2^{10+s}-5^{4+s+2}-2\cdot 3^{5+s 2}\big{)}}{(2+s)(4+s)(6+s)(8+s)(9+s)(10+s)}+\frac{320}{(2+s)(4+s)(6+s)}B_{2}(6+s )+\frac{320}{(2+s)(4+s)}B_{3}(4+s)\] (81) \[-\frac{320(10+s)}{(2+s)(4+s)(6+s)(8+s)}B_{2}(8+s)-\frac{480(9+s)}{ (2+s)(4+s)(6+s)}B_{3}(6+s)+\frac{160}{2+s}B_{4}(2+s)\] \[-\frac{880}{3}\frac{(8+s)}{(2+s)(4+s)}B_{4}(4+s)+\frac{80}{3}\frac{ (10+s)(55+6s)}{(2+s)(4+s)(6+s)}B_{4}(6+s)-\frac{80}{3}\frac{(10+s)(11+s)(12+s)}{( 2+s)(4+s)(6+s)(8+s)}B_{4}(8+s)\] \[+32B_{5}(s)-200\frac{(7+s)}{6+3s}B_{5}(2+s)+\frac{4}{3}\frac{(9+s )(291+35s)}{(2+s)(4+s)}B_{5}(4+s)-\frac{8}{3}\frac{(10+s)(11+s)(47+5s)}{(2+s)(4+s )(6+s)}B_{5}(6+s)\] \[+\frac{4}{3}\frac{(10+s)(11+s)(12+s)(13+s)}{(2+s)(4+s)(6+s)(8+s)}B_{ 5}(8+s)\] ## Appendix C _Mathematica_ files Here, we give a list of the _MATHEMATICA_ files and packages that we provide, which contains the derivation of the various results of the paper.
2309.11403
Retrieving non-linear features from noisy quantum states
Accurately estimating high-order moments of quantum states is an elementary precondition for many crucial tasks in quantum computing, such as entanglement spectroscopy, entropy estimation, spectrum estimation, and predicting non-linear features from quantum states. But in reality, inevitable quantum noise prevents us from accessing the desired value. In this paper, we address this issue by systematically analyzing the feasibility and efficiency of extracting high-order moments from noisy states. We first show that there exists a quantum protocol capable of accomplishing this task if and only if the underlying noise channel is invertible. We then establish a method for deriving protocols that attain optimal sample complexity using quantum operations and classical post-processing only. Our protocols, in contrast to conventional ones, incur lower overheads and avoid sampling different quantum operations due to a novel technique called observable shift, making the protocols strong candidates for practical usage on current quantum devices. The proposed method also indicates the power of entangled protocols in retrieving high-order information, whereas in the existing methods, entanglement does not help. We further construct the protocol for large quantum systems to retrieve the depolarizing channels, making the proposed method scalable. Our work contributes to a deeper understanding of how quantum noise could affect high-order information extraction and provides guidance on how to tackle it.
Benchi Zhao, Mingrui Jing, Lei Zhang, Xuanqiang Zhao, Yu-Ao CHen, Kun Wang, Xin Wang
2023-09-20T15:28:18Z
http://arxiv.org/abs/2309.11403v2
# Retrieving non-linear features from noisy quantum states ###### Abstract Accurately estimating high-order moments of quantum states is an elementary precondition for many crucial tasks in quantum computing, such as entanglement spectroscopy, entropy estimation, spectrum estimation and predicting non-linear features from quantum states. But in reality, inevitable quantum noise prevents us from accessing the desired value. In this paper, we address this issue by systematically analyzing the feasibility and efficiency of extracting high-order moments from noisy states. We first show that there exists a quantum protocol capable of accomplishing this task if and only if the underlying noise channel is invertible. We then establish a method for deriving protocols that attain optimal sample complexity using quantum operations and classical post-processing only. Our protocols, in contrast to conventional ones, incur lower overheads and avoid sampling different quantum operations due to a novel technique called observable shift, making the protocols strong candidates for practical usage on current quantum devices. The proposed method also indicates the power of entangled protocols in retrieving high-order information, whereas in the existing methods, entanglement does not help. Our work contributes to a deeper understanding of how quantum noise could affect high-order information extraction and provides guidance on how to tackle it. ## I Introduction Quantum computing has emerged as a rapidly evolving field with the potential to revolutionize the way we process and analyze information. Such an advanced computational paradigm stores and manipulates information in a quantum state, which forms an elaborate representation of a many-body quantum system [1]. One critical task for this purpose is to estimate the \(k\)-th _moment_ of a quantum state's density matrix \(\rho\), which is often denoted as \(\mathrm{Tr}[\rho^{k}],k\in\mathbb{Z}^{+}\). For example, the second moment of \(\rho\) is commonly known as the _purity_ of \(\rho\). Accurately computing \(\mathrm{Tr}[\rho^{k}]\) provides an elementary precondition for extracting spectral information of the quantum state [2], which is crucial in supporting the evaluation of non-linear functions in quantum algorithms [3; 4], applying to entanglement spectroscopy by determining measures of entanglement, e.g., _Renyi entropy_ and _von Neumann entropy_[5; 6], and characterizing non-linear features of complex quantum systems in materials [7; 8; 9; 10]. In particular, as a core-induced development, understanding and controlling quantum entanglement inspire various quantum information breakthroughs including fundamental entanglement theories, quantum cryptography, teleportation and discrimination [11; 12; 13; 14]. Numerous methods have been proposed for efficiently estimating quantum state spectra on a quantum computer, including the deterministic quantum schemes processing intrinsic information of the state [15; 16] and the variational quantum circuit learning for approximating non-linear quantum information functions [17; 18]. Meanwhile, a direct estimation method of \(\mathrm{Tr}[\rho^{k}]\) through the Newton-Girard method and _Hadamard-Test_[5] has been proposed in [9], and then it was further improved by [19]. However, quantum systems are inherently prone to the effects of noise, which can arise due to a variety of factors, such as imperfect state preparation, coupling to the environment, and imprecise control of quantum operations [20]. 
In definition, quantum noise can be described in a language of quantum operation denoted as \(\mathcal{N}\). Such an operation can inevitably pose a significant challenge to the reliable estimation of \(\mathrm{Tr}[\rho^{k}]\) from corrupted copies of quantum state \(\mathcal{N}(\rho)\). Previous works concentrated on the first order situation by applying the inverse operation \(\mathcal{N}^{-1}\)[21; 22; 23] to each copy of the noisy state, such that \(\mathcal{N}^{-1}\circ\mathcal{N}=\mathrm{id}\), where \(\mathrm{id}\) means identity map. Such inverse operation might not be physically implementable, which requires the usage of the quasi-probability decomposition (QPD) and sampling techniques, decomposing \(\mathcal{N}^{-1}\) into a linear combination of quantum channels \(\mathcal{N}^{-1}=\sum_{i}c_{i}\mathcal{C}_{i}\). Then, the value \(\mathrm{Tr}[O\rho]\) can be estimated in a statistical manner, and the total required sampling times are square proportional to _sampling overhead_\(g=\sum_{i}|c_{i}|\)[24]. Nevertheless, the situations for estimating \(\mathrm{Tr}[\rho^{k}]\) with \(k>1\) stay unambiguous apart from handling individual state noise. In Figure 1: The general framework of recovering the high-order quantum information \(\mathrm{Tr}[\rho^{k}]\) given copies of noisy resource \(\mathcal{N}(\rho)\) based on our derived protocol, i.e., a quantum channel \(\mathcal{C}\) and measurement-based post-processing. The information can be further employed in various applications in practical quantum computing. this paper, we are going to retrieve the \(k\)-th moment from noisy states, which is illustrated in Fig. 1. To systematically analyze the feasibility and efficiency of extracting high-order moment information from noisy states, as shown in Fig. 1, The following two questions are addressed: 1. _Under what conditions can we retrieve the high-order moments from noisy quantum states?_ 2. _For such conditions, what is the quantum protocol that achieves the optimal sampling complexity?_ These two questions address the existence and efficiency of quantum protocols for retrieving high-order moment information and essential properties from noisy states, which help us to access accurate non-linear feature estimations. In the present study, we aim to address both of these questions. For the first question, we establish a necessary and sufficient condition for the retrieval of high-order moments from noisy states, which states that a quantum protocol can achieve this goal if and only if the noisy channel is invertible. Regarding the second question, we propose a quantum protocol that can attain optimal sampling complexity using quantum operations and classical post-processing only. In contrast to the conventional sampling techniques, our protocol only employs one quantum operation due to avoiding quasi-probability decomposition and developing a novel technique called _observable shift_. We also demonstrate the advantages of our method over existing QPD methods [21, 23] with step-by-step protocols for some types of noise of common interest. Our protocols incur lower sampling overheads and have simple workflows, serving as strong candidates for practical usage on current quantum devices. The proposed method also indicates the power of entanglement in retrieving high-order information, whereas in the existing methods, entangled protocols do not help [22, 25]. 
In the end, numerical experiments are performed to demonstrate the effectiveness of our protocol with depolarizing noise applied on the ground state of the Fermi-Hubbard model. Our sampling results illustrate a more accurate estimation on \(\mathrm{Tr}[\rho^{2}]\) compared with no protocol applied. ## II Moment recoverability In this section, we are going to address the first question proposed in the introduction. We discover a necessary and sufficient condition for the existence of a high-order moment extraction protocol as shown in Theorem 1. **Theorem 1**: (Necessary and sufficient condition for existence of protocol) _Given a noisy channel \(\mathcal{N}\), there exists a quantum protocol to extract the \(k\)-th moment \(\mathrm{Tr}[\rho^{k}]\) for any state \(\rho\) if and only if the noisy channel \(\mathcal{N}\) is invertible._ Intuitively, we can understand Theorem 1 from the following aspects. Estimating high-order moment demands complete information about quantum channels. If a noise channel \(\mathcal{N}\) is invertible, it means information stored in quantum states is deformed, which can be carefully re-deformed back to the original information with extra resources of noisy states and sampling techniques. However, when the loss of information is unattainable, i.e., the noise is non-invertible. Part of the information stored in the quantum state is destroyed completely, leading to an infeasible estimation problem even with extra quantum resources. An illustration of the theorem is shown in Fig. 2. In the following, we will present a sketch of proof for our main theorem. Starting with the definition of a _quantum protocol_ which is usually described as a sequence of realizable quantum operations and post-processing steps used to perform a specific task in the domain of quantum information processing. Mathematically, we say there exists a quantum protocol to retrieve the \(k\)-th moment from copies of a noisy state \(\mathcal{N}(\rho)\) if there exists an operation \(\mathcal{D}\) such that \[\mathrm{Tr}[H\mathcal{D}\circ\mathcal{N}^{\otimes k}(\rho^{\otimes k})]= \mathrm{Tr}[H\rho^{\otimes k}], \tag{1}\] where \(H\) is what we call the _moment observable_, as the usage of it is the core of extracting the high-order moment from quantum states, i.e., \(\mathrm{Tr}[H\rho^{\otimes k}]=\mathrm{Tr}[\rho^{k}]\). For example, in estimating the purity of single-qubit states, the moment observable \(H\) is just a SWAP operator correlating two qubits. It is proved in the appendix that for any order \(k\), there exists such a moment observable \(H\) to extract the \(k\)-th moment information. The inspiration of our proof comes from the QPD method used to simulate Hermitian-preserving maps on quantum devices, which has enjoyed great success in a variety of tasks, such as error mitigation [21, 23], and entanglement detection [11, 26]. We extend our allowed operation \(\mathcal{D}\) to the field covering the Hermitian-preserving maps. If the noisy channel \(\mathcal{N}\) is invertible, then there exists the inverse operation of the noisy channel \(\mathcal{N}^{-1}\), which is generally Figure 2: Illustration of Theorem 1. Suppose a state \(\rho\) is corrupted by an invertible channel \(\mathcal{N}\), and \(H\) is the moment observable, such that \(\mathrm{Tr}[H\rho^{\otimes k}]=\mathrm{Tr}[\rho^{k}]\). The state information is deformed but can be retrieved via applying \(\mathcal{D}\) and post-processing (Top). 
However, for non-invertible \(\mathcal{N}\), the high-order moment is completely destroyed and cannot be retrieved (Bottom). a Hermitian-preserving map, and \((\mathcal{N}^{-1})^{\otimes k}\) stands for a feasible solution to the high-order moment retriever. On the other hand, by assuming a Hermitian-preserving map \(\mathcal{D}\) satisfying Eq. (1) and non-invertible \(\mathcal{N}\). In the view of the Heisenberg picture, the adjoint of the maps in Eq. (1) satisfies: \[\mathrm{Tr}[(\mathcal{N}^{\otimes k})^{\dagger}\circ\mathcal{D}^{\dagger}(H) \rho^{\otimes k}]=\mathrm{Tr}[H\rho^{\otimes k}]. \tag{2}\] It has been proved in [27] that given an observable \(O\), a Hermitian-preserving map \(\mathcal{M}\) satisfies \(\mathrm{Tr}[\mathcal{M}(\rho)O]=\mathrm{Tr}[\rho O]\) for any state \(\rho\) if and only if it holds that \(\mathcal{M}^{\dagger}(O)=O\). Thus, we can derive that as long as we find a Hermitian-preserving operation \(\mathcal{D}\) such that the condition \[(\mathcal{N}^{\otimes k})^{\dagger}\circ\mathcal{D}^{\dagger}(H)=H \tag{3}\] is satisfied, the problem is solved. Since the effective rank of \(H\) is full, whose definition and proof are shown in appendix, then from the fact that \(\mathrm{Rank}(B)\leq\min\left(\mathrm{Rank}(A),\mathrm{Rank}(B)\right)\), we can deduce that \(\mathcal{N}\) is invertible, contradicting to our assumption. This means there exists no quantum protocol for extracting high-order moments when the noise is non-invertible. The detailed proof is given in the appendix. ## III Observable shift method In the previous part, we mentioned that applying the inverse operation of a noisy channel \(\mathcal{N}^{-1}\) to noisy states simultaneously to mitigate the error is one feasible solution to retrieve high-order moments. However, this channel inverse method requires exponentially many resources with respect to \(k\) to retrieve the \(k\)-th moment. Also, the implementation of inverse operation \(\mathcal{N}^{-1}\) is not quantum device friendly because it has to sample and implement different quantum channels probabilistically. In this section, we propose a new method called _observable shift_ to retrieve high-order moment information from noisy states, which requires only one quantum operation with comparable sampling complexity. **Lemma 2** (Observable shift): _Given an invertible quantum channel \(\mathcal{N}\) and an observable \(O\), there exists a quantum channel \(\mathcal{C}\), called retriever, and coefficients \(t,f\) such that_ \[\mathcal{N}^{\dagger}\circ\mathcal{C}^{\dagger}(O)=\frac{1}{f}\left(O+tI \right). \tag{4}\] We develop this observable shift technique since the expectation of \(O+tI\) regarding any quantum states can be computed as \(\mathrm{Tr}[O\rho]+t\) during the measurement procedures. Moreover, if one wants to maintain the retrievability of \(\mathrm{Tr}[O\rho^{k}]\) from the noise channel \(\mathcal{N}\) with respect to any possible quantum states, then the only change that could be made to the observable is to add constant identity since such a transformation could maintain the original information of \(\rho\). Therefore, the trace value can be retrieved via measurement and post-processing. 
For instance, when we estimate \(\mathrm{Tr}[O\rho]\), where \(O=I+X+Z\), By skipping the identity, often called _shifting the observable_, the value of \(\mathrm{Tr}[O\rho]\) can still be re-derived by post-adding a value of one to the expectation value of the _shifted observable_\(O^{\prime}=X+Z\), i.e., \(\mathrm{Tr}[O\rho]=1+\mathrm{Tr}[O^{\prime}\rho]\). Besides, instead of mitigating noise states individually, our method utilizes entanglement to retrieve the information with respect to the moment observable \(H\). Compared with the channel inverse method, the proposed observable shift method requires fewer quantum resources, and its implementation is easier. The proposed observable shift method leads to Proposition 3. **Proposition 3**: _Given error tolerance \(\delta\), the \(k\)-th moment information can be retrieved by sampling one quantum channel with complexity \(\mathcal{O}(f_{\min}^{2}(\mathcal{N},k)/\delta^{2})\) and post-processing. The quantity \(f_{\min}(\mathcal{N},k)\) is the sampling overhead defined as_ \[f_{\min}(\mathcal{N},k)=\min\Big{\{}f\left|(\mathcal{N}^{\otimes k })^{\dagger}\circ\mathcal{C}^{\dagger}(H)=\frac{1}{f}\left(H+tI\right),\right.\] \[\left.\qquad\qquad\qquad\qquad\qquad f\in\mathbb{R}^{+},t\in \mathbb{R},\mathcal{C}\in\mathrm{CPTP}\Big{\}}, \tag{5}\] _where \(\mathcal{N}\) is the noisy channel, \(\mathcal{C}\) is quantum channel, \(t\) is the shifted distance, \(H\) is the moment observable._ In our cases, we aim to find a Hermitian-preserving map \(\mathcal{D}\) such that Eq. (3) holds together with the allowance of observable shifting (4), i.e., \[(\mathcal{N}^{\otimes k})^{\dagger}\circ\mathcal{D}^{\dagger}(H)=H^{\prime}-tI, \tag{6}\] where \(H^{\prime}=H+tI\) is the shifted observable, and \(t\) is a real coefficient. Note that the quantum channel \(\mathcal{N}^{\otimes k}\) is a completely positive and trace preserving (CPTP) map, and the adjoint of a CPTP map is completely positive unital preserving [28], which refers \((\mathcal{N}^{\otimes k})^{\dagger}(I)=I\). Thus we have \[(\mathcal{N}^{\otimes k})^{\dagger}(\mathcal{D}^{\dagger}(H)+tI)=H^{\prime}, \tag{7}\] where we can consider \(\mathcal{D}^{\dagger}(H)+tI\) as a whole and denote it as \(\tilde{\mathcal{D}}(H)\). With proper coefficient \(t\), the map \(\tilde{\mathcal{D}}\) could reduce to a completely positive map \(\mathcal{C}\). The detailed proof is given in the appendix. If we apply the quantum channel \(\mathcal{C}\) to a noisy state and make measurement over moment observable \(H\), the expectation value will be \[\zeta =\mathrm{Tr}[H\mathcal{C}\circ\mathcal{N}^{\otimes k}(\rho^{ \otimes k})]=\mathrm{Tr}[(\mathcal{N}^{\otimes k})^{\dagger}\circ\mathcal{C}^{ \dagger}(H)\rho^{\otimes k}] \tag{8}\] \[=\frac{1}{f}\mathrm{Tr}[(H+tI)\rho^{\otimes k}]=\frac{1}{f}( \mathrm{Tr}[H\rho^{\otimes k}]+t). \tag{9}\] Obviously, the desired high-order moment is given by \(\mathrm{Tr}[H\rho^{\otimes k}]=f\zeta-t\). In order to obtain the target expectation value of \(\mathrm{Tr}[H\rho^{\otimes k}]\) within an error \(\delta\) with a probability no less than \(1-p\), the number of total sampling times \(T\) is given by Hoeffding's inequality [24], \[T\geq f^{2}\frac{2}{\delta^{2}}\log(\frac{2}{p}). \tag{10}\] Usually, the success probability \(1-p\) is fixed. Thus we consider it as a constant in this paper, and the corresponding sample compleixty is \(\mathcal{O}(f^{2}/\delta^{2})\), which only depends on error tolerance \(\delta\) and the _sampling overhead_\(f\). 
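As an illustration of how the post-processing of Eqs.(8)–(10) is used, consider the single-qubit purity (\(k=2\)) with the SWAP operator as the moment observable and depolarizing noise, the model analyzed in the next subsection. One can check that for this particular channel the bare noisy SWAP expectation already obeys the affine relation with \(f=1/(1-\epsilon)^{2}\) and \(t=(1-(1-\epsilon)^{2})/(2(1-\epsilon)^{2})\), the same constants quoted for the depolarizing example below, so no extra retriever channel is applied in this toy check. A minimal numerical sketch (Python with NumPy assumed; the noise level, error tolerance and failure probability are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)

def random_qubit_state():
    # a generic single-qubit test density matrix: random pure state mixed with identity
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    rho = np.outer(v, v.conj()); rho /= np.trace(rho)
    return 0.8 * rho + 0.2 * np.eye(2) / 2

def depolarize(rho, eps):
    # single-qubit depolarizing channel: rho -> (1 - eps) rho + eps I/2
    return (1 - eps) * rho + eps * np.eye(2) / 2

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])   # moment observable H for k = 2

eps = 0.1
rho = random_qubit_state()
noisy = depolarize(rho, eps)

zeta = np.trace(SWAP @ np.kron(noisy, noisy)).real    # ideal expectation of H on noisy copies
f = 1 / (1 - eps)**2                                  # sampling overhead for this channel
t = (1 - (1 - eps)**2) / (2 * (1 - eps)**2)           # shift distance for this channel
print(f * zeta - t, np.trace(rho @ rho).real)         # recovered vs true Tr[rho^2]

# Hoeffding bound, Eq.(10): shots needed for error delta with success probability 1 - p
delta, p = 0.01, 0.05
print(int(np.ceil(f**2 * 2 / delta**2 * np.log(2 / p))))
```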
It is desirable to find a quantum retriever \(\mathcal{C}\) and shift distance \(t\) to make the sampling overhead \(f\) as small as possible. The optimal sampling overhead \(f_{\min}(\mathcal{N},k)\) of our method can be calculated by SDP as follows: \[f_{\min}(\mathcal{N},k) =\min f\] (11a) subject to \[J_{\tilde{\mathcal{C}}}\geq 0 \tag{11b}\] \[\mathrm{Tr}_{C}[J_{\mathcal{C}_{BC}}]=fI_{B}\] (11c) \[J_{\mathcal{F}_{AC}}\equiv\mathrm{Tr}_{B}[(J_{\mathcal{N}_{ \mathrm{AB}}^{\mathrm{e}}}^{T_{B}}\otimes I_{C})(I_{A}\otimes J_{\mathcal{C}_{ BC}})]\] (11d) \[\mathrm{Tr}_{C}[(I_{A}\otimes H_{C}^{T})J_{\mathcal{F}_{AC}}^{T}]=H _{A}+tI_{A}. \tag{11e}\] The \(J_{\tilde{\mathcal{C}}}\) and \(J_{\mathcal{N}^{\otimes k}}\) are the Choi-Jamiolkowski matrices for the completely positive trace-scaling map \(\tilde{\mathcal{C}}=f\mathcal{C}\) and noise channel \(\mathcal{N}^{\otimes k}\) respectively. Eq. (11b) corresponds to the condition that the map \(\tilde{\mathcal{C}}\) is completely positive, and Eq. (11c) guarantees that \(\tilde{\mathcal{C}}\) is a trace-scaling map. In Eq. (11d), \(J_{\mathcal{F}}\) is the Choi matrix of the composed map \(\tilde{\mathcal{C}}\circ\mathcal{N}^{\otimes k}\). Eq. (11e) corresponds to the constraint shown in Eq. (4). Beyond retrieving particular non-linear features, i.e., \(\mathrm{Tr}[\rho^{k}]\), we can also apply our method to estimate non-linear functions. For a toy example, if we wish to estimate the function \(F(\rho)=\frac{1}{2}\,\mathrm{Tr}[\rho^{2}]+\frac{1}{3}\,\mathrm{Tr}[\rho^{3}]\), we should design the moment observable first, which is supposed to be \(H=\frac{1}{2}H_{2}\otimes I+\frac{1}{3}H_{3}\), where \(H_{2}\) and \(H_{3}\) are the moment observables for two and three qubits respectively. Then, the retrieving protocol with optimal sampling overhead is given by SDP as shown in Eq. (11). ## Protocols for particular noise channels We have introduce the observable shift method in previous part, next will provide the analytical protocol for retrieving the second-order moment information \(\mathrm{Tr}[\rho^{2}]\) from noisy quantum states suffering from depolarizing channel and amplitude dimpling channel, respectively. Depolarizing channel has been extensively studied due to its simplicity and ability to represent a wide range of physical processes that can affect quantum states [2]. A quantum state undergoes a depolarizing channel would be randomly replaced by a maximally mixed state with a certain error rate. The single-qubit depolarizing (DE) noise \(\mathcal{N}_{\mathrm{DE}}^{\epsilon}\) has an exact form, \[\mathcal{N}_{\mathrm{DE}}^{\epsilon}(\rho)=(1-\epsilon)\rho+\epsilon\frac{I}{ 2}, \tag{12}\] where \(\epsilon\) is the noise level, and \(I\) refers to the identity operator. Given many copies of such noisy quantum states, our method derives a protocol for retrieving the second-order moment \(\mathrm{Tr}[\rho^{2}]\) using only one quantum channel and post-processing. Specifically, we have Proposition 4. **Proposition 4**: _Given noisy states \(\mathcal{N}_{\mathrm{DE}}^{\epsilon}(\rho)^{\otimes 2}\), and error tolerance \(\delta\), the second order moment \(\mathrm{Tr}[\rho^{2}]\) can be estimated by \(f\,\mathrm{Tr}[H\mathcal{C}\circ\mathcal{N}_{\mathrm{DE}}^{\epsilon}(\rho)^{ \otimes 2}]-t\), with optimal sample complexity \(\mathcal{O}(1/(\delta^{2}(1-\epsilon)^{4})\), where \(f=\frac{1}{(1-\epsilon)^{2}}\), \(t=\frac{1-(1-\epsilon)^{2}}{2(1-\epsilon)^{2}}\). 
The term \(\mathrm{Tr}[H\mathcal{C}\circ\mathcal{N}_{\mathrm{DE}}^{\epsilon}(\rho)^{ \otimes 2}]\) can be estimated by implementing a quantum retriever \(\mathcal{C}\) on noisy states and making measurement over moment observable \(H\). Moreover, there exists an ensemble of unitary operations \(\{p_{j},U_{j}\}_{j}\) such that the action of the retriever \(\mathcal{C}\) can be interpreted as,_ \[\mathcal{C}(\cdot)=\sum_{j=1}p_{j}U_{j}(\cdot)U_{j}^{\dagger} \tag{13}\] We derive an explicit form of a mixed-unitary ensemble containing twelve fixed unitary \(U_{j}\)'s given in the appendix. As a result, the second order moment can be retrieved from depolarized states \(\mathcal{N}_{\mathrm{DE}}^{\epsilon}(\rho)^{\otimes 2}\) by applying the unitaries \(U_{j}\) randomly with equal probabilities and then performing measurements with respect to the moment observable \(H\). After repeating these steps for \(T\) rounds, where \(T\) is given by Eq. (10), and averaging the measurement results, we can obtain the estimated expectation value \(\zeta=\mathrm{Tr}[H\mathcal{C}(\mathcal{N}_{\mathrm{DE}}^{\epsilon}(\rho)^{ \otimes 2})]\). Then, the desired the second-order moment is given by \[\mathrm{Tr}[\rho^{2}]=\frac{1}{(1-\epsilon)^{2}}\zeta-\frac{1-(1-\epsilon)^{2} }{2(1-\epsilon)^{2}}. \tag{14}\] When estimating \(\mathrm{Tr}[\rho^{2}]\) from copies of the noisy state \(\mathcal{N}_{\mathrm{DE}}^{\epsilon}(\rho)^{\otimes 2}\), QPD-based methods incurs a sampling overhead \(\frac{(1+\epsilon/2)^{2}}{(1-\epsilon)^{2}}\). On the other hand, our observable shift method offers a protocol with a lower sampling overhead \(\frac{1}{(1-\epsilon)^{2}}\), which is much lower than that of the QPD-based method. Besides, the quantum amplitude damping (AD) channel is another important model that we are interested in, which often appears in superconducting qubits or trapped ions. This type of noise is particularly relevant for the loss of energy or the dissipation of excited states [29], whose action results in the transition of a qubit's excited state to its ground state, offering a more realistic representation of energy relaxation processes in quantum systems. The AD channel \(\mathcal{N}_{\mathrm{AD}}^{\epsilon}\) is characterized by a single parameter \(\varepsilon\), representing the damping rate, which has two Kraus operators: \(A_{0}^{\epsilon}\coloneqq|0\rangle\!\langle 0|+\sqrt{1-\varepsilon}|1\rangle\! \langle 1|\) and \(A_{1}^{\epsilon}\coloneqq\sqrt{\varepsilon}|0\rangle\!\langle 1|\), where \(\varepsilon\in[0,1]\). Similarly, given many copies of AD-produced quantum states, the second-order information \(\mathrm{Tr}[\rho^{2}]\) can be retrieved by applying only one quantum channel and post-processing with our protocol in Proposition 5. **Proposition 5**: _Given noisy states \(\mathcal{N}_{\mathrm{AD}}^{\epsilon}(\rho)^{\otimes 2}\), and error tolerance \(\delta\), the second order moment \(\mathrm{Tr}[\rho^{2}]\) can be estimated by \(f\,\mathrm{Tr}[H\mathcal{C}\circ\mathcal{N}_{\mathrm{AD}}^{\epsilon}(\rho)^{ \otimes 2}]-t\), with optimal sample complexity \(\mathcal{O}(1/(\delta^{2}(1-\varepsilon)^{4})\), where \(f=\frac{1}{(1-\epsilon)^{2}}\), \(t=-\frac{\epsilon^{2}}{(1-\epsilon)^{2}}\). The term \(\mathrm{Tr}[H\mathcal{C}\circ\mathcal{N}_{\mathrm{AD}}^{\epsilon}(\rho)^{ \otimes 2}]\) can be estimated by implementing a quantum retriever \(\mathcal{C}\) on noisy states and making measurement. 
The quantum amplitude damping (AD) channel is another important noise model, arising frequently in superconducting qubits and trapped ions. This type of noise captures the loss of energy or the dissipation of excited states [29]: its action drives a qubit's excited state towards the ground state, offering a realistic representation of energy relaxation processes in quantum systems. The AD channel \(\mathcal{N}_{\mathrm{AD}}^{\varepsilon}\) is characterized by a single parameter \(\varepsilon\in[0,1]\), representing the damping rate, and has two Kraus operators: \(A_{0}^{\varepsilon}\coloneqq|0\rangle\!\langle 0|+\sqrt{1-\varepsilon}|1\rangle\! \langle 1|\) and \(A_{1}^{\varepsilon}\coloneqq\sqrt{\varepsilon}|0\rangle\!\langle 1|\). Similarly, given many copies of AD-corrupted quantum states, the second-order information \(\mathrm{Tr}[\rho^{2}]\) can be retrieved by applying only one quantum channel and post-processing, as stated in Proposition 5. **Proposition 5**: _Given noisy states \(\mathcal{N}_{\mathrm{AD}}^{\varepsilon}(\rho)^{\otimes 2}\) and error tolerance \(\delta\), the second-order moment \(\mathrm{Tr}[\rho^{2}]\) can be estimated by \(f\,\mathrm{Tr}[H\mathcal{C}\circ\mathcal{N}_{\mathrm{AD}}^{\varepsilon}(\rho)^{ \otimes 2}]-t\), with optimal sample complexity \(\mathcal{O}(1/(\delta^{2}(1-\varepsilon)^{4}))\), where \(f=\frac{1}{(1-\varepsilon)^{2}}\), \(t=-\frac{\varepsilon^{2}}{(1-\varepsilon)^{2}}\). The term \(\mathrm{Tr}[H\mathcal{C}\circ\mathcal{N}_{\mathrm{AD}}^{\varepsilon}(\rho)^{ \otimes 2}]\) can be estimated by implementing a quantum retriever \(\mathcal{C}\) on the noisy states and performing a measurement. Moreover, the Choi matrix of this retriever \(\mathcal{C}\) is_ \[J_{\mathcal{C}} =|00\rangle\!\langle 00|\otimes\frac{1}{6}((1+2\varepsilon)II+(1-4 \varepsilon)H)\] \[\quad+|\Psi^{+}\rangle\!\langle\Psi^{+}|\otimes\frac{1}{6}((1+2 \varepsilon)II+(1-4\varepsilon)H)\] \[\quad+|\Psi^{-}\rangle\!\langle\Psi^{-}|\otimes\frac{1}{2}(II-H)\] \[\quad+|11\rangle\!\langle 11|\otimes\frac{1}{6}(II+H), \tag{15}\] _where \(|\Psi^{\pm}\rangle\!\langle\Psi^{\pm}|=\frac{1}{2}(|01\rangle\pm|10\rangle)( \langle 01|\pm\langle 10|)\) are Bell states._ The above retriever \(\mathcal{C}\) can be implemented through the following measurement and post-processing. Given amplitude damping noisy states \((\mathcal{N}_{\mathrm{AD}}^{\varepsilon}(\rho))^{\otimes 2}\), we perform a measurement in the basis \(\mathcal{B}=\{|00\rangle,|\Psi^{+}\rangle,|\Psi^{-}\rangle,|11\rangle\}\). From the Choi matrix of the retriever \(\mathcal{C}\) shown in Eq. (15), we know that, depending on the measurement outcome, the quantum system collapses to one of the states \(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\), where \(\sigma_{1}=\sigma_{2}=\frac{1}{6}((1+2\varepsilon)II+(1-4\varepsilon)H)\), \(\sigma_{3}=\frac{1}{2}(II-H)\) and \(\sigma_{4}=\frac{1}{6}(II+H)\). Each state corresponds to a fixed expectation value \(\mathrm{Tr}[H\sigma_{i}]\), which can be precomputed via direct matrix calculation with fixed \(H\) and known \(\sigma_{i}\). The next step is to run sufficiently many shots of basis-\(\mathcal{B}\) measurements to determine the probability \(p_{i}\) of obtaining each basis state. The term \(\mathrm{Tr}[H\mathcal{C}\circ\mathcal{N}_{\mathrm{AD}}^{\varepsilon}(\rho)^{ \otimes 2}]\) is then given by the estimated value \(\zeta=\sum_{i=1}^{4}p_{i}\,\mathrm{Tr}[H\sigma_{i}]\). The desired second-order moment is obtained by \[\mathrm{Tr}[\rho^{2}]=\frac{1}{(1-\varepsilon)^{2}}\zeta+\frac{\varepsilon^{2 }}{(1-\varepsilon)^{2}}. \tag{16}\] More details can be found in the appendix. The sampling overhead of QPD-based methods is \(\frac{(1+\varepsilon)^{2}}{(1-\varepsilon)^{2}}\), while the overhead incurred by our method is still as low as \(\frac{1}{(1-\varepsilon)^{2}}\), showing that our method requires fewer quantum resources. 
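The measurement-and-post-processing protocol above, together with Eq. (16), can also be verified numerically. The sketch below again assumes that the moment observable \(H\) is the two-qubit SWAP operator, and it computes the basis-\(\mathcal{B}\) probabilities exactly rather than by sampling.

```python
# Minimal numerical check of the amplitude-damping protocol and Eq. (16),
# assuming the two-qubit moment observable H is the SWAP operator.
import numpy as np

rng = np.random.default_rng(1)

def random_qubit_state():
    a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

def amplitude_damp(rho, eps):
    A0 = np.array([[1, 0], [0, np.sqrt(1 - eps)]], dtype=complex)
    A1 = np.array([[0, np.sqrt(eps)], [0, 0]], dtype=complex)
    return A0 @ rho @ A0.conj().T + A1 @ rho @ A1.conj().T

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
I4 = np.eye(4)

eps = 0.25
rho = random_qubit_state()
noisy2 = np.kron(amplitude_damp(rho, eps), amplitude_damp(rho, eps))

# Measurement basis B = {|00>, |Psi+>, |Psi->, |11>} and the collapsed states sigma_i of Eq. (15).
e00 = np.array([1, 0, 0, 0], dtype=complex)
e11 = np.array([0, 0, 0, 1], dtype=complex)
psi_p = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
psi_m = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
basis = [e00, psi_p, psi_m, e11]

sig_01 = ((1 + 2 * eps) * I4 + (1 - 4 * eps) * SWAP) / 6     # sigma_1 = sigma_2
sigmas = [sig_01, sig_01, (I4 - SWAP) / 2, (I4 + SWAP) / 6]

probs = [np.real(v.conj() @ noisy2 @ v) for v in basis]       # exact outcome probabilities
zeta = sum(p * np.trace(SWAP @ s).real for p, s in zip(probs, sigmas))

estimate = zeta / (1 - eps) ** 2 + eps ** 2 / (1 - eps) ** 2  # Eq. (16)
print(estimate, np.trace(rho @ rho).real)                     # both ~ Tr[rho^2]
```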
## III Comparison with existing protocols To extract high-order moment information from noisy states, one straightforward method is to apply the inverse of the noise channel, \(\mathcal{N}^{-1}\), to the quantum states in order to mitigate the error, and then perform a measurement with respect to the moment observable \(H\), that is, \(\mathrm{Tr}[H\rho^{\otimes k}]=\mathrm{Tr}[H\mathcal{N}^{-1}\circ(\mathcal{N }(\rho))^{\otimes k}]\). However, the map \(\mathcal{N}^{-1}\) might not be a physical quantum channel [22], so it cannot be implemented directly on a quantum system. Fortunately, such a map can be simulated by quasi-probability decomposition, which decomposes the non-physical map into a linear combination of physical quantum channels, i.e., \(\mathcal{N}^{-1}=\sum_{i}c_{i}\mathcal{C}_{i}\), where the \(c_{i}\) are real coefficients (which can be negative) and the \(\mathcal{C}_{i}\) are physical quantum channels. In terms of physical implementation, in the \(t\)-th of \(T\) total sampling rounds, we first sample a quantum channel \(\mathcal{C}^{(t)}\) from \(\{\mathcal{C}_{i}\}\) with probability \(\{|c_{i}|/g\}\), where \(g=\sum_{i}|c_{i}|\), and apply it to the noisy state, obtaining \(\mathcal{C}^{(t)}\circ\mathcal{N}(\rho)\). Then we perform a measurement and record the result \(o^{(t)}\). After \(T\) rounds of sampling, we obtain an estimate \(\zeta=\frac{g}{T}\sum_{t=1}^{T}\mathrm{sgn}(c_{i}^{(t)})o^{(t)}\) of the target expectation value \(\mathrm{Tr}[O\rho]\). The total number of samples \(T\) is again given by Hoeffding's inequality, as shown in Eq. (10). The optimal sampling overhead \(g_{\min}(\mathcal{N})\) is given by [22, 25, 27] \[g_{\min}(\mathcal{N})=\min\Big{\{}\sum_{i}|c_{i}| |\mathcal{N}^{-1}=\sum_{i}c_{i}\mathcal{C}_{i},\] \[c_{i}\in\mathbb{R},\mathcal{C}_{i}\in\text{CPTP}\Big{\}}, \tag{17}\] which can be obtained by an SDP, as displayed in the appendix. When the channel-inverse method is applied to retrieve the \(k\)-th moment, the inverse operation has to be applied simultaneously on \(k\) quantum systems; the corresponding optimal sampling overhead is given by \(g_{\min}(\mathcal{N},k)\), namely \[g_{\min}(\mathcal{N},k)=\min\Big{\{}\sum_{i}|c_{i}| |(\mathcal{N}^{-1})^{\otimes k}=\sum_{i}c_{i}\mathcal{C}_{i},\] \[c_{i}\in\mathbb{R},\mathcal{C}_{i}\in\text{CPTP}\Big{\}}. \tag{18}\] With Eq. (5) and Eq. (18), we can compare the sampling overhead of our method with that of the conventional QPD channel-inverse method, which leads to Lemma 6. **Lemma 6**: _For an arbitrary invertible quantum noise channel \(\mathcal{N}\) and moment order \(k\), we have_ \[f_{\min}(\mathcal{N},k)\leq g_{\min}(\mathcal{N},k) \tag{19}\] Lemma 6 implies that in the task of extracting the \(k\)-th moment \(\mathrm{Tr}[\rho^{k}]\) from noisy states, the proposed method requires fewer samples, i.e., consumes fewer quantum resources. The detailed proof is given in the appendix. It has been proved by Regula _et al._ in Ref. [25] that the sampling overhead for simulating a trace-preserving linear map is equal to its diamond norm. Specifically, in the case of the inverse operation \(\mathcal{N}^{-1}\), we have \(g_{\min}(\mathcal{N})=\|\mathcal{N}^{-1}\|_{\diamond}\). Note that the diamond norm is multiplicative with respect to the tensor product [22, 25], i.e., \(\|(\mathcal{N}^{-1})^{\otimes k}\|_{\diamond}=\|\mathcal{N}^{-1}\|_{\diamond}^ {k}\). Thus, we conclude that the optimal sampling overhead for retrieving a high-order moment from noisy states increases exponentially with the moment order \(k\): \[g_{\min}(\mathcal{N},k) =g_{\min}(\mathcal{N}^{\otimes k})=\|(\mathcal{N}^{-1})^{\otimes k }\|_{\diamond}=\|\mathcal{N}^{-1}\|_{\diamond}^{k}\] \[=g_{\min}(\mathcal{N})^{k}. \tag{20}\] To illustrate the advantage of the proposed method over the channel-inverse method in terms of sampling overhead, we conduct a numerical experiment extracting the third moment \(\mathrm{Tr}[\rho^{3}]\) from the amplitude damping noise channel at different noise levels. The results are shown in Fig. 3.

Figure 3: The sampling overhead as a function of the noise level for estimating \(\mathrm{Tr}[\rho^{3}]\) from the amplitude-damping-corrupted state \(\mathcal{N}(\rho)\). The red curve refers to the overhead of the channel-inverse method, the green curve stands for the information recovery method, and the blue curve is the newly proposed method.

The red and blue curves stand for the sampling overhead of the channel-inverse and observable shift methods, respectively. Compared with the QPD-based inverse-operation method, the proposed observable shift method has at least a two-fold advantage. First, our method can achieve a lower sampling overhead, i.e., \(f_{\min}(\mathcal{N},k)\leq g_{\min}(\mathcal{N},k)\). Previously, we have shown examples where \(f_{\min}(\mathcal{N},k)\) is strictly smaller, indicating the effectiveness of the newly proposed observable shift technique. It is also observed that the optimal protocol given by our method is generally an entangled one. In contrast to QPD, where entanglement provides no advantage [22, 25], our method demonstrates the power of entanglement for tackling noise. Second, the protocols given by our method are more hardware-friendly, as they only need to repeat one fixed quantum channel, whereas QPD-based methods have to sample from multiple quantum channels and implement each of them. 
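For concreteness, the overheads quoted above for the \(k=2\) case can be tabulated directly. The short snippet below compares the observable shift overhead \(1/(1-\epsilon)^{2}\) with the QPD channel-inverse overheads \((1+\epsilon/2)^{2}/(1-\epsilon)^{2}\) (depolarizing) and \((1+\epsilon)^{2}/(1-\epsilon)^{2}\) (amplitude damping) as the noise level varies.

```python
# Compare the k = 2 sampling overheads quoted in the text: observable shift vs.
# QPD channel inverse, for depolarizing (DE) and amplitude damping (AD) noise.
import numpy as np

for eps in np.linspace(0.05, 0.5, 10):
    f_shift = 1 / (1 - eps) ** 2                       # observable shift (both channels)
    g_qpd_de = (1 + eps / 2) ** 2 / (1 - eps) ** 2     # QPD, depolarizing
    g_qpd_ad = (1 + eps) ** 2 / (1 - eps) ** 2         # QPD, amplitude damping
    print(f"eps={eps:.2f}  shift={f_shift:6.2f}  QPD(DE)={g_qpd_de:6.2f}  QPD(AD)={g_qpd_ad:6.2f}")
```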
## IV Application to Fermi-Hubbard model The Fermi-Hubbard model is a key focus in condensed matter physics due to its relevance in metal-insulator transitions and high-temperature superconductivity [30, 31]. Recent studies have shown that entanglement spectroscopy can be utilized to extract critical exponents and phase transitions in the Fermi-Hubbard model [32, 33, 34]. Since the model describes a broad range of correlated-electron physics, it requires multi-determinant and highly accurate calculations [34, 35], which in turn demand careful control of quantum noise. In a physical system such as a metallic crystal with an \(n_{x}\times n_{y}\) square lattice, each lattice point, known as a site, is assigned an index. The Hubbard model Hamiltonian takes on a fermionic form in second quantization, \[\begin{split} H_{\text{Hubbard}}&=-J\sum_{\langle i,j\rangle,\sigma}(a_{i\sigma}^{\dagger}a_{j\sigma}+a_{j\sigma}^{\dagger}a_{i \sigma})\\ &+U\sum_{i}n_{i\uparrow}n_{i\downarrow}+H_{\text{local}},\end{split} \tag{21}\] where \(a_{i\sigma}^{\dagger},a_{i\sigma}\) are fermionic creation and annihilation operators; \(n_{i\sigma}=a_{i\sigma}^{\dagger}a_{i\sigma}\) are the number operators; the notation \(\langle i,j\rangle\) denotes adjacent sites in the \(n_{x}\times n_{y}\) rectangular lattice; and \(\sigma\in\{\uparrow,\downarrow\}\) labels the spin orbital. The first term in Eq. (21) is the hopping term, where \(J\) denotes the tunnelling amplitude. The second term is the on-site Coulomb repulsion of strength \(U\). The final term defines the local potential resulting from the nuclear-electron interaction, which we have chosen to be of Gaussian form [36]: \[H_{\text{local}}=\sum_{j=1}\sum_{\nu=\uparrow,\downarrow}\epsilon_{j,\nu}n_{j, \nu};\quad\epsilon_{j,\nu}=-\lambda_{\nu}e^{-\frac{1}{2}(j-m_{\nu})^{2}/\sigma _{\nu}^{2}}. \tag{22}\] In the following, we consider a specific 3-site (6-qubit) Fermi-Hubbard Hamiltonian with \(J=2,U=3\) and \(\lambda_{\uparrow,\downarrow}=3,0.1\), \(m_{\uparrow,\downarrow}=3,3\). The standard deviation \(\sigma_{\nu}\) for both the spin-up and spin-down potentials is set to \(1\), guaranteeing a charge-spin symmetry around the centre site (\(i=2\)) of the chain system. 
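As an illustration of Eqs. (21)-(22), the sketch below assembles this 3-site Hamiltonian with the parameters just quoted. It assumes the openfermion package, a spin-orbital ordering \(q=2j+s\) (with \(s=0\) for spin up and \(s=1\) for spin down), and sites labelled \(1,2,3\) in the Gaussian potential; these conventions are illustrative assumptions rather than the ones used in our simulations.

```python
# Sketch of the 3-site Hubbard Hamiltonian of Eqs. (21)-(22) with J=2, U=3,
# lambda_up=3, lambda_down=0.1, m=3, sigma=1.  Assumes openfermion and a
# spin-orbital ordering q = 2*j + s; the site labelling (1..3) is an assumption.
import numpy as np
from openfermion import FermionOperator, hermitian_conjugated, get_sparse_operator

n_sites, J, U = 3, 2.0, 3.0
lam = {0: 3.0, 1: 0.1}            # spin-up / spin-down potential amplitudes
m, sig = 3.0, 1.0

def mode(j, s):                   # spin-orbital index for site j, spin s
    return 2 * j + s

H = FermionOperator()
for j in range(n_sites - 1):      # hopping between adjacent sites
    for s in (0, 1):
        hop = FermionOperator(f"{mode(j, s)}^ {mode(j + 1, s)}", -J)
        H += hop + hermitian_conjugated(hop)
for j in range(n_sites):          # on-site Coulomb repulsion
    H += FermionOperator(f"{mode(j, 0)}^ {mode(j, 0)} {mode(j, 1)}^ {mode(j, 1)}", U)
for j in range(n_sites):          # local Gaussian potential, Eq. (22)
    for s in (0, 1):
        eps_js = -lam[s] * np.exp(-0.5 * ((j + 1) - m) ** 2 / sig ** 2)
        H += FermionOperator(f"{mode(j, s)}^ {mode(j, s)}", eps_js)

H_matrix = get_sparse_operator(H, n_qubits=2 * n_sites)   # 6-qubit matrix via Jordan-Wigner
print(H_matrix.shape)
```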
The ground-state entanglement spectroscopy of the model identifies the topological-ordering signatures of the system, which requires high-precision entropy estimation over each bipartite sector of the entire system. We show that the quantum noise can be mitigated, and the determination of \(\mathrm{Tr}[\mathcal{N}(\rho_{A})^{2}]\) thereby enhanced, via our proposed method, as detailed in the appendix. Fig. 4 displays the sampling distribution with and without error mitigation. The orange curve refers to the estimated distribution of the second-order information \(\mathrm{Tr}[\rho^{2}]\) from noisy states, the cyan curve shows the estimated distribution with error mitigation, and the black dashed line is the exact value of \(\mathrm{Tr}[\rho^{2}]\).

Figure 4: Simulation of high-order moment \(\mathrm{Tr}[\rho^{2}]\) estimation. The curves are calculated from sampling. The orange curve represents the estimation from the depolarizing-noised state \(\mathcal{N}(\rho)\) with noise level \(\epsilon=0.1\). The cyan curve is the estimation with the proposed error mitigation method. The black dashed line stands for the exact value of \(\mathrm{Tr}[\rho^{2}]\).

## Conclusion and discussion In this study, we establish that when quantum states are distorted by noise, the original moment information can still be retrieved through post-processing if and only if the noise is invertible. Furthermore, our proposed method, called observable shift, outperforms QPD-based techniques in two respects: (1) the proposed method requires a lower quantum sampling complexity than the existing one, which implies the superiority of entangled protocols over product protocols; this contrasts with the multiplicativity of cost observed in QPD-based methods for quantum error mitigation. (2) The observable shift method is easier to implement than the QPD-based method, as it only involves a single quantum operation, which makes our method friendlier to quantum devices. Our findings have implications for the dependable estimation of non-linear information in quantum systems and can influence various applications, including entanglement spectroscopy and ground-state property estimation. For further work, it will be interesting to improve the scalability of the observable shift method, which would make this approach more practical. We expect the observable shift technique can be incorporated into more algorithms and protocols to boost efficiency. ## Acknowledgements The authors would like to thank Chengkai Zhu and Chenghong Zhu for their valuable discussion.
2309.17066
Optical fibres with memory effects and their quantum communication capacities
The development of quantum repeaters poses significant challenges in terms of cost and maintenance, prompting the exploration of alternative approaches for achieving long-distance quantum communication. In the absence of quantum repeaters and under the memoryless (iid) approximation, it has been established that some fundamental quantum communication tasks are impossible if the transmissivity of an optical fibre falls below a known critical value, resulting in a severe constraint on the achievable distance for quantum communication. However, if the memoryless assumption does not hold -- e.g. when input signals are separated by a sufficiently short time interval -- the validity of this limitation is put into question. In this paper we introduce a model of optical fibre that can describe memory effects for long transmission lines. We then solve its quantum capacity, two-way quantum capacity, and secret-key capacity exactly. By doing so, we show that -- due to the memory cross-talk between the transmitted signals -- reliable quantum communication is attainable even for highly noisy regimes where it was previously considered impossible. As part of our solution, we find the critical time interval between subsequent signals below which quantum communication, two-way entanglement distribution, and quantum key distribution become achievable.
Francesco Anna Mele, Giacomo De Palma, Marco Fanizza, Vittorio Giovannetti, Ludovico Lami
2023-09-29T08:58:03Z
http://arxiv.org/abs/2309.17066v1
# Optical fibres with memory effects and their quantum communication capacities ###### Abstract **The development of quantum repeaters poses significant challenges in terms of cost and maintenance, prompting the exploration of alternative approaches for achieving long-distance quantum communication. In the absence of quantum repeaters and under the memoryless (iid) approximation, it has been established that some fundamental quantum communication tasks are impossible if the transmissivity of an optical fibre falls below a known critical value, resulting in a severe constraint on the achievable distance for quantum communication. However, if the memoryless assumption does not hold -- e.g. when input signals are separated by a sufficiently short time interval -- the validity of this limitation is put into question. In this paper we introduce a model of optical fibre that can describe memory effects for long transmission lines. We then solve its quantum capacity, two-way quantum capacity, and secret-key capacity exactly. By doing so, we show that -- due to the memory cross-talk between the transmitted signals -- reliable quantum communication is attainable even for highly noisy regimes where it was previously considered impossible. As part of our solution, we find the critical time interval between subsequent signals below which quantum communication, two-way entanglement distribution, and quantum key distribution become achievable.** Quantum information [1], and in particular quantum communication, will likely play a pivotal role in our future technology. The potential applications of a global quantum internet [2; 3] include secure communication [4], efficient entanglement and qubits distribution, enhanced quantum sensing capabilities [5], distributed and blind quantum computing [6; 7], as well as ground-breaking experiments in fundamental physics [5]. These applications heavily rely on establishing long-distance quantum communication across optical fibres or free-space links. However, the vulnerability of optical signals to noise poses a significant obstacle to achieve this goal. To overcome this challenge, one possible known solution is to exploit quantum repeaters [8; 9] along the communication line. Nonetheless, current implementations of quantum repeaters remain in the realm of proof-of-principle experiments. In addition, quantum repeaters will likely impose substantial demands on technology resources, making them potentially expensive to implement. Consequently, a pressing problem is to develop quantum communication protocols that can operate without -- or with a modest number of -- quantum repeaters. Recently, in [10; 11] it has been proposed a theoretical proof-of-principle solution to the problem of establishing quantum communication over long distance without relying on quantum repeaters. The crux of such a solution is to take advantage of _memory effects_[12; 13] in optical fibres. Memory effects arise when signals are fed into the optical fibre separated by a sufficiently short time interval [12; 13; 14; 15]. In this scenario, the noise within the fibre is influenced by prior input signals, exhibiting a form of "memory." Consequently, the commonly assumed memoryless (iid) paradigm, which posits that noise acts uniformly and independently on each signal, becomes invalid. The exploration of memory effects in optical fibres has been extensively addressed in [16; 17; 10; 11]. 
These studies primarily focus on modelling the interaction between two signals through a single localised interaction. While they capture fundamental aspects of the problem, especially in the context of "short" communication lines or setups with signal cross-talk confined in a specific spatial region, a more comprehensive analysis reveals limitations when extending these findings to practical configurations. Specifically, in scenarios involving "long" communication lines where consecutive signals continually interact throughout the entire length of the fibre, the existing analyses become inadequate, hindering the incorporation of two essential requirements vital for the theory's conceptual self-consistency: * _Property 1_: It is imperative that no information can be transmitted across the fibre if the transmissivity associated with a single signal is precisely zero; * _Property 2_: The model must remain consistent when optical fibres are composed. In other words, the combination of the model associated with a fibre of length \(L_{1}\) and the model linked to a fibre of length \(L_{2}\) should yield the model associated with a fibre of length \(L_{1}+L_{2}\). Aim of the present paper is to overcome the limitations of [16; 17; 10; 11; 18] by developing a new model of optical fibres with memory effects that accurately encapsulates the essential attributes of extended optical fibres, such as those outlined in Properties 1 and 2 above. We shall call such model "Delocalised Interaction Model", or DIM in brief, and employ it to determine the ultimate quantum communication capabilities attainable through the strategic utilisation of memory effects in these systems. In particular, we will find the precise range of parameters that allow for the successful transmission of qubits, entanglement, or secret keys. Our main result is the calculation of the exact value of the quantum capacity \(Q\), the two-way quantum capacity \(Q_{2}\), and the secret-key capacity \(K\)[19; 20] of the optical fibre in the absence of thermal noise. Additionally, our investigation will encompass an examination of the existence of the phenomenon known as the "die-hard quantum communication" effect [10; 11; 21]. This effect, potentially enabling communication across optical fibres with arbitrarily low transmissivity, will be scrutinised to confirm its presence in our model, ensuring it is not merely an artifact of the localised signal cross-talking assumption made in [10; 11; 21]. Specifically we shall see that _for any arbitrarily low non-zero value of the single-signal transmissivity \(\lambda\in(0,1]\) of the fibre and for any arbitrarily large value of its associated thermal noise \(\nu\geq 0\), there exists a non-zero time interval separating successive signals below which \(Q\), \(Q_{2}\), and \(K\) all become strictly positive_. In particular, we show that memory effects enable qubit distribution (\(Q>0\)) even when \(\lambda\in(0,\frac{1}{2}]\), which is not achievable with the corresponding memoryless optical fibre. Additionally, using the sufficient condition for entanglement distribution reported in [22], we show that memory effects enable two-way entanglement distribution (\(Q_{2}>0\)) and quantum key distribution (\(K>0\)) even when \(\lambda\in(0,\frac{\nu}{\nu+1}]\), thus surpassing the performance of memoryless optical fibres. 
## II Preliminaries In the framework of quantum Shannon theory [19; 20], the fundamental limitations of point-to-point quantum communication are determined by the _capacities_ of quantum channels. The capacities quantify the maximum amount of information that can be reliably transmitted per channel use in the asymptotic limit of many uses. Different notions of capacities have been defined, based on the type of information to be transmitted, such as qubits or secret-key bits, and the additional resources permitted in the protocol design, such as classical feedback. In this paper, we investigate three distinct capacities: on the one hand the quantum capacity \(Q\), which measures the efficiency in the transmission of qubits with no additional resources; on the other, the two-way quantum capacity \(Q_{2}\) and the secret-key capacity \(K\), which instead gauge the efficiency in the transmission of qubits and secret key, respectively, with the additional free resource of a public two-way classical communication channel between the sender (Alice) and the receiver (Bob) [19; 20]. The signals transmitted along an optical fibre can be described in terms of an ordered array of localised e.m. pulses \(S_{1}\), \(S_{2}\), \(\cdots\), \(S_{n}\), of assigned mean frequency \(\omega_{0}\) and bandwidth \(\Delta\omega\) that, in the absence of dispersion, propagate rigidly through the fibre separated by a fixed delay time \(\delta t\). In the quantum setting such modes are conventionally identified with a corresponding collection of independent annihilation operators \(a_{1}\), \(a_{2}\), \(\cdots\), \(a_{n}\) that fulfil canonical commutation rules [23]. The memoryless regime is reached when \(\delta t\) is sufficiently large to prevent cross-talking among the transmitted signals: accordingly they will experience the same type of noise which, under very general conditions, is typically identified with a _thermal attenuator_ channel \(\mathcal{E}_{\lambda,\nu}\). This is a continuous-variable [24] single-mode quantum channel characterised by two parameters: \(\lambda\in[0,1]\), which represents the transmissivity of the fibre (i.e. the ratio between the output energy and the input energy of the transmitted signal), and \(\nu\in[0,\infty)\), which quantifies the thermal noise added by the environment (in the limit of zero temperature \(\nu=0\), the transformation is conventionally referred to as the _pure-loss channel_).

Figure 1: Pictorial representation of the mechanism which is responsible for the intra-signal interactions between a sequence of pulses \(S_{1}\), \(S_{2}\), \(\cdots\) that travel along an optical fibre. In the LIM [16; 17; 18] such couplings occur in a single location, through the mediation of a single common environment (upper panel). A more realistic description of the effect would instead allow for multiple cross-talk events distributed over the entire length of the fibre (bottom panel).

Mathematically, the action of \(\mathcal{E}_{\lambda,\nu}\) on a generic input state \(\rho\) of the \(i\)th signal can be expressed as a beam splitter mixing the latter with a dedicated local Bosonic bath \(E_{i}\), initialised in the thermal state \(\tau_{\nu}\) with mean photon number \(\nu\). 
In formula, this can be written as \[\mathcal{E}_{\lambda,\nu}(\rho)=\mathrm{Tr}_{E_{i}}\left[U_{\lambda}\big{(}\rho \otimes\tau_{v}\big{)}U_{\lambda}{}^{\dagger}\right]\,, \tag{1}\] where \(a_{i}\) and \(b_{i}\) are the annihilation operators of \(S_{i}\) and \(E_{i}\), \(U_{\lambda}\coloneqq e^{\arccos(\sqrt{\lambda})(a_{i}^{\dagger}b_{i}-a_{i}b_{ i}^{\dagger})}\) is the unitary describing the beam splitter interaction, and \(\mathrm{Tr}_{E_{i}}\) represents the partial trace w.r.t. \(E_{i}\). The capacities \(Q\)[25, 26, 27], \(Q_{2}\)[28], and \(K\)[28] of the pure-loss channel \(\mathcal{E}_{\lambda,0}\) have been determined exactly. In contrast, only bounds are known for the capacities \(Q\)[25, 28, 29, 30, 31, 32, 33], \(Q_{2}\)[28, 34, 35, 22, 28], and \(K\)[34, 35, 22, 36] of the thermal attenuator \(\mathcal{E}_{\lambda,\nu}\). In particular, it is known that the quantum capacity of the thermal attenuator \(Q(\mathcal{E}_{\lambda,\nu})\) vanishes if the transmissivity of the fibre falls below the critical value of \(\lambda\leq\frac{1}{2}\): \[\lambda\leq\frac{1}{2}\Longrightarrow Q(\mathcal{E}_{\lambda,\nu})=0\,, \tag{2}\] and, additionally, the equivalence "\(\Longleftrightarrow\)" holds for \(\nu=0\). It is also known that the two-way quantum capacity \(Q_{2}(\mathcal{E}_{\lambda,\nu})\) and the secret-key capacity \(K(\mathcal{E}_{\lambda,\nu})\) of the thermal attenuator vanish if and only if the transmissivity falls below the critical value of \(\lambda\leq\frac{\nu}{\nu+1}\)[22]: \[\lambda\leq\frac{\nu}{\nu+1}\Longleftrightarrow Q_{2}(\mathcal{E}_{\lambda, \nu})=K(\mathcal{E}_{\lambda,\nu})=0\,. \tag{3}\] Since typically the transmissivity of an optical fibre decreases exponentially with its length, under the memoryless assumption there are strong limitations on the distance at which it is possible to perform qubit distribution (\(Q>0\)), two-way entanglement distribution (\(Q_{2}>0\)), and quantum key distribution (\(K>0\)) without relying on quantum repeaters. For instance, modern optical fibres typically exhibit signal attenuation rates of around \(0.2\,\mathrm{dB/km}\), with the best recorded value being \(0.14\,\mathrm{dB/km}\)[36, 37], meaning that the quantum capacity vanishes if the fibre is longer than \(15\,\mathrm{km}\) or at most \(21\,\mathrm{km}\). Early attempts to incorporate memory effects into optical fibres were made in Refs. [16, 17, 18] with a model which from now on we shall refer to as "Localised Interaction Model" (LIM). In these works, following the approach outlined in [13], intra-signal couplings are induced by an ordered sequence of collisional events in which each transmitted pulse interacts with a common reservoir \(E\) (see upper panel of Fig. 1). The latter is characterised by a resetting mechanism that endeavors to restore it to its initial configuration over a thermalization time-scale \(t_{E}\). Consequently, when the time delay \(\delta t\) separating two successive input signals exceeds the thermalization time \(t_{E}\), each pulse encounters the same environmental state, rendering the communication effectively memoryless. Conversely, when \(\delta t\) is smaller than or comparable to \(t_{E}\), after colliding with one of the signals, the reservoir \(E\) does not have sufficient time to revert to \(\tau_{v}\) and functions as a mediator for pulse interactions. As illustrated in Fig. 
2(b), LIM emulates this intricate dynamics of the \(n\) input signals via the \(n\)-mode quantum channel \(\Phi^{(1,n)}_{\lambda,\mu,\nu^{\prime}}\), obtained by connecting the parallel optical lines which describe the noisy propagation of the modes associated with the annihilation operators \(a_{1}\), \(a_{2}\), \(\cdots\), \(a_{n}\) in the memoryless regime (panel (a) of Fig. 2), with a series of additional beam splitters of transmissivity \(\mu\). This parameter serves as the "memory parameter" of the model, quantifying what is the fraction of the energy lost by the \(i\)th input signal that can potentially be absorbed by the subsequent ones by mixing it with the thermal contributions of the local environments \(E_{i+1}\), \(E_{i+2}\), \(\cdots\), \(E_{n}\). Ranging from \(0\) (where \(\Phi^{(1,n)}_{\lambda,0,\nu}\) reduces to \(n\)fold memoryless channels \(\mathcal{E}^{\otimes n}_{\lambda,\nu}\)) to \(1\) (full memory), \(\mu\) effectively encapsulates the interplay between the time interval \(\delta t\) separating two consecutive input signals and the characteristic thermalization time \(t_{E}\) (for instance, one plausible expression for \(\mu\) might be defined as \(\mu\coloneqq\exp(-\delta t/t_{E})\)). Note also that similarly to the memoryless case, the LIM transformation \(\Phi^{(1,n)}_{\lambda,\mu,\nu}\) is still characterised by an effective transmissivity parameter \(\lambda\) which in this case represents the attenuation experienced when a single signal traverses the line in isolation, and by the thermal noise parameter \(\nu\) which is responsible for defining the temperature of the local baths. A crucial insight which emerges from Refs. [16, 17, 18] is that memory couplings can improve the communication efficiency of optical fibres, thus opening the door to the realisation of the "die-hard quantum communication" effect [10, 11, 21]. For instance, in the zero-temperature (\(\nu=0\)) limit the value of \(Q\) computed for LIM turns out to be an increasing function of \(\mu\) for each assigned transmissivity value \(\lambda\) (see also Appendix VI). Unfortunately, as outlined in the introduction, utilising LIM for investigating real-world scenarios is problematic, mainly because it fails to consider the possibility that the signals could experience multiple cross-talks over different locations of the optical fibre as depicted in the bottom panel of Fig. 1. The limitations of LIM become particularly evident when one observes that the associated mapping \(\Phi^{(1,n)}_{\lambda,\mu,\nu}\) does not fulfil neither Property 1 nor Property 2, which should instead hold for long transmission lines. For instance, a close inspection of the interferometric representation of Fig. 2(b) reveals that, as long as \(\mu>0\), even for \(\lambda=0\) the channel \(\Phi^{(1,n)}_{0,\mu,\nu}\) is still capable of transmitting signals from Alice to Bob (e.g. the photons of the first input mode \(a_{1}\) will be received by Bob at the output of the mode \(a_{2}\)). The possibility of having multiple cross-talks will arguably prevent this possibility via destructive interferences that could in principle spoil the advantages pointed out in [10, 11, 16, 17, 18, 21]. The primary objective of this paper is to shed light on this problem, introducing a new model for memory effects in optical fibres that is immune to the shortcomings of LIM. 
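As a quick numerical illustration of the memoryless limitation recalled in Eqs. (2)-(3): with the usual exponential attenuation \(\lambda=10^{-\alpha L/10}\) for a fibre of length \(L\) and attenuation rate \(\alpha\), the snippet below reproduces the distance estimates quoted above (about \(15\) km at \(0.2\) dB/km and about \(21\) km at \(0.14\) dB/km) beyond which the memoryless quantum capacity vanishes.

```python
# Quick check of the distances quoted above: with lambda = 10^(-alpha*L/10) and the
# memoryless threshold of Eq. (2), the quantum capacity vanishes once lambda <= 1/2.
import numpy as np

def max_distance_km(alpha_db_per_km, lam_threshold):
    # Largest L such that 10^(-alpha*L/10) > lam_threshold.
    return -10 * np.log10(lam_threshold) / alpha_db_per_km

for alpha in (0.2, 0.14):                    # typical and record attenuation rates
    print(alpha, "dB/km ->", round(max_distance_km(alpha, 0.5), 1), "km for Q > 0")
```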
## Results In this section we present an improved version of LIM which can be used to describe memory effects in optical fibres when the transmitted signals have a chance of experiencing multiple interactions along the entire length of the communication line. As we shall see, such a construction, which we dub "Delocalised Interaction Model" (DIM), is particularly apt to represent the input-output relations occurring in long optical fibres as, unlike LIM, it fulfils Properties 1 and 2 detailed in the introductory section. ### Delocalised Interaction Model The idea behind the DIM approach is relatively simple: a spatially homogeneous optical fibre of finite length \(L\), characterised by a thermal noise \(\nu\geq 0\) and by a single-pulse transmissivity \(\lambda\in[0,1]\), is seen as the composition of \(M\) identical optical fibres of length \(L/M\). In the limit of large \(M\), such pieces are sufficiently small that they can be effectively described via LIM mappings. The resulting input-output relation of the global fibre is hence computed by first properly concatenating such individual terms and then taking the continuum limit \(M\to\infty\). More specifically, considering that for a spatially homogeneous fibre the single-signal transmissivity \(\lambda\) is exponentially decreasing in the length \(L\), to each of the \(M\) infinitesimal components of the fibre we can assign an effective single-signal transmissivity \(\lambda^{1/M}\). Their individual LIM channel representations are given by maps \(\Phi_{\lambda^{1/M},\mu,\nu}^{(1,n)}\), each of which is characterised by the same local temperature parameter \(\nu\) and by the same memory parameter \(\mu\) of the global fibre. Similarly to the cascade construction of [38] for fixed \(M\), the resulting input-output DIM map is hence provided by the \(M\)-fold concatenation \[\Phi_{\lambda,\mu,\nu}^{(M,n)}=\left(\Phi_{\lambda^{1/M},\mu,\nu}^{(1,n)} \right)^{M}=\underbrace{\Phi_{\lambda^{1/M},\mu,\nu}^{(1,n)}\circ\cdots\circ \Phi_{\lambda^{1/M},\mu,\nu}^{(1,n)}}_{M}\,, \tag{4}\] whose interferometric representation in terms of beam splitter couplings is given in Fig. 2(c). Note that as for LIM, the present scheme reduces to the most common model of memoryless optical fibre when the memory parameter \(\mu\) vanishes: indeed, if \(\mu=0\) each of the \(M\) infinitesimal optical fibres can be modelled as a thermal attenuator \(\mathcal{E}_{\lambda^{1/M},\nu}\) of transmissivity \(\lambda^{1/M}\), and (4) gives \(\Phi_{\lambda,0,\nu}^{(M,n)}=\mathcal{E}_{\lambda,\nu}^{\otimes n}\). If \(\mu>0\), instead, then the environments \(E_{i}^{(l)}\) are not always in the thermal state \(\tau_{\nu}\), and their state depends on all previous input signals -- the model exhibits memory effects. As shown in Theorem S8 in the Supplementary, for \(M\to\infty\) the mapping \(\Phi_{\lambda,\mu,\nu}^{(M,n)}\) converges to an \(n\)-mode quantum channel denoted by \(\Phi_{\lambda,\mu,\nu}^{(n)}\). The mathematically precise sense in which this convergence happens involves the notion of _strong convergence_, discussed in the Methods. The family of quantum channels \(\{\Phi_{\lambda,\mu,\nu}^{(n)}\}_{n\in\mathbb{N}}\), which forms a _quantum memory channel_[12], characterises our model of optical fibre with memory effects. Most importantly, contrary to the LIM of Refs. [16, 17, 10, 11, 18], the map \(\Phi_{\lambda,\mu,\nu}^{(n)}\) satisfies the above Properties 1-2. The proof of this fact is rather technical; the interested reader can find it in Sec. III of the Supplementary. 
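A minimal sanity check of the slicing in Eq. (4) in the memoryless limit \(\mu=0\) is given below. It assumes (this is not stated in the text) the standard covariance-matrix action of a thermal attenuator on Gaussian states, \(V\mapsto\eta V+(1-\eta)(2\nu+1)\,\mathbb{1}\), with the vacuum-variance-one convention; under this action, \(M\) slices of transmissivity \(\lambda^{1/M}\) compose exactly to a single attenuator of transmissivity \(\lambda\), as required by Property 2.

```python
# Composition property behind Eq. (4) in the memoryless limit (mu = 0).
# Assumption (not stated in the text): on Gaussian states a thermal attenuator of
# transmissivity eta and thermal photon number nu acts on a single-mode covariance
# matrix as  V -> eta*V + (1 - eta)*(2*nu + 1)*I  (vacuum variance = 1 convention).
import numpy as np

def attenuator(V, eta, nu):
    return eta * V + (1 - eta) * (2 * nu + 1) * np.eye(2)

lam, nu, M = 0.05, 0.7, 1000
V = np.diag([5.0, 0.3])             # covariance matrix of some single-mode Gaussian state

V_slices = V.copy()
for _ in range(M):                  # M slices, each of transmissivity lam**(1/M)
    V_slices = attenuator(V_slices, lam ** (1 / M), nu)

print(np.allclose(V_slices, attenuator(V, lam, nu)))   # True: slices compose to one fibre
```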
### Capacities of the model Let \(Q(\lambda,\mu,\nu)\), \(Q_{2}(\lambda,\mu,\nu)\), and \(K(\lambda,\mu,\nu)\) be the quantum capacity, two-way quantum capacity, and secret key capacity, respectively, of the DIM quantum memory channel \(\{\Phi_{\lambda,\mu,\nu}^{(n)}\}_{n\in\mathbb{N}}\). The forthcoming Theorem 1, which is proved in Theorem S14 in the Supplementary, provides conditions on the parameter region where these capacities are strictly positive. In particular, it states that one can make the capacities strictly positive by sufficiently increasing the value of \(\mu\), for all nonzero values of the transmissivity \(\lambda\), and all thermal photon numbers \(\nu\). **Theorem 1**.: _Let \(\lambda\in(0,1)\) and \(\mu\in[0,1)\). In the absence of thermal noise, i.e. \(\nu=0\), it holds that_ \[Q(\lambda,\mu,\nu=0)>0\iff\sqrt{\mu}>\frac{\log_{2}\left(\frac{1}{\lambda} \right)-1}{\log_{2}\left(\frac{1}{\lambda}\right)+1}\,. \tag{5}\] _In addition, for all \(\nu\geq 0\) it holds that_ \[K(\lambda,\mu,\nu),\,Q_{2}(\lambda,\mu,\nu)>0\iff\sqrt{\mu}>\frac{\ln\left( \frac{1}{\lambda}\right)-\ln(1+\frac{1}{\nu})}{\ln\left(\frac{1}{\lambda} \right)+\ln(1+\frac{1}{\nu})}\,. \tag{6}\] The following theorem, reported and proved in Theorem S15 in the Supplementary, provides the exact solution for the capacities in the absence of thermal noise. **Theorem 2**.: _Let \(\lambda\in(0,1)\) and \(\mu\in[0,1)\). In absence of thermal noise, i.e. \(\nu=0\), it holds that_ \[Q\left(\lambda,\mu,\nu=0\right) =\int_{0}^{2\pi}\frac{\mathrm{d}x}{2\pi}\,\max\left\{0,\log_{2} \left(\frac{\eta^{(\lambda,\mu)}(x)}{1-\eta^{(\lambda,\mu)}(x)}\right)\right\}\,,\] \[Q_{2}\left(\lambda,\mu,\nu=0\right) =K\left(\lambda,\mu,\nu=0\right)\] \[=\int_{0}^{2\pi}\frac{\mathrm{d}x}{2\pi}\,\log_{2}\left(\frac{1}{1 -\eta^{(\lambda,\mu)}(x)}\right)\,, \tag{7}\] _where_ \[\eta^{(\lambda,\mu)}(x)\coloneqq\lambda^{\frac{1-\mu}{1+\mu-2\sqrt{\mu}\cos(x/2)}} \quad\forall\,x\in[0,2\pi]\,. \tag{8}\] In Fig. 3, we plot the capacities given in (7). If \(\mu=0\) (i.e. in the memoryless case), all these capacities are equal to the corresponding capacities of the pure-loss channel. Upon plotting the capacities in Fig. 3, one observes that for any \(\lambda\in(0,1]\) all the capacities \(Q\left(\lambda,\mu,\nu=0\right)\), \(Q_{2}\left(\lambda,\mu,\nu=0\right)\), and \(K\left(\lambda,\mu,\nu=0\right)\) are monotonically increasing in \(\mu\in[0,1]\). Hence, at least in the absence of thermal noise, as the memory parameter \(\mu\) increases (corresponding to a decrease in the time interval between consecutive signals), quantum communication performance improves. It is reasonable to expect that such a monotonicity in \(\mu\) holds even for \(\nu>0\). 
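The integrals in Eq. (7) are straightforward to evaluate numerically; the sketch below does so with a simple trapezoidal rule, using the effective transmissivity function of Eq. (8). At \(\mu=0\) the integrand is constant and the pure-loss values \(\log_{2}\frac{\lambda}{1-\lambda}\) and \(\log_{2}\frac{1}{1-\lambda}\) are recovered.

```python
# Numerical evaluation of the capacities in Eq. (7), using the effective
# transmissivity function of Eq. (8).
import numpy as np

def eta(x, lam, mu):
    return lam ** ((1 - mu) / (1 + mu - 2 * np.sqrt(mu) * np.cos(x / 2)))

def capacities(lam, mu, n_grid=200_000):
    x = np.linspace(0, 2 * np.pi, n_grid)
    e = eta(x, lam, mu)
    Q = np.trapz(np.maximum(0.0, np.log2(e / (1 - e))), x) / (2 * np.pi)
    Q2 = np.trapz(np.log2(1 / (1 - e)), x) / (2 * np.pi)    # = K at nu = 0
    return Q, Q2

print(capacities(0.7, 0.0))   # memoryless: (log2(0.7/0.3), log2(1/0.3)) ~ (1.222, 1.737)
print(capacities(0.3, 0.6))   # memory effects make Q > 0 even though lam < 1/2
```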
## Discussion In this paper we analysed quantum communication, entanglement distribution, and quantum key distribution across optical fibres in the little-studied case where memory effects are present, and hence commonly employed approximations break down. It is generally believed that memory effects should improve the information transfer of a communication line, by offering a means to recover potentially lost information that would otherwise be unrecoverable in a memoryless scenario. Indeed, in a memoryless scenario, if photons are lost, they are irretrievably gone. However, in the presence of memory effects, there exists a probability that lost photons can interact with subsequent signals, allowing some information encoded in lost photons to persist within the fibre. We first proposed a model of optical fibre with memory effects which overcomes problems of the LIM previously introduced in the literature [16; 17; 10; 11; 18]. Our model depends on three parameters: the transmissivity \(\lambda\in[0,1]\) of the optical fibre, the thermal noise \(\nu\geq 0\), and the memory parameter \(\mu\in[0,1]\), with the latter being related to the time interval between subsequent signals. In particular, increasing \(\mu\) corresponds operationally to a decrease in the time interval between subsequent signals. In Theorem 2 we found the exact solution for the quantum capacity \(Q\), the two-way quantum capacity \(Q_{2}\), and the secret-key capacity \(K\) of the DIM channel in the absence of thermal noise (\(\nu=0\)). These capacities are monotonically increasing in the memory parameter \(\mu\), meaning that memory effects improve quantum communication. It is worth noting that in the presence of thermal noise (\(\nu>0\)) the capacities remain unknown even in the absence of memory effects (\(\mu=0\)), as the capacities of the thermal attenuator are currently unknown.

Figure 2: Interferometric representation of the \(n\)-mode quantum channels which describe the transmission of e.m. pulses along an optical fibre of single-signal transmissivity \(\lambda\) and local temperature \(\nu\) for different memory configurations. Panel (a), memoryless regime: here each one of the input modes \(a_{1}\), \(a_{2},\cdots,a_{n}\) evolves independently from the others, undergoing the same thermal attenuation mapping \(\mathcal{E}_{\lambda,\nu}\) induced by beam splitter couplings (1) with the local thermal state \(\tau_{\nu}\) of the local environments \(E_{1}\), \(E_{2}\), \(\cdots\), \(E_{n}\). Panel (b), LIM channel \(\Phi^{(1,n)}_{\lambda,\mu,\nu}\): in this case signal cross-talks are mediated by the yellow beam splitters of transmissivity \(\mu\). These allow the photons lost by the \(i\)th input mode to emerge in the output of the subsequent ones by letting them interfere with the local baths \(E_{i+1}\), \(E_{i+2}\), \(\cdots\), \(E_{n}\). Setting \(\mu=0\), the LIM reduces to the memoryless case of panel (a), i.e. \(\Phi^{(1,n)}_{\lambda,0,\nu}=\mathcal{E}^{\otimes n}_{\lambda,\nu}\). Panel (c), DIM channel \(\Phi^{(M,n)}_{\lambda,\mu,\nu}\): the fibre is described by the concatenation (4) of \(M\) LIM channels \(\Phi^{(1,n)}_{\lambda^{1/M},\mu,\nu}\) of transmissivity \(\lambda^{1/M}\). In the Heisenberg representation, \(\Phi^{(M,n)}_{\lambda,\mu,\nu}\) maps the annihilation operator \(a_{i}\) (depicted on the left) of the \(i\)th input signal into the annihilation operator \(a_{i,M}\) (depicted on the right) of the \(i\)th output signal for all \(i=1,2,\ldots,n\). For \(j=1,\cdots,M\), the symbols \(E^{(j)}_{1}\), \(E^{(j)}_{2}\), \(\cdots\), \(E^{(j)}_{n}\) represent the single-mode environments associated with the \(j\)th infinitesimal optical fibre element: all of them are initialised in the same thermal state \(\tau_{\nu}\). Supplementary VII presents a generalisation of DIM where memoryless attenuation along the fibre occurs concurrently with the memory effects.
The reader might question the meaningfulness of considering communication tasks assisted by two-way classical communication in a setting where the memory parameter \(\mu\), and thus the time interval between subsequent signals, is fixed. Indeed, in general, the rounds of classical communication between consecutive channel uses may vary during the communication protocol. Additionally, another problem is that, when the sender and receiver are very far apart, even a single round of classical communication requires an excessively long waiting time which prevents the exploitation of memory effects altogether. However, in the case of Choi-simulable channels [28], it is meaningful to consider such communication tasks. This is because optimal entanglement and secret-key distribution strategies for Choi-simulable channels can be achieved by initially utilising the channel (with a constant and short time interval between subsequent input signals) to share multiple copies of the Choi state, followed by employing optimal distillation protocols (either for entanglement or secret-key) to distil the Choi state. Fortunately, our model is associated with the quantum memory channel \(\{\Phi^{(\mu)}_{\lambda,\mu,\nu}\}_{n\in\mathds{N}}\), which is Choi simulable because it is Gaussian [28]. In the absence of memory effects (\(\mu=0\)), it is known that no quantum communication tasks can be achieved when the transmissivity is sufficiently low (\(Q=0\) for \(\lambda\leq\frac{1}{2}\), and \(Q_{2}=K=0\) for \(\lambda\leq\frac{v}{v+1}\)). However, in Theorem 1 we established that for any transmissivity \(\lambda>0\) and thermal noise \(\nu\geq 0\), there exists a critical value of the memory parameter \(\mu\) above which it becomes possible to achieve qubit distribution (\(Q>0\)), entanglement distribution (\(Q_{2}>0\)), and secret-key distribution (\(K>0\)). This result is particularly intriguing as it demonstrates that -- at least within our model -- memory effects provide an advantage, enabling quantum communication tasks to be performed in highly noisy regimes where it was previously considered impossible without the use of quantum repeaters. While this result is model dependent, it offers hope that memory effects could offer a concrete route to achieve efficient long-distance quantum communication with fewer quantum repeaters than previously believed. Specifically, this result bears resemblance to the "die-hard quantum communication" effect" observed in [10; 11] within their toy model of optical fibre with memory effects, which was limited to the analysis of \(Q\) only and does not consider \(K\) and \(Q_{2}\). In our paper, we not only demonstrate the persistence of such an effect in a more realistic model, accounting for \(Q\), \(Q_{2}\), and \(K\), but we also derive an analytical expression for the critical value of the memory parameter \(\mu\), which goes beyond the findings of [10; 11]. Note that, by relating \(\mu\) to the temporal interval \(\delta t\) between subsequent input signals (e.g. \(\mu=e^{-\delta t/t_{E}}\) with \(t_{E}\) being the thermalisation timescale), Theorem 1 can also be expressed in terms of the critical \(\delta t\) below which the above mentioned quantum communication tasks can be achieved. 
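To make the last remark concrete, the snippet below evaluates the critical memory parameter of Theorem 1 for a given \((\lambda,\nu)\) and converts it into a critical time interval via the illustrative relation \(\mu=e^{-\delta t/t_{E}}\) mentioned above; the numerical values of \(\lambda\), \(\nu\), and \(t_{E}\) are placeholders.

```python
# Critical memory parameter from Theorem 1 and the corresponding critical time
# interval, using the illustrative relation mu = exp(-dt/t_E) mentioned in the text.
import numpy as np

def mu_crit_Q(lam):                      # from Eq. (5), nu = 0
    r = (np.log2(1 / lam) - 1) / (np.log2(1 / lam) + 1)
    return max(r, 0.0) ** 2

def mu_crit_K(lam, nu):                  # from Eq. (6)
    r = (np.log(1 / lam) - np.log(1 + 1 / nu)) / (np.log(1 / lam) + np.log(1 + 1 / nu))
    return max(r, 0.0) ** 2

lam, nu, t_E = 1e-3, 0.5, 1.0            # a very lossy fibre, in units of the thermalisation time
for name, mu_c in [("Q > 0", mu_crit_Q(lam)), ("K, Q2 > 0", mu_crit_K(lam, nu))]:
    dt_c = -t_E * np.log(mu_c)           # signals must be spaced by less than this
    print(f"{name}: mu_crit = {mu_c:.4f}, dt_crit = {dt_c:.3f} t_E")
```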
**Acknowledgements** -- FAM and VG acknowledge financial support by MUR (Ministero dell'Istruzione, dell'Universita e della Ricerca) through the following projects: PNRR MUR project PE000023-NQSTI, PRIN 2017 Taming complexity via Quantum Strategies: a Hybrid Integrated Photonic approach (QUSHIP) Id. 2017SRN-BRK, and project PRO3 Quantum Pathfinder. GDP has been supported by the HPC Italian National Centre for HPC, Big Data and Quantum Computing - Proposal code CN00000013 and by the Italian Extended Partnership PE01 - FAIR Future Artificial Intelligence Research - Proposal code PE0000013 under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU. GDP is a member of the "Gruppo Nazionale per la Fisica Matematica (GNFM)" of the "Istituto Nazionale di Alta Matematica "Francesco Severi" (INdAM)". MF is supported by a Juan de la Cierva Fornacion fellowship (project FJC2021-047404-I), with funding from MCIN/AEI/10.13039/501100011033 and European Union NextGenerationEU/PRTR, and by Spanish Agencia Estatal de Investigacion, project PID2019-107609GB-I00/AEI/10.13039/501100011033, by the European Union Regional Development Fund within the ERDF Operational Program of Catalunya (project QuantumCat, ref. 001-P-001644), and by European Space Agency, project ESA/ESTEC 2021-01250-ESA. FAM and LL thank the Freie Universitat Berlin for hospitality. FAM, LL, and VG acknowledge valuable discussions with Paolo Villoresi, Giuseppe Vallone, and Marco Avesani and their hospitality at the University of Padua. FAM and MF acknowledge valuable discussions with Giovanni Barbarino regarding the Avram-Parter theorem and Szego theorem. **Supplementary Information** is available for this paper. **Competing interest --** The authors declare no competing interests. ## Methods Let \(\mathcal{H}\) be an Hilbert space, let \(\mathcal{E}(\mathcal{H})\) the space of linear operators on \(\mathcal{H}\), and let \(\mathfrak{G}(\mathcal{H})\) be the set of quantum states on \(\mathcal{H}\). Given a quantum channel \(\Phi\) and a sequence of quantum channels \(\{\Phi_{k}\}_{k\in\mathbb{N}}\), we say that \(\{\Phi_{k}\}_{k\in\mathbb{N}}\)_strongly converges to \(\Phi\) if_ \[\lim_{k\to\infty}\|\Phi_{k}(\rho)-\Phi(\rho)\|_{1}=0\quad\forall\,\rho\in \mathfrak{G}(\mathcal{H})\,, \tag{9}\] _where \(\|\Theta\|_{1}:=\operatorname{Tr}\sqrt{\Theta^{t}\Theta}\) denotes the trace norm of a linear operator \(\Theta\). We now define quantum memoryless and memory channels, and introduce relevant notations._ **Definition 1**.: _Let \(\{\Phi^{(n)}\}_{n\in\mathbb{N}}\) be a family of quantum channels with \(\Phi^{(n)}:\mathcal{E}(\mathcal{H}^{\otimes n})\to\mathcal{E}(\mathcal{H}^{ \otimes n})\) for all \(n\in\mathbb{N}\). The family \(\{\Phi^{(n)}\}_{n\in\mathbb{N}}\) is memoryless if there exists a quantum channel \(\Phi:\mathcal{E}(\mathcal{H})\to\mathcal{E}(\mathcal{H})\) such that \(\Phi^{(n)}=\Phi^{\otimes n}\) for all \(n\in\mathbb{N}\). In such a scenario, we will also refer to \(\Phi\) as a memoryless channel. Moreover, given a capacity \(C\) (e.g. \(C\in\{Q,Q_{2},K\}\)), the capacity \(C\) of the memoryless quantum channel \(\Phi\) is denoted as \(C(\Phi)\)._ **Definition 2**.: _Let \(\{\Phi^{(n)}\}_{n\in\mathbb{N}}\) be a family of quantum channels with \(\Phi^{(n)}:\mathcal{E}(\mathcal{H}^{\otimes n})\to\mathcal{E}(\mathcal{H}^{ \otimes n})\) for all \(n\in\mathbb{N}\). The family \(\{\Phi^{(n)}\}_{n\in\mathbb{N}}\) is a memory quantum channel if it is not memoryless. 
In such a scenario, the term "\(n\) channel uses" corresponds to the quantum channel \(\Phi^{(n)}\). Moreover, given a capacity \(C\) (e.g. \(C\in\{Q,Q_{2},K\}\)), the capacity \(C\) of the quantum memory channel \(\{\Phi^{(n)}\}_{n\in\mathbb{N}}\) is denoted as \(C\left(\{\Phi^{(n)}\}_{n\in\mathbb{N}}\right)\)._ For all \(\lambda\in[0,1]\) the capacities \(Q\)[25; 26; 27], \(Q_{2}\)[28], and \(K\)[28] of the pure-loss channel \(\mathcal{E}_{\lambda,0}\) are given by: \[\begin{split} Q(\mathfrak{G}_{\lambda,0})&=\begin{cases} \log_{2}\left(\frac{\lambda}{1-\lambda}\right)&\text{if }\lambda\in[\frac{1}{2},1]\;,\\ 0&\text{if }\lambda\in[0,\frac{1}{2}]\;.\end{cases}\\ Q_{2}(\mathfrak{G}_{\lambda,0})&=K(\mathfrak{G}_{\lambda,0})=\log_{2} \left(\frac{1}{1-\lambda}\right)\;.\end{cases}\end{split} \tag{10}\] In the following we will outline the key concepts that enabled us to derive the results stated in Section B. To achieve this, we generalise methods introduced in [16; 17; 18]. Let us begin with the forthcoming Theorem 3, which is proved in Theorem S8 in the Supplementary. **Theorem 3**.: _Let \(\lambda,\mu\in[0,1]\), \(v\geq 0\), and \(n\in\mathbb{N}\). There exists passive [24]\(n\)-mode unitary transformations \(\mathcal{U}_{1},\mathcal{U}_{2}\) such that_ \[\mathcal{U}_{2}\circ\Phi^{(n)}_{\lambda,\mu,v}\circ\mathcal{U}_{1}=\bigotimes_{ i=1}^{n}\mathcal{E}_{\eta_{i}^{(n,\lambda,\mu)},v}\,, \tag{11}\] _where the transmissivities \(\{\eta^{(n,\lambda,\mu)}_{i}\}_{i=1,2,\ldots,n}\) are the square of the singular values, indexed in increasing order, of the \(n\times n\) real matrix \(\bar{A}^{(n,\lambda,\mu)}\) whose \((i,k)\) element is_ \[\bar{A}^{(n,\lambda,\mu)}_{i,k}:=\Theta(i-k)\sqrt{\lambda}\mu^{\frac{i-k}{2}}L^ {(-1)}_{i-k}(-\ln\lambda) \tag{12}\] _for all \(i,k\in\{1,2,\ldots,n\}\). Here, \(L^{(-1)}_{m}\) is a generalised Laguerre polynomial, and_ \[\Theta(x):=\begin{cases}1,&\text{if }x\geq 0,\\ 0,&\text{otherwise.}\end{cases} \tag{13}\] _Consequently, any capacity of the quantum memory channel \(\{\Phi^{(n)}_{\lambda,\mu,\nu}\}_{n\in\mathbb{N}}\) coincides with the corresponding capacity of \(\{\bigotimes_{i=1}^{n}\mathcal{E}_{\eta^{(n,\lambda,\mu)}_{i},\nu}\}_{n\in \mathbb{N}}\)._ The proof of (11) involves expressing the output annihilation operators \(\{a_{i,M}\}_{i=1,2,\ldots,n}\) (see Figure 2) in terms of the input ones \(\{a_{i}\}_{i=1,2,\ldots,n}\), performing a Bogoliubov transformation, and finally taking the continuum limit \(M\to\infty\). As a consequence of (11), the quantum memory channel \(\Phi^{(n)}_{\lambda,\mu,\nu}\), which models \(n\) uses of the optical fibre, is unitarily equivalent to a tensor product of \(n\) distinct thermal attenuators. Consequently, if Alice and Bob are linked by \(\Phi^{(n)}_{\lambda,\mu,\nu}\) (resp. \(\bigotimes_{i=1}^{n}\mathcal{E}_{\eta^{(n,\lambda,\mu)}_{i},\nu}\)), they can simulate \(\bigotimes_{i=1}^{n}\mathcal{E}_{\eta^{(n,\lambda,\mu)}_{i},\nu}\) (resp. \(\Phi^{(n)}_{\lambda,\mu,\nu}\)) through the application of \(\mathcal{U}_{1}\) (resp. \(\mathcal{U}_{1}^{\star}\)) by Alice just before transmission, and \(\mathcal{U}_{2}\) (resp. \(\mathcal{U}_{2}^{\star}\)) by Bob just after reception. This implies the capacity equivalence between \(\{\Phi^{(n)}_{\lambda,\mu,\nu}\}_{n\in\mathbb{N}}\) and \(\{\bigotimes_{i=1}^{n}\mathcal{E}_{\eta^{(n,\lambda,\mu)}_{i},\nu}\}_{n\in \mathbb{N}}\), as stated in the final part of Theorem 3. 
Notably, this equivalence holds even for energy-constrained capacities [12], thanks to the passivity of \(\mathcal{U}_{1}\). Let us analyse the behaviour of the transmissivities \(\{\eta^{(n,\lambda,\mu)}_{i}\}_{i=1,2,\ldots,n}\), as the number of uses of the optical fibre, \(n\), approaches infinity. By plotting in Fig. 4 the points \[\left\{\left(2\pi\frac{j}{n}\,\ \eta^{(n,\lambda,\mu)}_{j}\right)\ :\ j=1,2,\ldots,n\right\} \tag{14}\] on a two-dimensional plane, we observe that they converge for \(n\to\infty\) to the graph of a function \(\eta^{(\lambda,\mu)}:[0,2\pi]\to\mathbb{R}\), which we dub _effective transmissivity function_. We formally state this result in the forthcoming Theorem 4 and we provide its proof in Theorem S13 in the Supplementary by establishing a corollary of the _Avram-Parter theorem_[39, 40], a matrix analysis result about the asymptotic behaviour for \(n\to\infty\) of the singular values of an \(n\times n\) Toeplitz matrix. Notably, our corollary, reported in Theorem S12 in the Supplementary, seems to be unknown in the matrix analysis literature and we believe it may have independent interest. **Theorem 4**.: _Let \(\lambda\in[0,1]\) and \(\mu\in[0,1)\). There exists a sequence \(\{j_{n}\}_{n\in\mathbb{N}}\subseteq\mathbb{N}\) such that \(0\leq j_{n}\leq n\) for all \(n\in\mathbb{N}\), \(\lim_{n\to\infty}\frac{j_{n}}{n}=0\), and_ \[\lim_{n\to\infty}\max\left\{\left|\eta^{(n,\lambda,\mu)}_{j}-\eta^{(\lambda, \mu)}\left(\frac{2\pi j}{n}\right)\right|\ :\ j\in\{j_{n},\ldots,n\}\right\}=0\,, \tag{15}\] _where the effective transmissivity function is defined as_ \[\eta^{(\lambda,\mu)}(x)\coloneqq\lambda^{\frac{1-\mu}{1+\mu-2\sqrt{\mu}\cos(x/2)}} \quad\forall\ x\in[0,2\pi]\,. \tag{16}\] As a consequence of the previous theorem and the fact that \(\max_{x\in[0,2\pi]}\eta^{(\lambda,\mu)}(x)=\eta^{(\lambda,\mu)}(2\pi)\), the value \(\eta^{(\lambda,\mu)}(2\pi)\) determines whether or not the capacities of our model are strictly positive. Hence, (2) and (3) imply the following theorem, which is proved in Theorem S14 in the Supplementary. **Theorem 5**.: _Let \(\lambda\in(0,1)\), \(\mu\in[0,1)\), and \(\nu\geq 0\). Let \(C(\lambda,\mu,\nu)\) be one of the following capacities of the quantum memory channel \(\{\Phi^{(n)}_{\lambda,\mu,\nu}\}_{n\in\mathbb{N}}\): quantum capacity \(Q\), two-way quantum capacity \(Q_{2}\), or secret key capacity \(K\). It holds that_ \[C(\lambda,\mu,\nu)>0\iff C(\mathcal{E}_{\eta^{(\lambda,\mu)}(2\pi),\nu})>0\,, \tag{17}\] _where \(\eta^{(\lambda,\mu)}\) is reported in (16). In particular, in the absence of thermal noise, i.e. \(\nu=0\), it holds that_ \[Q(\lambda,\mu,\nu=0)>0\iff\sqrt{\mu}>\frac{\log_{2}\left(\frac{1}{\lambda} \right)-1}{\log_{2}\left(\frac{1}{\lambda}\right)+1}\,. \tag{18}\] _In addition, for all \(\nu\geq 0\) it holds that_ \[K(\lambda,\mu,\nu),\,Q_{2}(\lambda,\mu,\nu)>0\iff\sqrt{\mu}>\frac{\ln\left( \frac{1}{\lambda}\right)-\ln(1+\frac{1}{\nu})}{\ln\left(\frac{1}{\lambda} \right)+\ln(1+\frac{1}{\nu})}\,. \tag{19}\]
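Theorems 3-4 can be checked numerically by computing the singular values of the matrix in Eq. (12) for moderate \(n\) and comparing them with the effective transmissivity function of Eq. (16). The sketch below does so, evaluating the generalised Laguerre polynomial with \(\alpha=-1\) through the identity \(L^{(-1)}_{m}(z)=L_{m}(z)-L_{m-1}(z)\) for \(m\geq 1\); the residual deviation (ignoring the smallest few modes, as allowed by Theorem 4) shrinks as \(n\) grows.

```python
# Numerical check of Theorems 3-4: squared singular values of the lower-triangular
# Toeplitz matrix of Eq. (12) versus the effective transmissivity function of Eq. (16).
import numpy as np
from scipy.linalg import toeplitz
from scipy.special import eval_laguerre

def eta(x, lam, mu):
    return lam ** ((1 - mu) / (1 + mu - 2 * np.sqrt(mu) * np.cos(x / 2)))

def A_matrix(n, lam, mu):
    z = -np.log(lam)
    col = np.empty(n)
    col[0] = np.sqrt(lam)                                   # m = 0 term: L^(-1)_0 = 1
    for m in range(1, n):                                   # L^(-1)_m(z) = L_m(z) - L_{m-1}(z)
        col[m] = np.sqrt(lam) * mu ** (m / 2) * (eval_laguerre(m, z) - eval_laguerre(m - 1, z))
    return toeplitz(col, np.zeros(n))                       # Theta(i-k): lower triangular

n, lam, mu = 300, 0.4, 0.5
sv2 = np.sort(np.linalg.svd(A_matrix(n, lam, mu), compute_uv=False) ** 2)
j = np.arange(1, n + 1)
dev = np.abs(sv2 - eta(2 * np.pi * j / n, lam, mu))
print(dev[n // 10:].max())    # deviation away from the smallest modes; decreases with n
```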
For \(l,P\in\mathbb{N}\), consider \(Pl\) channel uses as divided into \(P\to\infty\) groups, each containing \(l\to\infty\) channel uses: \[\bigotimes_{i=1}^{IP}\mathcal{E}_{\eta^{(n,\lambda,\mu)}_{i},\nu}=\bigotimes_{j= 1}^{I}\bigotimes_{p=1}^{P}\mathcal{E}_{\eta^{(n,\lambda,\mu)}_{(p-1)+\nu},\nu}\,. \tag{20}\] Roughly speaking, Theorem 4 implies that for any \(j\in\{1,2,\ldots,l\}\) it holds that \[\eta^{(p,l,\lambda,\mu)}_{(p-1)l+j}\simeq\eta^{(\lambda,\mu)}\left(2\pi\frac{p}{p }\right)\quad\text{for }l,P\to\infty\, \tag{21}\] meaning that \(lP\) channel uses, with \(l,P\to\infty\), corresponds to \(l\) uses of the following \(P\)-mode thermal attenuator: \[\bigotimes_{p=1}^{P}\mathcal{E}_{\eta^{(\lambda,\mu)}(2\pi\frac{p}{p}),\nu}\, \tag{22}\] which is a memoryless quantum channel. Hence, for any capacity \(C\) (e.g. \(C=Q,Q_{2},K\)) it holds that the capacity \(C\) of the quantum memory channel \(\{\Phi^{(n)}_{\lambda,\mu,\nu}\}_{n\in\mathbb{N}}\) satisfies \[C\left(\{\Phi^{(n)}_{\lambda,\mu,\nu}\}_{n\in\mathbb{N}}\right) \stackrel{{\eqref{eq:C}}}{{=}}\lim_{P\to\infty}\frac{1}{P} C\left(\bigotimes_{p=1}^{P}\mathcal{E}_{\eta^{(\lambda,\mu)}(2\pi\frac{p}{p}),\nu}\right) \tag{23}\] \[\stackrel{{\eqref{eq:C}}}{{=}}\lim_{P\to\infty}\frac {1}{P}\sum_{p=1}^{P}C\left(\mathcal{E}_{\eta^{(\lambda,\mu)}(2\pi\frac{p}{p}), \nu}\right)\] (24) \[\stackrel{{\eqref{eq:C}}}{{=}}\int_{0}^{2\pi}\frac{ \mathrm{d}x}{2\pi}\,C\left(\mathcal{E}_{\eta^{(\lambda,\mu)}(x),\nu}\right)\, \tag{25}\] where: (i) comes from the fact that a single use of the \(P\)-mode attenuator in (22) is approximately equivalent to \(P\) uses of the optical fibre; (ii) is a consequence of the fact that Alice and Bob can independently employ the optimal communication strategy for each of the \(P\) single-mode channels that define the \(P\)-mode attenuator; in (iii) we have just introduced the Riemann integral. Moreover, the equality in (24) holds if \(C\) is additive, i.e. if \(C\) is such that for all transmissivities \(\{\lambda_{p}\}_{p=1,2,\ldots,P}\) and all \(P\in\mathbb{N}\) it holds that \[C\left(\bigotimes_{p=1}^{P}\mathcal{E}_{\lambda_{p},\nu}\right)=\sum_{p=1}^{ P}C\left(\mathcal{E}_{\lambda_{p},\nu}\right). \tag{26}\] Hence, leveraging the additivity and the expression in (10) of the capacities \(Q\)[41], \(Q_{2}\)[28], and \(K\)[28] of the pure-loss channel, we can derive the precise values of the capacities of our model in the absence of thermal noise. This exact solution is provided by Theorem 6, which is proved in Theorem S15 in the Supplementary. **Theorem 6**.: _Let \(\lambda\in(0,1)\), \(\mu\in[0,1)\), \(\nu\geq 0\). Let \(C\) be one of the following capacities: quantum capacity \(Q\), two-way quantum capacity \(Q_{2}\), or secret key capacity \(K\). In addition, let \(C(\lambda,\mu,\nu)\) be the capacity \(C\) of the quantum memory channel \(\{\Phi^{(n)}_{\lambda,\mu,\nu}\}_{n\in\mathbb{N}}\). In absence of thermal noise, i.e. \(\nu=0\), it holds that_ \[C(\lambda,\mu,\nu=0)=\int_{0}^{2\pi}\frac{\mathrm{d}x}{2\pi}\,C\left(\mathcal{ E}_{\eta^{(\lambda,\mu)}(x),0}\right). \tag{27}\] _where \(\eta^{(\lambda,\mu)}(x)\) is the effective transmissivity function expressed in (16). 
In particular,_ \[Q\left(\lambda,\mu,\nu=0\right) =\int_{0}^{2\pi}\frac{\mathrm{d}x}{2\pi}\,\max\left\{0,\log_{2} \left(\frac{\eta^{(\lambda,\mu)}(x)}{1-\eta^{(\lambda,\mu)}(x)}\right)\right\}\, \tag{28}\] \[Q_{2}\left(\lambda,\mu,\nu=0\right) =K\left(\lambda,\mu,\nu=0\right)\] \[=\int_{0}^{2\pi}\frac{\mathrm{d}x}{2\pi}\,\log_{2}\left(\frac{1}{ 1-\eta^{(\lambda,\mu)}(x)}\right)\.\] _Moreover, in the presence of thermal noise, i.e. \(\nu>0\), it holds that_ \[C(\lambda,\mu,\nu)\geq\int_{0}^{2\pi}\frac{\mathrm{d}x}{2\pi}C\left(\mathcal{ E}_{\eta^{(\lambda,\mu)}(x),\nu}\right). \tag{29}\]
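The integrals in (28) are one-dimensional and can be evaluated numerically. The following is a minimal sketch (a plain Riemann sum, with illustrative parameter values) of how the noiseless capacities can be computed from the effective transmissivity function.

```python
import numpy as np

def effective_transmissivity(x, lam, mu):
    # Effective transmissivity function of (16)
    return lam ** ((1.0 - mu) / (1.0 + mu - 2.0 * np.sqrt(mu) * np.cos(x / 2.0)))

def capacities_no_thermal_noise(lam, mu, num_points=200_000):
    """Numerically evaluate the integrals in (28) for Q and Q_2 = K at nu = 0
    via a simple Riemann sum over x in [0, 2*pi)."""
    x = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    eta = effective_transmissivity(x, lam, mu)
    q_integrand = np.maximum(0.0, np.log2(eta / (1.0 - eta)))
    k_integrand = np.log2(1.0 / (1.0 - eta))
    dx = 2.0 * np.pi / num_points
    Q = np.sum(q_integrand) * dx / (2.0 * np.pi)
    K = np.sum(k_integrand) * dx / (2.0 * np.pi)
    return Q, K          # K also equals Q_2 in the noiseless case

if __name__ == "__main__":
    Q, K = capacities_no_thermal_noise(lam=0.1, mu=0.8)   # illustrative values
    print(f"Q(lam, mu, nu=0)   ~ {Q:.4f} per fibre use")
    print(f"K = Q_2 (nu=0)     ~ {K:.4f} per fibre use")
```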
2309.08146
Syn-Att: Synthetic Speech Attribution via Semi-Supervised Unknown Multi-Class Ensemble of CNNs
With the huge technological advances introduced by deep learning in audio & speech processing, many novel synthetic speech techniques have achieved incredibly realistic results. As these methods generate realistic fake human voices, they can be used in malicious acts such as people imitation, fake news spreading, spoofing, media manipulation, etc. Hence, the ability to distinguish synthetic from natural speech has become an urgent necessity. Moreover, being able to tell which algorithm was used to generate a synthetic speech track can be of preeminent importance to track down the culprit. In this paper, a novel strategy is proposed to attribute a synthetic speech track to the generator that was used to synthesize it. The proposed detector transforms the audio into a log-mel spectrogram, extracts features using a CNN, and classifies it among five known algorithms and an unknown class, utilizing semi-supervision and ensembling to significantly improve its robustness and generalizability. The proposed detector is validated on two evaluation datasets consisting of a total of 18,000 weakly perturbed (Eval 1) & 10,000 strongly perturbed (Eval 2) synthetic speech tracks. The proposed method outperforms other top teams in accuracy by 12-13% on Eval 2 and 1-2% on Eval 1, in the IEEE SP Cup challenge at ICASSP 2022.
Md Awsafur Rahman, Bishmoy Paul, Najibul Haque Sarker, Zaber Ibn Abdul Hakim, Shaikh Anowarul Fattah, Mohammad Saquib
2023-09-15T04:26:39Z
http://arxiv.org/abs/2309.08146v1
# Syn-Att: Synthetic Speech Attribution via Semi-Supervised Unknown Multi-Class Ensemble of Cnns ###### Abstract With the huge technological advances introduced by deep learning in audio & speech processing, many novel synthetic speech techniques achieved incredible realistic results. As these methods generate realistic fake human voices, they can be used in malicious acts such as people imitation, fake news, spreading, spoofing, media manipulations, etc. Hence, the ability to detect synthetic or natural speech has become an urgent necessity. Moreover, being able to tell which algorithm has been used to generate a synthetic speech track can be of preeminent importance to track down the culprit. In this paper, a novel strategy is proposed to attribute a synthetic speech track to the generator that is used to synthesize it. The proposed detector transforms the audio into log-mel spectrogram, extracts features using CNN, and classifies it between five known and unknown algorithms, utilizing semi-supervision and ensemble to improve its robustness and generalizability significantly. The proposed detector is validated on two evaluation datasets consisting of a total of 18,000 weakly perturbed (Eval 1) & 10,000 strongly perturbed (Eval 2) synthetic speeches. The proposed method1 outperforms other top teams in accuracy by 12-13% on Eval 2 and 1-2% on Eval 1, in the IEEE SP Cup challenge at ICASSP 2022. Footnote 1: Code & Dataset is available at [https://github.com/awsaf49/synatt](https://github.com/awsaf49/synatt) Md Awsafur Rahman\({}^{\$,1}\), Bishmoy Paul \({}^{\$,1}\), Najibul Haque Sarker \({}^{\$,2}\), Zaber Ibn Abdul Hakim \({}^{\$,2}\) Shaikh Anowarul Fattah \({}^{1}\) and Mohammad Saquib \({}^{3}\) \({}^{1}\) Dept. of EEE, BUET, Bangladesh \({}^{2}\) Dept. of CSE, BUET, Bangladesh \({}^{3}\) Dept. of EE, UT Dallas, Texas, USA Synthetic Speech Attribution, Speech Forensics, Semi-Supervision, Ensemble ## 1 Introduction and Related Work Due to the utilization of audio in tasks related to security, privacy, evidence and more non-frivolous activities, the quantitative and qualitative research in audio and its discipline has surged in recent times. With the advent of deep learning technologies, an array of new methods has been introduced for voice and speech recognition and comprehension [1, 2]. This improvement in technology also is evident in the synthetic speech generation field which is now in such a state that even synthetic speech of an individual can be mimicked flawlessly [2, 3, 4]. This has given rise to the possibility that the technology can now be used for malevolent purposes and poses security concerns which cannot be ignored [5]. In order to combat the proliferation of illegal and detrimental activities, the development of technologies for the detection and classification of fake speeches [6] is of paramount importance, not only from a law enforcement perspective but also within the context of machine ethics. While numerous efforts have already been made in the field of forensic detectors designed to differentiate between genuine speech recordings and synthetically generated ones [7], the challenge of attributing a synthetic speech track to the specific generator used for its synthesis remains relatively unexplored. Traditional methods, relying on closed-set approaches such as simple classification [8, 9, 10], and consistency detection [11], fall short in detecting samples from unseen algorithms (open-set scenarios). 
These methods tend to confuse samples from unknown algorithms with known ones, resulting in subpar performance. Recent attempts, such as ParalMGC [12], have aimed to address the issue of unknown algorithms by employing parallel branches (utilizing Mel-Frequency and GammaTone coefficients) CNNs but it struggles when faced with unseen perturbed test cases. Another method, CAT [4], leverages transformers in conjunction with t-SNE to identify unknown algorithms based on latent space but falters when confronted with substantial variations within known algorithm due to factors like speaker changes or environmental shifts. An alternative data-driven approach [2] has specifically targeted the challenge of addressing unknown algorithms with a confidence threshold and a one-class SVM. While both of these methods exhibit promising results, they suffer from a lack of robustness, due to their reliance on highly perturbable confidence parameter, resulting in poor performance in strongly perturbed cases. To mitigate the aforementioned challenges, a novel approach is proposed which exploits a multi-class strategy with semi-supervision and ensemble techniques to attribute both known and unknown synthetic speech algorithms, ensuring robustness and generalizability. ## 2 Methodology ### Problem Formulation Mathematically, given a data set \(S=\{(x_{1},Y_{1}),\cdots,(x_{N},Y_{N})\}\) where N is the number of sample, \(x_{i}\) denotes the \(i^{th}\) audio sample and \(Y_{i}\) represents the \(i^{th}\) label. The experimented approaches can majorly be classified in two classes. Firstly, using raw audio as the input feature and secondly, using log-mel spectrogram instead. If, \(T(\cdot)\) denotes the transformation that extracted log-mel spectrogram from raw audio and \(F(\cdot)\) denotes any generic method that generated prediction label, \(\hat{y}\), from feature, then \[\hat{y}=F(x_{i})\quad\text{or}\quad\hat{y}=F(T(x_{i}))\] Following this, total loss, \(g\), was calculated using a loss function. \[g=\sum_{i=1}^{N}Loss(\hat{y_{i}},Y_{i})\] The target was to minimize \(g\). ### Unknown Multi-Class Strategy To identify unknown algorithms an additional class has been added namely "Unknown" class along with data of 5 classes. The data for this additional class is added in both training and validation phase to make the distribution as diverse as possible with help of external data. Thus, synthetic speech attribution from both known and unknown class has been formulated as a Six Class Classification problem. Fig. 1 provides a visual insight on the proposed unknown multi-class scheme. ### Data Processing In the data pipeline, all input signals are resampled to 16,000 samples/second and then Z-normalized. Subsequently, log-mel spectrograms are generated as model inputs. In Part I, Random 6-second segments are extracted from audio signals, and shorter ones are randomly padded. The resulting log-mel spectrograms are 128 x 384, with a hop length of 250, 128 mel bins, and FFT/window sizes of 2048. In Part II, 8-second sequences are utilized for improved noise handling, with all parameters remaining the same, except for an increase in mel bins to 256, resulting in spectrograms of 256 x 512. For evaluation, two datasets are available: Eval 1 and Eval 2. Eval 2 contains strong perturbations (pitch shift, time stretch, filtering), making it very challenging. Eval 1 consists of two parts--one without perturbations and one with weak perturbations (noise, compression, reverberation). 
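Referring back to the data pipeline described at the start of this subsection, the following is a minimal sketch of the Part I log-mel extraction (16 kHz resampling, Z-normalization, random 6-second crop or pad, 128 mel bins, hop length 250, FFT size 2048). It assumes librosa as the audio front-end; the official implementation in the linked repository may differ in details.

```python
import numpy as np
import librosa

SR, SEG_SECONDS = 16_000, 6            # Part I uses 6-second segments at 16 kHz
N_FFT, HOP, N_MELS = 2048, 250, 128    # FFT/window 2048, hop 250, 128 mel bins

def audio_to_logmel(path, rng=np.random.default_rng(0)):
    """Load audio, Z-normalize, take a random 6 s crop (randomly pad if shorter),
    and return a log-mel spectrogram of shape roughly (128, 384)."""
    y, _ = librosa.load(path, sr=SR)
    y = (y - y.mean()) / (y.std() + 1e-8)          # Z-normalization
    target = SR * SEG_SECONDS
    if len(y) >= target:                           # random crop
        start = rng.integers(0, len(y) - target + 1)
        y = y[start:start + target]
    else:                                          # random pad for short clips
        pad = target - len(y)
        left = rng.integers(0, pad + 1)
        y = np.pad(y, (left, pad - left))
    mel = librosa.feature.melspectrogram(
        y=y, sr=SR, n_fft=N_FFT, hop_length=HOP, n_mels=N_MELS)
    return librosa.power_to_db(mel, ref=np.max)    # log-mel spectrogram
```

For Part II, the same routine would be used with 8-second segments and 256 mel bins, as described above.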
The final Eval 1 result is computed as \(0.7\times\text{Part I}+0.3\times\text{Part II}\) to balance contributions. For training, 1000 samples per algorithm (0, 1, 2, 3, 4) are provided, along with an additional 1000 samples from an unseen algorithm (considered as class 5 as per proposed multi-class strategy). Classes 1, 2, 3, and 5 share a common speaker, while class 0 has a distinct speaker, and class 4 involves multiple speakers. Three publicly available natural speech datasets (LJSpeech [13], LibriSpeech [14], VCTK [15]) are included as unseen algorithms, aiming to 1) diversify the unknown class, 2) mitigate speaker-specific overfitting, and 3) enhance generalization. It's important to highlight that the Eval data does not contain natural speech, allowing for the inclusion of natural speech in the unknown class. Additionally, if necessary, distinguishing between predicted natural speech and synthetic speech can be easily accomplished using conventional methods. To further diversify the unknown class, synthetic data is generated through various algorithms. Texts are extracted from 5000 training samples, utilizing the Wav2Vec 2.0 model [16] for initial extraction, correcting spelling inconsistencies with NeuralSpeechCorrector [17], and then processing the text with various text-to-speech models [18, 19, 20] to produce synthetic audio. ### Ensemble In order to improve the comprehensive representation of multifaceted features inherent in the input dataset, an ensemble methodology is strategically employed. This approach harmoniously amalgamates the outcomes of individual models, culminating in a cohesive, resilient, and universally applicable result. This ensemble strategy is characterized by the utilization of the mean operation applied to the probability outputs from multiple models, thereby yielding the ultimate prediction.. ### Semi-Supervised Training The proposed approach leverages Semi-Supervised Training [21], commonly referred to as Pseudo Labelling, to enhance model robustness and generalizability. This technique involves generating approximate labels for input data based on the features learned during training. Initially, a model is trained using both provided and external datasets, followed by Figure 1: Proposed Unknown Multi-Class Scheme soft label (no thresholding) generation on the test set. These generated labels, termed pseudo labels, are not guaranteed to be ground truth and may exhibit bias towards training labels. However, by incorporating these pseudo labels as additional training data, the model adapts to the test data distribution, resulting in improved learning sample space. For a visual representation of the approach, refer to Fig. 2. ## 3 Results and Discussions ### Experimental Setup The hardware configuration includes \(8\) cores CPU, \(64\) GB RAM, and \(4\times\) NVIDIA V \(100\) GPUs. Various hyperparameters are selected through experimentation such as Adam optimizer, a fixed learning rate (\(\gamma_{1}=10^{-3}\)) and an Exponential-Decay scheduler in both **Part I** and **Part II**. Categorical Cross Entropy loss is used to optimize the six class CNN classifiers with label smoothing (\(\alpha\) = 0.05). Diverse networks are trained with varying epochs and batch sizes for enhanced performance, incorporating a five-fold cross-validation scheme for robust validation. Model performance evaluation favors the \(F1\) Score metric to tackle class imbalance introduced by external datasets. 
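As a concrete illustration of the Ensemble and Semi-Supervised Training steps described above, the following sketch averages class-wise probabilities from several models and attaches the resulting soft labels (no thresholding) to unlabeled evaluation samples. The model outputs here are toy stand-ins, not the actual trained CNNs.

```python
import numpy as np

CLASSES = ["algo_0", "algo_1", "algo_2", "algo_3", "algo_4", "unknown"]  # six-class scheme

def ensemble_probs(prob_list):
    """Average class-wise probabilities from several trained models.
    Each array in prob_list has shape [n_samples, 6]."""
    return np.mean(np.stack(prob_list, axis=0), axis=0)

def pseudo_labelled_pairs(eval_specs, prob_list):
    """Semi-supervised step: pair each unlabeled evaluation spectrogram with the
    ensemble's soft probabilities, so the pairs can be appended to the labelled
    training set for the next round of training."""
    soft = ensemble_probs(prob_list)
    return list(zip(eval_specs, soft))

if __name__ == "__main__":
    # toy stand-ins for the softmax outputs of three trained models on 4 eval clips
    rng = np.random.default_rng(0)
    toy = [rng.dirichlet(np.ones(6), size=4) for _ in range(3)]
    final = ensemble_probs(toy)
    print("ensemble prediction:", [CLASSES[i] for i in final.argmax(axis=1)])
```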
Augmentation techniques, such as MixUp [22], CutMix [23], GaussianNoise, Time-Freq Mask, JpegCompress, Crop, Pad, and others, are applied to improve robustness. ### Ablation Study An overview of the quantitative comparison of various stages within the ablation study is presented in Table 1, shedding light on the significance of different aspects of the proposed method. Evidently, unknown mutli-class strategy emerges as the most influential factor, given its crucial role in identifying unknown algorithms. Moreover, for single-model, semi-supervised approach surpasses augmentation methods, due to its exceptional adaptability to unknown distribution. #### 3.2.1 Effect of unknown multi-class strategy The effect of proposed multi-class strategy has been examined with respect to the variation of external datasets and backbones. The efficacy of the multi-class method is contingent upon the diversity of the unknown class. Since the provided datasets lack the diversity, external datasets have been incorporated. As Table 2 (w/o ensemble, augment, semi-sup.) reveals, the optimal outcome arises from the integration of distinct datasets, surpassing the baseline method by an approximate margin of \(4\%\). The verification of the multi-class strategy's efficacy is further carried out through testing with various CNN backbones, as delineated in Table 3 (w/ augment, and semi-sup.), affirming the method's effectiveness across diverse backbones. Notably, in **Part I** (w/o perturbed), smaller models dominates, showcasing their resilience to overfitting attributed to their compact size. Conversely, in **Part II** (w/ perturbed), larger models gains superiority, leveraging their large complexity to adepthly extract intricate features. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{StageScore} & \multicolumn{2}{c|}{Part1} & \multicolumn{2}{c|}{Part2} \\ \cline{2-5} & CV (F1) & LB (Acc) & CV (F1) & LB (Acc) \\ \hline Baseline & 0.926 & 0.915 & 0.919 & 0.903 \\ \hline Baseline & 0.962 & 0.915 & 0.919 & 0.903 \\ \hline Baseline & 0.926 & 0.915 & 0.919 & 0.903 \\ \hline Baseline & 0.926 & 0.915 & 0.919 & 0.903 \\ \hline Baseline & 0.926 & 0.948 & 0.937 & 0.934 \\ \hline Baseline & 0.935 & 0.929 & 0.942 & 0.921 \\ \hline Semi-Supervised & 0.932 & 0.934 & 0.933 & 0.918 \\ \hline \end{tabular} \end{table} Table 1: Scores of different stages of ablation study \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{DatasetScore} & \multicolumn{2}{c|}{Part1} & \multicolumn{2}{c|}{Part2} \\ \cline{2-5} & CV (F1) & LB (Acc) & CV (F1) & LB (Acc) \\ \hline Baseline & 0.926 & 0.915 & 0.919 & 0.903 \\ \hline LJSpeech [13] & 0.937 & 0.923 & 0.928 & 0.916 \\ \hline VCTK [15] & 0.935 & 0.928 & 0.924 & 0.919 \\ \hline LibriSpeech [14] & 0.940 & 0.931 & 0.902 & 0.873 \\ \hline Synthetic & 0.942 & 0.935 & 0.930 & 0.922 \\ \hline **Best** & **0.962** & **0.948** & **0.937** & **0.934** \\ \hline \end{tabular} \end{table} Table 2: Effect of Multi-Class Strategy w.r.t Datasets \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{ \begin{tabular}{c} BackboneScore \\ \end{tabular} } & \multicolumn{2}{c|}{Part1} & \multicolumn{2}{c|}{Part2} \\ \cline{2-5} & CV (F1) & LB (Acc) & CV (F1) & LB (Acc) \\ \hline ResNet50D [24] & 0.963 & 0.949 & 0.926 & 0.920 \\ \hline ResNet50D [24] & 0.960 & 0.952 & 0.933 & 0.927 \\ \hline ResNetRS50 [24] & 0.956 & 0.955 & 0.929 & 0.918 \\ \hline EfficientSV2 [25] & 0.964 & 0.959 & 0.935 & 0.931 \\ \hline RegNeZDS [24] & 0.969 & 0.951 & 0.941 & 0.946 \\ \hline 
EfficientNetB0 [25] & 0.971 & 0.962 & 0.958 & 0.957 \\ \hline ECA\_NFNetL2 [26] & 0.948 & 0.930 & 0.955 & 0.949 \\ \hline ConvNeXt,Base\_28 [9] & 0.933 & 0.932 & 0.949 & 0.948 \\ \hline ConvNeXt,Jage\_28 [9] & 0.941 & 0.929 & 0.952 & 0.950 \\ \hline ResNetRS152 [24] & 0.936 & 0.927 & 0.957 & 0.949 \\ \hline EfficientNetV2M [25] & 0.930 & 0.922 & 0.955 & 0.952 \\ \hline \end{tabular} \end{table} Table 3: Effect of Multi-Class Strategy w.r.t Backbones Figure 2: Proposed Semi-Supervised Scheme #### 3.2.2 Effect of different augmentations To combat speaker bias, Mixup [22] and Cutmix [23] is employed. Random beta distribution (\(\alpha=2.5\), \(\beta=2.5\)) is used to determine sample contributions. Gaussian noise, CutOutstyle masking to the spectrogram [27], and slight JPEG compression is added to enhance model performance. Table 4 summarizes augmentation effects. While CutMix performs well on **Part I** without perturbation, it negatively impacts scores in the presence of perturbation; others consistently perform well on both **Part I & II** data. #### 3.2.3 Effect of semi-supervised training The pseudo test labels, generated by trained models, contribute to a more robust learning sample space, allowing models to adapt to unknown distributions. In this instance, the pseudo-labels are generated from high-performing models based on the metrics used in evaluation. As a result, despite the possibility of an increased bias towards the training labels, the labels still provide a significant contribution to model training, as observed in Table 6. #### 3.2.4 Effect of ensemble The class-wise probabilities from multiple models are averaged to derive the final prediction, enabling the utilization of diverse model insights. As shown in Table 3 and Table 6, it is evident that ensembling significantly improved both CV and LB performance for **Part I** and **Part II**. Particularly in **Part II**, the ensemble increases the results by nearly 2.5% in observed metrics ### Result on IEEE SP Cup 2022 The performance of the proposed method is rigorously assessed in the IEEE SP Cup [28] competition at ICASSP 2022. As illustrated in Table 5 (w/ ensemble), the proposed method outperforms other top teams on the leaderboard by a significant margin, with an improvement of 12-13% on Eval 2 (highly perturbed) and 1-2% on Eval 1 (weakly perturbed), on accuracy metric. This affirms the effectiveness of the method. Notably, Eval 2 dataset is kept hidden from the participants. ### Comparison with Other Approaches In Table 6 (w/o ensemble), it becomes evident that the proposed method surpasses other approaches by a considerable margin, in terms of accuracy and F1 score. This superiority can be attributed to the robustness and generalizability of the proposed Unknown Multi-Class Strategy, semi-supervised training, and network ensembling. These findings provide compelling evidence of the effectiveness of the proposed approach in synthetic speech attribution. ## 4 Conclusion In this article, a solution for synthetic speech attribution is presented: a semi-supervised multi-class convolutional neural network ensemble-based approach that employs a multi-class strategy with a dedicated unknown class for unidentified algorithms. Its semi-supervised nature ensures effective handling of unknown data distribution whereas the ensemble network enhances detector robustness by incorporating diverse features from different models. 
Extensive investigation demonstrates its remarkable effectiveness in synthetic speech attribution, notably in the evaluation datasets. It stands as a promising candidate for state-of-the-art synthetic speech attribution, addressing forensic concerns linked to malicious synthetic speech use. ## 5 Acknowledgment The authors thank IEEE Signal Processing Society, ISPL at Politecnico di Milano (Italy), and MISL at Drexel University (USA) for hosting IEEE SP Cup at ICASSP 2022, which inspired this work. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{**Method\(\begin{bmatrix}\textbf{Score}\\ \textbf{CV (F1)}\end{bmatrix}\)**} & \multicolumn{2}{c|}{**Part I**} & \multicolumn{2}{c|}{**Part II**} \\ \cline{2-5} & **CV (F1)** & **LB (Acc)** & **CV (F1)** & **LB (Acc)** \\ \hline \multirow{2}{*}{ \begin{tabular}{} \end{tabular} } & 0.443 & 0.427 & 0.422 & 0.409 \\ \cline{2-5} & 0.586 & 0.522 & 0.549 & 0.510 \\ \hline \multicolumn{5}{|l|}{LSTM [10]} & 0.696 & 0.645 & 0.637 & 0.608 \\ \hline \multicolumn{5}{|l|}{PuralMGC [12]} & 0.822 & 0.810 & 0.802 & 0.782 \\ \hline \multicolumn{5}{|l|}{Confidence Threshold [2]} & 0.892 & 0.875 & 0.808 & 0.790 \\ \hline \multicolumn{5}{|l|}{CAT + t-SNE [4]} & 0.901 & 0.881 & 0.861 & 0.854 \\ \hline \multicolumn{5}{|l|}{One-class SVM [2]} & 0.911 & 0.901 & 0.843 & 0.820 \\ \hline \multicolumn{5}{|l|}{**Proposed**} & **0.971** & **0.962** & **0.958** & **0.957** \\ \hline \end{tabular} \end{table} Table 6: Comparison of Different Methods on Eval 1 Data \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Augmentation\(\begin{bmatrix}\text{Score}\\ \text{Baseline}\end{bmatrix}\)} & \multicolumn{2}{c|}{Part I} & \multicolumn{2}{c|}{Part II} \\ \cline{2-5} & CV (F1) & LB (Acc) & CV (F1) & LB (Acc) \\ \hline \multicolumn{5}{|c|}{} \\ \cline{2-5} & 0.962 & 0.948 & 0.937 & 0.934 \\ \hline \multicolumn{5}{|c|}{CutMix [23]} & 0.968 & 0.956 & 0.910 & 0.902 \\ \hline \multicolumn{5}{|c|}{MixD [22]} & 0.965 & 0.951 & 0.948 & 0.940 \\ \hline \multicolumn{5}{|c|}{Gaussian Noise} & 0.969 & 0.953 & 0.944 & 0.942 \\ \hline \multicolumn{5}{|c|}{JpegCompression} & 0.962 & 0.952 & 0.940 & 0.938 \\ \hline \multicolumn{5}{|c|}{Time-Frequency Mask} & 0.965 & 0.953 & 0.950 & 0.942 \\ \hline \multicolumn{5}{|c|}{**Best**} & **0.971** & **0.962** & **0.958** & **0.957** \\ \hline \end{tabular} \end{table} Table 4: Effect of Data Augmentation \begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline \multirow{2}{*}{Data} & \multirow{2}{*}{\(\begin{bmatrix}\text{MethodMetric}\\ \text{Method}\end{bmatrix}\)} & Acc & Prc & Rec & F1 \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & Std. Proc. & 0.97 & 0.97 & 0.96 & 0.97 \\ \cline{2-5} & Team IITH & 0.96 & 0.96 & 0.95 & 0.96 \\ \cline{2-5} & **Synthesizer (Ours)** & **0.98** & **0.99** & **0.97** & **0.98** \\ \hline \multirow{2}{*}{ \begin{tabular}{} \end{tabular} } & Std. Proc. & 0.48 & 0.62 & 0.48 & 0.48 \\ \cline{2-5} & Team IITH & 0.49 & 0.51 & 0.49 & 0.49 \\ \cline{2-5} & **Synthesizer (Ours)** & **0.61** & **0.71** & **0.61** & **0.63** \\ \hline \end{tabular} \end{table} Table 5: LB Scores of Top3 Teams in IEEE SP Cup 2022
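For completeness, the following is a minimal sketch of the MixUp augmentation with Beta(2.5, 2.5) mixing weights used in Section 3.2.2; array shapes and the random-generator choice are illustrative, and the official implementation in the linked repository may differ.

```python
import numpy as np

def mixup_batch(specs, labels, alpha=2.5, beta=2.5, rng=np.random.default_rng(0)):
    """MixUp on a batch of log-mel spectrograms.
    specs:  [batch, n_mels, n_frames] float array
    labels: [batch, 6] one-hot (or soft) label array
    The mixing weight is drawn from Beta(2.5, 2.5), as in Section 3.2.2."""
    lam = rng.beta(alpha, beta)
    perm = rng.permutation(len(specs))
    mixed_specs = lam * specs + (1.0 - lam) * specs[perm]
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]
    return mixed_specs, mixed_labels
```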
2309.09258
Global Convergence of SGD For Logistic Loss on Two Layer Neural Nets
In this note, we demonstrate a first-of-its-kind provable convergence of SGD to the global minima of appropriately regularized logistic empirical risk of depth $2$ nets -- for arbitrary data and with any number of gates with adequately smooth and bounded activations like sigmoid and tanh. We also prove an exponentially fast convergence rate for continuous time SGD that also applies to smooth unbounded activations like SoftPlus. Our key idea is to show the existence of Frobenius norm regularized logistic loss functions on constant-sized neural nets which are "Villani functions" and thus be able to build on recent progress with analyzing SGD on such objectives.
Pulkit Gopalani, Samyak Jha, Anirbit Mukherjee
2023-09-17T12:44:07Z
http://arxiv.org/abs/2309.09258v2
# Global Convergence of SGD For Logistic Loss on Two Layer Neural Nets ###### Abstract In this note, we demonstrate a first-of-its-kind provable convergence of SGD to the global minima of appropriately regularized logistic empirical risk of depth 2 nets - for arbitrary data and with any number of gates with adequately smooth and bounded activations like sigmoid and tanh. We also prove an exponentially fast convergence rate for continuous time SGD that also applies to smooth unbounded activations like SoftPlus. Our key idea is to show the existence of Frobenius norm regularized logistic loss functions on constant-sized neural nets which are "Villani functions" and thus be able to build on recent progress with analyzing SGD on such objectives. ## 1 Introduction Modern developments in artificial intelligence have been significantly been driven by the rise of deep-learning. The highly innovative engineers who have ushered in this A.I. revolution have developed a vast array of heuristics that work to get the neural net to perform "human like" tasks. Most such successes, can mathematically be seen to be solving the function optimization/"risk minimization" question, \(\min_{n\times\mathcal{D}}\mathbb{E}_{\mathbf{z}\in\mathcal{D}}[\ell(n,\mathbf{z})]\) where members of \(\mathcal{N}\) are continuous functions representable by neural nets and \(\ell:\mathcal{N}\times\text{Support}(\mathcal{D})\rightarrow[0,\infty)\) is called a "loss function" and the algorithm only has sample access to the distribution \(\mathcal{D}\). The successful neural experiments can be seen as suggesting that there are many available choices of \(\ell\), \(\mathcal{N}\;\&\;\mathcal{D}\) for which highly accurate solutions to this seemingly extremely difficult question can be easily found. This is a profound mathematical mystery of our times This work is about developing our understanding of some of the most ubiquitous methods of training nets. In particular, we shed light on how regularization can aid the analysis and help prove convergence to global minima for stochastic gradient methods for neural nets in hitherto unexplored and realistic parameter regimes. In the last few years, there has been a surge in the literature on provable training of various kinds of neural nets in certain regimes of their widths or depths, or for very specifically structured data, like noisily realizable labels. Motivated by the abundance of experimental studies it has often been surmised that Stochastic Gradient Descent (SGD) on neural net losses - with proper initialization and learning rate - converges to a low-complexity solution, one that generalizes - when it exists Zhang et al. (2018). But, to the best of our knowledge a convergence result for any stochastic training algorithm applied to the logistic loss for even depth 2 nets (one layer of activations with any kind of non-linearity), without either an assumption on the width or the data, has remained elusive so far. We recall that this is the most common way to train classifiers facing binary class labelled data. In this work, we not only take a step towards addressing the above question in the theory of neural networks but we also do so while keeping to a standard algorithm, the Stochastic Gradient Descent (SGD). 
In light of the above, our key message can be summarily stated as follows, **Theorem 1.1** (Informal Statement of the Main Results, Theorem 3.2).: _If the initial weights are sampled from an appropriate class of distributions, then for nets with a single layer of sigmoid or tanh gates - for arbitrary data and size of the net - SGD on appropriately regularized logistic loss, while using constant steps of size \(\mathcal{O}(\epsilon)\), will converge in \(\mathcal{O}(\frac{1}{\epsilon})\) steps to weights at which the expected regularized loss would be \(\epsilon\)-close to its global minimum._ We note that the threshold amount of regularization needed in the above _would be independent of the width of the nets_. Further, this threshold would be shown to scale s.t it can either naturally turn out to be proportionately small if the norms of the training data are small or can be made arbitrarily small by choosing outer layer weights to be small. Our above result is made possible by the crucial observation informally stated in the following lemma - which infact holds for more general nets than what is encompassed by the above theorem, **Lemma 1.2**.: _It is possible to add a constant amount of Frobenius norm regularization on the weights, to the logistic loss on depth-\(2\) nets with activations like SoftPlus, sigmoid and tanh gates s.t with no assumptions on the data or the size of the net, the regularized loss would be a Villani function._ Since our result stated above does not require any assumptions on the data, or the neural net width, we posit that this significantly improves on previous work in this direction. To the best of our knowledge, similar convergence guarantees in the existing literature either require some minimum neural net width - growing quickly w.r.t. inverse accuracy and the training set size (NTK regime Chizat et al. (2018); Du et al. (2018)), infinite width (Mean Field regime Chizat & Bach (2018); Chizat (2022); Mei et al. (2018)) or other assumptions on the data when the width is parametric (e.g. realizable data, Ge et al. (2019); Zhou et al. (2021)). In contrast to all these, we show that with appropriate \(\ell_{2}\) regularization, SGD on logistic loss on \(2\)-layer sigmoid / tanh nets converges to the global infimum of the loss. Our critical observation towards this proof is that the above standard losses on \(2\)-layer nets - for a broad class of activation functions -- are a "Villani function". Our proof get completed by leveraging the relevant results in Shi et al. (2020). **Organization** In Section 2 we shall give a literature review of existing proofs about guaranteed training of neural nets. In Section 3 we present our primary results - Theorem 3.2 which show the global convergence of SGD on regularized logistic loss with \(\pm 1\) labelled data and and for gates like sigmoid and tanh. Additionally, in Theorem 3.5 we also point out that for our architecture, if using the SoftPlus activation, we can show that the underlying SDE converges in expectation to the global minimizer in linear time. In Section 4, we give a brief overview of the methods in Shi et al. (2020), leading up to the proof of Theorem 3.2 in Section 5. In Section 3.2 we discuss some experimental demonstrations, which show that there exist nets trained on the loss function considered such that they have high binary classification accuracy near the critical value of the regularizer considered for the proof. We end in Section 6 with a discussion of various open questions that our work motivates. 
Appendix A onwards one can find the calculations needed in the main theorems' proofs. ## 2 Related Work Firstly, we note that in recent times major advances have been made about understanding the statistical properties of doing binary classification by neural nets. In Zhou & Huo (2023), the authors consider \(\{\pm 1\}\) labelled data distributed as a Gaussian Mixture Model and the labels satisfying a Tsybakov-type noise conditions with the noise exponent being \(q\). The authors obtain that with probability \(1-\delta\) of sampling \(n\) data, the difference between the population risk of the empirical risk minimizer and the Bayes' optimal risk is upperbounded by \(C_{q,d}\log\bigl{(}\frac{2}{\delta}\bigr{)}(\log n)^{4}\bigl{(}\frac{1}{n} \bigr{)}^{\frac{q+2}{q+2}}\), where \(C_{q,d}\) is some constant depending on \(q\) and data dimension \(d\). To appreciate this, we note that earlier in Shen et al. (2022), similar bounds for CNNs with logistic loss were obtained. However, unlike the result in Shen et al. (2022), the excess risk bound obtained in Zhou and Huo (2023) for the hinge loss doesn't blow up with respect to the increasing smoothness of the minimizer of the risk over all measurable functions. In the setting of finite-width neural nets trained on logistic loss for binary classification, Chatterji et al. (2021), had shown that if one has (a) small initial loss (\(\operatorname{poly}\left(\frac{1}{n}\right)\), where \(n\) is the number of training data) and (b)'smoothly approximately ReLU activation function', then the loss converges at a rate of \(O\left(\frac{1}{t}\right)\) over \(t\) steps of gradient descent. But to ensure the smallness of the initial loss, this result needs to assume a large width which scales polylogarithmically with inverse of the confidence parameter. In that limited sense this can be seen to be belonging to the larger framework of proofs at asymptotically wide nets which we review as follows. Review of the NTK Approach To Provable Neural Training :One of the most popular parameter zones for theory of provable training of nets has been the so-called "NTK" (Neural Tangent Kernel) regime - where the width is a high degree polynomial in the training set size and inverse accuracy (a somewhat _unrealistic_ regime) and the net's last layer weights are scaled inversely with width as the width goes to infinity, Du et al. (2018); Su and Yang (2019); Allen-Zhu et al. (2019, 2019); Du and Lee (2018); Arora et al. (2019); Li et al. (2019); Arora et al. (2019); Chizat et al. (2018); Du et al. (2018). The core insight in this line of work can be summarized as follows: for large enough width, SGD _with certain initializations_ converges to a function that fits the data perfectly, with minimum norm in the RKHS defined by the neural tangent kernel - which gets specified entirely by the initialization (which is such that the initial output is of order one). A key feature of this regime is that the net's matrices do not travel outside a constant radius ball around the starting point - a property that is often not true for realistic neural net training scenarios. In particular, for the case of depth 2 nets - with similarly smooth gates as we focus on - in Song et al. (2021) global convergence of gradient descent was shown using number of gates scaling sub-quadratically in the number of data - which, to the best of our knowledge, is the smallest known width requirement for such a convergence in a classification setup. 
On the other hand, for the special case of training depth 2 nets with ReLU gates on cross-entropy loss for doing binary classification, in Ji and Telgarsky (2020) it was shown that one needs to blow up the width only poly-logarithmically with target accuracy to get global convergence for SGD. Review of the Mean-Field Approach To Provable Neural Net Training :In a separate direction of attempts towards provable training of neural nets, works like Chizat and Bach (2018) showed that a Wasserstein gradient flow limit of the dynamics of discrete time algorithms on shallow nets, converges to a global optimizer - if the convergence of the flow is assumed. We note that such an assumption is very non-trivial because the dynamics being analyzed in this setup is in infinite dimensions - a space of probability measures on the parameters of the net. Similar kind of non-asymptotic convergence results in this so-called'mean-field regime' were also obtained in Mei et al. (2018); Fang et al. (2021); Chizat and Bach (2018); Nguyen and Pham (2020); Sirignano and Spiliopoulos (2022); Ren and Wang (2022). The key idea in the mean-field regime is to replace the original problem of neural training which is a non-convex optimization problem in finite dimensions by a convex optimization problem in infinite dimensions - that of probability measures over the space of weights. The mean-field analysis necessarily require the probability measures (whose dynamics is being studied) to be absolutely-continuous and thus de facto it only applies to nets in the limit of them being infinitely wide. We note that the results in the NTK regime hold without regularization while in many cases the mean-field results need it, Mei et al. (2018); Chizat (2022); Tzen and Raginsky (2020). In the next subsection we shall give a brief overview of some of the attempts that have been made to get provable deep-learning at parametric width. Need And Attempts To Go Beyond Large Width Limits of NetsThe essential proximity of the NTK regime to kernel methods and it being less powerful than finite nets has been established from multiple points of view, Allen-Zhu and Li (2019); Wei et al. (2019). In He and Su (2020), the authors had given a a very visibly poignant way to see that the NTK limit is not an accurate representation of a lot of the usual deep-learning scenarios. Their idea was to define a notion of "local elasticity" - when doing a SGD update on the weights using a data say \(\mathbf{x}\), it measures the fractional change in the value of the net at a point \(\mathbf{x}^{\prime}\) as compared to \(\mathbf{x}\). It's easy to see that this is a constant function for linear regression - as is what happens at the NTK limit (Theorem 2.1 Lee et al. (2019)). But it has been shown in Dan et al. (2021) that this local-elasticity function indeed has non-trivial time-dynamics (particularly during the early stages of training) when a moderately large neural net is trained on logistic loss. Specific to depth-2 nets - as we consider here - there is a stream of literature where analytical methods have been honed to this setup to get good convergence results without width restrictions - while making other structural assumptions about the data or the net. Janzamin et al. (2015) was one of the earliest breakthroughs in this direction and for the restricted setting of realizable labels they could provably get arbitrarily close to the global minima. 
For non-realizable labels they could achieve the same while assuming a large width but in all cases they needed access to the score function of the data distribution which is a computationally hard quantity to know. In a more recent development, Awasthi et al. (2021) have improved over the above to include ReLU gates while being restricted to the setup of realizable data and its marginal distribution being Gaussian. One of the first proofs of gradient based algorithms doing neural training for depth-2 nets appeared in Zhong et al. (2017). In Ge et al. (2019) convergence was proven for training depth-2 ReLU nets for data being sampled from a symmetric distribution and the training labels being generated using a 'ground truth' neural net of the same architecture as being trained - the so-called "Teacher-Student" setup. For similar distributional setups, some of the current authors had in Karmakar et al. (2020) identified classes of depth-2 ReLU nets where they could prove linear-time convergence of training - and they also gave guarantees in the presence of a label poisoning attack. The authors in Zhou et al. (2021) consider another Teacher-Student setup of training depth 2 nets with absolute value activations. In this work, authors can get convergence in \(\text{poly}(d,\frac{1}{\epsilon})\) time, in a very restricted setup of assuming Gaussian data, initial loss being small enough, and the teacher neurons being norm bounded and 'well-separated' (in angle magnitude). Cheridito et al. (2022) get width independent convergence bounds for Gradient Descent (GD) with ReLU nets, however at the significant cost of having the restrictions of being only an asymptotic guarantee and assuming an affine target function and one-dimensional input data. While being restricted to the Gaussian data and the realizable setting for the labels, an intriguing result in Chen et al. (2021) showed that fully poly-time learning of arbitrary depth 2 ReLU nets is possible if one is in the "black-box query model". Related Work on Provable Training of Neural Networks Using RegularizationUsing a regularizer is quite common in deep-learning practice and in recent times a number of works have appeared which have established some of these benefits rigorously. In particular, Wei et al. (2019) show that for a specific classification task (noisy-XOR) definable in any dimension \(d\), no NTK based 2 layer neural net can succeed in learning the distribution with low generalization error in \(o(d^{2})\) samples, while in \(O(d)\) samples one can train the neural net using Frobenius/\(\ell_{2}-\)norm regularization. Nakkiran et al. (2021) show that for a specific optimal value of the \(\ell_{2}\)- regularizer the double descent phenomenon can be avoided for linear nets - and that similar tuning is possible even for real world nets. In the seminal work Raginsky et al. (2017), it was pointed out that one can add a regularization to a gradient Lipschitz loss and make it satisfy the dissipativity condition so that Stochastic Gradient Langevin Dynamics (SGLD) provably converges to its global minima. But SGLD is seldom used in practice, and to the best of our knowledge it remains unclear if the observation in Raginsky et al. (2017) can be used to infer the same about SGD. Also it remains open if there exists neural net losses which satisfy all the assumptions needed in the above result. We note that the convergence time in Raginsky et al. 
(2017) for SGLD is \(\mathcal{O}\left(\frac{1}{\epsilon^{6}}\right)\) using an \(\mathcal{O}\left(\epsilon^{4}\right)\) learning rate, while in our Theorem 3.2 SGD converges in expectation to the global infimum of the regularized neural loss in time, \(\mathcal{O}\left(\frac{1}{\epsilon}\right)\) using a \(\mathcal{O}\left(\epsilon\right)\) step-length. In summary, to the best of our knowledge, it has remained an unresolved challenge to show convergence of SGD for logistic loss on any neural architecture with a constant number of gates while not constraining the distribution of the data to a specific functional form. _In this work, we exploit the use of some regularization to be able to resolve this optimization puzzle in our key result, Theorem 3.2 - and in our experiments we show that the regularization needed may not harm downstream classification performance. Thus we take a step towards bridging this important lacuna in the existing theory of stochastic optimization for neural nets in general._ ## 3 Setup and Main Results We start with defining the neural net architecture, the loss function and the algorithm for which we will prove our convergence results. **Definition 1** (**Constant Step-Size SGD On Depth-2 Nets)**.: _Let, (applied elementwise for vector valued inputs) be atleast once differentiable activation function. Corresponding to it, consider the width, depth neural nets with fixed outer layer weights and trainable weights as,_ (1) _Then, corresponding to a given set of binary training data, with define the individual data logistic losses. Then for any let the regularized logistic empirical risk be,_ (2) _Correspondingly, we consider SGD with step-size as,_ (3) _where is a randomly sampled mini-batch of size._ **Definition 2** (**Properties of the Activation )**.: _Let the used in Definition 1 be bounded s.t.,,, and -smooth. Further assume that a constant vector and positive constants and. s.t and and._ In terms of the above constants we can now quantify the smoothness of the empirical loss as follows, **Lemma 3.1** (for Classification with Logistic Loss).: _In the setup of binary classification as contained in Definition 1, and the definition and as given in Definition 2 above, there exists a constant and the Gibbs measure satisfies a Poincare-type inequality with the corresponding constant. Moreover, if the activation satisfies the conditions of Definition 2 then such that the empirical loss is -smooth and we can bound the smoothness coefficient of the empirical loss as,_ (4) The precise form of the Poincare-type inequalities used above is detailed in Theorem 4.1. **Theorem 3.2** (**Global Convergence of SGD on Sigmoid and Tanh Neural Nets of 2 Layers for Any Width and Any Data - for Binary Classification With Logistic Loss)**.: _We continue in the setup of logistic loss from Definitions 1 and 2 and Lemma 3.1. For any, define the probability measure \(\mu_{s}\coloneqq\frac{1}{Z_{s}}\exp\left(-\frac{2L(\mathbf{W})}{s}\right)\), \(Z_{s}\) being the normalization factor. 
Then, \(\forall\ T>0,\) and desired accuracy, \(\epsilon>0\), \(\exists\) constants \(A(\tilde{L})\), \(B(T,\tilde{L})\) and \(C(s,\tilde{L})\) s.t if the above SGD is executed at a constant step-size_ \[s=s^{*}\coloneqq\min\left(\frac{1}{\operatorname{gLip}(\tilde{L})},\frac{ \epsilon}{2\cdot(A(\tilde{L})+B(T,\tilde{L}))}\right)\] _with the weights \(\mathbf{W}^{0}\) initialized from a distribution with \(p.d.f\ \rho_{\mathrm{initial}}\in L^{2}(\frac{1}{\mu_{s^{*}}})\) and \(\left\|\rho_{\mathrm{initial}}-\mu_{s^{*}}\right\|_{\mu_{s^{*}}^{-1}}\leq\frac{ \epsilon}{2\cdot C(s^{*},\tilde{L})}\cdot e^{\lambda_{s^{*}}T}\) - then, in expectation, the regularized empirical risk of the net, \(\tilde{L}\) would converge to its global infimum, with the rate of convergence given as,_ \[\mathbb{E}\tilde{L}(\mathbf{W}^{\frac{T}{s^{*}}})-\inf_{\mathbf{W}}\tilde{L}(\mathbf{W}) \leq\epsilon.\] For clarity, in the following lemma, we also point out, that a guarantee on the quality of the weights obtained by SGD on this regularized loss, can also be given such that it is explicitly parametric in the number of steps taken as well as the initialization. **Lemma 3.3** (**Bounds on Error for Arbitrary Initialization.**).: _We continue in the setup of logistic loss from Definitions 1 and 2 and Lemma 3.1. For any \(s>0\), define the probability measure \(\mu_{s}\coloneqq\frac{1}{Z_{s}}\exp\left(-\frac{2L(\mathbf{W})}{s}\right)\), \(Z_{s}\) being the normalization factor. Then, \(\forall\ T>0\) and desired accuracy \(\epsilon>0\), \(\exists\) constants \(A(\tilde{L})\), \(B(T,\tilde{L})\) and \(C(s,\tilde{L})\) s.t if the above SGD is executed at a constant step-size_ \[s=s^{*}(\epsilon,T)\coloneqq\min\left(\frac{1}{\operatorname{gLip}(\tilde{L}) },\epsilon\cdot\frac{1}{(A(\tilde{L})+B(T,\tilde{L}))}\right)\] _with the weights \(\mathbf{W}^{0}\) initialized from any distribution with \(p.d.f\ \rho_{\mathrm{initial}}\in L^{2}(\frac{1}{\mu_{s^{*}}})\) and then, the error at the end of having taken \(k=\frac{T}{s^{*}}\) SGD steps can bounded as,_ \[\mathbb{E}\tilde{L}(\mathbf{W}^{k})-\min_{\mathbf{W}}\tilde{L}(\mathbf{W})\leq\epsilon+C( s^{*},\tilde{L})\|\rho_{\mathrm{initial}}-\mu_{s^{*}}\|_{\mu_{s^{*}}^{-1}}e^{-s^{*} \lambda_{s^{*}}\cdot k}\] The proof of Theorem 3.2 is given in Section 5 and the proof of Lemma 3.1 can be read off from the calculations done as a part of the proof of Theorem 3.2. The proof for Lemma 3.3 follows from the calculations done for proof of Theorem 3.2 and hence it's omitted. We make a few quick remarks about the nature of the above guarantees, _Firstly,_ we note that the "time horizon" \(T\) above is a free parameter - which in turn parameterizes the choice of the step-size and the initial weight distribution. Choosing a larger \(T\) makes the constraints on the initial weight distribution weaker at the cost of making the step-size smaller and the required number of SGD steps larger. But for any value of \(T\), the above theorem guarantees that SGD, initialized from weights sampled from a certain class of distributions, converges in expectation to the global minima of the regularized empirical loss for our nets for any data and width, in time \(\mathcal{O}(\frac{1}{\epsilon})\) using a learning rate of \(\mathcal{O}(\epsilon)\). _Secondly,_ we note that the phenomenon of a lower bound on the regularization parameter being needed for certain nice learning theoretic property to emerge has been seen in kernel settings too, Yang et al. (2017). 
Also, to put into context the emergence of a critical value of the regularizer for nets as in the above theorem, we recall the standard result, that there exists an optimal value of the \(\ell_{2}-\)regularizer at which the excess risk of the similarly penalized linear regression becomes dimension free (Proposition 3.8, Bach (2022)). However, we recall that the quantities required for computing this "optimal" regularizer are not knowable while training and hence it is not practically implementable. Thus, we see that for binary classification, one can define a notion of an "optimal" regularizer and it remains open to investigate if such a similar threshold of regularization also exists for nets. Our above theorem can be seen as a step in that direction. _Thirdly,_ we note that the lowerbounds on training time of neural nets proven in works like Goel et al. (2020) do not apply here since these are proven for SQ algorithms and SGD is not of this type. _Finally_, note that the threshold values of regularization computed above, \(\lambda_{c}\), do not explicitly depend on the training data or the neural architecture, consistent with observations in Anthony & Bartlett (2009); Zhang et al. (2021). It depends on the activation and scales with the norm of the input data and the outer layer of weights. For intuition, suppose in the binary classification setting we set the outer layer weights s.t we have \(\left\|\boldsymbol{a}\right\|_{2}\cdot B_{x}=1\). This leads to \(\lambda_{c}=\frac{M_{D}L}{2}\). For the sigmoid activation, \(\sigma_{\beta}(x)\), by calculations as above we would get, \(\lambda_{c}\) in this case (say \(\lambda_{c}^{si,\beta}\)) to be \(=\frac{\beta^{2}}{32}\). Since \(\beta=1\) is the most widely used setting for the above sigmoid activation, this results in, \[\lambda_{c}^{si,1}\approx 0.03125 \tag{3}\] ### Global Convergence of Continuous Time SGD on Nets with SoftPlus Gates In, Shi et al. (2020) the authors had established that, if the loss \(\tilde{L}\) is gradient Lipschitz, then over any fixed time horizon \(T>0\), as \(s\to 0\), the dynamics of the SGD in Definition 1 is arbitrarily well approximated (in expectation) by the unique global solution that exists for the Stochastic Differential Equation (SDE), \[\mathrm{d}\mathbf{W}_{s}(t)=-\nabla\tilde{L}(\mathrm{W}_{s}(t))\,\mathrm{d}t +\sqrt{s}\,\mathrm{d}\mathbf{B}(t)\qquad\text{(SGD-SDE)} \tag{4}\] where \(\mathbf{B}(t)\) is the standard Brownian motion. The SGD convergence proven in the last section critically uses this mapping to a SDE. In Shi et al. (2020) it was further pointed out that if we only want to get a non-asymptotic convergence rate for the continuous time dynamics, the smoothness of the loss function is not needed and only the Villani condition suffices. In this short section we shall exploit this to show convergence of continuous time SGD on \(\tilde{L}\) with the activation function being the unbounded 'SoftPlus'. Also, in contrast to the guarantee about SGD in the previous subsection here we shall see that the SDE converges exponentially faster i.e _at a linear rate_. **Definition 3** (SoftPlus activation).: _For \(\beta>0\), \(x\in\mathbb{R}\), define the SoftPlus activation function as_ \[\mathrm{SoftPlus}_{\beta}(x)=\frac{1}{\beta}\log_{e}\left(1+\exp(\beta x)\right)\] **Remark**.: _Note that \(\lim_{\beta\to\infty}\mathrm{SoftPlus}_{\beta}(x)=\mathrm{ReLU}(x)\). 
Also note that for \(f(x)=\text{SoftPlus}_{\beta}(x)\), \(f^{\prime}(x)=\sigma_{\beta}(x)\) (sigmoid function as defined above) and hence \(|f^{\prime}(x)|\leq M_{D}\) for \(M_{D}=1\) and \(f(x)\) is \(L-\)Lipschitz for \(L=1\)._ Recall the following fact that was proven as a part of Lemma 3.1, **Lemma 3.4**.: _There exists a constant \(\lambda_{c}:=\frac{M_{D}LB_{c}^{2}|\boldsymbol{a}|_{2}^{2}}{2}\) s.t \(\forall\)\(\lambda>\lambda_{c}\)\(\&\)\(s>0\), the Gibbs' measure \(\mu_{s}=\frac{1}{Z_{s}}\exp\left(-\frac{2\mathcal{I}(\boldsymbol{W})}{s}\right)\), \(Z_{s}\) being the normalization factor, satisfies a Poincare-type inequality with the corresponding constant \(\lambda_{s}\)._ **Theorem 3.5** (**Continuous Time SGD Converges to Global Minima of SoftPlus Nets in Linear Time)**.: _We consider the SDE as given in Equation 4 on a Frobenius norm regularized logistic empirical loss on depth-\(2\) neural nets as specified in Equation 1, while using \(\sigma(x)=\mathrm{SoftPlus}_{\beta}(x)\) for \(\beta>0\), the regularization threshold being s.t \(\lambda>\lambda_{c}=\frac{M_{D}LB_{c}^{2}|\boldsymbol{a}|_{2}^{2}}{2}\) and with the weights \(\boldsymbol{W}_{0}\) being initialized from a distribution with p.d.f \(\rho_{\mathrm{initial}}\in L^{2}\left(\frac{1}{\mu_{s}}\right)\)._ _Then, for any \(S>0\), \(\exists\)\(G(S,\tilde{L})\) and \(C(s,\tilde{L})\), an increasing function of \(s\), s.t for any step size \(0<s\leq\min\left\{\frac{\epsilon}{2G(S,\tilde{L})},S\right\}\) and for \(t\geq\frac{1}{\lambda_{c}}\log\left(\frac{2C(s,\tilde{L})|\rho_{\mathrm{ initial}}|-\mu_{s}|_{\mu_{s}}^{2}}{\epsilon}\right)\) we have that,_ \[\mathbb{E}\,\tilde{L}(\boldsymbol{W}(t))-\min_{\boldsymbol{W}}\tilde{L}( \boldsymbol{W})\leq\epsilon.\] Proof.: The SoftPlus function is Lipschitz, hence using the same analysis as in (Appendix 5), we can claim that for \(\lambda>\lambda_{c}\) the loss function in Definition 1 with SoftPlus activations is a Villani function (and hence confining, by definition). Then, from Proposition 3.1 of Shi et al. (2020) it follows that, \(\exists\ C(s,\tilde{L})\), an increasing function of \(s\), that satisfies, \[\big{|}\mathbb{E}\tilde{L}\big{(}\mathbf{W}_{s}(t)\big{)}-\mathbb{E}\tilde{L} \big{(}\mathbf{W}_{s}(\infty)\big{)}\big{|}\leq C(s,\tilde{L})\big{|}\rho_{\text{ initial}}-\mu_{s}\big{|}_{\mu_{s}^{\perp}}e^{-\lambda_{s}t}.\] From Proposition 3.2 of Shi et al. (2020) it follows that, for any \(S>0\), for \(s\in(0,S)\), \(\exists\ G(S,\tilde{L})\) that quantifies the excess risk at the stationary point of the SDE as, \[\tilde{L}\big{(}\mathbf{W}(\infty)\big{)}-\min_{\mathbf{W}}\tilde{L}\leq G(S,\tilde{L })\,s\] Combining the above, the final result claimed follows as in Corollary 3.3 in Shi et al. (2020). An Experimental Demonstration of the Maintenance of Classification Accuracy At Various Regularizations at Different Widths For further illustration of the ramifications of the novel convergence theorems shown above, in here we present some experimental studies of doing binary classification by training depth \(2\) sigmoid activated nets with the regularized loss considered in the above convergence proofs. And we will be using the normalizations that correspond to the theoretically needed threshold value of the regularizer being \(\lambda_{c}=0.03125\) (Equation 3). 
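Before detailing the data and the precise settings below, the following is a minimal self-contained sketch of such a demonstration: a depth-2 sigmoid net with a fixed, normalized outer layer, trained by constant-step mini-batch SGD on a Frobenius-norm regularized logistic loss at \(\lambda=\lambda_{c}\approx 0.03125\). The step size, the width, and the exact constant multiplying the penalty (which mirrors Definition 1 only schematically) are illustrative assumptions; the authors' own code is in the linked Colaboratory file.

```python
import torch

torch.manual_seed(0)
d, p = 10, 64                                # input dimension and width (width is illustrative)
lam_c = 0.03125                              # critical regularizer value from Equation (3)

# Separable data with a margin: unit-norm rows, labelled by the sign of the last coordinate
X = torch.randn(4000, d)
X = X / X.norm(dim=1, keepdim=True)
X = X[X[:, -1].abs() > 0.2]                  # discard points inside the margin
y = torch.sign(X[:, -1])

a = torch.randn(p); a = a / a.norm()         # fixed, normalized outer layer
W = torch.randn(p, d, requires_grad=True)    # trainable inner layer, W_0 ~ N(0, I)

def reg_risk(W, Xb, yb, lam):
    out = torch.sigmoid(Xb @ W.T) @ a                           # depth-2 sigmoid net
    logistic = torch.nn.functional.softplus(-yb * out).mean()   # log(1 + exp(-y f(x)))
    return logistic + lam * W.pow(2).sum()                      # Frobenius-norm penalty (normalization assumed)

step = 0.1                                    # constant step size (illustrative)
for k in range(2000):
    idx = torch.randint(0, len(X), (256,))    # mini-batch of size 256
    loss = reg_risk(W, X[idx], y[idx], lam_c)
    loss.backward()
    with torch.no_grad():
        W -= step * W.grad
        W.grad.zero_()

with torch.no_grad():
    acc = ((torch.sigmoid(X @ W.T) @ a).sign() == y).float().mean()
print(f"train 0-1 accuracy at lambda = lambda_c: {acc.item():.3f}")
```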
We sampled the data from a clearly separable dataset with a margin: \(n\) data vectors in \(d\) dimensions were sampled as an \(n\times d\) normally distributed matrix, and after sampling each row vector was normalized to have unit norm. The data whose last coordinate was \(>0.2\) were assigned label \(+1\), those whose last coordinate was \(<-0.2\) were assigned label \(-1\), and the rest of the data were discarded. In our experimental setting, we fixed \(d=10\). The data were split into \(20\%\) test data and the rest was used for training. Then we simulated SGD-based training on the above data for multiple neural nets at various values of \(\lambda\) in \([0,\lambda_{c}]\) and for various neural net widths \(p\). The step-length in the experiments is constant across all widths and all values of \(\lambda\). The elements of the (trainable) weight matrix \(\mathbf{W}_{0}\) of dimension \(p\times d\) were initialized from a standard normal distribution, i.e. \(\mathbf{W}_{0}\sim\mathcal{N}(0,\mathbf{I}_{p\times d})\). Likewise, the elements of the (fixed) outer layer \(\mathbf{a}\), of dimension \(1\times p\), were sampled as \(\mathbf{a}\sim\mathcal{N}(0,\mathbf{I}_{1\times p})\) and then normalized. For all experimental settings the neural networks were trained for \(500\) epochs with a mini-batch size of \(256\). At the end of training we measured the test accuracy of classification as the downstream metric of the goodness of training. As shown in the experimental graph (Figure 1), we demonstrate examples of nets trained on the regularized logistic loss showing remarkable accuracy for regularization \(\lambda\leq\lambda_{c}\) - even though the model was not exposed to the \(0-1\) criterion during training. The code for the experiments can be found at this Colaboratory File. Thus we have demonstrated that the threshold amount of regularization that was needed for the proof of convergence may not at all harm the downstream performance metric of classification.

## 4 Overview of Shi et al. (2020)

In Section 5, we will give the proof for our main result (Theorem 3.2). As relevant background for the proof, here we give a brief overview of the framework in Shi et al. (2020), which can be summarized as follows: suppose one wants to minimize the function \(\tilde{L}(\mathbf{W})=\frac{1}{n}\sum_{i=1}^{n}\tilde{L}_{i}(\mathbf{W})\), where \(i\) indexes the training data, \(\mathbf{W}\) is in the parameter space (the optimization space) of the loss function and \(\tilde{L}_{i}\) is the loss evaluated on the \(i^{\text{th}}\) datapoint. On this objective, a constant step-size mini-batch implementation of Stochastic Gradient Descent (SGD) consists of the iterates \(\mathbf{W}_{k+1}=\mathbf{W}_{k}-\frac{s}{b}\sum_{i}\nabla\tilde{L}_{i}(\mathbf{W}_{k})\), where the sum is over a mini-batch (a randomly sampled subset of the training data) of size \(b\) and \(s\) is the fixed step-length. In Shi et al. (2020) the authors established that over any fixed time horizon \(T>0\), as \(s\to 0\), the dynamics of this SGD is arbitrarily well approximated (in expectation) by the Stochastic Differential Equation (SDE), \[\mathrm{d}\mathbf{W}_{s}(t)=-\nabla\tilde{L}(\mathbf{W}_{s}(t))\,\mathrm{d}t+\sqrt{s}\,\mathrm{d}\mathbf{B}(t)\qquad\text{(SGD--SDE)} \tag{5}\] where \(\mathbf{B}(t)\) is the standard Brownian motion.
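To make the SGD-to-SDE correspondence concrete, the following toy sketch compares constant-step SGD with an Euler-Maruyama discretization of the SDE above, using a simple quadratic stand-in objective rather than the neural loss; it only illustrates the time identification \(t=ks\) and the \(\sqrt{s}\)-scaled noise, and makes no claim about the paper's results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in objective (NOT the neural loss): L(w) = (1/n) sum_i (w - c_i)^2 / 2,
# so per-sample gradients are (w - c_i) and the minimizer is mean(c).
c = rng.normal(size=1000)
grad_full = lambda w: w - c.mean()
grad_minibatch = lambda w, idx: w - c[idx].mean()

s, T, b = 0.01, 5.0, 32                      # step size, time horizon, mini-batch size
steps = int(T / s)                           # k = T/s SGD steps correspond to time T of the SDE

w_sgd, w_sde = 2.0, 2.0
for k in range(steps):
    idx = rng.integers(0, len(c), size=b)
    # SGD iterate: W_{k+1} = W_k - (s/b) * sum of per-sample gradients over the mini-batch
    w_sgd = w_sgd - s * grad_minibatch(w_sgd, idx)
    # Euler-Maruyama step of dW = -grad L dt + sqrt(s) dB, with time step dt = s
    w_sde = w_sde - s * grad_full(w_sde) + np.sqrt(s) * np.sqrt(s) * rng.normal()

print(f"after time T: SGD {w_sgd:.3f}, SDE {w_sde:.3f}, minimizer {c.mean():.3f}")
```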
We recall that the Markov semigroup operator \(P_{t}\) for a stochastic process \(X_{t}\) and its infinitesimal generator \(\mathcal{L}\) are given as \(P_{t}f(x)\coloneqq\mathbb{E}[f(X_{t})\mid X_{0}=x]\) and \(\mathcal{L}f\coloneqq\lim_{t\downarrow 0}\frac{P_{t}f-f}{t}\). Invoking the Forward Kolmogorov equation \(\partial_{t}f=\mathcal{L}^{*}f\), one obtains the following Fokker-Planck-Smoluchowski PDE governing the evolution of the density of the SDE, \[\frac{\partial\rho_{s}}{\partial t}=(\nabla\rho_{s},\nabla\tilde{L})+\rho_{s}\Delta\tilde{L}+\frac{s}{2}\Delta\rho_{s}\qquad\text{(FPS)} \tag{6}\] Further, under appropriate conditions on \(\tilde{L}\) the above implies that the density \(\rho_{s}(t)\) converges exponentially fast to the Gibbs' measure corresponding to the objective function, i.e. the distribution with p.d.f \[\mu_{s}=\frac{1}{Z_{s}}\mathrm{exp}\left(-\frac{2\tilde{L}(\mathbf{W})}{s}\right)\] where \(Z_{s}\) is the normalization factor. The sufficient condition on \(\tilde{L}\) that was shown to yield this "mixing", together with a rate for it, is that \(\tilde{L}\) be a "Villani Function", as defined below, **Definition 4** (Villani Function (Villani (2009); Shi et al. (2020))).: _A map \(f:\mathbb{R}^{d}\to\mathbb{R}\) is called a Villani function if it satisfies the following conditions,_ 1. \(f\in C^{\infty}\) 2. \(\lim_{|\mathbf{x}|\to\infty}f(\mathbf{x})=+\infty\) 3. \(\int_{\mathbb{R}^{d}}\mathrm{exp}\left(-\frac{2f(\mathbf{x})}{s}\right)\mathrm{d}\mathbf{x}<\infty\;\;\forall s>0\) 4. \(\lim_{|\mathbf{x}|\to\infty}\left(-\Delta f(\mathbf{x})+\frac{1}{s}\cdot\left\|\nabla f(\mathbf{x})\right\|^{2}\right)=+\infty\;\;\forall s>0\) _Further, any \(f\) that satisfies conditions 1 - 3 is said to be "confining"._
Figure 1: Test Accuracy across various widths \(p\) and regularizer \(\lambda\)
From Lemma 5.2 of Shi et al. (2020), the empirical or the population risk, \(\tilde{L}\), being confining is sufficient for the FPS PDE (equation 6) to evolve the density of the SGD-SDE (equation 5) to the said Gibbs' measure. But, to get non-asymptotic guarantees of convergence (Corollary 3.3, Shi et al. (2020)) - even for the SDE - we need a Poincare-type inequality (as defined below) to be satisfied by the aforementioned Gibbs' measure \(\mu_{s}\). A sufficient condition for this Poincare-type inequality to hold is that a confining loss function \(\tilde{L}\) also satisfies the last condition in Definition 4 (and is consequently a Villani function). **Theorem 4.1** (Poincare-type Inequality (Shi et al. (2020))).: _Given \(f:\mathbb{R}^{d}\to\mathbb{R}\) which is a Villani Function (Definition 4), for any given \(s>0\), define a measure with the density \(\mu_{s}(\mathbf{x})=\frac{1}{Z_{s}}\mathrm{exp}\left(-\frac{2f(\mathbf{x})}{s}\right)\), where \(Z_{s}\) is a normalization factor. Then this (normalized) Gibbs' measure \(\mu_{s}\) satisfies a Poincare-type inequality, i.e. \(\exists\ \lambda_{s}>0\) (determined by \(f\)) s.t \(\forall h\in C_{c}^{\infty}(\mathbb{R}^{d})\) we have,_ \[\mathrm{Var}_{\mu_{s}}[h]\leq\frac{s}{2\lambda_{s}}\cdot\mathbb{E}_{\mu_{s}}[\|\nabla h\|^{2}]\] The approach of Shi et al. (2020) has certain key interesting differences from many other contemporary uses of SDEs to prove the convergence of discrete time stochastic algorithms. 
Instead of focusing on the convergence of the parameter iterates \(\mathbf{W}^{k}\), they look at the dynamics of the expected error, i.e. \(\mathbb{E}[\tilde{L}(\mathbf{W}^{k})]\), for \(\tilde{L}\) the empirical or the population risk. This leads to a transparent argument for the convergence of \(\mathbb{E}[\tilde{L}(\mathbf{W}^{k})]\) to \(\inf_{\mathbf{W}}\tilde{L}(\mathbf{W})\), by leveraging standard results which help one pass from convergence guarantees on the SDE to convergence of the SGD. We note that Shi et al. (2020) achieve this conversion of guarantees from SDE to SGD by additionally assuming gradient smoothness of \(\tilde{L}\) - and we show that this assumption holds for the natural neural net loss functions that we consider. ## 5 Proof of Theorem 3.2 Proof.: Note that \(\tilde{L}\) being a confining function can be easily read off from Definition 4. Further, as shown in Appendix A, the following inequalities hold, \[\|\nabla_{\mathbf{W}}\tilde{L}\left(\mathbf{W}\right)\|^{2}\geq\left(\lambda^{2}-\frac{\lambda\|\mathbf{a}\|_{2}^{2}M_{D}B_{x}^{2}L}{2}\right)\|\mathbf{W}\|_{F}^{2}-\lambda\|\mathbf{W}\|_{F}\|\mathbf{a}\|_{2}M_{D}B_{x}\left(1+\frac{\|\mathbf{a}\|_{2}\|\mathbf{c}\|_{2}}{2}\right)\] \[\Delta_{\mathbf{W}\mathbf{W}}\tilde{L}\leq p\left[M_{D}^{2}B_{x}^{2}\|\mathbf{a}\|_{2}^{2}+\|\mathbf{a}\|_{2}\left[\left(B_{y}+\|\mathbf{a}\|_{2}\left(\|\mathbf{c}\|_{2}+LB_{x}\|\mathbf{W}\|_{F}\right)\right)\left(M_{D}^{\prime}B_{x}^{2}\right)\right]+\lambda d\right] \tag{7}\] Combining the above two inequalities we can conclude that there exist functions \(g_{1},g_{2},g_{3}\) such that, \[\frac{1}{s}\big{\|}\nabla_{\mathbf{W}}\tilde{L}\big{\|}^{2}-\Delta_{\mathbf{W}\mathbf{W}}\tilde{L}\geq g_{1}(\lambda,s)\big{\|}\mathbf{W}\big{\|}_{F}^{2}-g_{2}(\lambda,s)\big{\|}\mathbf{W}\big{\|}_{F}+g_{3}(\lambda,s)\] where in particular, \[g_{1}(\lambda,s)=\lambda^{2}-2\lambda\cdot M_{D}LB_{x}^{2}\|\mathbf{a}\|_{2}^{2}.\] Hence we can conclude that for \(\lambda>\lambda_{c}\coloneqq 2M_{D}LB_{x}^{2}\|\mathbf{a}\|_{2}^{2}\) and for all \(s>0\), \(\frac{1}{s}\big{\|}\nabla_{\mathbf{W}}\tilde{L}\big{\|}^{2}-\Delta_{\mathbf{W}\mathbf{W}}\tilde{L}\) diverges as \(\|\mathbf{W}\|\to+\infty\), since \(g_{1}(\lambda,s)>0.\) The key aspect of the above analysis is that the bound on \(\Delta_{\mathbf{W}\mathbf{W}}\tilde{L}\) does not depend on \(\big{\|}\mathbf{W}\big{\|}_{F}^{2}.\) Thus the following limit holds, \[\lim_{\|\mathbf{W}\|_{F}\to+\infty}\left(\frac{1}{s}\big{\|}\nabla_{\mathbf{W}}\tilde{L}\big{\|}^{2}-\Delta_{\mathbf{W}\mathbf{W}}\tilde{L}\right)=+\infty\] for the range of \(\lambda\) as given in the theorem, hence proving that \(\tilde{L}\) is a Villani function. Towards getting an estimate of the step-length as given in the theorem statement, we also show in Appendix B that the loss function \(\tilde{L}\) is gradient-Lipschitz with the smoothness coefficient being upper-bounded as, \[\operatorname{gLip}\big(\tilde{L}\big)\leq\sqrt{p}\left(\frac{\sqrt{p}\,\|\boldsymbol{a}\|_{2}M_{D}^{2}B_{x}}{4}+\left(\frac{2+\|\boldsymbol{c}\|_{2}+\|\boldsymbol{a}\|_{2}B_{\sigma}}{4}\right)M_{D}^{\prime}B_{x}p+\lambda\right)\] Now we can invoke Theorem 3 (Part 1) of Shi et al. (2020) with appropriate choices of \(s\) and initialization to get the main result as given in Theorem 3.2. 
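As a numerical companion to this argument (an illustration we add here, reusing the same hypothetical loss layout sketched earlier), one can evaluate the confinement quantity \(\frac{1}{s}\big\|\nabla_{\boldsymbol{W}}\tilde{L}\big\|^{2}-\Delta_{\boldsymbol{W}\boldsymbol{W}}\tilde{L}\) at random weights of increasing norm, estimating the Laplacian as the trace of the autograd Hessian; for a regularizer above the threshold this quantity should blow up with \(\|\boldsymbol{W}\|_{F}\), in line with the limit displayed above.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d, p, lam, s = 64, 10, 8, 0.5, 0.01
X = F.normalize(torch.randn(n, d), dim=1)           # unit-norm rows, so B_x = 1
y = torch.where(X[:, -1] > 0, torch.ones(n), -torch.ones(n))
a = F.normalize(torch.randn(p), dim=0)               # fixed, normalized outer layer

def loss(w_flat):
    # same hypothetical depth-2 SoftPlus loss as in the earlier sketch, on a flattened W
    W = w_flat.reshape(p, d)
    out = F.softplus(X @ W.T) @ a
    return F.softplus(-y * out).mean() + lam * (W ** 2).sum()

for scale in (1.0, 10.0, 100.0):
    w = scale * torch.randn(p * d)
    grad = torch.autograd.functional.jacobian(loss, w)       # shape (p*d,)
    hess = torch.autograd.functional.hessian(loss, w)        # shape (p*d, p*d)
    confinement = grad.pow(2).sum() / s - torch.trace(hess)  # (1/s)|grad|^2 - Laplacian
    print(f"|W|_F = {w.norm().item():9.2f}   confinement = {confinement.item():14.2f}")
```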
In Appendix C one can find a discussion of the computation of the specific constants involved in the expression for the suggested step-length \(s^{*}\) and the class of initial weight distribution p.d.fs \(\rho_{\text{initial}}\). ## 6 Conclusion To the best of our knowledge, in this work we have given the first proof of convergence of SGD to the global minima of the logistic loss on a neural net whilst not making any assumptions about the data or the width of the net. Our result relied on the convergence of discrete time algorithms like SGD to their continuous time counterparts (SDEs) - a theme which has lately been an active field of research. We posit that trying to reproduce our Theorem 3.2 using a direct analysis of the dynamics of SGD could be a fruitful venture leading to interesting insights. Additionally, our results motivate a new direction of pursuit in deep-learning theory, centered around understanding the nature of the Poincare constant of Gibbs' measures induced by neural nets. Our experiments further demonstrated that there exist neural nets and binary class labelled data such that optimizing on our provably good smooth loss functions also yields highly accurate classification. This motivates a fascinating direction of future pursuit concerning the phenomenon of classification calibration or Bayes consistency, i.e. to analytically identify cases where convergence guarantees such as those given here can be extended to guarantees on the classification error. Lastly, given the new method of proving convergence of SGD-based neural net training that has been opened up in this work, it is natural to ask whether these insights could also resolve similar mysteries for the more exotic loss functions and neural architectures that are in use.
2309.05462
Equivariant line bundles with connection on the p-adic upper half plane
Let $F$ be a finite extension of $\mathbb{Q}_p$, let $\Omega_F$ be Drinfeld's upper half-plane over $F$ and let $G^0$ be the subgroup of $GL_2(F)$ consisting of elements whose determinant has norm $1$. By working locally on $\Omega_F$, we construct and classify the torsion $G^0$-equivariant line bundles with integrable connection on $\Omega$ in terms of the smooth linear characters of the units of the maximal order of the quaternion algebra over $F$.
Konstantin Ardakov, Simon J. Wadsley
2023-09-11T14:00:41Z
http://arxiv.org/abs/2309.05462v2
# Equivariant line bundles with connection on the \(p\)-adic upper half plane ###### Abstract. Let \(F\) be a finite extension of \(\mathbb{Q}_{p}\), let \(\Omega_{F}\) be Drinfeld's upper half-plane over \(F\) and let \(G^{0}\) the subgroup of \(GL_{2}(F)\) consisting of elements whose determinant has norm \(1\). By working locally on \(\Omega_{F}\), we construct and classify the torsion \(G^{0}\)-equivariant line bundles with integrable connection on \(\Omega\) in terms of the smooth linear characters of the units of the maximal order of the quaternion algebra over \(F\). ###### Contents * 1 Introduction * 1.1 Background * 1.2 The main result * 1.3 Motivation * 1.4 An overview of some related works * 1.5 Acknowledgements * 1.6 Conventions and Notation * 1.7.1 The We consider a slight modification of Drinfeld's construction, due to Rapoport-Zink [25]. They constructed a tower of coverings \[\cdots\to\mathcal{M}_{n}\to\mathcal{M}_{n-1}\to\cdots\to\mathcal{M}_{1}\to \mathcal{M}_{0}\] of rigid \(\breve{F}\)-analytic spaces. Their base space \(\mathcal{M}_{0}\) can be viewed as being the disjoint union of countably many copies of \(\Omega\) (cf [25, Theorem 3.72]): there is a non-canonical isomorphism \(\mathcal{M}_{0}\cong\Omega\times\mathbb{Z}\). The tower comes equipped with an action of \(G\), so that the maps \(\mathcal{M}_{n}\to\mathcal{M}_{n-1}\) are all \(G\)-equivariant. The \(G\)-action on the base space is given by \[g\cdot(z,n)=(g\cdot z,n+v_{\pi_{F}}(\det g)).\] Here the \(G\)-action on \(\Omega\) is the usual one by Mobius transformations. Thus each copy of \(\Omega\) is stabilised by the subgroup \(G^{0}\) of \(G\) consisting of matrices \(g\) in \(G\) such that \(v_{\pi_{F}}(\det g)=0\). In this way we may view \(\mathcal{M}_{0}\) as being isomorphic to \(G\times_{G^{0}}\Omega\). Let \(D\) be the quaternion division algebra over \(F\) with valuation ring \(\mathcal{O}_{D}\) and let \(\Pi\) denote a generator of the unique maximal ideal in \(\mathcal{O}_{D}\). The maps \(\mathcal{M}_{n}\to\mathcal{M}_{n-1}\) in the tower are all finite etale and Galois with \[\operatorname{Gal}(\mathcal{M}_{n}/\mathcal{M}_{0})\cong\mathcal{O}_{D}^{ \times}/(1+\Pi^{n}\mathcal{O}_{D}).\] Moreover, the actions of \(G\) and \(\operatorname{Gal}(\mathcal{M}_{n}/\mathcal{M}_{0})\) on \(\mathcal{M}_{n}\) commute. Scholze-Weinstein have proved in [29, Theorem 7.3.1] that if \(\mathbf{C}\) denotes a complete and algebraically closed extension of \(\breve{F}\) and \(\widetilde{\Omega}\) is a finite etale \(G\)-equivariant cover of the base-change \(\Omega\times_{\breve{F}}\mathbf{C}\), then there is an integer \(m\geqslant 0\) such that \(\mathcal{M}_{m}\times_{\breve{F}}\mathbf{C}\to\Omega\times_{\breve{F}} \mathbf{C}\) factors through \(\widetilde{\Omega}\). One way to better understand these finite etale equivariant covers is through the study of equivariant vector bundles with flat connections on the base space: the two theories are essentially equivalent. Here we briefly sketch how to obtain equivariant vector bundles with flat connection from the Drinfeld tower. Suppose that \(\rho\) is a smooth geometrically irreducible representation of \(\mathcal{O}_{D}^{\times}\) whose character is defined over some finite extension \(K\) of \(\breve{F}\). 
There is some \(m\geqslant 0\) such that \(\rho\) factors through \(\mathcal{O}_{D}^{\times}/(1+\Pi^{m}\mathcal{O}_{D})\) and, for all \(n\geqslant m\), \(\mathcal{O}_{\mathcal{M}_{n,K}}\) has a \(\rho\)-isotypical component \(\mathcal{V}^{\rho}\) independent of the choice of \(n\). Then the pushforward of \(\mathcal{V}^{\rho}\) down to \(\mathcal{M}_{0,K}\) is a \(G\)-equivariant vector bundle over \(\mathcal{M}_{0,K}\) of rank \((\dim\rho)^{2}\), equipped with an integrable connection. In this paper we only consider the smooth \(K\)-linear representations of \(\mathcal{O}_{D}^{\times}\) of degree \(1\), that is, the torsion characters \(\theta:\mathcal{O}_{D}^{\times}\to K^{\times}\). Then the pushforward of \(\mathcal{V}^{\theta}\) is a \(G\)-equivariant line bundle on \(\mathcal{M}_{0,K}\) with an integrable connection. For example, when \(\mathbf{1}\) is the trivial representation, the corresponding line bundle is just the structure sheaf \(\mathcal{O}_{\mathcal{M}_{0}}\) with its natural \(G\)-action and natural integrable connection. The set \(\operatorname{Hom}(\mathcal{O}_{D}^{\times},K^{\times})_{\operatorname{tors}}\) of torsion characters \(\mathcal{O}_{D}^{\times}\to K^{\times}\) forms a group under pointwise multiplication of characters; given a continuous action1 of a topological group \(H\) on a smooth rigid \(K\)-analytic space \(X\), we denote by \(\operatorname{PicCon}^{H}(X)\) the group of isomorphism classes of \(H\)-equivariant line bundles with an integrable connection on \(X\), where the group structure comes from the tensor product of these line bundles. In this way, the Drinfeld tower gives us a group homomorphism Footnote 1: see §3.2 for the precise definitions \[\operatorname{Hom}(\mathcal{O}_{D}^{\times},K^{\times})_{\operatorname{tors}} \quad\longrightarrow\quad\operatorname{PicCon}^{G}(\mathcal{M}_{0,K})_{ \operatorname{tors}}. \tag{1}\] ### The main result In this paper we will consider, for each complete valued field extension \(K\) of \(F\), the subgroup \(\operatorname{PicCon}^{G^{0}}(\Omega)_{\operatorname{tors}}\) of \(\operatorname{PicCon}^{G^{0}}(\Omega)\) consisting of the isomorphism classes of the \(G^{0}\)-equivariant line bundles with integrable connection on the rigid \(K\)-analytic space \(\Omega:=\Omega_{F}\times_{F}K\) that have finite order under \(\otimes\). This group is naturally isomorphic to \(\operatorname{PicCon}^{G}(G\times_{G_{0}}\Omega)_{\operatorname{tors}}\) via an induction map, and therefore also, non-canonically, to \(\operatorname{PicCon}^{G}(\mathcal{M}_{0,K})_{\operatorname{tors}}\) whenever \(K\) contains \(\check{F}\). Our main result is then **Theorem A**.: Suppose that \(K\) contains the quadratic unramified extension \(L\) of \(F\). Then there is an isomorphism of abelian groups \[\operatorname{PicCon}^{G^{0}}(\Omega)_{\operatorname{tors}}\quad \stackrel{{\cong}}{{\longrightarrow}}\quad\operatorname{Hom}( \mathcal{O}_{D}^{\times},K^{\times})_{\operatorname{tors}}.\] We note that Theorem A cannot be true without the condition that \(K\) contains \(L\) in the statement, because \(\operatorname{PicCon}^{G^{0}}(\Omega)[p^{\prime}]\) is cyclic of order \(q^{2}-1\) for any choice of field extension \(K\), but \(K^{\times}\) contains no such subgroup unless \(K\) contains \(L\) and so \(\operatorname{Hom}(\mathcal{O}_{D}^{\times},K^{\times})\) cannot either. We also note here in passing that in fact all line bundles on \(\Omega\) are known to be trivial -- see, e.g. [14, Theoreme A]. 
Our isomorphism depends on a choice of a point \(z\in\Omega_{F}(L)\) and an \(F\)-algebra embedding \(\iota:L\hookrightarrow D\). However, there is a natural \(G\)-action on \(\operatorname{PicCon}^{G^{0}}(\Omega)_{\operatorname{tors}}\) and a natural conjugation \(D^{\times}\)-action on \(\operatorname{Hom}(\mathcal{O}_{D}^{\times},K^{\times})_{\operatorname{tors}}\): the first action factors through \(G/G^{0}F^{\times}\) and the second action factors through \(D^{\times}/\mathcal{O}_{D}^{\times}F^{\times}\), both cyclic groups of order \(2\). The isomorphisms in Theorem A are compatible with respect to these actions, provided we identify \(G/G^{0}F^{\times}\) with \(D^{\times}/\mathcal{O}_{D}^{\times}F^{\times}\) in the only possible way. Quotienting out by these actions on both sides, we obtain a bijection \[\operatorname{PicCon}^{G^{0}}(\Omega)_{\operatorname{tors}}/G\quad \stackrel{{\cong}}{{\longrightarrow}}\quad\operatorname{Hom}( \mathcal{O}_{D}^{\times},K^{\times})_{\operatorname{tors}}/D^{\times}\] which no longer depends on any choices. We expect, but do not check it in this paper, that when \(K=\check{F}\), the isomorphism in Theorem A is in fact inverse to (1) for some choice of identification of \(\mathcal{M}_{0}\) with \(G\times_{G^{0}}\Omega\), depending on our other choices. ### Motivation We do not use the result of Scholze-Weinstein [29], nor even the existence of the Drinfeld tower, to prove Theorem A. Instead, we give an explicit construction of the elements \(\operatorname{PicCon}^{G^{0}}(\Omega)_{\operatorname{tors}}\) as finite order \(G^{0}\)-equivariant line bundles with connection on \(\Omega\) and prove directly that there are no others. More precisely: in this paper we give an _elementary construction_2 of finite order \(GL_{2}(\mathcal{O}_{F})\)-equivariant line bundles with connection on the \(K\)-affinoid subdomain \(\Omega_{0}\) of \(\Omega\) that is the inverse image of the vertex fixed by \(GL_{2}(\mathcal{O}_{F})\) in the Bruhat-Tits tree under the reduction map, and show that each of these line bundles extends uniquely to a \(G^{0}\)-equivariant line bundle with connection on all of \(\Omega\). We have provided full arguments for some results that already appear in the literature in order to stress the elementary nature of our work. Footnote 2: elementary in the sense of not depending on the duality of Rapoport-Zink spaces, nor on the theory of perfectoid spaces In our forthcoming work [1], we will use this elementary construction to better understand the structure of the strong duals of the global sections \(\mathscr{L}^{\theta}(\Omega)^{\prime}_{s}\) of these equivariant line bundles as locally \(F\)-analytic representations of \(G\). If \(j:\Omega\to\mathbb{P}^{1}_{K}\) denotes the open inclusion, then the results of this paper are used to show that \(j_{*}\mathscr{L}^{\theta}\) is a coadmissible \(G\)-equivariant \(\mathcal{D}\)-module on \(\mathbb{P}^{1}_{K}\) in the sense of [2]. It is a formal consequence that \(\mathscr{L}^{\theta}(\Omega)^{\prime}_{s}\) is an _admissible_\(F\)-locally analytic \(K\)-representation of \(G\), which is a consequence of a result of Patel, Schmidt and Strauch [22, Theorem 7.2.1(iv)]. 
However we are able to prove stronger results about the structure of these representations: we will show that for each \(\theta\in\operatorname{Hom}(\mathcal{O}_{D}^{\times},K^{\times})_{\operatorname{ tors}}\), \(\mathscr{L}^{\theta}(\Omega)^{\prime}\) is an admissible locally \(F\)-analytic \(K\)-representation of \(G^{0}\) that is _topologically irreducible_ when \(\theta\) is not fixed by the \(D^{\times}\)-action on \(\operatorname{Hom}(\mathcal{O}_{D}^{\times},K^{\times})_{\operatorname{tors}}\) mentioned above, and that has a codimension \(1\)-submodule that is topologically irreducible otherwise. ### An overview of some related works Because of the central position of the theory of Drinfeld coverings and Rapoport-Zink spaces in the local Langlands program [13], it is difficult to make a comprehensive literature review. In his work [33], Teitelbaum found explicit local equations for the first Drinfeld covering of \(\Omega\): this yields explicit torsion line bundles with connection on \(\Omega_{0}\) together with a (non-explicit) \(GL_{2}(\mathcal{O}_{F})\)-equivariant structure. However neither Teitelbaum, nor Lue Pan in his closely-related work [21], consider the flat connections on these line bundles, nor do they make an attempt to classify the appropriate equivariant structures in an elementary manner. We acknowledge our intellectual debt to the Introduction of [10], where Dospinescu and Le Bras explain the importance of the equivariant vector bundles on \(\Omega\) in the context of locally analytic representations of \(GL_{2}(F)\), and in particular in the \(p\)-adic local Langlands program. We mention here in passing that the constructions in our paper enable us to define the first Drinfeld covering of \(\Omega\) directly over \(F\) rather than over \(\tilde{F}\) without appealing to the theory of Weil descent. We also take the opportunity to mention here the monumental works of Colmez, Dospinescu and Niziol [8], [9] that use Scholze's pro-etale methods and build upon [10] to show, amongst other things, that the \(p\)-adic etale cohomology of the Drinfeld tower realises the \(p\)-adic local Langlands correspondence, at least in the case when \(F=\mathbb{Q}_{p}\). Finally, we mention that Junger in his recent preprint [15] has classified equivariant line bundles on Deligne's formal model \(\widehat{\Omega}\) of \(\Omega_{F}\). There are some formal similarities in our methods (for example his Proposition 2.10 plays a similar role to our Proposition 3.2.14), however we cannot deduce his results from ours, nor vice versa. We expect that our results here can be naturally extended to analogues of \(\Omega\) in higher dimensions. ### Acknowledgements We thank Aurel Page for pointing the paper of Riehm [26] on MathOverflow. We also thank James Taylor for his comments. The second author thanks Brasenose College, Oxford for its hospitality. ### Conventions and Notation \(F\) will denote a finite extension of \(\mathbb{Q}_{p}\) with ring of integers \(\mathcal{O}_{F}\), uniformiser \(\pi_{F}\) and residue field \(k_{F}\) of order \(q\). \(K\) will denote a field containing \(F\) that is complete with respect to a non-archimedean norm \(|\cdot|\) such that \(|p|=1/p\). We will write: * \(K^{\circ}:=\{a\in K:|a|\leqslant 1\}\) for the valuation ring of \(K\), * \(K^{\circ\circ}:=\{a\in K:|a|<1\}\) for the maximal ideal of \(K^{\circ}\), * \(\overline{K}\) for a fixed algebraic closure of \(K\), and * \(\mathbf{C}\) for the completion of \(\overline{K}\). 
Let \(X\) be a rigid \(K\)-analytic variety. When \(Y\) is a subset of \(X\), we will say that \(Y\) is an _affinoid subdomain_ of \(X\) to mean that \(Y\) is an admissible open subspace of \(X\), itself isomorphic to an affinoid \(K\)-variety. When \(X\) itself happens to be a \(K\)-affinoid variety, this agrees by [4, Corollary 8.2.1/4] with the standard definition found in [4, Definition 7.2.2/2]. We will write * \(|\cdot|_{X}\) to denote the (power-multiplicative) supremum seminorm on \(X\), * \(\mathcal{O}(X)^{\circ}:=\{f\in\mathcal{O}(X):|f|_{X}\leqslant 1\}\), * \(\mathcal{O}(X)^{\circ\circ}:=\{f\in\mathcal{O}(X):|f|_{X}<1\}\), * \(\mathcal{O}(X)^{\times\times}:=1+\mathcal{O}(X)^{\circ\circ}\), the subgroup of _small units_ in \(\mathcal{O}(X)^{\times}\), and * \(\mathcal{O}(X)_{r}^{\times\times}:=\{1+f:|f|_{X}\leqslant r\}\leqslant \mathcal{O}(X)^{\times\times}\) for each real number \(r\in(0,1)\). \(\operatorname{Pic}(X)\) will denote the _Picard group_ of \(X\) consisting of the isomorphism classes of line bundles on \(X\) with the group operation given by tensor product. When \(K^{\prime}\) is a complete field extension of \(K\), we will write \(X_{K^{\prime}}:=X\times_{K}K^{\prime}\) for the base change of \(X\) to \(K^{\prime}\), and \[X(K^{\prime})=\{\phi\colon\operatorname{Sp}K^{\prime}\to X:\phi\text{ is a morphism of rigid $K$-analytic varieties}\}\] will denote the set of \(K^{\prime}\)_-valued points of \(X\)_. Let \(A\) be an abelian group and let \(d\) be a non-zero integer. We will write * \(A[d]=\{a\in A\mid da=0\}\) to denote the \(d\)-torsion subgroup of \(A\), * \(A[p^{\prime}]:=\bigcup_{(d,p)=1}A[d]\) to denote the prime-to-\(p\) torsion subgroup of \(A\), * \(A[p^{\infty}]:=\bigcup_{n=1}^{\infty}A[p^{n}]\) to denote the \(p\)-power torsion subgroup of \(A\). Let a group \(G\) act on a set \(X\). We will write * \(G_{x}:=\{g\in G:gx=x\}\) for the stabilizer of a point \(x\in X\), and * \(X^{G}:=\{x\in X:gx=x\text{ for all }g\in G\}\) for the set of elements fixed by \(G\). When we discuss cochains, cocycles and coboundaries we will work with the continuous cochain cohomology of Tate [32]. That is, if \(G\) is a topological group and \(A\) is a topological abelian group equipped with a continuous action of \(G\) then: * \(C^{n}(G,A):=\{f\colon G^{n}\to A:f\text{ is continuous}\}\) is the set of continuous \(A\)-valued \(n\)-cochains, * \(Z^{n}(G,A)\) is the set of continuous \(A\)-valued \(n\)-cocycles, * \(B^{n}(G,A)\) is the set of continuous \(A\)-valued \(n\)-coboundaries, * \(H^{n}(G,A)\) is the \(n\)th continuous cohomology group \(Z^{n}(G,A)/B^{n}(G,A)\), * \(\delta_{G}\colon C^{0}(G,A)=A\to C^{1}(G,A)\) is the map given by \(\delta_{G}(a)(g)=g\cdot a-a\). Note that whenever \(\theta\colon A\to B\) is a \(G\)-equivariant map of topological abelian groups with continuous \(G\)-action, we have \(\delta_{G}\theta=\theta\delta_{G}\). In the particular case where \(G\) acts trivially on \(A\), we will usually write \(\operatorname{Hom}(G,A)\) instead of \(Z^{1}(G,A)\) or \(H^{1}(G,A)\) to denote the group of continuous homomorphisms from \(G\) to \(A\). For any continuous group homomorphism \(\varphi\colon G\to H\), we denote by \[\varphi^{*}:\operatorname{Hom}(H,A)\to\operatorname{Hom}(G,A)\] the map given by pre-composition by \(\varphi\). ## 2. Background from algebra ### Measures on profinite sets We begin by adapting some definitions from [12, Definitions 2.7.10]. 
**Definition 2.1.1**.: Let \(Z\) be a profinite set and let \(\mathfrak{a}\) be an abelian group. * An \(\mathfrak{a}\)_-valued measure on_\(Z\) is a function \(\nu\) from the set of clopen subsets of \(Z\) to \(\mathfrak{a}\), satisfying \(\nu(U)=\nu(V)+\nu(U\backslash V)\) whenever \(V\subseteq U\) are clopen subsets of \(Z\). 2. \(M(Z,\mathfrak{a})\) denotes the abelian group of all \(\mathfrak{a}\)-valued measures on \(Z\) under pointwise operations. 3. \(M_{0}(Z,\mathfrak{a}):=\{\nu\in M(Z,\mathfrak{a}):\nu(Z)=0\}\) is the subgroup of \(\mathfrak{a}\)_-valued measures on \(Z\) with total value zero_. **Examples 2.1.2**.: 1. If \(Z\) is any profinite set and \(\mathfrak{a}\) is a unital ring then for each \(z\in Z\) we can define a measure \(\delta_{z}\) by \[\delta_{z}(U)=\begin{cases}1&\text{ if }z\in U\\ 0&\text{ otherwise.}\end{cases}\] 2. If \(Z\) happens to be finite we can define a 'counting measure' in \(M(Z,\mathbb{Z})\) via \[\Sigma_{Z}(U)=|U|\text{ for all }U\subset Z.\] Indeed \(\Sigma_{Z}=\sum_{z\in Z}\delta_{z}\). Let \(C(Z,\mathbb{Z})\) be the set of continuous \(\mathbb{Z}\)-valued functions on \(Z\), where we give \(\mathbb{Z}\) the discrete topology. Of course every such function is locally constant. **Proposition 2.1.3**.: Let \(Z\) be a profinite set and let \(\mathfrak{a}\) be an abelian group. 1. There is a natural additive isomorphism \(M(Z,\mathfrak{a})\to\operatorname{Hom}_{\mathbb{Z}}(C(Z,\mathbb{Z}), \mathfrak{a})\). 2. Let \((Z_{i})_{i\in I}\) be a filtered inverse system of finite sets. Then every isomorphism \(Z\cong\varprojlim Z_{i}\) of profinite sets induces an isomorphisms of abelian groups \(M(Z,\mathfrak{a})\cong\varprojlim M(Z_{i},\mathfrak{a})\) and \(M_{0}(Z,\mathfrak{a})\cong\varprojlim M_{0}(Z_{i},\mathfrak{a})\). 3. The functor \(\mathfrak{a}\mapsto M(Z,\mathfrak{a})\) is exact. Proof.: (a) Let \(\nu\in M(Z,\mathfrak{a})\) and let \(f:Z\to\mathbb{Z}\) be locally constant. Because \(Z\) is profinite and hence compact, we can choose an open partition \(\{U_{1},\cdots,U_{m}\}\) of \(Z\) such that \(f_{|U_{i}}\) is constant for each \(i\). Choose \(z_{i}\in U_{i}\) for each \(i=1,\cdots,m\) and define \(\langle\nu,f\rangle:=\sum\limits_{i=1}^{m}f(z_{i})\nu(U_{i})\in\mathfrak{a}\). Then \(\langle\nu,f\rangle\) does not depend on the choice of open partition or the choice of the \(z_{i}\)'s, and \(\nu\mapsto(f\mapsto\langle\nu,f\rangle)\) defines an additive map \(M(Z,\mathfrak{a})\to\operatorname{Hom}_{\mathbb{Z}}(C(Z,\mathbb{Z}), \mathfrak{a})\). Let \(1_{U}\) denote the characteristic function of the clopen subset \(U\) of \(Z\). Given an additive map \(g:C(Z,\mathbb{Z})\to\mathfrak{a}\), setting \(\nu(U):=g(1_{U})\in\mathfrak{a}\) for each clopen \(U\) defines an element \(\nu\in M(Z,\mathfrak{a})\) such that \(\langle\nu,f\rangle=g(f)\) for all \(f\in C(Z,\mathbb{Z})\) because the characteristic functions \(1_{U}\) generate \(C(Z,\mathbb{Z})\) as an abelian group. The function \(g\mapsto\nu\) is then an inverse to \(M(Z,\mathfrak{a})\to\operatorname{Hom}_{\mathbb{Z}}(C(Z,\mathbb{Z}), \mathfrak{a})\). (b) The isomorphism \(Z\cong\varprojlim Z_{i}\) induces an isomorphism of abelian groups \(C(Z,\mathbb{Z})\cong\varprojlim C(Z_{i},\mathbb{Z})\). The functor \(\operatorname{Hom}_{\mathbb{Z}}(-,\mathfrak{a})\) converts colimits into limits; now apply part (a) to get \(M(Z,\mathfrak{a})\cong\varprojlim M(Z_{i},\mathfrak{a})\). Since taking limits commutes with taking kernels the other part follows immediately. 
(c) By a theorem of Nobeling -- see [28, Theorem 5.4] -- \(C(Z,\mathbb{Z})\) is a free abelian group. Hence \(\operatorname{Hom}_{\mathbb{Z}}(C(Z,\mathbb{Z}),-)\) is exact; now apply part (a). **Definition 2.1.4**.: For every abelian group \(\mathfrak{a}\) the map \(Z\mapsto M(Z,\mathfrak{a})\) from profinite sets to abelian groups extends to a functor from the category of profinite sets and continuous functions to the category of abelian groups sending a continuous function \(f\colon Z_{1}\to Z_{2}\) to the group homomorphism \[f_{*}\colon M(Z_{1},\mathfrak{a})\to M(Z_{2},\mathfrak{a});\ \ f_{*}(\nu)(U)=\nu(f^{-1}(U)).\] In particular an action of group \(G\) on a profinite set \(Z\) by homeomorphisms induces an action on \(M(Z,\mathfrak{a})\) by automorphisms of abelian groups via \(g\cdot\nu=g_{*}(\nu)\). **Remark 2.1.5**.: If \(\nu\in M_{0}(Z_{1},\mathfrak{a})\) in the setting of Definition 2.1.4 then \(f_{*}(\nu)\in M_{0}(Z_{2},\mathfrak{a})\) since \(f_{*}(\nu)(Z_{2})=\nu(f^{-1}(Z_{2}))=\nu(Z_{1})=0\) so \(Z\mapsto M_{0}(Z,\mathfrak{a})\) also defines a functor. The following result will be useful in what follows. **Lemma 2.1.6**.: Let \(Z\) be a profinite set and let \(d\) be a non-zero integer. Then the sequence \(0\to M_{0}(Z,\mathbb{Z})\stackrel{{ d}}{{\longrightarrow}}M_{0} (Z,\mathbb{Z})\to M_{0}(Z,\mathbb{Z}/d\mathbb{Z})\to 0\) is exact. Proof.: Consider the multiplication-by-\(d\) map on the short exact sequence of abelian groups \(0\to M_{0}(Z,\mathbb{Z})\to M(Z,\mathbb{Z})\to\mathbb{Z}\to 0\). This gives a \(3\times 3\) diagram of abelian groups, whose rows are exact, whose second column is exact by Proposition 2.1.3(c) and whose third column is exact for trivial reasons. Hence the first column is also exact by the Nine Lemma. Recall our conventions concerning continuous group cohomology from SS1.6. **Lemma 2.1.7**.: Let \(G\) be a profinite group with a continuous transitive action on a finite set \(X\). Then * \(M(X,\mathbb{Z})^{G}=\mathbb{Z}\cdot\Sigma_{X}\); * \(M_{0}(X,\mathbb{Z})^{G}=0\); * \(H^{1}(G,M(X,\mathbb{Z}))=0\). Proof.: (a) It is easy to compute that \(M(X,\mathbb{Z})^{G}=\mathbb{Z}\cdot\Sigma_{X}\) where \(\Sigma_{X}\) is the counting measure from Example 2.1.2(2). (b) Use \(M_{0}(X,\mathbb{Z})^{G}=\ker\big{(}M(X,\mathbb{Z})^{G}\to\mathbb{Z};\nu\mapsto \nu(X)\big{)}\) and \(n\Sigma_{X}(X)=n|X|\). (c) Choose an arbitrary point \(x\in X\). Since \(X\) is finite, \(M(X,\mathbb{Z})\) is isomorphic to the (co)induced module \(\operatorname{Ind}_{G}^{G_{x}}\mathbb{Z}\) in the sense of [20, Chapter I, SS6], where \(\mathbb{Z}\) denotes the trivial \(G_{x}\)-module. Then by Shapiro's Lemma, [20, Proposition 1.6.4], we have \[H^{1}(G,M(X,\mathbb{Z}))\cong H^{1}(G_{x},\mathbb{Z}).\] Since the action of \(G_{x}\) on \(\mathbb{Z}\) is trivial, \(G_{x}\) is profinite and \(\mathbb{Z}\) is a discrete torsion-free group we deduce that \(H^{1}(G_{x},\mathbb{Z})=\operatorname{Hom}(G_{x},\mathbb{Z})=0\). **Proposition 2.1.8**.: Let \(G\) be a profinite group acting continuously and transitively on two non-empty finite sets \(X,Y\), and let \(\pi\colon X\to Y\) be a \(G\)-equivariant function. Let \(d\geqslant 1\) be an integer and let \(h:=\gcd(d,|X|)\). Then * \(M(X,\mathbb{Z}/d\mathbb{Z})^{G}\) is cyclic of order \(d\), generated by the image of \(\Sigma_{X}\), * \(M_{0}(X,\mathbb{Z}/d\mathbb{Z})^{G}\) is cyclic of order \(h\), generated by the image of \(\frac{d}{h}\Sigma_{X}\), and * \(\pi_{*}\left(\Sigma_{X}\right)=\frac{|X|}{|Y|}\Sigma_{Y}\). 
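For example (a small illustrative instance of part (b), added here for concreteness): if \(G\) acts transitively on a set \(X\) with \(|X|=6\) and we take \(d=4\), then \(h=\gcd(4,6)=2\), so \(M_{0}(X,\mathbb{Z}/4\mathbb{Z})^{G}\) is cyclic of order \(2\), generated by the image of \(2\Sigma_{X}\). Indeed
\[(2\Sigma_{X})(X)=12\equiv 0\operatorname{mod}4\quad\text{whereas}\quad\Sigma_{X}(X)=6\not\equiv 0\operatorname{mod}4,\]
so the image of \(\Sigma_{X}\) itself does not have total value zero.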
Proof.: (a) The short exact sequence \[0\to M(X,\mathbb{Z})\stackrel{{ d}}{{\to}}M(X,\mathbb{Z})\to M(X,\mathbb{Z}/d\mathbb{Z})\to 0\] induces a long exact sequence of cohomology \[0\to M(X,\mathbb{Z})^{G}\stackrel{{ d}}{{\to}}M(X,\mathbb{Z})^{G}\to M(X,\mathbb{Z}/d\mathbb{Z})^{G}\to H^{1}(G,M(X,\mathbb{Z}))\] whose final term is \(0\) by Lemma 2.1.7(c). Since \(M(X,\mathbb{Z})^{G}=\mathbb{Z}\cdot\Sigma_{X}\) by Lemma 2.1.7(a) the result follows. (b) Note that \(M_{0}(X,\mathbb{Z}/d\mathbb{Z})^{G}=\{\nu\in M(X,\mathbb{Z}/d\mathbb{Z})^{G}:\nu(X)=d\mathbb{Z}\}\). By part (a), every such \(\nu\) is the image of \(n\Sigma_{X}\) for some \(n\). But \(n\Sigma_{X}(X)=n|X|\) so \[M_{0}(X,\mathbb{Z}/d\mathbb{Z})^{G}=\{n\Sigma_{X}\in M(X,\mathbb{Z}/d\mathbb{Z}):d\text{ divides }n|X|\}\] giving the result. (c) Whenever \(U\subseteq Y\), we have \(\pi_{*}(\Sigma_{X})(U)=\Sigma_{X}(\pi^{-1}(U))=\big{|}\pi^{-1}(U)\big{|}\). Since \(G\) acts transitively on \(X\) and \(Y\), all the fibres have order \(\frac{|X|}{|Y|}\) and so \[\big{|}\pi^{-1}(U)\big{|}=\frac{|X|}{|Y|}|U|\] as required. ### Some stabilisers in \(G^{0}\) and their linear characters **Notation 2.2.1**.: 1. _\(G^{0}:=\{g\in GL_{2}(F):v_{\pi_{F}}(\det g)=0\}\)_._ 2. _The Iwahori subgroup_ \(I\) _of_ \(GL_{2}(\mathcal{O}_{F})\) _is_ \[I:=\left\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in GL_{2}(\mathcal{O}_{F}):c\equiv 0\operatorname{mod}\pi_{F}\mathcal{O}_{F}\right\}.\] 3. _We write_ \(w:=\begin{pmatrix}0&1\\ \pi_{F}&0\end{pmatrix}\in GL_{2}(F)\)_._ 4. _We write_ \(A:=GL_{2}(\mathcal{O}_{F})\) _and_ \(B:={}^{w}A=wAw^{-1}\)_._ We recall from [31, §§II.1.2-3] that if \(\mathcal{BT}\) is the Bruhat-Tits tree associated with \(PGL_{2}(F)\), then \(A,B\) and \(I\) arise as stabilisers in \(G^{0}\) under the natural \(G^{0}\)-action on \(\mathcal{BT}\) as follows: there is a vertex \(s\) of \(\mathcal{BT}\) such that \((s\,\,ws)\) is an edge of \(\mathcal{BT}\), and \[A=G^{0}_{s},\quad B=G^{0}_{ws},\quad\text{and}\quad I=G^{0}_{(s\,\,ws)}.\] In particular, we have \(I=A\cap B\). The following classical result will be crucial to our arguments later on in §4.4. **Theorem 2.2.2**.: \(G^{0}\) is the amalgamated product of its open subgroups \(A\) and \(B\) over their intersection \(I\): \[G^{0}=A\underset{I}{*}B.\] Proof.: This is [31, Theorem II.3]. Recall that \(F^{\times}\) is the direct product of its subgroups \(\mu_{p^{\prime}}(F)\), \(\mathcal{O}_{F}^{\times\times}\) and \(\langle\pi_{F}\rangle\): \[F^{\times}\cong\mu_{p^{\prime}}(F)\times\mathcal{O}_{F}^{\times\times}\times\langle\pi_{F}\rangle.\] We write \(a\mapsto\widehat{a}\) to denote the homomorphism that is the projection \(F^{\times}\to\mu_{p^{\prime}}(F)\) with kernel \(\mathcal{O}_{F}^{\times\times}\times\langle\pi_{F}\rangle\). We will use the same notation \(\hat{\cdot}\) to denote the analogous projection \(L^{\times}\to\mu_{p^{\prime}}(L)\) for other finite extensions \(L/\mathbb{Q}_{p}\). **Lemma 2.2.3**.: 1. \(\operatorname{Hom}(A,K^{\times})[p^{\prime}]=\left\{\widehat{\det}^{k}:k\in\mathbb{Z}/(q-1)\mathbb{Z}\right\}\)_._ 2. Every element of \(\operatorname{Hom}(I,K^{\times})[p^{\prime}]\) is of the form \[\begin{pmatrix}a&b\\ \pi_{F}c&d\end{pmatrix}\mapsto\widehat{a}^{n_{1}}\widehat{d}^{n_{2}}\] for some \(n_{1},n_{2}\in\mathbb{Z}/(q-1)\mathbb{Z}\). 3. \(\left(\operatorname{Hom}(I,K^{\times})[p^{\prime}]\right)^{\langle w\rangle}=\left\{\widehat{\det}^{k}:k\in\mathbb{Z}/(q-1)\mathbb{Z}\right\}\). 
Proof.: (a) For any \(\theta\in\operatorname{Hom}(A,K^{\times})[p^{\prime}]\), every pro-\(p\) subgroup of \(A\) is contained in \(\ker\theta\). Since \(SL_{2}(\mathcal{O}_{F})\) is generated by its pro-\(p\) subgroups it follows that \(\theta\) factors through \(\det\colon A\to\mathcal{O}_{F}^{\times}\). Since \(\mathcal{O}_{F}^{\times\times}\) is also pro-\(p\) we see that any element of \(\operatorname{Hom}(\mathcal{O}_{F}^{\times},K^{\times})[p^{\prime}]\) factors through \(\widehat{\cdot}\). It remains to observe that \(\operatorname{Hom}(\mu_{p^{\prime}}(F),K^{\times})\) consists of maps of the form \(a\mapsto a^{k}\) for some \(k\in\mathbb{Z}/(q-1)\mathbb{Z}\), because \(\mu_{p^{\prime}}(F)\) is cyclic of order \(q-1\). (b) Once again, if \(\theta\in\operatorname{Hom}(I,K^{\times})[p^{\prime}]\) then any pro-\(p\) subgroup of \(I\) is contained in \(\ker\theta\). The kernel of the map \(I\to\mu_{p^{\prime}}(F)\times\mu_{p^{\prime}}(F)\) sending \(\begin{pmatrix}a&b\\ \pi_{F}c&d\end{pmatrix}\) to \((\widehat{a},\widehat{d})\) is pro-\(p\). Now the result follows as in (a). (c) By part (b), every \(\chi\in\operatorname{Hom}(I,K^{\times})[p^{\prime}]\) is of the form \[\begin{pmatrix}a&b\\ \pi_{F}c&d\end{pmatrix}\mapsto\widehat{a}^{n_{1}}\widehat{d}^{n_{2}}\] for some \(n_{1},n_{2}\in\mathbb{Z}/(q-1)\mathbb{Z}\). Now \[(w\cdot\chi)\left(\begin{pmatrix}a&b\\ \pi_{F}c&d\end{pmatrix}\right)=\chi\left(\begin{pmatrix}d&c\\ \pi_{F}b&a\end{pmatrix}\right)=\widehat{a}^{n_{2}}\widehat{d}^{n_{1}}.\] Thus if \(\chi=w\cdot\chi\) then \(n_{1}=n_{2}\) and \(\chi=\widehat{\det}^{n_{1}}\). **Remark 2.2.4**.: It follows from the proof of Lemma 2.2.3 that any \(p^{\prime}\)-quotient of \(A,B\) or \(I\) has exponent dividing \(q-1\). We will now recall the structure of the stabilisers in \(GL_{2}(F)\) of points in \(\mathbb{P}^{1}(L)\) for quadratic extensions \(L\) of \(F\), under the \(GL_{2}(F)\)-action by Mobius transformations. **Lemma 2.2.5**.: Suppose that \(F(z)\) is a quadratic extension of \(F\). Let \[N,\operatorname{tr}\colon F(z)\to F\] be the norm and trace maps respectively. 1. The stabiliser \(GL_{2}(F)_{z}\) of \(z\) in \(GL_{2}(F)\) is \[GL_{2}(F)_{z}=\left\{\begin{pmatrix}a&-cN(z)\\ c&a-c\operatorname{tr}(z)\end{pmatrix}\middle|(a,c)\in F^{2}\backslash\{(0,0)\}\right\}.\] 2. 
There is a commutative diagram of groups \[\begin{array}{ccc}GL_{2}(F)_{z}&\stackrel{{j_{z}}}{{\longrightarrow}}&F(z)^{\times}\\ {\scriptstyle\det}\Big\downarrow&&\Big\downarrow{\scriptstyle N}\\ F^{\times}&=\!=\!=\!=&F^{\times}\end{array}\] in which the top horizontal map \[j_{z}\colon\begin{pmatrix}a&-cN(z)\\ c&a-c\operatorname{tr}(z)\end{pmatrix}\mapsto a-cz\] is an isomorphism of topological abelian groups. 3. \(j_{z}(G^{0}_{z})=\mathcal{O}^{\times}_{F(z)}\) and \(j_{z}(SL_{2}(F)_{z})=\ker N\cap\mathcal{O}^{\times}_{F(z)}\). 4. Let \(\sigma\in\operatorname{Gal}(F(z)/F)\) be the unique element of order two. Then \[\sigma\cdot g=\det(g)g^{-1}\quad\text{for all}\quad g\in GL_{2}(F)_{z}\] defines a continuous action of \(\operatorname{Gal}(F(z)/F)\) on \(GL_{2}(F)_{z}\) such that 1. \(j_{z}\) is \(\operatorname{Gal}(F(z)/F)\)-equivariant, 2. \(G^{0}_{z}\), \(SL_{2}(F)_{z}\) and the Sylow pro-\(p\)-subgroup \(P_{z}\) of \(SL_{2}(F)_{z}\) are all \(\operatorname{Gal}(F(z)/F)\)-stable. Proof.: (a) We can compute that \(\frac{az+b}{cz+d}=z\) if and only if \(cz^{2}+(d-a)z-b=0\). Moreover the minimal polynomial of \(z\) is \(t^{2}-\operatorname{tr}(z)t+N(z)\). So \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in GL_{2}(F)\) fixes \(z\) if and only if \((d-a)=-c\operatorname{tr}(z)\) and \(b=-cN(z)\) as claimed. (b) Since \[\det\begin{pmatrix}a&-cN(z)\\ c&a-c\operatorname{tr}(z)\end{pmatrix}=a^{2}-ac\operatorname{tr}(z)+c^{2}N(z)=N(a-cz)\] the given diagram commutes. For any \((a_{1},c_{1}),(a_{2},c_{2})\in F^{2}\backslash\{(0,0)\}\), we have \[\begin{pmatrix}a_{1}&-c_{1}N(z)\\ c_{1}&a_{1}-c_{1}\operatorname{tr}(z)\end{pmatrix}\begin{pmatrix}a_{2}&-c_{2}N(z)\\ c_{2}&a_{2}-c_{2}\operatorname{tr}(z)\end{pmatrix}=\begin{pmatrix}a_{1}a_{2}-c_{1}c_{2}N(z)&*\\ c_{1}a_{2}+a_{1}c_{2}-c_{1}c_{2}\operatorname{tr}(z)&*\end{pmatrix}\in GL_{2}(F)_{z}\] and \[(a_{1}-c_{1}z)(a_{2}-c_{2}z) = (a_{1}a_{2}-(a_{1}c_{2}+c_{1}a_{2})z+c_{1}c_{2}z^{2})\] \[= (a_{1}a_{2}-c_{1}c_{2}N(z))-(c_{1}a_{2}+a_{1}c_{2}-c_{1}c_{2}\operatorname{tr}(z))z\in F(z)^{\times}.\] This implies that \(j_{z}\) is a group isomorphism. 
(c) Since \(G^{0}_{z}=\ker\left(v_{\pi_{F}}\circ\det\colon GL_{2}(F)_{z}\to\mathbb{Z}\right)\), part (b) implies that \[j_{z}(G^{0}_{z})=\ker\left(v_{\pi_{F}}\circ N\colon F(z)^{\times}\to\mathbb{Z} \right)=\mathcal{O}^{\times}_{F(z)}.\] Similarly since \(SL_{2}(F)_{z}=\ker\left(\det\colon GL_{2}(F)_{z}\to F^{\times}\right)\), we see that \(j_{z}(SL_{2}(F)_{z})=\ker\left(N\colon F(z)^{\times}\to F^{\times}\right)\), which is contained in \(\mathcal{O}^{\times}_{F(z)}\). (d) We have \(N(\lambda)=\lambda\sigma(\lambda)\) for any \(\lambda\in F(z)^{\times}\). Now it follows from (b) that \[j_{z}^{-1}(\sigma(\lambda))=\det(j_{z}^{-1}(\lambda))j_{z}^{-1}(\lambda)^{-1} \quad\text{for any}\quad\lambda\in F(z)^{\times}\] and we can define the claimed action by transport of structure. Since \(SL_{2}(F)_{z}\) is \(\sigma\)-stable and \(P_{z}\) is a characteristic subgroup of \(SL_{2}(F)_{z}\), the rest follows. **Corollary 2.2.6**.: Suppose that \(F(z)\) is a quadratic extension of \(F\). Then \[G^{0}_{z}\cdot SL_{2}(F)=G^{0}.\] Proof.: Let \(g\in G^{0}\) so that \(\det(g)\in\mathcal{O}^{\times}_{F}\). The norm map \(N:\mathcal{O}^{\times}_{F(z)}\to\mathcal{O}^{\times}_{F}\) is surjective by [30, Chapter V, Corollary to Proposition 3], so we can find \(x\in\mathcal{O}^{\times}_{F(z)}\) such that \(N(x)=\det(g)\). Using Lemma 2.2.5(c), we can choose \(h\in G^{0}_{z}\) such that \(j_{z}(h)=x\). Then Lemma 2.2.5(b) implies that \(\det(h)=N(j_{z}(h))=N(x)=\det(g)\), so \(g=h\cdot h^{-1}g\in G^{0}_{z}\cdot SL_{2}(F)\) as required. **Remark 2.2.7**.: Note that, with the notation of Lemma 2.2.5, \(GL_{2}(F)_{z}=GL_{2}(F)_{\sigma\cdot z}\) but \(j_{z}\neq j_{\sigma\cdot z}\). However \(j_{z}^{*}\) does induce a canonical bijection \[\operatorname{Hom}(G_{z}^{0},K^{\times})/\operatorname{Gal}(F(z)/F)\to \operatorname{Hom}(\mathcal{O}_{F(z)}^{\times},K^{\times})/\operatorname{Gal} (F(z)/F).\] When \(F(z)\) a quadratic extension of \(F\), we define a continuous homomorphism \(\widehat{j_{z}}\colon G_{z}^{0}\to F(z)^{\times}\) by setting \[\widehat{j_{z}}(g):=\widehat{j_{z}(g)}\quad\text{for all}\quad g\in G_{z}^{0}.\] **Lemma 2.2.8**.: Suppose that \(F(z)\) is a quadratic extension of \(F\). Then \[\operatorname{Hom}(G_{z}^{0},K(z)^{\times})[p^{\prime}]=\operatorname{Hom}(G _{z}^{0},\mu_{p^{\prime}}(F(z)))=\left\langle\widehat{j_{z}}\right\rangle\] is a cyclic group. Its order is \(q^{2}-1\) if \(F(z)/F\) is unramified, and \(q-1\) otherwise. Proof.: Since \(j_{z}\colon G_{z}^{0}\to\mathcal{O}_{F(z)}^{\times}\) is an isomorphism of topological groups by Lemma 2.2.5(c), it suffices to show that every element of \(\operatorname{Hom}(\mathcal{O}_{F(z)}^{\times},K(z)^{\times})[p^{\prime}]\) is of the form \(a\mapsto\hat{a}^{k}\) for some \(k\in\mathbb{Z}\). Since the kernel of \(\hat{\ \ **Definition 2.3.5**.: Let \(P^{1}_{L}\) denote the following subgroup of \(\mathcal{O}^{\times}_{L}\): \[P^{1}_{L}:=\ker N_{L/F}\quad\cap\quad\ker(\omega_{L}:\mathcal{O}^{\times}_{L} \to k^{\times}_{L}).\] Let \(\varrho:\mathcal{O}^{\times}_{D}\twoheadrightarrow(\mathcal{O}^{\times}_{D})^{ \mathrm{ab}}\) denote the canonical projection. 
**Proposition 2.3.6**.: The \(F\)-algebra homomorphism \(\iota:F\hookrightarrow L\) induces an isomorphism of profinite abelian groups \[\overline{\varrho\circ\iota}:\mathcal{O}^{\times}_{L}/P^{1}_{L}\stackrel{{ \cong}}{{\longrightarrow}}(\mathcal{O}^{\times}_{D})^{\mathrm{ab}}.\] Proof.: The projection \(\varrho\) appears in the following diagram: We have \(\mathrm{Nrd}\circ_{L}=N_{L/F}\) by [23, Proposition 16.2(b)]. The inclusion \(\iota:L\hookrightarrow D\) induces ring homomorphisms \(\iota:\mathcal{O}_{L}\hookrightarrow\mathcal{O}_{D}\) and \(\overline{\iota}:k_{L}\hookrightarrow k_{D}\)3, and we have \(\omega_{D}\circ\iota=\overline{\iota}\circ\omega_{L}\). We can now see that the square on the left is commutative. Since \(\mathcal{O}^{\times}_{F}\times k^{\times}_{D}\) is abelian, \(\mathrm{Nrd}\times\omega_{D}\) factors through \(\varrho\), giving the diagonal arrow \(q\) and making the entire diagram commutative. Footnote 3: In fact, \(\overline{\iota}\) is an isomorphism Proposition 2.3.4 tells us that \(\ker\varrho=\ker(\mathrm{Nrd}\times\omega_{D})\), so the map \(q\) is injective. Since \(\overline{\iota}\) is injective, chasing the diagram shows that \[\ker(\varrho\circ\iota)=\ker(q\circ\varrho\circ\iota)=\ker((\mathrm{Nrd} \times\omega_{D})\circ\iota)=\ker(N_{L/F}\times\omega_{L})=P^{1}_{L}.\] Hence \(\varrho\circ\iota:\mathcal{O}^{\times}_{L}\to(\mathcal{O}^{\times}_{D})^{ \mathrm{ab}}\) descends to give an injective group homomorphism \[\overline{\varrho\circ\iota}:\mathcal{O}^{\times}_{L}/P^{1}_{L}\hookrightarrow( \mathcal{O}^{\times}_{D})^{\mathrm{ab}}.\] Since both \(\mathcal{O}^{\times}_{L}/P^{1}_{L}\) and \((\mathcal{O}^{\times}_{D})^{\mathrm{ab}}\) are abelian groups that are virtually pro-\(p\), to show that \(\overline{\varrho\circ\iota}\) is surjective, it suffices to check this on the Sylow pro-\(p\) subgroups of both groups, and on the subgroups of elements of order coprime to \(p\). The Sylow pro-\(p\) subgroup \(S\) of \((\mathcal{O}^{\times}_{D})^{\mathrm{ab}}\) appears in the commutative triangle where the diagonal arrow is surjective by [30, Chapter V, Proposition 3(a)]. Since \(q\) is injective, we see that \(\varrho\circ\iota:1+\pi_{F}\mathcal{O}_{L}\to S\) is surjective as well. Finally, \(\ker\omega_{D}=1+\mathcal{P}_{D}\) is a pro-\(p\) subgroup of \(\mathcal{O}^{\times}_{D}\), so \(\varrho:\mathcal{O}^{\times}_{D}\twoheadrightarrow(\mathcal{O}^{\times}_{D})^{ \mathrm{ab}}\) induces a surjective homomorphism \(k^{\times}_{D}\twoheadrightarrow(\mathcal{O}^{\times}_{D})^{\mathrm{ab}}[p^{ \prime}]\). Since \(\overline{\iota}:k_{L}\to k_{D}\) is an isomorphism, this implies that \(\varrho\circ i:\mathcal{O}^{\times}_{L}[p^{\prime}]\to(\mathcal{O}^{\times}_ {D})^{\mathrm{ab}}[p^{\prime}]\) is surjective as well. **Corollary 2.3.7**.: Let \(\iota:L\hookrightarrow D\) be an \(F\)-algebra homomorphism. 1. There is an isomorphism of abelian groups \[\overline{\varrho\circ\iota}^{*}:\mathrm{Hom}(\mathcal{O}^{\times}_{D},K^{ \times})\stackrel{{\cong}}{{\longrightarrow}}\mathrm{Hom}( \mathcal{O}^{\times}_{L}/P^{1}_{L},K^{\times}).\] 2. This induces a bijection \[\overline{\overline{\varrho\circ t^{*}}}:\operatorname{Hom}(\mathcal{O}_{D}^{ \times},K^{\times})/D^{\times}\xrightarrow{\cong}\operatorname{Hom}(\mathcal{O}_ {L}^{\times}/P_{L}^{1},K^{\times})/\operatorname{Gal}(L/F).\] 3. The bijection \(\overline{\overline{\varrho\circ t^{*}}}\) does not depend on the choice of \(\iota\). Proof.: (a) This follows immediately from Proposition 2.3.6. 
(b) Let \(\operatorname{Gal}(L/F)=\langle\sigma\rangle\); then, by [23, Theorem 17.10, Proposition 15.1a], we can find an element \(\Pi\in D\) such that \(\Pi^{2}=\pi_{F}\) and \[\Pi\;\iota(a)\;\Pi^{-1}=\iota(\sigma(a))\quad\text{for all}\quad a\in L.\] Since \(\mathcal{O}_{D}^{\times}\) is normal in \(D^{\times}\), there is a natural conjugation action of \(D^{\times}\) on \((\mathcal{O}_{D}^{\times})^{\operatorname{ab}}\) which evidently factors through \(D^{\times}/F^{\times}\mathcal{O}_{D}^{\times}\), a group of order \(2\) generated by the image of \(\Pi\). Then the above formula shows that this action of \(D^{\times}/F^{\times}\mathcal{O}_{D}^{\times}\) on \((\mathcal{O}_{D}^{\times})^{\operatorname{ab}}\) corresponds under the isomorphism \(\overline{\varrho\circ t}\) to the natural \(\operatorname{Gal}(L/F)\)-action on \(\mathcal{O}_{L}^{\times}/P_{L}^{1}\). Hence, the isomorphism in part (a) is equivariant with respect to the \(D^{\times}/F^{\times}\mathcal{O}_{D}^{\times}\)-action on \(\operatorname{Hom}(\mathcal{O}_{D}^{\times},K^{\times})\) and the \(\operatorname{Gal}(L/F)\)-action on \(\operatorname{Hom}(\mathcal{O}_{L}^{\times}/P_{L}^{1},K^{\times})\), when we identify \(\operatorname{Gal}(L/F)\) with \(D^{\times}/F^{\times}\mathcal{O}_{D}^{\times}\) via \(\sigma\mapsto\Pi\). (c) Let \(\iota^{\prime}:L\hookrightarrow D\) be another \(F\)-algebra homomorphism. Then by Corollary 2.3.3 we can find \(d\in D^{\times}\) such that \(\iota^{\prime}(x)=d\;\iota(x)\;d^{-1}\) for all \(x\in L\). Let \(c_{d}:\mathcal{O}_{D}^{\times}\to\mathcal{O}_{D}^{\times}\) and \(\overline{c_{d}}:(\mathcal{O}_{D}^{\times})^{\operatorname{ab}}\to(\mathcal{O }_{D}^{\times})^{\operatorname{ab}}\) denote the conjugations by \(d\), so that \(\iota^{\prime}=c_{d}\circ\iota\) and \(\overline{c_{d}}\circ\varrho=\varrho\circ c_{d}\). Then \(\overline{c_{d}}\circ\varrho\circ\iota=\varrho\circ c_{d}\circ\iota=\varrho \circ\iota^{\prime}\), so \(\overline{c_{d}}\circ\overline{\varrho\circ t}=\overline{\varrho\circ t^{ \prime}}\). Hence \(\overline{\varrho\circ t^{*}}=\overline{\varrho\circ t^{*}}\circ\overline{c_{d }}^{*}\) and we can now see that \(\overline{\varrho\circ t^{*}}=\overline{\overline{\varrho\circ t^{*}}}\). ### Equivariant sheaves and amalgamated products We begin by recalling some material from [2, SS2.3]. Let \(X\) be a set equipped with a Grothendieck topology in the sense of [4, Definition 9.1.1/1]. Note that we do not assume at the outset that there is a final object in the category of admissible open subsets of \(X\), as \(X\) is not itself required to be admissible open in the \(G\)-topology. Let \(\operatorname{Homeo}(X)\) be the group of continuous bijections from \(X\) to itself. We say that a group \(G\)_acts on_\(X\) if there is given a group homomorphism \(\rho:G\to\operatorname{Homeo}(X)\). If this action is understood, we write \(gU\) to denote the image of an admissible open subset \(U\) of \(X\) under the action of \(g\in G\). For every \(g\in G\), there is an auto-equivalence \(\rho(g)_{*}\) of the category of sheaves on \(X\), with inverse \(\rho(g)^{*}=\rho(g^{-1})_{*}\). To simplify the notation, we will simply denote these auto-equivalences by \(g_{*}\) and \(g^{*}\), respectively. Thus \[(g_{*}\mathcal{F})(U)=\mathcal{F}(g^{-1}U)\quad\text{and}\quad(g^{*}\mathcal{ F})(U)=\mathcal{F}(gU)\] for all admissible open subsets \(U\) of \(X\) and all \(g\in G\). 
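We note in passing (a one-line verification that we record explicitly, since the cocycle condition in Definition 2.4.1 below is stated with respect to this convention) that for any \(g,h\in G\) and any admissible open subset \(U\) of \(X\),
\[\big(h^{*}(g^{*}\mathcal{F})\big)(U)=(g^{*}\mathcal{F})(hU)=\mathcal{F}(ghU)=\big((gh)^{*}\mathcal{F}\big)(U),\]
so that \(h^{*}\circ g^{*}=(gh)^{*}\) on sheaves.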
**Definition 2.4.1**.: Let \(G\) act on \(X\), and let \(\mathcal{F}\) be a presheaf of \(R\)-modules on \(X\), where \(R\) is a commutative base ring. 1. An \(R\)_-linear equivariant structure_ on \(\mathcal{F}\) is a set \(\{g^{\mathcal{F}}:g\in G\}\), where \[g^{\mathcal{F}}:\mathcal{F}\to g^{*}\mathcal{F}\] is a morphism of presheaves of \(R\)-modules for each \(g\in G\), such that 2. \[(gh)^{\mathcal{F}}=h^{*}(g^{\mathcal{F}})\circ h^{\mathcal{F}}\quad\text{for all}\quad g,h\in G,\quad\text{and}\quad 1^{ \mathcal{F}}=1_{\mathcal{F}}.\] 3. An \(R\)_-linear_\(G\)_-equivariant presheaf_ is a pair \((\mathcal{F},\{g^{\mathcal{F}}\}_{g\in G})\), where \(\mathcal{F}\) is a presheaf of \(R\)-modules on \(X\), and \(\{g^{\mathcal{F}}\}_{g\in G}\) is an \(R\)-linear equivariant structure on \(\mathcal{F}\). 3. A _morphism_ of \(R\)-linear \(G\)-equivariant presheaves \[\varphi:(\mathcal{F},\{g^{\mathcal{F}}\})\to(\mathcal{F}^{\prime},\{g^{ \mathcal{F}^{\prime}}\})\] is a morphism of presheaves of \(R\)-modules \(\varphi:\mathcal{F}\to\mathcal{F}^{\prime}\) such that \[g^{*}(\varphi)\circ g^{\mathcal{F}}=g^{\mathcal{F}^{\prime}}\circ\varphi\quad \text{for all}\quad g\in G.\] We will frequently use this abuse of notation, and simply write \(\varphi(x)\) to mean \(\varphi(U)(x)\) if \(x\) is a section of \(\mathcal{F}\) over the admissible open subset \(U\) of \(X\). Note that with this abuse of notation, the cocycle condition (2) becomes simply \[g^{\mathcal{F}}(h^{\mathcal{F}}(x))=(gh)^{\mathcal{F}}(x)\quad\text{for all} \quad x\in\mathcal{F},g,h\in G. \tag{3}\] When the base ring \(R\) and the \(R\)-linear equivariant structure on a sheaf \(\mathcal{F}\) of \(R\)-modules is understood, we will simply say that \(\mathcal{F}\) is a \(G\)_-equivariant sheaf_, and omit the equivariant structure from the notation. **Definition 2.4.2**.: Let \(\mathcal{F}\) be a presheaf of \(R\)-modules on \(X\). 1. An _automorphism of \(\mathcal{F}\) over \(X\)_ is a pair \((\alpha,\beta)\), where \(\alpha\in\mathrm{Homeo}(X)\) and \(\beta:\mathcal{F}\to\alpha^{*}\mathcal{F}\) is an \(R\)-linear isomorphism of presheaves on \(X\). 2. Define \(\mathrm{Aut}(\mathcal{F}/X)\) to be the set of all automorphisms of \(\mathcal{F}\) over \(X\). 3. Given \((\alpha_{1},\beta_{1}),(\alpha_{2},\beta_{2})\in\mathrm{Aut}(\mathcal{F}/X)\), define \[(\alpha_{1},\beta_{1})\square(\alpha_{2},\beta_{2}):=(\alpha_{1}\alpha_{2}, \alpha_{2}^{*}(\beta_{1})\circ\beta_{2}).\] This is again an element of \(\mathrm{Aut}(\mathcal{F}/X)\). **Lemma 2.4.3**.: Let \(\mathcal{F}\) be a presheaf of \(R\)-modules on \(X\). Then the binary operation \(\square\) turns \(\mathrm{Aut}(\mathcal{F}/X)\) into a group. Proof.: The identity element is \((1_{X},1_{\mathcal{F}})\). Let \((\alpha_{i},\beta_{i})\), \(i=1,2,3\) be three elements of \(\mathrm{Aut}(\mathcal{F}/X)\). Checking that the operation \(\square\) is associative boils down to the formula \[\alpha_{3}^{*}(\alpha_{2}^{*}(\beta_{1})\circ\beta_{2})\circ\beta_{3}=(\alpha _{2}\alpha_{3})^{*}(\beta_{1})\circ\alpha_{3}^{*}(\beta_{2})\circ\beta_{3},\] which is readily verified. The inverse of \((\alpha,\beta)\in\mathrm{Aut}(\mathcal{F}/X)\) is \((\alpha^{-1},\alpha_{*}(\beta)^{-1})\). By Definition 2.4.2(c), the first projection map \(\mathrm{pr}_{1}:\mathrm{Aut}(\mathcal{F}/X)\to\mathrm{Homeo}(X)\) is a group homomorphism. **Definition 2.4.4**.: Let \(G\) be a group acting on \(X\) via \(\rho:G\to\mathrm{Homeo}(X)\), and let \(\mathcal{F}\) be a presheaf of \(R\)-modules on \(X\). 
Form the fibre product \[\mathrm{Aut}(\mathcal{F}/X/G):=G\quad\underset{\mathrm{Homeo}(X)}{\times}\quad \mathrm{Aut}(\mathcal{F}/X)\] with respect to the group homomorphisms \[\rho:G\to\mathrm{Homeo}(X)\quad\text{and}\quad\mathrm{pr}_{1}:\mathrm{Aut}( \mathcal{F}/X)\to\mathrm{Homeo}(X).\] By definition, the elements of \(\mathrm{Aut}(\mathcal{F}/X/G)\) have the form \((g,(\rho(g),\beta))\) for some \(g\in G\) and some \(R\)-linear isomorphism \(\beta:\mathcal{F}\xrightarrow{\cong}g^{*}\mathcal{F}\); evidently, such an element is completely determined by the pair \((g,\beta)\). In order to simplify the notation, we will abuse notation and write \[\mathrm{Aut}(\mathcal{F}/X/G)=\left\{(g,\beta):g\in G,\quad\beta:\mathcal{F} \xrightarrow{\cong}g^{*}\mathcal{F}\ R-\mathrm{linear}\right\},\] where the product is given by the formula \[(g_{1},\beta_{1})\square(g_{2},\beta_{2})=(g_{1}g_{2},g_{2}^{*}(\beta_{1})\circ \beta_{2}). \tag{4}\] **Definition 2.4.5**.: Let \(G\) be a group acting on \(X\) via \(\rho:G\to\operatorname{Homeo}(X)\), and let \(\mathcal{F}\) be a presheaf of \(R\)-modules on \(X\). Define \[\mathcal{S}(G,\mathcal{F}):=\{\sigma\in\operatorname{Hom}(G,\operatorname{Aut} (\mathcal{F}/X/G)):\operatorname{pr}_{1}\circ\sigma=1_{G}\}\] to be the set of sections of the first projection \(\operatorname{pr}_{1}:\operatorname{Aut}(\mathcal{F}/X/G)\to G\). We make these definitions in order to formulate the following **Lemma 2.4.6**.: Let \(G\) be a group acting on \(X\) via \(\rho:G\to\operatorname{Homeo}(X)\), and let \(\mathcal{F}\) be a presheaf of \(R\)-modules on \(X\). Then the rule \[\{g^{\mathcal{F}}\}_{g\in G}\quad\mapsto\quad\big{[}g\mapsto(g,g^{\mathcal{F} })\in\operatorname{Aut}(\mathcal{F}/X/G)\big{]}\] defines a bijection between the set all \(R\)-linear \(G\)-equivariant structures on \(\mathcal{F}\) and \(\mathcal{S}(G,\mathcal{F})\). Proof.: Let \(\{g^{\mathcal{F}}\}_{g\in G}\) be an \(R\)-linear \(G\)-equivariant structure on \(X\). Define the map \(\sigma:G\to\operatorname{Aut}(\mathcal{F}/X/G)\) by setting \(\sigma(g)=(g,g^{\mathcal{F}})\) for all \(g\in G\). Using the cocycle condition (2), we compute that for all \(g,h\in G\) we have \[\sigma(gh)=(gh,h^{*}(g^{\mathcal{F}})\circ h^{\mathcal{F}})=(g,g^{\mathcal{F} })\square(h,h^{\mathcal{F}})=\sigma(g)\square\sigma(h).\] Since \(\sigma(1)=(1,1^{\mathcal{F}})=(1,1_{\mathcal{F}})\), we see that \(\sigma\) is a group homomorphism such that \(\operatorname{pr}_{1}\circ\sigma=1_{G}\), that is, \(\sigma\in\mathcal{S}(G,\mathcal{F})\). Conversely, for each \(\sigma\in\mathcal{S}(G,\mathcal{F})\), the set \(\{\operatorname{pr}_{2}(\sigma(g))\}_{g\in G}\) forms an \(R\)-linear \(G\)-equivariant structure on \(\mathcal{F}\) by reversing the above argument. Next, we recall the following definitions from [2]: **Definition 2.4.7**.: Let \(G\) act on \(X\), and let \(\mathcal{A}\) be a sheaf of \(R\)-algebras on \(X\). * We say that \(\mathcal{A}\) is a \(G\)_-equivariant sheaf of \(R\)-algebras_ if there is given an \(R\)-linear \(G\)-equivariant structure \(\{g^{\mathcal{A}}:g\in G\}\) such that each \(g^{\mathcal{A}}:\mathcal{A}\to g^{*}\mathcal{A}\) is a morphism of sheaves of \(R\)-algebras. * Let \(\mathcal{A}\) be a \(G\)-equivariant sheaf of \(R\)-algebras on \(X\). 
A \(G\)_-\(\mathcal{A}\)-module_ is an \(R\)-linear \(G\)-equivariant sheaf \(\mathcal{M}\) on \(X\), such that \(\mathcal{M}\) is a sheaf of left \(\mathcal{A}\)-modules and \(g^{\mathcal{M}}(a\cdot m)=g^{\mathcal{A}}(a)\cdot g^{\mathcal{M}}(m)\) for all \(g\in G\), \(a\in\mathcal{A},m\in\mathcal{M}\).

We want to study all possible \(G\)-\(\mathcal{A}\)-module structures on a given \(\mathcal{A}\)-module \(\mathcal{M}\).

**Definition 2.4.8**.: Let \(G\) act on \(X\), let \(\mathcal{A}\) be a \(G\)-equivariant sheaf of \(R\)-algebras on \(X\) and let \(\mathcal{M}\) be an \(\mathcal{A}\)-module. We define \[\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G):=\left\{\begin{array}{rcl}(g,\beta)&\in&\operatorname{Aut}(\mathcal{M}/X/G):\\ \beta(a\cdot m)&=&(g\cdot a)\cdot\beta(m)\quad\text{for all}\quad a\in\mathcal{A},m\in\mathcal{M}\end{array}\right\}.\]

**Lemma 2.4.9**.: With the hypotheses of Definition 2.4.8, \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\) is a subgroup of \(\operatorname{Aut}(\mathcal{M}/X/G)\).

Proof.: It is clear that \((1,1_{\mathcal{M}})\) lies in \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\). Let \((g_{1},\beta_{1})\) and \((g_{2},\beta_{2})\) be two elements of \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\), so that \[\beta_{i}(a\cdot m)=(g_{i}\cdot a)\cdot\beta_{i}(m)\quad\text{for all}\quad a\in\mathcal{A},m\in\mathcal{M},i=1,2. \tag{5}\] Let \((g_{3},\beta_{3}):=(g_{1},\beta_{1})\square(g_{2},\beta_{2})\) so that \(g_{3}=g_{1}g_{2}\) and \(\beta_{3}=g_{2}^{*}(\beta_{1})\circ\beta_{2}\). On local sections, \(\beta_{3}\) is simply the composition \(\beta_{1}\beta_{2}\). For any \(a\in\mathcal{A}\) and \(m\in\mathcal{M}\), we use the fact that \(\mathcal{A}\) is a \(G\)-equivariant sheaf together with (5) to compute \[\beta_{3}(a\cdot m) = \beta_{1}(\beta_{2}(a\cdot m))=\beta_{1}((g_{2}\cdot a)\cdot\beta_{2}(m))=(g_{1}\cdot(g_{2}\cdot a))\cdot\beta_{1}(\beta_{2}(m))\] \[= ((g_{1}g_{2})\cdot a)\cdot(\beta_{1}\beta_{2}(m))=(g_{3}\cdot a)\cdot\beta_{3}(m).\] So, \((g_{3},\beta_{3})\in\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\) and \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\) is closed under composition. To show that \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\) is stable under inversion in \(\operatorname{Aut}(\mathcal{M}/X/G)\), let \((g,\beta)\in\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\). Then for \(b:=g^{-1}\cdot a\in\mathcal{A}\) and \(w:=\beta^{-1}(v)\in\mathcal{M}\) we have \(\beta(b\cdot w)=(g\cdot b)\cdot\beta(w)=a\cdot v.\) Applying \(\beta^{-1}\) to this equation gives \(\beta^{-1}(a\cdot v)=b\cdot w=(g^{-1}\cdot a)\cdot\beta^{-1}(v)\), so \((g,\beta)^{-1}=(g^{-1},g_{*}(\beta^{-1}))\in\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\).

**Definition 2.4.10**.: With the hypotheses of Definition 2.4.8, define \[\mathcal{S}_{\mathcal{A}}(G,\mathcal{M}):=\{\sigma\in\operatorname{Hom}(G,\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)):\operatorname{pr}_{1}\circ\sigma=1_{G}\}\] to be the set of sections of the first projection \(\operatorname{pr}_{1}:\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\to G\).

We can now give the generalisation of Lemma 2.4.6 to the case of \(\mathcal{A}\)-modules: the proof is completely straightforward and is therefore omitted.

**Proposition 2.4.11**.: Assume the hypotheses of Definition 2.4.8.
Then \[\{g^{\mathcal{M}}\}_{g\in G}\quad\mapsto\quad\big{[}g\mapsto(g,g^{\mathcal{M}})\in\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\big{]}\] defines a bijection between the set of all \(G\)-\(\mathcal{A}\)-module structures on \(\mathcal{M}\) extending the given \(\mathcal{A}\)-module structure on \(\mathcal{M}\), and \(\mathcal{S}_{\mathcal{A}}(G,\mathcal{M})\).

Next, we briefly study the functorialities of \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\) and \(\mathcal{S}_{\mathcal{A}}(G,\mathcal{M})\).

**Lemma 2.4.12**.: Assume the hypotheses of Definition 2.4.8, and let \(H\) be a subgroup of \(G\).

(a) \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/H)\) is a subgroup of \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\).
(b) Restriction of functions induces a map \(\operatorname{Res}^{G}_{H}:\mathcal{S}_{\mathcal{A}}(G,\mathcal{M})\to\mathcal{S}_{\mathcal{A}}(H,\mathcal{M})\).
(c) For any subgroup \(J\) of \(H\), we have \(\operatorname{Res}^{G}_{J}=\operatorname{Res}^{H}_{J}\circ\operatorname{Res}^{G}_{H}\).

Proof.: (a) An element of \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/H)\) is a pair \((h,\beta)\) where \(h\in H\) and \(\beta:\mathcal{M}\to h^{*}\mathcal{M}\) is an \(R\)-linear isomorphism of sheaves such that \(\beta(a\cdot m)=(h\cdot a)\cdot\beta(m)\) for all \(a\in\mathcal{A}\) and \(m\in\mathcal{M}\). Evidently such a pair is also an element of \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\).

(b) Given a group homomorphism \(\sigma:G\to\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\) such that \(\operatorname{pr}_{1}\circ\sigma=1_{G}\), the restriction \(\sigma|_{H}:H\to\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\) takes values in \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/H)\). It is still a group homomorphism, and \(\operatorname{pr}_{1}\circ\sigma|_{H}=(\operatorname{pr}_{1}\circ\sigma)|_{H}=(1_{G})|_{H}=1_{H}\). Hence \(\sigma\mapsto\sigma|_{H}\) defines the required function \(\operatorname{Res}^{G}_{H}:\mathcal{S}_{\mathcal{A}}(G,\mathcal{M})\to\mathcal{S}_{\mathcal{A}}(H,\mathcal{M})\).

(c) This is clear from the definitions.

We now come to the application of the above formalism. Suppose that the group \(G\) is equal to an amalgamated product \[G=A\underset{C}{*}B\] of its subgroups \(A\) and \(B\), along their common subgroup \(C\). Using Lemma 2.4.12, we see that sending \(\sigma\in\mathcal{S}_{\mathcal{A}}(G,\mathcal{M})\) to the pair \((\operatorname{Res}^{G}_{A}(\sigma),\operatorname{Res}^{G}_{B}(\sigma))\) defines a function \[\mathcal{S}_{\mathcal{A}}(G,\mathcal{M})\longrightarrow\mathcal{S}_{\mathcal{A}}(A,\mathcal{M})\underset{\mathcal{S}_{\mathcal{A}}(C,\mathcal{M})}{\times}\mathcal{S}_{\mathcal{A}}(B,\mathcal{M}). \tag{6}\]

**Theorem 2.4.13**.: Let \(G\) be a group acting on \(X\), let \(\mathcal{A}\) be a \(G\)-equivariant sheaf of \(R\)-algebras on \(X\) and let \(\mathcal{M}\) be an \(\mathcal{A}\)-module. Suppose further that \(G\) is equal to the amalgamated product \(G=A\underset{C}{*}B\) of its subgroups \(A\) and \(B\) along their common subgroup \(C\). Then the map (6) is a bijection.

Proof.: By Lemma 2.4.12(a), the groups \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/A)\), \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/B)\) and \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/C)\) are all subgroups of \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\), and the corresponding inclusions form a commutative diagram of groups and group homomorphisms. Let \(\sigma_{1},\sigma_{2}\in\mathcal{S}_{\mathcal{A}}(G,\mathcal{M})\) be such that \(\operatorname{Res}^{G}_{A}(\sigma_{1})=\operatorname{Res}^{G}_{A}(\sigma_{2})\) and \(\operatorname{Res}^{G}_{B}(\sigma_{1})=\operatorname{Res}^{G}_{B}(\sigma_{2})\).
Using the above diagram, we may regard \(\sigma_{1}\) and \(\sigma_{2}\) as having the same codomain \(\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\). Then \((\sigma_{1})|_{A}=(\sigma_{2})|_{A}\) and \((\sigma_{1})|_{B}=(\sigma_{2})|_{B}\). Since \(A\) and \(B\) generate \(G\) as a group, it follows that \(\sigma_{1}=\sigma_{2}\). Suppose now that \((\tau,\psi)\) is an element of the fibre product on the right hand side of (6). Then \(\tau:A\to\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\) and \(\psi:B\to\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\) have the same restriction to \(C\). By the universal property of amalgamated products -- see [31, equation \((*)\), § 1.1] -- \(\tau\) and \(\psi\) extend to a unique group homomorphism \(\sigma:G\to\operatorname{Aut}_{\mathcal{A}}(\mathcal{M}/X/G)\) such that \(\sigma|_{A}=\tau\) and \(\sigma|_{B}=\psi\). Then \((\operatorname{pr}_{1}\circ\sigma)|_{A}=\operatorname{pr}_{1}\circ(\sigma|_{A})=\operatorname{pr}_{1}\circ\tau=1_{A}\) and \((\operatorname{pr}_{1}\circ\sigma)|_{B}=\operatorname{pr}_{1}\circ(\sigma|_{B})=\operatorname{pr}_{1}\circ\psi=1_{B}\) because \(\tau\in\mathcal{S}_{\mathcal{A}}(A,\mathcal{M})\) and \(\psi\in\mathcal{S}_{\mathcal{A}}(B,\mathcal{M})\). Since \(A\) and \(B\) generate \(G\) as a group, and the group homomorphism \(\operatorname{pr}_{1}\circ\sigma:G\to G\) fixes both \(A\) and \(B\) pointwise, we conclude that \(\operatorname{pr}_{1}\circ\sigma=1_{G}\). So, \(\sigma\in\mathcal{S}_{\mathcal{A}}(G,\mathcal{M})\), \(\operatorname{Res}^{G}_{A}(\sigma)=\tau\) and \(\operatorname{Res}^{G}_{B}(\sigma)=\psi\).

To spell out the meaning of Theorem 2.4.13 together with Proposition 2.4.11: the data of a \(G\)-\(\mathcal{A}\)-module structure on \(\mathcal{M}\) is equivalent to the data of an \(A\)-\(\mathcal{A}\)-module structure and a \(B\)-\(\mathcal{A}\)-module structure whose restricted \(C\)-\(\mathcal{A}\)-module structures agree.

## 3. Topics in rigid analytic geometry

### 3.1. Line bundles with flat connection on smooth rigid spaces

Let \(X\) be a smooth rigid \(K\)-analytic space. By a _line bundle with flat connection_ we mean a \(\mathcal{D}\)-module \(\mathscr{L}\) on \(X\) which is invertible as an \(\mathcal{O}\)-module. If \(\mathscr{L}\) and \(\mathscr{M}\) are two line bundles with flat connection on \(X\), then so are \(\mathscr{L}\otimes_{\mathcal{O}}\mathscr{M}\) and \(\mathscr{L}^{\otimes-1}:=\mathpzc{Hom}_{\mathcal{O}}(\mathscr{L},\mathcal{O})\): the tangent sheaf \(\mathcal{T}_{X}\) acts via the Leibniz rule on \(\mathscr{L}\otimes_{\mathcal{O}}\mathscr{M}\), and on \(\mathscr{L}^{\otimes-1}\) via the rule \[(v\cdot f)(\ell)=v\cdot f(\ell)-f(v\cdot\ell)\quad\text{for all}\quad v\in\mathcal{T}_{X},f\in\mathscr{L}^{\otimes-1},\ell\in\mathscr{L}.\]

**Definition 3.1.1**.: 
(a) \(\operatorname{PicCon}(X)\) denotes the abelian group of isomorphism classes of line bundles with flat connection on \(X\) under the operation \(-\otimes_{\mathcal{O}}-\).
(b) \(\operatorname{Con}(X):=\ker(\operatorname{PicCon}(X)\to\operatorname{Pic}(X))\) denotes the group of isomorphism classes of line bundles with flat connection on \(X\) that are trivial after forgetting the connection.

We now show that when \(X\) is connected, \(\mathscr{L}\) is a simple \(\mathcal{D}_{X}\)-module for any \([\mathscr{L}]\in\operatorname{PicCon}(X)\). We start with the case where \(X\) is \(K\)-affinoid.
**Lemma 3.1.2**.: Suppose that \(X\) is a connected \(K\)-affinoid variety such that \(\mathcal{T}_{X}\) is a free \(\mathcal{O}_{X}\)-module. Then for every \([\mathscr{L}]\in\operatorname{Con}(X)\), \(\mathscr{L}(X)\) is a simple \(\mathcal{D}(X)\)-module.

Proof.: Suppose that \(n=\dim X\). Following the formalism of [19, §1] we see that \(\mathcal{O}(X)\) satisfies the conditions found in §1.1.2 of _loc. cit._ Let \(\partial_{1},\dots,\partial_{n}\) denote a free generating set for \(\mathcal{T}(X)\) as an \(\mathcal{O}(X)\)-module so we may consider \[\mathcal{D}(X)=\mathcal{O}(X)[\partial_{1},\dots,\partial_{n}]\] as a filtered \(K\)-algebra with associated graded ring \[\operatorname{gr}\mathcal{D}(X)\cong\mathcal{O}(X)[T_{1},\dots,T_{n}]\] with \(\mathcal{O}(X)\) in degree \(0\) and \(T_{1},\dots,T_{n}\) in degree \(1\), the principal symbols of \(\partial_{1},\dots,\partial_{n}\) respectively. The filtration of \(\mathscr{L}(X)\) whose \(0\)th filtered part is \(\mathscr{L}(X)\) and whose \(-1\)st filtered part is \(0\) is a good filtration, so that \[\operatorname{gr}\mathscr{L}(X)\cong\mathcal{O}(X)[T_{1},\dots,T_{n}]/(T_{1},\dots,T_{n}).\] Since \(X\) is also connected, \(\mathcal{O}(X)\) is an integral domain by the proof of [3, Proposition 4.2]. Thus any non-zero proper \(\mathcal{D}(X)\)-module quotient of \(\mathscr{L}(X)\) must have dimension \(<n\). However, by [19, Théorème 1.1.4, Corollaire 1.2.3], no such \(\mathcal{D}(X)\)-module can exist and so \(\mathscr{L}(X)\) is a simple \(\mathcal{D}(X)\)-module as claimed.

**Proposition 3.1.3**.: Suppose \(X\) is connected. Then every \(\mathscr{L}\in\operatorname{PicCon}(X)\) is simple as a \(\mathcal{D}\)-module.

Proof.: Suppose that \([\mathscr{L}]\in\operatorname{PicCon}(X)\). Since \(\mathscr{L}\) is a line bundle, there is an admissible cover \(\mathcal{U}\) of \(X\) consisting of \(K\)-affinoid subdomains such that the line bundle \(\mathscr{L}|_{U}\) is trivial for all \(U\in\mathcal{U}\). By passing to a refinement, we may also assume that for each \(U\in\mathcal{U}\), \(U\) is connected and that \(\mathcal{T}|_{U}\) is a free \(\mathcal{O}_{U}\)-module. Suppose that \(\mathcal{M}\) is a subobject of \(\mathscr{L}\) as a \(\mathcal{D}_{X}\)-module and consider \[\mathcal{V}_{1}:=\{U\in\mathcal{U}:\mathcal{M}(U)=\mathscr{L}(U)\}\text{ and }\mathcal{V}_{2}=\{U\in\mathcal{U}:\mathcal{M}(U)=0\}.\] Then \(\mathcal{U}\) is the disjoint union of \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\) by Lemma 3.1.2. Now if \(U\in\mathcal{V}_{1}\) and \(V\in\mathcal{V}_{2}\) with \(U\cap V\neq\emptyset\), then \[\mathscr{L}(U\cap V)\cong\mathcal{O}(U\cap V)\otimes_{\mathcal{O}(U)}\mathcal{M}(U)\cong\mathcal{M}(U\cap V)\cong\mathcal{O}(U\cap V)\otimes_{\mathcal{O}(V)}\mathcal{M}(V)=0,\] a contradiction. Since \(X\) is connected it follows that \(\mathcal{U}=\mathcal{V}_{1}\) or \(\mathcal{U}=\mathcal{V}_{2}\). Hence \(\mathcal{M}=\mathscr{L}\) or \(\mathcal{M}=0\) as required.

The following result enables us to characterise the trivial line bundle with trivial connection in terms of its horizontal sections, at least when \(X\) is quasi-Stein and geometrically connected.

**Proposition 3.1.4**.: Suppose that \(X\) is quasi-Stein and geometrically connected and \([\mathscr{L}]\in\operatorname{PicCon}(X)\). Then \(\mathscr{L}(X)^{\mathcal{T}(X)=0}=K\) if and only if \(\mathscr{L}\) is the trivial line bundle with trivial flat connection.
Proof.: First we show that if \(\mathcal{O}\) is equipped with the trivial connection then \[\mathcal{O}(X)^{\mathcal{T}(X)=0}=K.\] Since \(X\) is smooth over \(K\), by [5, Proposition 2.7] we can find an affinoid subdomain \(U\) of \(X\) which admits an étale map \(g\) to a polydisc \(\operatorname{Sp}K\langle t_{1},\dots,t_{n}\rangle\). By recentering the disc, we can find a point \(x\in U\) such that \(g(x)\) is the origin \(t_{1}=\cdots=t_{n}=0\) in this polydisc. Consider the completion \(\widehat{\mathcal{O}_{X,x}}\) of the local ring \(\mathcal{O}_{X,x}\) at \(x\). Since \(X\) is connected, the restriction map \(\mathcal{O}(X)\to\widehat{\mathcal{O}_{X,x}}\) is injective. This completed local ring \(\widehat{\mathcal{O}_{X,x}}\) is isomorphic to a power-series ring \(K(x)[[t_{1},\ldots,t_{n}]]\) where \(K(x)\) is the residue field of \(\mathcal{O}_{X,x}\) at \(x\). Since \(X\) is quasi-Stein, the local vector fields \(\mathcal{T}_{X,x}=\bigoplus_{i=1}^{n}\mathcal{O}_{X,x}\partial_{t_{i}}\) are generated as an \(\mathcal{O}_{X,x}\)-module by \(\mathcal{T}(X)\), and then \[\mathcal{O}(X)^{\mathcal{T}(X)=0}\quad\subset\quad\widehat{\mathcal{O}_{X,x}}^{\mathcal{T}_{X,x}=0}\quad\cong\quad K(x)[[t_{1},\ldots,t_{n}]]^{\partial_{t_{1}}=\cdots=\partial_{t_{n}}=0}=K(x).\] Note that \(\mathcal{O}(X)^{\mathcal{T}(X)=0}\) is a \(K\)-subalgebra of \(\mathcal{O}(X)\). Since \(\dim_{K}K(x)<\infty\), the \(K\)-subalgebra \(\mathcal{O}(X)^{\mathcal{T}(X)=0}\) of \(K(x)\) is a finite field extension of \(K\), \(L\) say. If \(L\) were a proper field extension of \(K\) then base changing \(X\) to \(L\) would yield non-trivial idempotents in \(L\otimes L\subset\mathcal{O}(X)\otimes L=\mathcal{O}(X_{L})\) and show that \(X_{L}\) is not connected. Since \(X\) was assumed to be geometrically connected, we conclude that \(L=\mathcal{O}(X)^{\mathcal{T}(X)=0}\) is in fact equal to \(K\).

For the converse we choose \(0\neq v\in\mathscr{L}(X)^{\mathcal{T}(X)=0}\), which is possible by assumption. We may use \(v\) to construct a morphism of \(\mathcal{D}_{X}\)-modules \(\varphi\colon\mathcal{O}\to\mathscr{L}\); \(f\mapsto fv\). Since \(v\neq 0\), \(\varphi\neq 0\). Since \(\mathcal{O}\) and \(\mathscr{L}\) are both simple by Proposition 3.1.3 it follows that \(\varphi\) is an isomorphism as required.

It follows that an isomorphism between two line bundles with flat connection is unique up to scalars. More precisely we have the following result.

**Corollary 3.1.5**.: Suppose that \(X\) is quasi-Stein and geometrically connected. Let \(\varphi_{1},\varphi_{2}:\mathscr{L}_{1}\to\mathscr{L}_{2}\) be two isomorphisms between two line bundles with flat connection on \(X\). Then there is a scalar \(\lambda\in K^{\times}\) such that \(\varphi_{2}=\lambda\varphi_{1}\).

Proof.: By tensoring \(\varphi_{1}\) and \(\varphi_{2}\) by \(\mathscr{L}_{2}^{\otimes-1}\) we may assume that \(\mathscr{L}_{2}=\mathcal{O}\). But then \(\varphi_{1}\circ\varphi_{2}^{-1}:\mathcal{O}\to\mathcal{O}\) is a \(\mathcal{D}\)-linear isomorphism, so by (8) is given by multiplication by a non-zero element of \(\mathcal{O}(X)^{\mathcal{T}(X)=0}\). Now apply Proposition 3.1.4.

**Corollary 3.1.6**.: Suppose that \(X\) is quasi-Stein and that \((X_{n})_{n\geqslant 0}\) is an increasing admissible cover of \(X\) by geometrically connected affinoid subdomains.
Then the family of restriction maps \(\operatorname{PicCon}(X)\to\operatorname{PicCon}(X_{n})\) induces a natural isomorphism \[\operatorname{PicCon}(X)\xrightarrow{\cong}\varprojlim\operatorname{PicCon}(X_{n}).\]

Proof.: Suppose for a contradiction that \(X\) is not geometrically connected. Then \(X_{\mathbf{C}}\) has two non-empty families of non-empty admissible open subsets \(\mathcal{U}\) and \(\mathcal{V}\) such that \(\mathcal{U}\cup\mathcal{V}\) is an admissible cover of \(X_{\mathbf{C}}\) and \[\bigcup_{U\in\mathcal{U}}U\cap\bigcup_{V\in\mathcal{V}}V=\emptyset.\] Since \((X_{n,\mathbf{C}})_{n\geqslant 0}\) is an admissible cover of \(X_{\mathbf{C}}\) that is ordered by inclusion, we can find \(n\geqslant 0\), \(U\in\mathcal{U}\) and \(V\in\mathcal{V}\) such that \(X_{n,\mathbf{C}}\cap U\) and \(X_{n,\mathbf{C}}\cap V\) are both non-empty. But then \(\{U\cap X_{n,\mathbf{C}}:U\in\mathcal{U}\}\) and \(\{V\cap X_{n,\mathbf{C}}:V\in\mathcal{V}\}\) together form an admissible cover of \(X_{n,\mathbf{C}}\) that disconnects it, giving the required contradiction.

Now suppose that \([\mathscr{L}]\in\operatorname{PicCon}(X)\) is such that \([\mathscr{L}|_{X_{n}}]=0\in\operatorname{PicCon}(X_{n})\) for all \(n\geqslant 0\). Then \(\mathscr{L}^{\mathcal{T}=0}\) is a subsheaf of \(\mathscr{L}\) with \(\mathscr{L}(X_{n})^{\mathcal{T}=0}=K\) for all \(n\geqslant 0\) by Proposition 3.1.4. Moreover the restriction maps \(\mathscr{L}(X_{n+1})^{\mathcal{T}=0}\to\mathscr{L}(X_{n})^{\mathcal{T}=0}\) are all injective, and thus isomorphisms, since the restriction maps \(\mathcal{O}(X_{n+1})\to\mathcal{O}(X_{n})\) are all injective. Thus by the sheaf condition on \(\mathscr{L}^{\mathcal{T}=0}\), \(\mathscr{L}(X)^{\mathcal{T}=0}=K\). Thus by Proposition 3.1.4 again we can deduce that \([\mathscr{L}]=0\in\operatorname{PicCon}(X)\), i.e. the homomorphism in the statement is injective.

To see that the homomorphism is also surjective we consider a family of line bundles with connection \([\mathscr{L}_{n}]_{n\geqslant 0}\in\prod_{n\geqslant 0}\operatorname{PicCon}(X_{n})\) such that \[[\mathscr{L}_{n}|_{X_{m}}]=[\mathscr{L}_{m}]\text{ in }\operatorname{PicCon}(X_{m})\quad\text{whenever}\quad n\geqslant m.\] Choose isomorphisms of \(\mathcal{D}\)-modules \(\varphi_{m}\colon\mathscr{L}_{m+1}|_{X_{m}}\xrightarrow{\cong}\mathscr{L}_{m}\) for all \(m\geqslant 0\). Then whenever \(n\geqslant m\) we can define an isomorphism of sheaves \(\varphi_{n,m}\colon\mathscr{L}_{n}|_{X_{m}}\to\mathscr{L}_{m}\) by \[\varphi_{n,m}:=\varphi_{m}\circ\varphi_{m+1}|_{X_{m}}\circ\cdots\circ\varphi_{n-1}|_{X_{m}}.\] It is then easy to verify that the construction in [12, §4.4] gives a sheaf \(\mathscr{L}\) of \(\mathcal{D}\)-modules on \(X\) together with isomorphisms of \(\mathcal{D}\)-modules \(\mathscr{L}|_{X_{n}}\xrightarrow{\cong}\mathscr{L}_{n}\). Then \(\mathscr{L}\) is a line bundle on \(X\), since each \(\mathscr{L}(X_{n})\) is free of rank \(1\) over \(\mathcal{O}(X_{n})\). The family \([\mathscr{L}_{n}]\in\varprojlim\operatorname{PicCon}(X_{n})\) is then the required image of \([\mathscr{L}]\).

The following result will also be useful.

**Corollary 3.1.7**.: Suppose that \(X\) is quasi-Stein and geometrically connected, and that \(d\) is a positive integer. Then \(\mathcal{O}(X)^{\times}/K^{\times}\) has no \(d\)-torsion.

Proof.: Suppose that \(f\in\mathcal{O}(X)^{\times}\) with \(f^{d}\in K^{\times}\). Then for every \(\partial\in\mathcal{T}(X)\) we see that \(0=\partial(f^{d})=df^{d-1}\partial(f)\).
Since \(df^{d-1}\in\mathcal{O}(X)^{\times}\) it follows that \(\partial(f)=0\). Thus by Proposition 3.1.4, \(f\in K^{\times}\).

**Lemma 3.1.8**.: Suppose that a group \(G\) acts on \(X\). Then \(G\) acts naturally on \(\operatorname{PicCon}(X)\) by abelian group automorphisms via \[g\cdot[\mathscr{L}]=[g_{*}\mathscr{L}]\] where \(g_{*}\mathscr{L}\) is a \(\mathcal{D}\)-module via the ring isomorphism \((g^{-1})^{\mathcal{D}}\colon\mathcal{D}\xrightarrow{\cong}g_{*}\mathcal{D}\). Moreover \(\operatorname{Con}(X)\) is a \(G\)-stable subgroup of \(\operatorname{PicCon}(X)\).

Proof.: We can check that \(g\cdot([\mathscr{L}][\mathscr{M}])=(g\cdot[\mathscr{L}])(g\cdot[\mathscr{M}])\) for any \([\mathscr{L}],[\mathscr{M}]\in\operatorname{PicCon}(X)\), because \(g_{*}(\mathscr{L}\otimes_{\mathcal{O}}\mathscr{M})\) is naturally isomorphic to \((g_{*}\mathscr{L})\otimes_{g_{*}\mathcal{O}}(g_{*}\mathscr{M})\) as a \(g_{*}\mathcal{D}\)-module. Since \(g_{*}\mathscr{L}\) is trivial as a line bundle whenever \(\mathscr{L}\) is trivial as a line bundle the last part is immediate.

**Remark 3.1.9**.: We note more generally, in the context of Lemma 3.1.8, that if \(U\) is an admissible open subspace of \(X\), each \(g\in G\) induces a group isomorphism \(\operatorname{PicCon}(U)\to\operatorname{PicCon}(g\cdot U)\); \([\mathscr{L}]\mapsto[g_{*}\mathscr{L}]\) where again \(g_{*}\mathscr{L}\) is a \(\mathcal{D}_{gU}\)-module via the ring isomorphism \((g^{-1})^{\mathcal{D}}\colon\mathcal{D}_{gU}\to g_{*}\mathcal{D}_{U}\). Moreover these restrict to isomorphisms \(\operatorname{Con}(U)\to\operatorname{Con}(g\cdot U)\).

We will now construct some connections on the trivial line bundle by using units.

**Lemma 3.1.10**.: Suppose that \(d\) is a positive integer and \(u\in\mathcal{O}(X)^{\times}\). Then there is a unique element \([\mathscr{L}_{u,d}]\) of \(\operatorname{Con}(X)\) such that \(\mathscr{L}_{u,d}\) has a free generator \(v\) as an \(\mathcal{O}\)-module with \(\partial(v)=\frac{1}{d}\frac{\partial(u)}{u}v\) for all \(\partial\in\mathcal{T}\).

Proof.: Suppose that \(\mathscr{L}=\mathcal{O}v\) is a line bundle with a flat connection satisfying \[\partial(v)=\frac{1}{d}\frac{\partial(u)}{u}v\] for all \(\partial\in\mathcal{T}\). For all \(f\in\mathcal{O}\) and \(\partial\in\mathcal{T}\), necessarily \[\partial(fv)=\left(\partial(f)+\frac{1}{d}\frac{\partial(u)}{u}f\right)v\] so as \(\mathcal{D}\) is generated by \(\mathcal{O}\) and \(\mathcal{T}\) there is at most one element of \(\mathrm{Con}(X)\) with the property given in the statement. To prove the existence of such a line bundle with flat connection it suffices to show that for all \(\partial_{1},\partial_{2}\in\mathcal{T}\) \[(\partial_{1}\partial_{2}-\partial_{2}\partial_{1})(v)=[\partial_{1},\partial_{2}](v)\] where \([-,-]\) denotes the Lie bracket on \(\mathcal{T}\). But \[\partial_{1}\partial_{2}(v) = \partial_{1}\left(\frac{1}{d}\frac{\partial_{2}(u)}{u}v\right)\] \[= \frac{1}{d}\partial_{1}\left(\frac{\partial_{2}(u)}{u}\right)v+\frac{1}{d}\frac{\partial_{2}(u)}{u}\frac{1}{d}\frac{\partial_{1}(u)}{u}v\] \[= \frac{1}{d}\frac{u\partial_{1}\partial_{2}(u)-\partial_{2}(u)\partial_{1}(u)}{u^{2}}v+\frac{1}{d}\frac{\partial_{2}(u)}{u}\frac{1}{d}\frac{\partial_{1}(u)}{u}v\] and so \[(\partial_{1}\partial_{2}-\partial_{2}\partial_{1})(v)=\frac{1}{d}\frac{[\partial_{1},\partial_{2}](u)}{u}v\] as required.
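
Informally, the generator \(v\) of \(\mathscr{L}_{u,d}\) plays the role of a \(d\)-th root of \(u\). The following sanity check is not needed for what follows, but may help to orient the reader: if \(u\) already admits a \(d\)-th root \(w\in\mathcal{O}(X)^{\times}\), then \(w^{-1}v\) is a horizontal generator, so that \(\mathscr{L}_{u,d}\) is isomorphic to \(\mathcal{O}\) with the trivial connection. Indeed, differentiating \(w^{d}=u\) gives \(\frac{\partial(w)}{w}=\frac{1}{d}\frac{\partial(u)}{u}\) for every \(\partial\in\mathcal{T}(X)\), and hence \[\partial(w^{-1}v)=\partial(w^{-1})v+w^{-1}\partial(v)=\left(-\frac{\partial(w)}{w}+\frac{1}{d}\frac{\partial(u)}{u}\right)w^{-1}v=0.\] This is consistent with the description of the kernel \(K^{\times}\mathcal{O}(X)^{\times d}\) in Proposition 3.1.11 below.
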
**Proposition 3.1.11**.: Let \(d\) be a non-zero integer and let \(X\) be a geometrically connected, smooth, quasi-Stein rigid \(K\)-analytic space. There is a homomorphism of abelian groups \(\mathcal{O}(X)^{\times}\to\mathrm{Con}(X)\) given by \(u\mapsto[\mathscr{L}_{u,d}]\). The kernel of this homomorphism is \(K^{\times}\mathcal{O}(X)^{\times d}\) and its image is \(\mathrm{Con}(X)[d]\).

Proof.: Suppose that \(u_{1},u_{2}\in\mathcal{O}(X)^{\times}\). Then if, for \(i=1,2\), \(v_{i}\) generates \(\mathscr{L}_{u_{i},d}\) with \(\partial(v_{i})=\frac{1}{d}\frac{\partial(u_{i})}{u_{i}}v_{i}\), \(v_{1}\otimes v_{2}\) is a generator of \(\mathscr{L}_{u_{1},d}\otimes\mathscr{L}_{u_{2},d}\) and \[\partial(v_{1}\otimes v_{2}) = \partial(v_{1})\otimes v_{2}+v_{1}\otimes\partial(v_{2})\] \[= \frac{1}{d}\left(\frac{\partial(u_{1})}{u_{1}}+\frac{\partial(u_{2})}{u_{2}}\right)(v_{1}\otimes v_{2})\] \[= \frac{1}{d}\frac{\partial(u_{1}u_{2})}{u_{1}u_{2}}(v_{1}\otimes v_{2})\] for all \(\partial\in\mathcal{T}\). Thus \(u\mapsto[\mathscr{L}_{u,d}]\) does define a homomorphism. Note that \(u\in\mathcal{O}(X)^{\times}\) is in the kernel of the homomorphism if and only if \([\mathscr{L}_{u,d}]=[\mathcal{O}.v]=[\mathcal{O}]\) in \(\mathrm{Con}(X)\). That is, \(u\) is in the kernel if and only if there is \(w\in\mathcal{O}(X)^{\times}\) such that \(\partial(wv)=0\) for all \(\partial\in\mathcal{T}(X)\). But \[\partial(wv)=\left(\frac{\partial(w)}{w}+\frac{1}{d}\frac{\partial(u)}{u}\right)wv.\] Now \[\frac{\partial(uw^{d})}{uw^{d}}=\frac{\partial(u)}{u}+d\frac{\partial(w)}{w}\] so \(\partial(wv)=0\) if and only if \(\partial(uw^{d})=0\). Thus by Proposition 3.1.4 the kernel is precisely \(K^{\times}\mathcal{O}(X)^{\times d}\) as claimed. Moreover \([\mathscr{L}_{u,d}^{\otimes d}]=[\mathscr{L}_{u^{d},d}]=[\mathcal{O}]\) so each \([\mathscr{L}_{u,d}]\) is indeed \(d\)-torsion. Given \([\mathscr{L}]\in\mathrm{Con}(X)[d]\), we use the hypothesis that \(\mathscr{L}\) is trivial as a line bundle to pick a generator \(v\in\mathscr{L}(X)\) as an \(\mathcal{O}(X)\)-module, and we choose a \(\mathcal{D}\)-linear isomorphism \(\psi\colon\mathscr{L}^{\otimes d}\stackrel{{\cong}}{{\longrightarrow}}\mathcal{O}\) using the fact that \(d\cdot[\mathscr{L}]=0\) in \(\operatorname{Con}(X)\). We claim that if \(\psi(v^{\otimes d})=u\) then \(\partial(v)=\frac{1}{d}\frac{\partial(u)}{u}v\) for all \(\partial\in\mathcal{T}\) and so \([\mathscr{L}]=[\mathscr{L}_{u,d}]\). For this we compute that \(\partial(v^{\otimes d})=\frac{\partial(u)}{u}v^{\otimes d}\) for all \(\partial\in\mathcal{T}\). But if \(\partial(v)=av\) then \(\partial(v^{\otimes d})=dav^{\otimes d}\) so \(a=\frac{1}{d}\frac{\partial(u)}{u}\) as claimed.

**Definition 3.1.12**.: If \(X\) is a geometrically connected, smooth, quasi-Stein rigid \(K\)-analytic space then we define \[\theta_{d}\colon\operatorname{Con}(X)[d]\to\mathcal{O}(X)^{\times}/K^{\times}\mathcal{O}(X)^{\times d}\] to be the inverse of the isomorphism induced by the homomorphism in Proposition 3.1.11.

The proof of surjectivity in Proposition 3.1.11 shows that \(\theta_{d}\left([\mathcal{O}v]\right)\) is determined by the image of \(v^{\otimes d}\) under a \(\mathcal{D}\)-linear isomorphism \(\psi\colon(\mathcal{O}v)^{\otimes d}\to\mathcal{O}\), via \[\theta_{d}([\mathcal{O}v])=\psi(v^{\otimes d})K^{\times}\mathcal{O}(X)^{\times d}. \tag{7}\]
**Proposition 3.1.13**.: Let \(d\) be a non-zero integer and let \(X\) be a geometrically connected, smooth, quasi-Stein rigid \(K\)-analytic space and let \(G\) be a group acting on \(X\). Then \(\theta_{d}\) is a \(G\)-equivariant isomorphism \[\theta_{d}\colon\operatorname{Con}(X)[d]\to\mathcal{O}(X)^{\times}/K^{\times}\mathcal{O}(X)^{\times d}.\]

Proof.: Let \(g\in G\), \([\mathscr{L}]\in\operatorname{Con}(X)[d]\) and fix a \(\mathcal{D}\)-linear isomorphism \(\psi:\mathscr{L}^{\otimes d}\stackrel{{\cong}}{{\longrightarrow}}\mathcal{O}\). The map \(g^{\mathcal{O}}:\mathcal{O}\to g^{*}\mathcal{O}\) is a \(\mathcal{D}\)-linear isomorphism, so \(g^{\mathcal{O}}\circ\psi:\mathscr{L}^{\otimes d}\to g^{*}\mathcal{O}\) is also a \(\mathcal{D}\)-linear isomorphism. After identifying \(g_{*}(\mathscr{L}^{\otimes d})\) with \((g_{*}\mathscr{L})^{\otimes d}\) and \(g_{*}(g^{*}\mathcal{O})\) with \(\mathcal{O}\), we obtain a \(g_{*}\mathcal{D}\)-linear isomorphism \[\psi^{\prime}:=g_{*}(g^{\mathcal{O}}\circ\psi):(g_{*}\mathscr{L})^{\otimes d}\stackrel{{\cong}}{{\longrightarrow}}\mathcal{O}.\] Recall from Lemma 3.1.8 that \(g\cdot[\mathscr{L}]=[g_{*}\mathscr{L}]\), where \(\mathcal{D}\) acts on \(g_{*}\mathscr{L}\) via the ring isomorphism \((g^{-1})^{\mathcal{D}}:\mathcal{D}\stackrel{{\cong}}{{\longrightarrow}}g_{*}\mathcal{D}\). So \(\psi^{\prime}\) becomes a \(\mathcal{D}\)-linear isomorphism in this way, and we can use it to compute \(\theta_{d}([g_{*}\mathscr{L}])\) as follows: let \(v\in\mathscr{L}(X)\) be such that \(\mathscr{L}(X)=\mathcal{O}(X)v\); then by definition of \(\psi^{\prime}\) we have \(\psi^{\prime}(v^{\otimes d})=g\cdot\psi(v^{\otimes d})\), so \[\theta_{d}(g\cdot[\mathscr{L}])=\theta_{d}([g_{*}\mathscr{L}])=\psi^{\prime}(v^{\otimes d})K^{\times}\mathcal{O}(X)^{\times d}=g\cdot\psi(v^{\otimes d})K^{\times}\mathcal{O}(X)^{\times d}=g\cdot\theta_{d}([\mathscr{L}])\] as required.

**Remark 3.1.14**.: Proposition 3.1.13 can be viewed as saying that \[[g_{*}\mathscr{L}_{u,d}]=[\mathscr{L}_{g\cdot u,d}]\in\operatorname{Con}(X)[d].\] More generally if \(U\) is an admissible open subset of \(X\) and \(u\in\mathcal{O}(U)^{\times}\) then \[[g_{*}\mathscr{L}_{u,d}]=[\mathscr{L}_{g\cdot u,d}]\in\operatorname{Con}(g\cdot U).\]

### 3.2. Equivariant line bundles with flat connections

We now turn to a discussion of _equivariant_ line bundles with flat connection. In this section we will assume that \(G\) is a topological group acting continuously on a smooth rigid \(K\)-analytic space \(X\) in the sense of [2, Definition 3.1.8]. We first consider \(G\)-equivariant line bundles. Our next definition, Definition 3.2.3 below, will require some preparation.

**Lemma 3.2.1**.: Let \(\mathscr{M}\) be a coherent \(\mathcal{O}\)-module on \(X\). Suppose that \(\{g^{\mathscr{M}}\}_{g\in G}\) is a \(G\)-equivariant structure on \(\mathscr{M}\). Then for every affinoid subdomain \(U\) of \(X\) and \(g\in G\), the structure map \[g^{\mathscr{M}}:\mathscr{M}(U)\to\mathscr{M}(gU)\] is continuous with respect to canonical \(K\)-Banach topologies on the domain and codomain.

Proof.: Let \(U\) be an affinoid subdomain of \(X\). By Kiehl's Theorem -- see, e.g. [12, Theorem 4.5.2] -- \(\mathscr{M}(U)\) is a finitely generated module over the affinoid algebra \(\mathcal{O}(U)\) because \(\mathscr{M}|_{U}\) is a coherent \(\mathcal{O}_{U}\)-module.
Recalling from [27, Proposition 2.1] that every finitely generated module \(M\) over an affinoid algebra \(A\) carries a canonical \(K\)-Banach-space topology, we see that \(\mathscr{M}(U)\) carries a canonical \(K\)-Banach space topology. Fixing \(g\in G\), we can regard \(\mathscr{M}(gU)\) as an \(\mathcal{O}(U)\)-module via a twisted action, by defining \(a*v:=(g\cdot a)v\) for all \(a\in\mathcal{O}(U)\) and \(v\in\mathscr{M}(gU)\). Then \(\mathscr{M}(gU)\) is still a finitely generated \(\mathcal{O}(U)\)-module, and the structure map \(g^{\mathscr{M}}(U):\mathscr{M}(U)\to\mathscr{M}(gU)\) is now an \(\mathcal{O}(U)\)-linear homomorphism between two finitely generated \(\mathcal{O}(U)\)-modules. It is therefore automatically continuous by [12, Corollary 1.2.4].

**Lemma 3.2.2**.: Let \(\mathscr{M}\) be a coherent \(\mathcal{O}_{X}\)-module. Suppose that \(\{g^{\mathscr{M}}\}_{g\in G}\) is a \(G\)-equivariant structure on \(\mathscr{M}\) and that \(L\) is a finite field extension of \(K\). Then for every \(z\in X(L)\) there is a natural group homomorphism \[\phi_{z,\mathscr{M}}:G_{z}\to\operatorname{Aut}_{L}(\mathscr{M}(z))\] where \(\mathscr{M}(z):=L\otimes_{\mathcal{O}_{X,z}}\mathscr{M}_{z}\) denotes the fibre of \(\mathscr{M}\) at \(z\).

Proof.: Let \(g\in G_{z}\). The \(G\)-equivariant structure on \(\mathcal{O}\) gives us a local \(K\)-algebra automorphism \(g_{z}^{\mathcal{O}}:\mathcal{O}_{X,z}\to\mathcal{O}_{X,z}\), whereas the \(G\)-equivariant structure on \(\mathscr{M}\) gives a \(K\)-linear automorphism \(g_{z}^{\mathscr{M}}:\mathscr{M}_{z}\to\mathscr{M}_{z}\), satisfying \(g_{z}^{\mathscr{M}}(a\cdot m)=g_{z}^{\mathcal{O}}(a)\cdot g_{z}^{\mathscr{M}}(m)\) for all \(a\in\mathcal{O}_{X,z}\) and \(m\in\mathscr{M}_{z}\). It is now straightforward to check that setting \[g\cdot(\lambda\otimes m):=\lambda\otimes g_{z}^{\mathscr{M}}(m)\quad\text{for all}\quad g\in G_{z},\lambda\in L,m\in\mathscr{M}_{z}\] gives a well-defined \(L\)-linear action of \(G_{z}\) on \(L\otimes_{\mathcal{O}_{X,z}}\mathscr{M}_{z}\).

Suppose that \(M\) is any \(K\)-Banach space. Then the \(K\)-algebra of bounded \(K\)-linear endomorphisms \(\mathcal{B}(M)\) is also a \(K\)-Banach algebra through the operator norm \(||T||:=\sup\limits_{v\in M\setminus\{0\}}\frac{|Tv|}{|v|}\), so its unit group \(\mathcal{B}(M)^{\times}\) becomes a topological group -- using the geometric series, one can check that the inversion map on \(\mathcal{B}(M)^{\times}\) is continuous. If \(\mathcal{M}\subset M\) is the unit ball in \(M\), then the congruence subgroups of \(\mathcal{B}(M)^{\times}\) \[\Gamma_{n}(\mathcal{M}):=\{\gamma\in\mathcal{B}(M)^{\times}:(\gamma-1)(\mathcal{M})\subseteq\pi_{F}^{n}\mathcal{M}\}\] form a fundamental system of open neighbourhoods of the identity in \(\mathcal{B}(M)^{\times}\). Since any isomorphism of \(K\)-Banach spaces \(M\xrightarrow{\cong}N\) induces an isomorphism of topological groups \(\mathcal{B}(M)^{\times}\xrightarrow{\cong}\mathcal{B}(N)^{\times}\) via 'conjugation', we see that the topology on \(\mathcal{B}(M)^{\times}\) only depends on the topology on \(M\) and not on any particular choice of \(K\)-Banach norm on \(M\).

Let \(\mathscr{M}\) be a coherent \(\mathcal{O}_{X}\)-module and let \(\{g^{\mathscr{M}}\}_{g\in G}\) be a \(G\)-equivariant structure on \(\mathscr{M}\).
For each affinoid subdomain \(U\) of \(X\) and each \(g\in G_{U}\), the maps \(g^{\mathscr{M}}(U):\mathscr{M}(U)\to\mathscr{M}(U)\) induce, by Lemma 3.2.1, a homomorphism \[G_{U}\to\mathcal{B}(\mathscr{M}(U))^{\times}.\]

**Definition 3.2.3**.: A \(G\)_-equivariant line bundle on \(X\)_ is a \(G\)-equivariant \(\mathcal{O}_{X}\)-module \(\mathscr{L}\) on \(X\) such that

(a) \(\mathscr{L}\) is invertible as an \(\mathcal{O}_{X}\)-module, and
(b) the action map \(G_{U}\to\mathcal{B}(\mathscr{L}(U))^{\times}\) is continuous for every affinoid subdomain \(U\) of \(X\).

**Lemma 3.2.4**.: \(\mathcal{O}_{X}\) with its usual equivariant structure is a \(G\)-equivariant line bundle on \(X\).

Proof.: Let \(U\) be an affinoid subdomain of \(X\). Consider the sup norm \(|\cdot|_{U}\) on \(\mathcal{O}(U)\) whose unit ball is \(\mathcal{A}:=\mathcal{O}(U)^{\circ}\). For each \(n\geqslant 0\), the congruence subgroup \(\Gamma_{n}(\mathcal{O}(U)^{\circ})\) of \(\mathcal{B}(\mathcal{O}(U))^{\times}\) contains the group \(\mathcal{G}_{\pi_{\mathbb{F}}^{n}}(\mathcal{A})\) appearing on [2, p. 19]. Through the Raynaud generic fibre functor rig, \(\mathcal{G}_{\pi_{\mathbb{F}}^{n}}(\mathcal{A})\) can be identified with a subgroup of the group of \(K\)-linear automorphisms \(\operatorname{Aut}_{K}(U,\mathcal{O}_{U})\) of the \(G\)-ringed topological space \((U,\mathcal{O}_{U})\). These subgroups of \(\operatorname{Aut}_{K}(U,\mathcal{O}_{U})\) form a filter base for a certain topology on \(\operatorname{Aut}_{K}(U,\mathcal{O}_{U})\) -- see [2, Theorem 3.1.5] -- and the action map \(G_{U}\to\operatorname{Aut}_{K}(U,\mathcal{O}_{U})\) is continuous with respect to this topology by [2, Definition 3.1.8], because \(G\) is assumed to act continuously on \(X\). It follows that the action map \(G_{U}\to\mathcal{B}(\mathcal{O}(U))^{\times}\) is continuous as required.

A _morphism_ between two \(G\)-equivariant line bundles \(\mathscr{L}\) and \(\mathscr{M}\) is a morphism of \(G\)-equivariant \(\mathcal{O}\)-modules in the sense of [2, Definition 2.3.1(c)]. Given any such morphism \(\varphi:\mathscr{L}\to\mathscr{M}\) and an affinoid subdomain \(U\) of \(X\), the map \(\varphi(U):\mathscr{L}(U)\to\mathscr{M}(U)\) is then an \(\mathcal{O}(U)\)-linear homomorphism between two finitely generated \(\mathcal{O}(U)\)-modules, and is therefore automatically continuous.

**Definition 3.2.5**.: We let \(\operatorname{Pic}^{G}(X)\) denote the set of isomorphism classes of \(G\)-equivariant line bundles on \(X\).

**Lemma 3.2.6**.: Let \(\mathscr{L}\) and \(\mathscr{M}\) be \(G\)-equivariant line bundles on \(X\). Then so are \(\mathscr{L}\otimes_{\mathcal{O}}\mathscr{M}\) and \(\mathscr{L}^{\otimes-1}=\mathpzc{Hom}_{\mathcal{O}}(\mathscr{L},\mathcal{O})\).

Proof.: We can easily verify that the usual formula for tensor product and contragredient representations satisfies Definition 3.2.3.

With respect to these operations \(\operatorname{Pic}^{G}(X)\) is an abelian group with unit given by the structure \(\{g^{\mathcal{O}}\}\) on \(\mathscr{L}=\mathcal{O}_{X}\).

**Proposition 3.2.7**.: Suppose that \(L\) is a finite extension of \(K\) and \(z\in X(L)\).
There is a natural group homomorphism \(\phi_{z}\colon\operatorname{Pic}^{G}(X)\to\operatorname{Hom}(G_{z},L^{\times})\) given by \[\phi_{z}([\mathscr{L}])=\phi_{z,\mathscr{L}}.\]

Proof.: We note that if \(\mathscr{L}\) is a line bundle on \(X\) with a \(G\)-equivariant structure then \(\mathscr{L}(z)\) is a one-dimensional vector space over \(L\), and so, by Lemma 3.2.2, \(\phi_{z,\mathscr{L}}\) can be viewed as a homomorphism \(G_{z}\to L^{\times}\). Next we show that \(\phi_{z,\mathscr{L}}\) is continuous. To this end we choose an affinoid subdomain \(U\) of \(X\) such that \(z\colon\operatorname{Sp}(L)\to X\) factors through \(U\). Since \(G_{z}\cap G_{U}\) is an open subgroup of \(G_{z}\) it suffices to show that \(\phi_{z,\mathscr{L}}|_{G_{U}\cap G_{z}}\) is continuous. Since the natural map \(\mathcal{B}(\mathscr{L}(U))^{\times}\to L^{\times}\) is continuous this follows from Definition 3.2.3(b). It remains to show that \(\phi_{z}\) is a group homomorphism. This is immediate, because whenever \(\mathscr{L}_{1}\) and \(\mathscr{L}_{2}\) are elements of \(\operatorname{Pic}^{G}(X)\), there is a canonical isomorphism \[(\mathscr{L}_{1}\otimes_{\mathcal{O}}\mathscr{L}_{2})(z)\to\mathscr{L}_{1}(z)\otimes_{L}\mathscr{L}_{2}(z)\] which is compatible with the \(G\)-actions.

This discussion leads us on to the following definition.

**Definition 3.2.8**.: A \(G\)_-equivariant line bundle with flat connection on \(X\)_ is a \(G\)-equivariant \(\mathcal{D}_{X}\)-module \(\mathscr{L}\), such that when \(\mathscr{L}\) is viewed as a \(G\)-equivariant \(\mathcal{O}_{X}\)-module by restriction, it is a \(G\)-equivariant line bundle on \(X\).

It follows easily from Lemma 3.2.4 that \(\mathcal{O}_{X}\) equipped with the trivial connection and its usual equivariant structure is a \(G\)-equivariant line bundle with connection. We can also typically put other \(G\)-equivariant structures on the trivial line bundle with trivial connection by considering \(\operatorname{Hom}(G,K^{\times})\), the abelian group of _continuous_ group homomorphisms from \(G\) to \(K^{\times}\).

**Definition 3.2.9**.: Given \(\chi\in\operatorname{Hom}(G,K^{\times})\) we define a new \(G\)-equivariant structure on \(\mathcal{O}\) equipped with the trivial connection: for each affinoid subdomain \(U\) of \(X\) and for each \(g\in G\) we define \(K\)-linear continuous maps \[g^{\mathcal{O}_{\chi}}(U)\colon\mathcal{O}(U)\longrightarrow\mathcal{O}(gU)\] by \(f\mapsto\chi(g)g^{\mathcal{O}}(f)\). This family \(\{g^{\mathcal{O}_{\chi}}\}\) then defines a \(G\)-equivariant line bundle with connection on \(X\) in the sense of Definition 3.2.8 that we will denote by \(\mathcal{O}_{\chi}\). Note that condition (b) in Definition 3.2.8 follows from the assumption that \(\chi:G\to K^{\times}\) is continuous.

**Lemma 3.2.10**.: Let \(\mathscr{L}\) and \(\mathscr{M}\) be \(G\)-equivariant line bundles with flat connection. Then so are \(\mathscr{L}\otimes_{\mathcal{O}}\mathscr{M}\) and \(\mathscr{L}^{\otimes-1}=\mathpzc{Hom}(\mathscr{L},\mathcal{O})\).

Proof.: We saw at the start of §3.1 that \(\mathscr{L}\otimes_{\mathcal{O}}\mathscr{M}\) and \(\mathscr{L}^{\otimes-1}\) are \(\mathcal{D}\)-modules on \(X\). The usual formulas for the tensor product and contragredient representations allow us to see that they also carry standard \(G\)-equivariant \(\mathcal{D}\)-module structures, and Lemma 3.2.6 shows that this makes them \(G\)-equivariant line bundles.
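
Before moving on, it may be worth making explicit why the twisted maps of Definition 3.2.9 do satisfy the cocycle condition (3); this is a routine check, recorded here only for convenience, and it uses nothing beyond the \(K\)-linearity of each \(g^{\mathcal{O}}\) and the fact that \(\chi\) is a group homomorphism. In the abused notation of (3), for all \(g,h\in G\) and \(f\in\mathcal{O}\) we have \[g^{\mathcal{O}_{\chi}}\big(h^{\mathcal{O}_{\chi}}(f)\big)=g^{\mathcal{O}_{\chi}}\big(\chi(h)\,h^{\mathcal{O}}(f)\big)=\chi(g)\chi(h)\,g^{\mathcal{O}}\big(h^{\mathcal{O}}(f)\big)=\chi(gh)\,(gh)^{\mathcal{O}}(f)=(gh)^{\mathcal{O}_{\chi}}(f),\] while \(1^{\mathcal{O}_{\chi}}=1_{\mathcal{O}}\) because \(\chi(1)=1\). Compatibility with the trivial connection is clear, since each \(g^{\mathcal{O}_{\chi}}\) differs from \(g^{\mathcal{O}}\) only by the constant scalar \(\chi(g)\in K^{\times}\).
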
**Definition 3.2.11**.: We denote the set of isomorphism classes of \(G\)-equivariant line bundles with flat connection on \(X\) by \(\operatorname{PicCon}^{G}(X)\).

In view of Lemma 3.2.10, the operations \(-\otimes_{\mathcal{O}}-\) and \((-)^{\otimes-1}\) endow \(\operatorname{PicCon}^{G}(X)\) with the structure of an abelian group. The unit element in this group is given by the isomorphism class of \(\mathcal{O}_{X}\) equipped with the trivial connection together with its usual \(G\)-equivariant structure.

**Definition 3.2.12**.: We define \(\operatorname{Con}^{G}(X)\) by \[\operatorname{Con}^{G}(X):=\ker(\operatorname{PicCon}^{G}(X)\to\operatorname{Pic}(X)),\] the group of isomorphism classes of \(G\)-equivariant line bundles with flat connection on \(X\) that are trivial after forgetting the connection and the \(G\)-action.

We record that \(\operatorname{Con}\) is functorial in a natural way.

**Proposition 3.2.13**.: Let \(U\) be a geometrically connected admissible open subset of \(X\), and suppose that \(H\) is a closed subgroup of \(G_{U}\). Then for each \(g\in G\),

(a) the map \([\mathscr{L}]\mapsto[g_{*}\mathscr{L}]\) induces a natural isomorphism \[g\colon\operatorname{PicCon}^{H}(U)\quad\stackrel{{\cong}}{{\longrightarrow}}\quad\operatorname{PicCon}^{{}^{g}H}(gU);\]
(b) for \(L\) a finite extension of \(K\) and \(z\in U(L)\), the following diagram commutes: \[\begin{array}{ccc}\operatorname{PicCon}^{H}(U)&\xrightarrow{\ \phi_{z}\ }&\operatorname{Hom}(H_{z},L^{\times})\\ {\scriptstyle g}\big\downarrow&&\big\uparrow{\scriptstyle c_{g}^{*}}\\ \operatorname{PicCon}^{{}^{g}H}(gU)&\xrightarrow{\ \phi_{g\cdot z}\ }&\operatorname{Hom}({}^{g}H_{z},L^{\times}),\end{array}\] where \(c_{g}^{*}\) denotes pre-composition with the conjugation map \(c_{g}:H_{z}\to{}^{g}H_{z}\), \(h\mapsto ghg^{-1}\).

Proof.: (a) We've already seen in Remark 3.1.9 that \(\mathscr{L}\mapsto g_{*}\mathscr{L}\) induces an isomorphism \(\operatorname{PicCon}(U)\to\operatorname{PicCon}(gU)\). It remains to see how the \(H\)-equivariant structure on \(\mathscr{L}\in\operatorname{PicCon}^{H}(U)\) induces an \({}^{g}H\)-equivariant structure on \(g_{*}\mathscr{L}\): for each \(h\in H\) we have an isomorphism \(h^{\mathscr{L}}\colon\mathscr{L}\to h^{*}\mathscr{L}\). This induces an isomorphism \[(ghg^{-1})^{g_{*}\mathscr{L}}\colon g_{*}\mathscr{L}\to(ghg^{-1})^{*}g_{*}\mathscr{L}=g_{*}h^{*}\mathscr{L}\] given by \((ghg^{-1})^{g_{*}\mathscr{L}}=g_{*}(h^{\mathscr{L}})\). It is easy to verify that this induces the desired isomorphism \(\operatorname{PicCon}^{H}(U)\to\operatorname{PicCon}^{{}^{g}H}(gU)\).

(b) Fix \([\mathscr{L}]\in\operatorname{PicCon}^{H}(U)\) and consider the stalk \((g_{*}\mathscr{L})_{g\cdot z}\) of \(g_{*}\mathscr{L}\) at \(g\cdot z\in gU\). There is a natural bijection between the affinoid subdomains of \(gU\) containing \(g\cdot z\), and the affinoid subdomains of \(U\) containing \(z\), given by \(V\mapsto g^{-1}V\). This gives a \(K\)-linear isomorphism \(\tau_{g}:\mathscr{L}_{z}\to(g_{*}\mathscr{L})_{g\cdot z}\) which is appropriately equivariant with respect to the \(H_{z}\)-action on \(\mathscr{L}_{z}\) and the \(H_{g\cdot z}={}^{g}H_{z}\)-action on \((g_{*}\mathscr{L})_{g\cdot z}\): \[\tau_{g}(h\cdot m)=c_{g}(h)\cdot\tau_{g}(m)\quad\text{for all}\quad h\in H_{z},m\in\mathscr{L}_{z}.\] Now let \(h\in H_{z}\).
Using Lemma 3.2.2, we see that the scalar \(\phi_{z}([\mathscr{L}])(h)\in L^{\times}\) is completely determined by the following equation inside \(\mathscr{L}(z)\): \[1\otimes h\cdot m=\phi_{z}([\mathscr{L}])(h)\otimes m\quad\text{for all}\quad m\in\mathscr{L}_{z}.\] Since \(c_{g}(h)=ghg^{-1}\) lies in \(H_{g\cdot z}\), we have a similar equation inside \((g_{*}\mathscr{L})(g\cdot z)\): \[1\otimes c_{g}(h)\cdot m=\phi_{g\cdot z}([g_{*}\mathscr{L}])(c_{g}(h))\otimes m\quad\text{for all}\quad m\in(g_{*}\mathscr{L})_{g\cdot z}.\] Note that the map \(\tau_{g}\) satisfies \(\tau_{g}(a\cdot m)=g_{z}^{\mathcal{O}}(a)\cdot\tau_{g}(m)\) for all \(a\in\mathcal{O}_{X,z},m\in\mathscr{L}_{z}\). Therefore \(1\otimes\tau_{g}:L\otimes_{K}\mathscr{L}_{z}\to L\otimes_{K}(g_{*}\mathscr{L})_{g\cdot z}\) descends to a well-defined \(L\)-linear map \(\mathscr{L}(z)\to(g_{*}\mathscr{L})(g\cdot z)\). Applying this map to the first equation and comparing the result with the second shows that \[\phi_{z}([\mathscr{L}])(h)=\phi_{g\cdot z}([g_{*}\mathscr{L}])(c_{g}(h))=(c_{g}^{*}\circ\phi_{g\cdot z}\circ g)([\mathscr{L}])(h)\quad\text{for all}\quad h\in H_{z}.\] This implies the commutativity of the diagram in the statement.

Forgetting the \(G\)-equivariant structure gives us a functor \(\underline{\omega}\) from the category of \(G\)-equivariant line bundles with flat connection on \(X\) and isomorphisms between them to the category of line bundles with flat connection on \(X\) and isomorphisms between them. Moreover \(\underline{\omega}\) induces a group homomorphism \[\omega\colon\operatorname{PicCon}^{G}(X)\longrightarrow\operatorname{PicCon}(X).\]

**Proposition 3.2.14**.: Suppose that \(X\) is quasi-Stein and geometrically connected. There is an exact sequence of abelian groups \[0\to\operatorname{Hom}(G,K^{\times})\to\operatorname{PicCon}^{G}(X)\xrightarrow{\omega}\operatorname{PicCon}(X)^{G}\] with the first non-trivial map given by \(\chi\mapsto\mathcal{O}_{\chi}\).

Proof.: It is easy to verify that \(\chi\mapsto\mathcal{O}_{\chi}\) defines a group homomorphism from \(\operatorname{Hom}(G,K^{\times})\) to \(\operatorname{PicCon}^{G}(X)\). Moreover we observe that for any \(G\)-equivariant line bundle with flat connection \(\mathscr{L}\), the space of global horizontal sections of \(\mathscr{L}\) (recall that \(\operatorname{Hom}_{\mathcal{O}}(\mathcal{O},\mathcal{F})=\mathcal{F}(X)\) for _any_ \(\mathcal{O}\)-module \(\mathcal{F}\) and any ringed space \((X,\mathcal{O})\)) \[\operatorname{Hom}_{\mathcal{D}}(\mathcal{O},\mathscr{L})=\mathscr{L}(X)^{\mathcal{T}(X)=0} \tag{8}\] is a \(K\)-linear \(G\)-representation: if \(g\in G\) and \(\mathcal{T}(X)\cdot v=0\) for some \(v\in\mathscr{L}(X)\), then \(\partial\cdot(g\cdot v)=g\cdot((g^{-1}\cdot\partial)\cdot v)=0\) for all \(\partial\in\mathcal{T}(X)\) so that \(g\cdot v\in\mathscr{L}(X)^{\mathcal{T}(X)=0}\) again. Suppose that \(\chi\in\operatorname{Hom}(G,K^{\times})\) is such that \(\mathcal{O}_{\chi}\) is isomorphic to \(\mathcal{O}\) as a \(G\)-equivariant line bundle with flat connection. Considering the global horizontal sections, we obtain an isomorphism of continuous \(G\)-representations \[\mathcal{O}(X)^{\mathcal{T}(X)=0}\quad\cong\quad\mathcal{O}_{\chi}(X)^{\mathcal{T}(X)=0}.\] By Proposition 3.1.4, both of these \(K\)-vector spaces are \(1\)-dimensional and spanned by \(1\in\mathcal{O}(X)\). However the \(G\)-action on the first is trivial, whereas the \(G\)-action on the second is through the character \(\chi\).
Hence \(\chi\) is the trivial character, and the map \(\operatorname{Hom}(G,K^{\times})\to\operatorname{PicCon}^{G}(X)\) is injective. If \(\mathscr{L}\) is \(G\)-equivariant, then \((g^{-1})^{\mathscr{L}}:\mathscr{L}\to g_{*}\mathscr{L}\) is a \(\mathcal{D}\)-linear isomorphism for all \(g\in G\) which means that the class \([\mathscr{L}]\) in \(\operatorname{PicCon}(X)\) is fixed by this \(G\)-action, i.e. the image of the map \(\operatorname{PicCon}^{G}(X)\to\operatorname{PicCon}(X)\) is indeed contained in \(\operatorname{PicCon}(X)^{G}\). Finally suppose that \(\mathscr{L}\) is a \(G\)-equivariant line bundle with flat connection on \(X\) which becomes trivial after forgetting the \(G\)-structure. Then we can find a \(\mathcal{D}\)-linear isomorphism \(\varphi:\mathcal{O}\xrightarrow{\cong}\mathscr{L}\). Applying the functor of global horizontal sections together with Proposition 3.1.4, we deduce that the \(K\)-linear \(G\)-representation \(\mathscr{L}(X)^{\mathcal{T}(X)=0}\) is in fact one-dimensional. Let \(v:=\varphi(X)(1)\in\mathscr{L}(X)^{\mathcal{T}(X)=0}\); since \(1\in\mathcal{O}(X)\) generates \(\mathcal{O}\) as an \(\mathcal{O}\)-module, we see that \(v\in\mathscr{L}(X)\) generates \(\mathscr{L}\) as an \(\mathcal{O}\)-module: \(\mathscr{L}=\mathcal{O}\cdot v\). Let \(\chi:G\to K^{\times}\) describe the \(G\)-action on \(v\), so that \(g\cdot v=\chi(g)v\) for all \(g\in G\). This gives us a character \(\chi:G\to K^{\times}\) and we will show that \(\chi\) is continuous. To see this, choose a non-empty affinoid subdomain \(U\) of \(X\) and let \(\mathcal{A}\) be an affine formal model in \(\mathcal{O}(U)\), so that \(\mathcal{A}\cdot v_{|U}\) is the unit ball in \(\mathscr{L}(U)=\mathcal{O}(U)\cdot v_{|U}\) with respect to some \(K\)-Banach norm defining the canonical topology on \(\mathscr{L}(U)\). By Definition 3.2.8(b), for any \(n\geqslant 0\) we can find an open subgroup \(H_{n}\) of \(G_{U}\) such that \(h\cdot v_{|U}\equiv v_{|U}\operatorname{mod}\pi_{F}^{n}\mathcal{A}\cdot v_{|U}\) for all \(h\in H_{n}\). Hence \(\chi(h)\equiv 1\operatorname{mod}\pi_{F}^{n}K^{\circ}\) for all \(h\in H_{n}\), and \(\chi:G\to K^{\times}\) is therefore continuous. With \(\chi\in\operatorname{Hom}(G,K^{\times})\) in hand, consider the \(G\)-equivariant line bundle with connection \(\mathscr{L}\otimes_{\mathcal{O}}\mathcal{O}_{\chi^{-1}}\), and the \(\mathcal{D}\)-linear isomorphism \(\varphi_{\chi}:\mathcal{O}\to\mathscr{L}\otimes_{\mathcal{O}}\mathcal{O}_{\chi^{-1}}\) given by \(\varphi_{\chi}(f):=\varphi(f)\otimes 1\). We claim that \(\varphi_{\chi}\) is \(G\)-equivariant; given this claim, it follows immediately that \(\mathscr{L}\cong\mathcal{O}_{\chi}\) as a \(G\)-equivariant \(\mathcal{D}\)-module, and then \([\mathscr{L}]\in\operatorname{PicCon}^{G}(X)\) lies in the image of \(\operatorname{Hom}(G,K^{\times})\) as required. To establish the claim, we first check that \(\varphi_{\chi}(X):\mathcal{O}(X)\to(\mathscr{L}\otimes\mathcal{O}_{\chi^{-1}})(X)\) is \(G\)-equivariant: for \(g\in G\) and \(f\in\mathcal{O}(X)\) we have \[g\cdot\varphi_{\chi}(X)(f)=g\cdot(fv\otimes 1)=(g\cdot f)(g\cdot v)\otimes\chi^{-1}(g)=(g\cdot f)v\otimes 1=\varphi_{\chi}(X)(g\cdot f).\] By replacing \(\mathscr{L}\) by \(\mathscr{L}\otimes\mathcal{O}_{\chi^{-1}}\), it now remains to show that if \(\varphi:\mathcal{O}\to\mathscr{L}\) is a \(\mathcal{D}\)-linear isomorphism such that \(\varphi(X):\mathcal{O}(X)\to\mathscr{L}(X)\) is \(G\)-equivariant, then \(\varphi\) is also \(G\)-equivariant.
To this end, fix \(g\in G\), and consider the morphisms \(g^{*}(\varphi)\circ g^{\mathcal{O}}:\mathcal{O}\to g^{*}\mathscr{L}\) and \(g^{\mathscr{L}}\circ\varphi:\mathcal{O}\to g^{*}\mathscr{L}\). By precomposing the \(g^{*}\mathcal{O}\)-module structure on \(g^{*}\mathscr{L}\) with the ring isomorphism \(g^{\mathcal{O}}:\mathcal{O}\to g^{*}\mathcal{O}\) we may regard \(g^{*}\mathscr{L}\) as an \(\mathcal{O}\)-module. Then it is coherent, and the two maps are \(\mathcal{O}\)-linear. Since they have the same global sections by our computation above and since \(X\) is quasi-Stein, we conclude using [27, Corollary 3.3] that the two maps are equal. This means that \(\varphi\) is \(G\)-equivariant.

In order to understand \(\operatorname{PicCon}^{G^{0}}(\Omega)\), the following result will prove to be useful.

**Proposition 3.2.15**.: Suppose that \(X\) is quasi-Stein and geometrically connected and \(G\) is equal to an amalgamated product \(A\ast_{C}B\) of its open subgroups \(A\) and \(B\) along their common subgroup \(C\). Then the homomorphism \[(p_{A},p_{B})\colon\operatorname{PicCon}^{G}(X)\to\operatorname{PicCon}^{A}(X)\underset{\operatorname{PicCon}^{C}(X)}{\times}\operatorname{PicCon}^{B}(X)\] induced by restriction of equivariant structures is an isomorphism.

Proof.: Suppose that \([\mathscr{L}]\in\operatorname{PicCon}^{G}(X)\) lies in the kernel of \((p_{A},p_{B})\). Then \[\omega([\mathscr{L}])=[\mathcal{O}]\in\operatorname{PicCon}(X).\] By Proposition 3.2.14, \([\mathscr{L}]=[\mathcal{O}^{\chi}]\) for some \(\chi\in\operatorname{Hom}(G,K^{\times})\). However, also by Proposition 3.2.14, \(\chi|_{A}\) and \(\chi|_{B}\) are both trivial and so, as \(G\) is generated by \(A\) and \(B\), \(\chi\) is trivial. Thus \([\mathscr{L}]=[\mathcal{O}]\in\operatorname{PicCon}^{G}(X)\) and \((p_{A},p_{B})\) is injective. Suppose now that \([\mathscr{L}_{A}]\in\operatorname{PicCon}^{A}(X)\), \([\mathscr{L}_{B}]\in\operatorname{PicCon}^{B}(X)\) and that there is an isomorphism \(\theta\colon\mathscr{L}_{A}|_{C}\xrightarrow{\cong}\mathscr{L}_{B}|_{C}\) of \(C\)-equivariant line bundles with flat connection obtained by restriction of equivariant structures from \(A\) and \(B\) to \(C\). We transport the \(B\)-equivariant structure on \(\mathscr{L}_{B}\) along \(\theta\) to \(\mathscr{L}_{A}\). In this way, the \(\mathcal{D}\)-module \(\mathscr{L}:=\underline{\omega}(\mathscr{L}_{A})\) can be equipped with \(A\)-equivariant and \(B\)-equivariant structures whose restrictions to \(C\) agree. By Proposition 2.4.11, for every subgroup \(H\) of \(G\) there is a bijection between the set of all \(H\)-\(\mathcal{D}\)-module structures on \(\mathscr{L}\) extending the given \(\mathcal{D}\)-module structure on \(\mathscr{L}\), and the set \(\mathcal{S}_{\mathcal{D}}(H,\mathscr{L})\). This bijection is given by \[\{g^{\mathscr{L}}\}_{g\in H}\mapsto[g\mapsto(g,g^{\mathscr{L}})\in\operatorname{Aut}_{\mathcal{D}}(\mathscr{L}/X/H)].\] On the other hand, by Theorem 2.4.13 restriction induces a bijection \[\mathcal{S}_{\mathcal{D}}(G,\mathscr{L})\xrightarrow{\cong}\mathcal{S}_{\mathcal{D}}(A,\mathscr{L})\times_{\mathcal{S}_{\mathcal{D}}(C,\mathscr{L})}\mathcal{S}_{\mathcal{D}}(B,\mathscr{L}).\] It follows that there exists a \(G\)-\(\mathcal{D}\)-module \(\mathscr{L}_{G}=(\mathscr{L},\{g^{\mathscr{L}}\}_{g\in G})\) whose restriction to \(A\) is \(\mathscr{L}_{A}\), and whose restriction to \(B\) is the transport of \(\mathscr{L}_{B}\) to \(\mathscr{L}_{A}\) along \(\theta\).
Since \(A\) and \(B\) are open in \(G\), the action map \(G_{U}\to\mathcal{B}(\mathscr{L}(U))^{\times}\) is continuous for every affinoid subdomain \(U\) of \(X\), because the restrictions of this map to both \(A_{U}\) and \(B_{U}\) are continuous. This shows that \([\mathscr{L}_{G}]\in\operatorname{PicCon}^{G}(X)\). By construction, we have \(p_{A}([\mathscr{L}_{G}])=[\mathscr{L}_{A}]\) and \(p_{B}([\mathscr{L}_{G}])=[\mathscr{L}_{B}]\). Hence \((p_{A},p_{B})\) is surjective.

**Lemma 3.2.16**.: If \(X\) is a geometrically connected quasi-Stein space with an admissible cover by an increasing chain \((X_{n})\) of \(G\)-stable affinoid subdomains then the restriction maps \(\operatorname{PicCon}^{G}(X)\to\operatorname{PicCon}^{G}(X_{n})\) induce an isomorphism of groups \[\operatorname{PicCon}^{G}(X)\cong\varprojlim\operatorname{PicCon}^{G}(X_{n}).\]

Proof.: Certainly the restriction maps induce a group homomorphism \[\alpha\colon\operatorname{PicCon}^{G}(X)\to\varprojlim\operatorname{PicCon}^{G}(X_{n}).\] By Corollary 3.1.6, if \([\mathscr{L}]\in\ker\alpha\) then \(\omega([\mathscr{L}])=[\mathcal{O}_{X}]\in\operatorname{PicCon}(X)\). Thus \([\mathscr{L}]=[\mathcal{O}_{X}^{\chi}]\) for some continuous character \(\chi\colon G\to K^{\times}\) by Proposition 3.2.14. But then \([\mathscr{L}|_{X_{n}}]=[\mathcal{O}_{X_{n}}^{\chi}]\) for any \(n\geqslant 0\) and so \(\chi\) is the trivial character since \(\alpha([\mathscr{L}])\) is trivial. Thus \(\alpha\) is injective. Let \(([\mathscr{L}_{n}])\in\prod\operatorname{PicCon}^{G}(X_{n})\) be a compatible family of isomorphism classes of equivariant line bundles with connection under restriction so that for each \(n\) we can find an isomorphism \[\theta_{n+1,n}\colon\mathscr{L}_{n+1}|_{X_{n}}\to\mathscr{L}_{n}\] of \(G\)-equivariant line bundles with connection on \(X_{n}\). Thus for each \(l\geqslant m\), we can define \(\theta_{m,l}\colon\mathscr{L}_{l}|_{X_{m}}\to\mathscr{L}_{m}\) to be the composite of the restrictions of \(\theta_{l,l-1},\theta_{l-1,l-2},\ldots,\theta_{m+1,m}\) to \(X_{m}\). Now \((\mathscr{L}_{l},\theta_{m,l})\) forms gluing data for the cover \((X_{m})_{m\geqslant 0}\) of \(X\). The resulting sheaf \(\mathscr{L}\) is a \(G\)-equivariant line bundle with connection on \(X\) with \(\alpha([\mathscr{L}])=([\mathscr{L}_{n}])\) and so \(\alpha\) is surjective.

### 3.3. Cocycles and equivariant line bundles on affinoids

In this technical subsection, we will explain how isomorphism classes of \(G\)-equivariant structures on the trivial line bundle over a \(K\)-affinoid variety \(X\) can be classified through the language of continuous \(1\)-cocycles of \(G\) acting on \(\mathcal{O}(X)^{\times}\). This material will be crucial to the proof of one of the main results in §4.4, namely Theorem 4.4.1. We assume throughout §3.3 that \(X\) is a smooth and geometrically connected \(K\)-**affinoid** space, with a topological group \(G\) acting continuously on it.

**Lemma 3.3.1**.: 
(a) The set of \(G\)-equivariant structures on a trivial line bundle \(\mathscr{L}=\mathcal{O}\cdot v\) is in natural bijection with \(Z^{1}(G,\mathcal{O}(X)^{\times})\) under a function that sends \(\{g^{\mathscr{L}}:g\in G\}\) to the function \(\alpha:G\to\mathcal{O}(X)^{\times}\) determined by the rule \[g^{\mathscr{L}}(v)=\alpha(g)v\quad\text{for all}\quad g\in G.\]
(b)
The bijection in (a) induces an isomorphism \[\theta_{X}^{G}:\ker\left(\operatorname{Pic}^{G}(X)\to\operatorname{Pic}(X)\right)\stackrel{{\cong}}{{\longrightarrow}}H^{1}(G,\mathcal{O}(X)^{\times}).\] Proof.: (a) Suppose that for each \(g\in G\), we have a morphism of sheaves of \(K\)-vector spaces \(g^{\mathscr{L}}:\mathscr{L}\to g^{*}\mathscr{L}\) such that \[g^{\mathscr{L}}(fv)=g^{\mathcal{O}}(f)g^{\mathscr{L}}(v)\quad\text{for all}\quad g\in G,f\in\mathcal{O}.\] This data is completely determined by the function \(\alpha\colon G\to\mathcal{O}(X)\) given by \[g^{\mathscr{L}}(v)=\alpha(g)v\quad\text{for all}\quad g\in G.\] We first claim that \(\{g^{\mathscr{L}}:g\in G\}\) is a \(G\)-equivariant structure on \(\mathscr{L}\) if and only if \(\alpha\) is a \(1\)-cocycle with values in the group \(\mathcal{O}(X)^{\times}\). We see that for all \(g,h\in G\) \[h^{*}(g^{\mathscr{L}})h^{\mathscr{L}}(v)=h^{*}(g^{\mathscr{L}})(\alpha(h)v)=(g\cdot\alpha(h))\alpha(g)v\text{ and} \tag{9}\] \[(gh)^{\mathscr{L}}(v)=\alpha(gh)v. \tag{10}\] Now if \(\{g^{\mathscr{L}}:g\in G\}\) defines a \(G\)-equivariant structure on \(\mathscr{L}\) then \(\alpha(1)=1\) and \(h^{*}(g^{\mathscr{L}})h^{\mathscr{L}}(v)=(gh)^{\mathscr{L}}(v)\). Thus by (9) and (10) \(\alpha(gh)=(g\cdot\alpha(h))\alpha(g)\) for all \(g,h\in G\) and in particular \[1=\alpha(gg^{-1})=(g\cdot\alpha(g^{-1}))\alpha(g)\] for all \(g\in G\). Thus \(\alpha\) is a \(1\)-cocycle with values in \(\mathcal{O}(X)^{\times}\). Conversely, if \(\alpha\) is a \(1\)-cocycle then by (9) and (10) again, for all \(g,h\in G\), \[(gh)^{\mathscr{L}}(v)=h^{*}(g^{\mathscr{L}})h^{\mathscr{L}}(v)\text{ and so}\] \[(gh)^{\mathscr{L}}=h^{*}(g^{\mathscr{L}})h^{\mathscr{L}}.\] Moreover \(\alpha(1^{2})=\alpha(1)^{2}\) and so, since \(X\) is connected, \(\alpha(1)=1\) and \(1^{\mathscr{L}}=\operatorname{id}_{\mathscr{L}}\). It remains to observe that \(\alpha\) is continuous if and only if for every affinoid subdomain \(U\) of \(X\), the action map \(G_{U}\to\mathcal{B}(\mathscr{L}(U))^{\times}\) is continuous. This holds because \[g\cdot(fv)=g^{\mathcal{O}}(f)\alpha(g)v\quad\text{ for all }\quad g\in G_{U},f\in\mathcal{O}(U)\] and because \(G\) acts continuously on \(X\). (b) Suppose now that \(\mathscr{L}_{1}=\mathcal{O}\cdot v_{1}\) and \(\mathscr{L}_{2}=\mathcal{O}\cdot v_{2}\) are two \(G\)-equivariant line bundles corresponding to \(1\)-cocycles \(\alpha_{1}\) and \(\alpha_{2}\) respectively. Let \(\varphi\colon\mathscr{L}_{1}\to\mathscr{L}_{2}\) be an isomorphism of the underlying line bundles. Then \(\varphi(v_{1})=fv_{2}\) for some \(f\in\mathcal{O}(X)^{\times}\), so for all \(g\in G\) we have \[\varphi(g^{\mathscr{L}_{1}}(v_{1}))=\varphi(\alpha_{1}(g)v_{1})=\alpha_{1}(g)fv_{2}\] whereas \[g^{\mathscr{L}_{2}}(\varphi(v_{1}))=g^{\mathscr{L}_{2}}(fv_{2})=g^{\mathcal{O}}(f)\alpha_{2}(g)v_{2}.\] Hence \(\varphi\) defines an isomorphism of \(G\)-_equivariant_ line bundles if and only if \[\alpha_{2}(g)=\frac{g^{\mathcal{O}}(f)}{f}\alpha_{1}(g)\quad\text{ for all }\quad g\in G.\] Thus the map in (a) induces a bijection \[\theta_{X}^{G}\colon\ker\left(\operatorname{Pic}^{G}(X)\to\operatorname{Pic}(X)\right)\to H^{1}(G,\mathcal{O}(X)^{\times})\] which is a group homomorphism because \(g\cdot(v_{1}\otimes v_{2})=\alpha_{1}(g)v_{1}\otimes\alpha_{2}(g)v_{2}=(\alpha_{1}\alpha_{2})(g)(v_{1}\otimes v_{2})\) for all \(g\in G\). 
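As a simple illustration of Lemma 3.3.1, any continuous character \(\chi\in\operatorname{Hom}(G,K^{\times})\) defines a constant \(1\)-cocycle \(\alpha(g):=\chi(g)\), because \(G\) acts on \(\mathcal{O}(X)\) by \(K\)-algebra automorphisms and therefore fixes the constants. The corresponding \(G\)-equivariant structure on \(\mathcal{O}\cdot v\) is \(g^{\mathscr{L}}(v)=\chi(g)v\), which recovers the twist \(\mathcal{O}^{\chi}\) considered in §3.2; moreover, by part (b), two characters \(\chi_{1},\chi_{2}\) give isomorphic \(G\)-equivariant line bundles precisely when \(\chi_{2}/\chi_{1}\) is of the form \(g\mapsto g^{\mathcal{O}}(f)/f\) for some \(f\in\mathcal{O}(X)^{\times}\). 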
Recall the map \(\phi_{z}\) from Proposition 3.2.7; by abuse of notation, we will also denote its pre-composition with the forgetful map \(\operatorname{Con}^{G}(X)\to\operatorname{Pic}^{G}(X)\) by \(\phi_{z}\). **Proposition 3.3.2**.: Suppose that \(L\) is a finite field extension of \(K\). 1. The isomorphism from Lemma 3.3.1(b) induces a homomorphism \[\phi_{X}^{G}\colon\operatorname{Con}^{G}(X)\to H^{1}(G,\mathcal{O}(X)^{\times})\] by pre-composition with the forgetful map \[\operatorname{Con}^{G}(X)\to\ker(\operatorname{Pic}^{G}(X)\to\operatorname{Pic}(X)).\] 2. For every \(z\in X(L)\) and every \([\mathscr{L}]\in\operatorname{Con}^{G}(X)\), we have \[z\circ(\operatorname{res}^{G}_{G_{z}}\phi_{X}^{G}([\mathscr{L}]))=\phi_{z}([\mathscr{L}]).\] 3. For every \(z\in X(L)\) and every \(\chi\in\operatorname{Hom}(G,K^{\times})\), we have \[\phi_{z}([\mathcal{O}_{\chi}])=\chi|_{G_{z}}.\] 4. Let \(Y\subseteq X\) be a \(G\)-stable affinoid subdomain, with \(z\in Y(L)\subseteq X(L)\). Then the following diagram is commutative: \[\begin{array}{ccc}\operatorname{Con}^{G}(X)&\xrightarrow{\ \phi_{z}\ }&\operatorname{Hom}(G_{z},L^{\times})\\ {\scriptstyle(-)|_{Y}}\big\downarrow&&\big\|\\ \operatorname{Con}^{G}(Y)&\xrightarrow{\ \phi_{z}\ }&\operatorname{Hom}(G_{z},L^{\times}).\end{array}\] Proof.: (a) Since the forgetful map \({\rm Con}^{G}(X)\to\ker({\rm Pic}^{G}(X)\to{\rm Pic}(X))\) is a group homomorphism, the function \(\phi_{X}^{G}\) that is the composite of this forgetful map with \(\theta_{X}^{G}\) is also a homomorphism. (b) Suppose that \(\phi_{X}^{G}([\mathscr{L}])=[\alpha]\in H^{1}(G,\mathcal{O}(X)^{\times})\) so that we can write \(\mathscr{L}=\mathcal{O}\cdot v\) with \(g\cdot v=\alpha(g)v\) for all \(g\in G\). Then working inside \(L\otimes_{\mathcal{O}(X)}\mathscr{L}(X)\) we have \[\phi_{z}(g)(1\otimes v)=g\cdot(1\otimes v)=1\otimes\alpha(g)v=(z\circ\alpha)(g)\otimes v\quad\text{for all}\quad g\in G_{z}.\] (c) This follows from Lemma 3.3.1(a) and Definition 3.2.9. (d) If \([\mathscr{L}]=[\mathcal{O}_{X}\cdot v]\in{\rm Con}^{G}(X)\), then \([\mathscr{L}|_{Y}]=[\mathcal{O}_{Y}\cdot v]\in{\rm Con}^{G}(Y)\) and \(g^{\mathscr{L}|_{Y}}(v)=g^{\mathscr{L}}(v)\) for all \(g\in G\). Now use Lemma 3.3.1 together with part (a). **Definition 3.3.3**.: Recall from §1.6 the map \(\delta_{G}\colon\mathcal{O}(X)^{\times}\to Z^{1}(G,\mathcal{O}(X)^{\times})\), given by \(\delta_{G}(u)(g)=g\cdot u/u\). For each \(u\in\mathcal{O}(X)^{\times}\) and \(d,e\geqslant 1\), we define \[\mathcal{Z}_{u,d,e}^{G,X}:=\left\{\alpha\in Z^{1}(G,\mathcal{O}(X)^{\times}):\alpha^{de}=\delta_{G}(u^{e})\right\}.\] This special set of \(1\)-cocycles will be useful for our explicit construction of torsion equivariant line bundles with flat connection. Recall from Lemma 3.1.10 the line bundle with connection \(\mathscr{L}_{u,d}\): it is the free \(\mathcal{O}_{X}\)-module on the \(1\)-element set \(\{v\}\), and the action of \(\mathcal{T}(X)\) is determined by \[\partial(v)=\frac{1}{d}\frac{\partial(u)}{u}v\quad\text{for all}\quad\partial\in\mathcal{T}(X). \tag{11}\] **Lemma 3.3.4**.: Let \(u\in\mathcal{O}(X)^{\times}\) and let \(d,e\geqslant 1\). 1. For each \(\alpha\in\mathcal{Z}_{u,d,e}^{G,X}\), there is a \((de)\)-torsion \(G\)-equivariant line bundle with connection \(\mathscr{L}_{u,d}^{\alpha}\) on \(X\) such that \[\omega([\mathscr{L}_{u,d}^{\alpha}])=\mathscr{L}_{u,d}\quad\text{and}\quad\phi_{X}^{G}([\mathscr{L}_{u,d}^{\alpha}])=[\alpha]\in H^{1}(G,\mathcal{O}(X)^{\times}).\] 2. 
If \(u,w\in\mathcal{O}(X)^{\times}\), \(\alpha\in\mathcal{Z}_{u,d,e}^{G,X}\) and \(\beta\in\mathcal{Z}_{w,d,e}^{G,X}\), then \(\alpha\beta\in\mathcal{Z}_{uw,d,e}^{G,X}\) and \[\mathscr{L}_{u,d}^{\alpha}\otimes\mathscr{L}_{w,d}^{\beta}\cong\mathscr{L}_{uw,d}^{\alpha\beta}.\] 3. For each \(\alpha\in\mathcal{Z}_{u,d,e}^{G,X}\), the following are equivalent: 1. \([\mathscr{L}_{u,d}^{\alpha}]=[\mathcal{O}]\) in \({\rm Con}^{G}(X)[de]\), 2. there is \(f\in\mathcal{O}(X)^{\times}\) such that \(u/f^{d}\in K^{\times}\) and \(\alpha=\delta_{G}(f)\). 4. The map \(\alpha\mapsto[\mathscr{L}_{u,d}^{\alpha}]\) defines a bijection \[\mathcal{Z}_{u,d,e}^{G,X}\stackrel{{\cong}}{{\longrightarrow}}\left\{[\mathscr{L}]\in{\rm Con}^{G}(X)[de]:\omega([\mathscr{L}])=[\mathscr{L}_{u,d}]\right\}.\] Proof.: (a) We equip \(\mathscr{L}_{u,d}\) with the \(G-\mathcal{O}_{X}\)-module structure associated to \(\alpha\) by Lemma 3.3.1. In particular the action map \(G_{U}\to\mathcal{B}(\mathscr{L}(U))^{\times}\) is continuous for every affinoid subdomain \(U\) of \(X\). We check that this is in fact a \(G-\mathcal{D}_{X}\)-module. Using \(\alpha^{de}=\delta_{G}(u^{e})\), we compute \[\frac{\partial(\alpha(g))}{\alpha(g)}=\frac{1}{d}\left(\frac{\partial(g\cdot u)}{g\cdot u}-\frac{\partial(u)}{u}\right)\quad\text{for each}\quad\partial\in\mathcal{T}_{X},g\in G.\] Using this together with (11) we compute \[(g\cdot\partial)(g\cdot v) = (g\cdot\partial)(\alpha(g))v+\alpha(g)(g\cdot\partial)(v)\] \[= \frac{1}{d}\left(\frac{(g\cdot\partial)(g\cdot u)}{g\cdot u}-\frac{(g\cdot\partial)u}{u}\right)\alpha(g)v+\alpha(g)\frac{1}{d}\frac{(g\cdot\partial)(u)}{u}v\] \[= \frac{1}{d}\frac{g\cdot(\partial(u))}{g\cdot u}g\cdot v\] \[= g\cdot(\partial(v)).\] Thus we obtain a \(G\)-equivariant line bundle with connection on \(X\) that we denote \(\mathscr{L}_{u,d}^{\alpha}\). It is evident that \(\omega\left([\mathscr{L}_{u,d}^{\alpha}]\right)=[\mathscr{L}_{u,d}]\). Using (11), we see that \(v^{\otimes de}\mapsto u^{e}\) defines an isomorphism \(\psi:\mathscr{L}_{u,d}^{\otimes de}\stackrel{{\cong}}{{\longrightarrow}}\mathcal{O}_{X}\) of line bundles with flat connection. To establish that \(\mathscr{L}_{u,d}^{\alpha}\) is \((de)\)-torsion, it suffices to show this isomorphism is \(G\)-linear. Since for all \(g\in G\) we have \[g\cdot(v^{\otimes de})=\alpha^{de}(g)v^{\otimes de}\quad\text{and}\quad g\cdot(u^{e})=\delta_{G}(u^{e})(g)u^{e},\] and since \(\alpha^{de}=\delta_{G}(u^{e})=\delta_{G}(u)^{e}\), the \(G\)-linearity follows. (b) Since \(\alpha^{de}=\delta_{G}(u^{e})\) and \(\beta^{de}=\delta_{G}(w^{e})\) and \(\delta_{G}\colon\mathcal{O}(X)^{\times}\to Z^{1}(G,\mathcal{O}(X)^{\times})\) is a group homomorphism, we see that \((\alpha\beta)^{de}=\delta_{G}((uw)^{e})\). Hence \(\alpha\beta\in\mathcal{Z}_{uw,d,e}^{G,X}\). Using Proposition 3.1.11, we see that \([\mathscr{L}_{u,d}\otimes\mathscr{L}_{w,d}]=[\mathscr{L}_{uw,d}]\) in \(\operatorname{Con}(X)[d]\). Using the definition of the \(G\)-equivariant structure on \(\mathscr{L}_{u,d}^{\alpha}\) in part (a) and the definition of the product in \(\operatorname{Con}^{G}(X)\) given in the proof of Lemma 3.2.10, we also see that \([\mathscr{L}_{u,d}^{\alpha}\otimes\mathscr{L}_{w,d}^{\beta}]=[\mathscr{L}_{uw,d}^{\alpha\beta}]\) in \(\operatorname{Con}^{G}(X)\). 
(c) For any \(f\in\mathcal{O}(X)^{\times}\), there is an isomorphism \(\mathcal{O}\stackrel{{\cong}}{{\longrightarrow}}\mathscr{L}_{f^{ d},d}^{\delta_{G}(f)}\) of \(G\)-equivariant line bundles with connection on \(X\), sending \(1\) to \(f^{-1}v\). This gives the equality \([\mathscr{L}_{f^{d},d}^{\delta_{G}(f)}]=[\mathcal{O}]\) in \(\operatorname{Con}^{G}(X)\). Using part (b), we then also have \[[\mathscr{L}_{u,d}^{\alpha}]=[\mathscr{L}_{u,d}^{\alpha}]\cdot[\mathcal{O}]=[ \mathscr{L}_{u,d}^{\alpha}]\cdot[\mathscr{L}_{f^{d},d}^{\delta_{G}(f)}]^{-1}= [\mathscr{L}_{u/f^{d},d}^{\alpha/\delta_{G}(f)}]. \tag{12}\] Suppose now that \(u=\lambda f^{d}\) for some \(\lambda\in K^{\times}\) and some \(f\in\mathcal{O}(X)^{\times}\) such that \(\alpha=\delta_{G}(f)\). Using (12), we then have \([\mathscr{L}_{u,d}^{\alpha}]=[\mathcal{L}_{\lambda,d}^{1}]=[\mathcal{O}]\) in \(\operatorname{Con}^{G}(X)\). Conversely, suppose that \([\mathscr{L}_{u,d}^{\alpha}]=[\mathcal{O}]\) in \(\operatorname{Con}^{G}(X)\). Then \(\omega([\mathscr{L}_{u,d}^{\alpha}])=[\mathscr{L}_{u,d}]=[\mathcal{O}]\) in \(\operatorname{Con}(X)^{G}\), so using Proposition 3.1.11 we can find \(f\in\mathcal{O}(X)^{\times}\) and \(\lambda\in K^{\times}\) such that \(u=\lambda f^{d}\). Then (12) implies that \([\mathcal{O}]=[\mathscr{L}_{u,d}^{\alpha}]=[\mathscr{L}_{\lambda,d}^{\alpha/ \delta_{G}(f)}]\) in \(\operatorname{Con}^{G}(X)\). Hence \(G\) must fix the basis vector \(v\) of \(\mathscr{L}_{\lambda,d}^{\alpha/\delta_{G}(f)}(X)^{\mathcal{T}(X)=0}\), so \(\alpha=\delta_{G}(f)\). (d) Suppose that \([\mathscr{L}]\in\operatorname{Con}^{G}(X)[de]\) is such that \(\omega([\mathscr{L}])=[\mathscr{L}_{u,d}]\). Consider the function \(\alpha\colon G\to\mathcal{O}(X)^{\times}\) defined by \(g\cdot v=\alpha(g)v\). Then \(\alpha\in Z^{1}(G,\mathcal{O}(X)^{\times})\) by Lemma 3.3.1. Since \((de)\cdot[\mathscr{L}]=0\) in \(\operatorname{Con}^{G}(X)\), there is an isomorphism of \(G\)-equivariant line bundles with flat connection \(\varphi:\mathscr{L}^{\otimes de}\stackrel{{\cong}}{{ \longrightarrow}}\mathcal{O}_{X}\). On the other hand we also have the isomorphism \(\psi:\mathscr{L}_{u,d}^{\otimes de}\stackrel{{\cong}}{{ \longrightarrow}}\mathcal{O}_{X}\) of line bundles with flat connection constructed in the proof of part (a) above. Since \(\omega([\mathscr{L}])=\mathscr{L}_{u,d}\), Corollary 3.1.5 implies that \(\varphi=\lambda\psi\) for some \(\lambda\in K^{\times}\), and then \(\varphi(v^{\otimes de})=\lambda u^{e}\). Since \(\varphi\) is \(G\)-linear, we have \[g\cdot\lambda u^{e}=g\cdot\varphi(v^{\otimes de})=\varphi(g\cdot v^{\otimes de })=\alpha(g)^{de}\lambda u^{e}\quad\text{for all}\quad g\in G,\] so \(\alpha^{de}=\delta_{G}(u^{e})\). Hence \(\alpha\in\mathcal{Z}^{G,X}_{u,d,e}\) and \([\mathscr{L}]=[\mathscr{L}^{\alpha}_{u,d}]\) in \(\operatorname{Con}^{G}(X)\). Finally, suppose that \(\alpha,\beta\in\mathcal{Z}^{G,X}_{u,d,e}\) are such that \([\mathcal{L}^{\alpha}_{u,d}]=[\mathcal{L}^{\beta}_{u,d}]\). Then \([\mathcal{L}^{\alpha/\beta}_{1,d,e}]=[\mathcal{O}]\) by part (b), so \(\alpha/\beta=\delta_{G}(f)\) for some \(f\in\mathcal{O}(X)^{\times}\) such that \(1/f^{d}\in K^{\times}\). It follows, by Corollary 3.1.7, that \(f\in K^{\times}\). Then \(\delta_{G}(f)=1\) so \(\alpha=\beta\) as required. 
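For example, if \(u\in\mathcal{O}(X)^{\times}\) is fixed by \(G\) and \(\chi\colon G\to\mu_{de}(K)\) is a continuous character, then \(\delta_{G}(u^{e})=1\), so the constant cocycle \(\alpha:=\chi\) lies in \(\mathcal{Z}^{G,X}_{u,d,e}\), and \(\mathscr{L}^{\chi}_{u,d}\) is simply the line bundle with connection \(\mathscr{L}_{u,d}\) equipped with the equivariant structure \(g\cdot v=\chi(g)v\); by Lemma 3.3.4(a) this is a \((de)\)-torsion class in \(\operatorname{Con}^{G}(X)\). 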
**Notation 3.3.5**.: We will write \(T(X)\) to denote the discrete abelian group \[T(X):=\mathcal{O}(X)^{\times}/\mathcal{O}(X)^{\times\times}\] and \(\pi_{T(X)}\) to denote the natural projection map \(\pi_{T(X)}\colon\mathcal{O}(X)^{\times}\to T(X)\). Since \(\mathcal{O}(X)^{\times\times}\) is open in \(\mathcal{O}(X)^{\times}\), \(\pi_{T(X)}\) is continuous. We also observe that \(T(-)\) defines a functor from affinoid varieties to abelian groups. Our next result gives conditions on the data \(u,d,e,\alpha\) that determine when \([\mathscr{L}^{\alpha}_{u,d}]\in\operatorname{Con}^{G}(X)\) is in fact the trivial element, in the case where \(d,e\) are both coprime to \(p\). **Proposition 3.3.6**.: Let \(d,e\geqslant 1\) be integers coprime to \(p\). Then for every \(u\in\mathcal{O}(X)^{\times}\) and \(\alpha\in\mathcal{Z}^{G,X}_{u,d,e}\), the following are equivalent: 1. there exists \(v\in\mathcal{O}(X)^{\times}\) such that \(u/v^{d}\in K^{\times}\) and \(\alpha=\delta_{G}(v)\), 2. there exists \(v\in\mathcal{O}(X)^{\times}\) and \(\lambda\in K^{\times}\) such that \[\pi_{T(X)}(\lambda v^{d})=\pi_{T(X)}(u)\quad\text{and}\quad\pi_{T(X)}\circ\alpha=\pi_{T(X)}\circ\delta_{G}(v).\] 3. \([\mathscr{L}^{\alpha}_{u,d}]=[\mathcal{O}]\in\operatorname{Con}^{G}(X)\). Proof.: The equivalence of (i) and (iii) is a special case of Lemma 3.3.4(c). The implication (i)\(\Rightarrow\)(ii) is immediate since we can take \(\lambda=u/v^{d}\). Suppose that \(v\in\mathcal{O}(X)^{\times}\) and \(\lambda\in K^{\times}\) such that \(\pi_{T(X)}(\lambda v^{d})=\pi_{T(X)}(u)\) and \(\pi_{T(X)}\circ\alpha=\pi_{T(X)}\circ\delta_{G}(v)\). Since \(\ker\pi_{T(X)}=\mathcal{O}(X)^{\times\times}\), using Lemma 4.3.2, we can find \(\varepsilon\in\mathcal{O}(X)^{\times\times}\) such that \(\varepsilon^{d}=\lambda v^{d}/u\). Setting \(v^{\prime}:=v/\varepsilon\), we have \[u/v^{\prime d}\in K^{\times}\quad\text{and}\quad\pi_{T(X)}\circ\delta_{G}(v^{\prime})=\pi_{T(X)}\circ\delta_{G}(v)=\pi_{T(X)}\circ\alpha.\] To deduce that (i) holds it remains to prove that \(\alpha=\delta_{G}(v^{\prime})\). Since \(u/v^{\prime d}\in K^{\times}\), we have \(\delta_{G}(u)=\delta_{G}(v^{\prime})^{d}\). Because \(\alpha\in\mathcal{Z}^{G,X}_{u,d,e}\), \(\alpha^{de}=\delta_{G}(u^{e})=\delta_{G}(v^{\prime de})\) shows that \(\alpha/\delta_{G}(v^{\prime})\) takes values in \(\mu_{de}(K)\). However \(\alpha/\delta_{G}(v^{\prime})\) also takes values in \(\ker\pi_{T(X)}=\mathcal{O}(X)^{\times\times}\). We're now done because \(\mathcal{O}(X)^{\times\times}\cap\mu_{de}(K)\) is trivial. **Proposition 3.3.7**.: Assume the hypotheses of Proposition 3.3.6 hold. Suppose also that \(G\) is compact and that the exponent of every finite abelian \(p^{\prime}\)-quotient of \(G\) divides \(e\). For every \(\beta\in Z^{1}(G,\mathcal{O}(X)^{\times})\) and \(u\in\mathcal{O}(X)^{\times}\) such that \(\pi_{T(X)}\circ\left(\beta^{-d}\delta_{G}(u)\right)\) takes values in \(K^{\times}/K^{\times\times}\) there is a unique \(\alpha\in\mathcal{Z}^{G,X}_{u,d,e}\) such that \[\pi_{T(X)}\circ\alpha=\pi_{T(X)}\circ\beta\in Z^{1}(G,T(X)).\] Proof.: Let \(\eta\) be the \(1\)-cocycle \(\eta:=\delta_{G}(u)\beta^{-d}\). The assumption on \(\eta\) implies that \(\pi_{T(X)}\circ\eta\in Z^{1}(G,K^{\times}/K^{\times\times})\). 
Since \(G\) is compact and \(K^{\times}/K^{\times\times}\) is a discrete group with trivial \(G\)-action, \(\pi_{T(X)}\circ\eta\in\operatorname{Hom}(G,K^{\times}/K^{\times\times})\) has finite image and so takes values in the torsion subgroup of \(K^{\times}/K^{\times\times}\). Since \(K^{\times}/K^{\times\times}\) has no \(p\)-torsion, \(\pi_{T(X)}\circ\eta\) factors through a finite abelian \(p^{\prime}\)-quotient of \(G\) and thus \((\pi_{T(X)}\circ\eta)^{e}=1\) by our assumption on \(G\). That is, \(\eta^{e}\) takes values in \(\mathcal{O}(X)^{\times\times}\). By Lemma 4.3.2(a), \(\eta^{e}\) has a \(de^{\mathrm{th}}\) root \(\gamma\) in \(Z^{1}(G,\mathcal{O}(X)^{\times\times})\): \(\gamma^{de}=\eta^{e}\). Now \(\delta_{G}(u)^{e}=(\eta\beta^{d})^{e}=(\gamma\beta)^{de}\), so \(\alpha:=\gamma\beta\) satisfies \(\alpha^{de}=\delta_{G}(u)^{e}\). Moreover \(\pi_{T(X)}\circ\alpha=\pi_{T(X)}\circ\beta\) as required. Suppose now that \(\alpha^{\prime}\in\mathcal{Z}_{u,d,e}^{G,X}\) satisfies \(\pi_{T(X)}\circ\alpha^{\prime}=\pi_{T(X)}\circ\beta=\pi_{T(X)}\circ\alpha\). Then \(\alpha/\alpha^{\prime}\) takes values in \(\mathcal{O}(X)^{\times\times}\) but also \((\alpha/\alpha^{\prime})^{de}=1\). Then \(\alpha^{\prime}=\alpha\), because the \((de)^{\mathrm{th}}\) power map on \(\mathcal{O}(X)^{\times\times}\) is injective by Lemma 4.3.2(a) since \(p\nmid de\).

## 4. Applications to Drinfeld's upper half plane

### Subdomains of the rigid analytic affine line

We will write \(\mathbb{A}:=\mathbb{A}_{K}:=\mathbb{A}_{K}^{1,\mathrm{an}}\) to denote the rigid \(K\)-analytic affine line, equipped with a fixed choice of local coordinate \(x\in\mathcal{O}(\mathbb{A})\). We write \(\mathbb{P}^{1}\) to denote the rigid \(K\)-analytic projective line. **Definition 4.1.1**.: A \(K\)_-cheese_ is an affinoid subdomain of \(\mathbb{A}\) of the form \[C_{K}(\alpha,\mathbf{s}):=\operatorname{Sp}K\left\langle\frac{x-\alpha_{0}}{s_{0}},\frac{s_{1}}{x-\alpha_{1}},\cdots,\frac{s_{g}}{x-\alpha_{g}}\right\rangle\] for some \(\alpha:=(\alpha_{0},\ldots,\alpha_{g})\in K^{g+1}\) and \(\mathbf{s}:=(s_{0},\ldots,s_{g})\in(K^{\times})^{g+1}\), which satisfy * \(|s_{i}|\leqslant|s_{0}|\) for all \(i=1,\ldots,g\), * \(|\alpha_{i}-\alpha_{0}|\leqslant|s_{0}|\) for all \(i=1,\ldots,g\), and * \(|\alpha_{i}-\alpha_{j}|\geqslant\max\{|s_{i}|,|s_{j}|\}\) whenever \(1\leqslant i<j\leqslant g\). When there is no risk of confusion, we will simplify the notation to \(C(\alpha,\mathbf{s})\). We call the open discs \[D_{\infty} := \{z\in\mathbb{P}^{1}(\mathbf{C}):|z-\alpha_{0}|>|s_{0}|\}\text{ and}\] \[D_{i} := \{z\in\mathbb{P}^{1}(\mathbf{C}):|z-\alpha_{i}|<|s_{i}|\}\text{ for }i=1,\ldots,g\] the _holes_ of \(C(\alpha,\mathbf{s})\) and we write \[h(C(\alpha,\mathbf{s})):=\{D_{1},\ldots,D_{g},D_{\infty}\}\] to denote the set of holes of \(C(\alpha,\mathbf{s})\). Of course the \(\mathbf{C}\)-points of \(C(\alpha,\mathbf{s})\) are obtained by removing the \(g+1\) holes from \(\mathbb{P}^{1}(\mathbf{C})\). The conditions on the parameters \(\alpha\) and \(\mathbf{s}\) are there to ensure that the holes are pairwise disjoint. We also require that \(\alpha\) and \(\mathbf{s}\) are defined over \(K\). **Remark 4.1.2**.: * Given two open discs \(D_{1},D_{2}\) in \(\mathbb{P}^{1}(\mathbf{C})\) with \(D_{1}\cap D_{2}\neq\emptyset\), it must necessarily be the case that either \(D_{1}\subseteq D_{2}\), or \(D_{2}\subseteq D_{1}\). * The union and the intersection of two cheeses \(C_{1},C_{2}\) are also cheeses, unless \(C_{1}\cap C_{2}=\emptyset\). 
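The simplest example of a \(K\)-cheese with a hole other than \(D_{\infty}\) is an annulus: for \(s\in K^{\times}\) with \(|s|\leqslant 1\), taking \(\alpha=(0,0)\) and \(\mathbf{s}=(1,s)\) gives \[C_{K}(\alpha,\mathbf{s})=\operatorname{Sp}K\left\langle x,\frac{s}{x}\right\rangle,\] whose \(\mathbf{C}\)-points are \(\{z\in\mathbf{C}:|s|\leqslant|z|\leqslant 1\}\) and whose two holes are \(D_{\infty}=\{z\in\mathbb{P}^{1}(\mathbf{C}):|z|>1\}\) and \(D_{1}=\{z\in\mathbb{P}^{1}(\mathbf{C}):|z|<|s|\}\). We will refer back to this basic example for illustration. 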
**Lemma 4.1.3**.: Suppose that \(X\) and \(Y\) are \(K\)-cheeses and \(\varphi:\mathbb{P}^{1}\to\mathbb{P}^{1}\) is a \(K\)-analytic automorphism such that \(\varphi(Y)\subseteq X\). Then there is a unique function \[\varphi_{Y}^{X}\colon h(X)\to h(Y)\] such that for every \(D\in h(X)\), \(\varphi_{Y}^{X}(D)\) is the largest hole of \(Y\) containing \(\varphi^{-1}(D)\). Proof.: By [17, p. 33], the automorphism \(\varphi\) is necessarily a Mobius transformation. Hence \(\varphi\), as well as \(\varphi^{-1}\), maps open discs in \(\mathbb{P}^{1}(\mathbf{C})\) to open discs in \(\mathbb{P}^{1}(\mathbf{C})\). Let \(D\in h(X)\). Then \(\varphi^{-1}(D)\cap Y(\mathbf{C})\subseteq\varphi^{-1}(D\cap X(\mathbf{C}))=\emptyset\). Thus the open disc \(\varphi^{-1}(D)\) is contained in the union of the holes of \(Y\). Remark 4.1.2(a) then implies that \(\varphi^{-1}(D)\) is contained in a unique hole \(\varphi_{Y}^{X}(D)\) of \(Y\). **Notation 4.1.4**.: Suppose that \(X,Y\) are \(K\)-cheeses with \(Y\subseteq X\). We will denote the function \(\operatorname{id}_{Y}^{X}:h(X)\to h(Y)\) associated with the identity map \(\operatorname{id}:\mathbb{P}^{1}\to\mathbb{P}^{1}\) by \(\iota_{Y}^{X}:h(X)\to h(Y)\). The following lemma will be useful later. **Lemma 4.1.5**.: Suppose that \(X\) and \(Y\) are \(K\)-cheeses with non-empty intersection. Then there is a natural bijection \[\iota_{X}^{X\cup Y}\times\iota_{Y}^{X\cup Y}:h(X\cup Y)\quad\longrightarrow \quad h(X)\underset{h(X\cap Y)}{\times}h(Y)\] given by \(D\mapsto(\iota_{X}^{X\cup Y}(D),\iota_{Y}^{X\cup Y}(D))\). Proof.: Note that the map in the statement of the Lemma is well-defined because \[\iota_{X\cap Y}^{X}\circ\iota_{X}^{X\cup Y}=\iota_{X\cap Y}^{X\cup Y}=\iota_{ X\cap Y}^{Y}\circ\iota_{Y}^{X\cup Y}.\] De Morgan's laws imply that \(h(X\cup Y)\) is precisely the set of non-empty intersections \(A\cap B\) with \(A\in h(X)\) and \(B\in h(Y)\). Let \(D\in h(X\cup Y)\); hence there exist \(A\in h(X)\) and \(B\in h(Y)\) with \(D=A\cap B\). But then \(\iota_{X}^{X\cup Y}(D)=A\) and \(\iota_{Y}^{X\cup Y}(D)=B\), so \(D=A\) or \(D=B\) by Remark 4.1.2(a). Hence \(\iota_{X}^{X\cup Y}\times\iota_{Y}^{X\cup Y}\) is injective. We now show that \(\iota_{X}^{X\cup Y}\times\iota_{Y}^{X\cup Y}\) is surjective. Suppose that \(A\in h(X)\) and \(B\in h(Y)\) are such that \(\iota_{X\cap Y}^{X}(A)=\iota_{X\cap Y}^{X}(B)=:E\). This means that \(A\) and \(B\) are contained in the same hole \(E\) of \(X\cap Y\). Since \(E\) is a hole of \(X\cap Y\), by de Morgan's laws again we see that \(E\) is the union of the holes of \(X\) contained in \(E\) together with the holes of \(Y\) contained in \(E\). But no open disc in \(\mathbb{P}^{1}(\mathbf{C})\) is a finite union of proper open subdiscs; hence \(E\in h(X)\cup h(Y)\). It follows that either \(E=A\) or \(E=B\). Since \(E\) contains both \(A\) and \(B\), it follows that either \(A\subseteq B\) or \(B\subseteq A\). Hence \(D:=A\cap B\) is non-empty and is therefore a hole of \(X\cup Y\) as we saw above. It is now clear that \[(A,B)=\left(\iota_{X}^{X\cup Y}(D),\iota_{Y}^{X\cup Y}(D)\right)\] lies in the image of \(\iota_{X}^{X\cup Y}\times\iota_{Y}^{X\cup Y}\). We are interested in these cheeses because _every_ connected affinoid subdomain of the affine line \(\mathbb{A}\) is a \(K\)-cheese whenever \(K\) is algebraically closed by [18, Corollary 2.4.7]. We will prove Theorem 4.1.8 below which carries out a Galois descent of this statement down to our base field \(K\) which may fail to be algebraically closed. 
**Lemma 4.1.6**.: Let \(\mathcal{G}_{K}:=\operatorname{Gal}(\overline{K}/K)\) and let \(A\) be a \(K\)-affinoid algebra. Then the natural map \(A\to(A\widehat{\otimes}\mathbf{C})^{\mathcal{G}_{K}}\) is an isomorphism of \(K\)-Banach algebras. Proof.: Since \(A\) is a quotient of a Tate algebra, \(A\) is of countable type as a \(K\)-Banach space: it has a dense \(K\)-linear subspace of countable dimension. Assume first that \(\dim_{K}A=\infty\). By [12, Proposition 1.2.1(3)], we can find a \(K\)-Banach space isomorphism \(\varphi:A\to c_{0}(K)\); this means that \(\varphi\) is a bounded \(K\)-linear map which has a bounded \(K\)-linear inverse. Now consider the commutative diagram \[\begin{array}{ccc}A&\longrightarrow&(A\widehat{\otimes}\mathbf{C})^{\mathcal{G}_{K}}\\ {\scriptstyle\varphi}\big\downarrow&&\big\downarrow{\scriptstyle\varphi\widehat{\otimes}1}\\ c_{0}(K)&\longrightarrow&(c_{0}(K)\widehat{\otimes}\mathbf{C})^{\mathcal{G}_{K}}\end{array}\] whose horizontal arrows are the natural maps. The arrow in the bottom row is an isomorphism because \[(c_{0}(K)\widehat{\otimes}{\bf C})^{\mathcal{G}_{K}}=c_{0}({\bf C})^{\mathcal{G}_{K}}=c_{0}({\bf C}^{\mathcal{G}_{K}})=c_{0}(K)\] by the Ax-Sen-Tate theorem -- see, for example, [7, Proposition 2.1.2]; the proof given there works for any complete non-Archimedean field of characteristic zero. Since the vertical arrows are also isomorphisms, the top arrow is an isomorphism as well. The case \(\dim_{K}A<\infty\) is handled in a similar manner. **Definition 4.1.7**.: When \(X\) is an affinoid subdomain of \(\mathbb{A}\) and \(K^{\prime}\) is a finite extension of \(K\) we say that \(X\) is _split over \(K^{\prime}\)_ if \(X_{K^{\prime}}\) is a finite union of pairwise disjoint cheeses. We will write \(\sqrt{|K^{\times}|}\) to denote the divisible subgroup of \(\mathbb{R}^{\times}\) generated by \(|K^{\times}|\); this is the same as \(|\overline{K}^{\times}|\). **Theorem 4.1.8**.: Let \(X\) be an affinoid subdomain of the affine line \(\mathbb{A}\). Then there is a finite extension \(K^{\prime}\) of \(K\) such that \(X\) splits over \(K^{\prime}\). Proof.: Recall that \(X\) is _geometrically connected_ if the base change \(X_{\bf C}\) is connected. Suppose first that \(X\) is geometrically connected. Then \(X_{\bf C}\), being connected, is a cheese \(C_{\bf C}(\alpha,{\bf s})\) by [18, Corollary 2.4.7]. Since \(\overline{K}\) is dense in \({\bf C}\), we can first choose the centres \(\alpha_{0},\alpha_{1},\dots,\alpha_{g}\) to lie in \(\overline{K}\), and then find a large enough finite extension \(K^{\prime}\) of \(K\) such that \(\alpha_{i}\in K^{\prime}\) for all \(i\). Since \(|\overline{K}^{\times}|=|{\bf C}^{\times}|=\sqrt{|K^{\times}|}\), we may enlarge \(K^{\prime}\) if necessary and arrange that \(s_{i}\in K^{\prime}\) for all \(i\) as well. Let \(Z:=C_{K^{\prime}}(\alpha,{\bf s})\) be the same cheese but defined over \(K^{\prime}\). Choose a large enough closed disc \(D\) defined over \(K^{\prime}\) which contains both \(X^{\prime}:=X_{K^{\prime}}\) and \(Z\), and fix a coordinate \(y\) on \(D\). Then there is an isomorphism of \({\bf C}\)-affinoid varieties \(X^{\prime}\times_{K^{\prime}}{\bf C}\cong Z\times_{K^{\prime}}{\bf C}\) compatible with the inclusions \(X^{\prime}\hookrightarrow D\) and \(Z\hookrightarrow D\). 
Now consider the induced \({\bf C}\)-algebra isomorphism \[\psi:\mathcal{O}(X^{\prime})\widehat{\otimes}_{K^{\prime}}{\bf C}=\mathcal{ O}(X^{\prime}\times_{K^{\prime}}{\bf C})\stackrel{{\cong}}{{ \longrightarrow}}\mathcal{O}(Z\times_{K^{\prime}}{\bf C})=\mathcal{O}(Z) \widehat{\otimes}_{K^{\prime}}{\bf C}.\] Because \(X^{\prime}\) is an affinoid subdomain of \(D\), the \({\bf C}\)-algebra \(\mathcal{O}(X^{\prime})\widehat{\otimes}_{K^{\prime}}{\bf C}\) contains a dense \({\bf C}\)-subalgebra generated by rational functions in \(y\) with coefficients in \(K^{\prime}\), and \(\psi\) must send these rational functions to \(\mathcal{G}_{K^{\prime}}\)-invariants in the target. Hence \(\psi\) respects the natural \(\mathcal{G}_{K^{\prime}}:=\operatorname{Gal}(\overline{K}/K^{\prime})\)-actions on both sides. Taking \(\mathcal{G}_{K^{\prime}}\)-invariants and applying Lemma 4.1.6 we deduce a \(\mathcal{O}(D)\)-algebra isomorphism \(\mathcal{O}(X^{\prime})\cong\mathcal{O}(Z)\), so that \(X^{\prime}=Z\) is a cheese. Returning to the general case, it will now be enough to show that there is some finite extension \(K^{\prime\prime}\) of \(K\) such that every connected component of \(X_{K^{\prime\prime}}\) is geometrically connected. To see this, consider again \(X_{\bf C}\), and let \(\{e_{1},\dots,e_{n}\}\) be the primitive idempotents of \(\mathcal{O}(X_{\bf C})\). Since \(\mathcal{G}_{K}\) acts continuously on \(\mathcal{O}(X_{\bf C})\) the stabiliser \(H_{i}\) in \(\mathcal{G}_{K}\) of each \(e_{i}\) is closed. On the other hand, \(\mathcal{G}_{K}\) preserves \(\{e_{1},\dots,e_{n}\}\) so each \(H_{i}\) has finite index in \(\mathcal{G}_{K}\). Hence each \(H_{i}\) is also open in \(\mathcal{G}_{K}\). We can therefore find a finite extension \(K^{\prime\prime}\) of \(K\) such that \(\mathcal{G}_{K^{\prime\prime}}\) fixes each \(e_{i}\) pointwise. Then \(e_{i}\in\mathcal{O}(X_{\bf C})^{\mathcal{G}_{K^{\prime\prime}}}=(\mathcal{O}(X )\widehat{\otimes}{\bf C})^{\mathcal{G}_{K^{\prime\prime}}}=\mathcal{O}(X) \widehat{\otimes}K^{\prime\prime}=\mathcal{O}(X_{K^{\prime\prime}})\) again by Lemma 4.1.6. It follows that every connected component of \(X_{K^{\prime\prime}}\) is geometrically connected as required. **Proposition 4.1.9**.: Let \(C=C_{K}(\alpha;\mathbf{s})\) be a \(K\)-rational cheese and \(\xi=\xi_{0}\) a coordinate on \(\mathbb{A}^{1}\) such that \(D_{\infty}=\{z\in\mathbb{P}^{1}(\mathbf{C}):|\xi(z)|>1\}\). For \(i=1,\dots,g\) let \(\xi_{i}=\frac{c_{i}}{\xi-\xi(\alpha_{i})}\) with \(c_{i}\in K^{\times}\) such that \(|\xi_{i}|=1\). Then the set \(\{1,\xi_{i}^{j}:j\geqslant 1,0\leqslant i\leqslant g\}\) is an orthonormal Schauder basis for the \(K\)-Banach space \(\mathcal{O}(C)\), in the sense of [4, SS2.7.2]. Proof.: This is a straightforward rephrasing of [18, Proposition 2.4.8(a)]. **Proposition 4.1.10**.: Let \(X=C(\alpha,\mathbf{s})\) be a cheese. Then the map \[\mathbb{Z}^{g}\longrightarrow\frac{\mathcal{O}(X)^{\times}}{K^{\times}\cdot \mathcal{O}(X)^{\times\times}}\] defined by \[(n_{1},\dots,n_{g})\quad\mapsto\quad(x-\alpha_{1})^{n_{1}}\cdots(x-\alpha_{g} )^{n_{g}}\cdot K^{\times}\cdot\mathcal{O}(X)^{\times\times}\] is an isomorphism of abelian groups. Proof.: This is [18, Proposition 2.4.8(b)]. **Proposition 4.1.11**.: For every cheese \(X\), \(\operatorname{Pic}(X)=0\). Proof.: This follows from [12, Proposition 8.2.3(1)] and [34, Corollary 3.8]. 
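For instance, for the annulus \(X=\operatorname{Sp}K\langle x,s/x\rangle\) considered after Remark 4.1.2, Proposition 4.1.10 with \(g=1\) and \(\alpha_{1}=0\) says that every \(u\in\mathcal{O}(X)^{\times}\) can be written as \(u=\lambda x^{n}\varepsilon\) with \(\lambda\in K^{\times}\), \(n\in\mathbb{Z}\) and \(\varepsilon\in\mathcal{O}(X)^{\times\times}\), and the integer \(n\) is uniquely determined by \(u\). 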
Recall5 that \(X\) is said to be _quasi-Stein_ if there is an admissible affinoid covering \((X_{n})_{n=0}^{\infty}\) of \(X\) with \(X_{0}\subseteq X_{1}\subseteq X_{2}\subseteq\cdots\) such that for each \(n\geqslant 0\), the restriction map \(\mathcal{O}(X_{n+1})\to\mathcal{O}(X_{n})\) has dense image. If \(X\) is quasi-Stein, then the global sections functor \(\Gamma(X,-)\) gives a fully faithful embedding from the category of coherent \(\mathcal{O}\)-modules on \(X\) into the category \(\mathcal{O}(X)\)-modules; the essential image is the category of _coadmissible \(\mathcal{O}(X)\)-modules_ in the sense of [27]. Footnote 5: see [16, Definition 2.3] **Proposition 4.1.12**.: Suppose that \(X\) is an admissible subdomain of \(\mathbb{A}\) and \(\{X_{n}\}_{n=0}^{\infty}\) is an admissible cover of \(X\) by cheeses such that \(X_{n}\subset X_{n+1}\) for all \(n\) and each map \(\iota_{X_{n+1}}^{X_{n}}:h(X_{n+1})\to h(X_{n})\) is surjective. Then \(X\) is geometrically connected, smooth and quasi-Stein. Proof.: Since \(X_{\mathbf{C}}\) has an admissible cover by the cheeses \(X_{n,\mathbf{C}}\), \(X\) is geometrically connected and smooth. By [12, Exercise 2.6.2] all the maps \(\mathcal{O}(X_{n+1})\to\mathcal{O}(X_{n})\) have dense image and so \(X\) is quasi-Stein. One argument to complete the exercise is as follows. For any cheese \(C:=C(\alpha,\mathbf{s})\) the sub-\(K\)-algebra of rational functions \[\mathcal{O}_{\operatorname{rat}}(C):=K[x,(x-\alpha_{1})^{-1},\dots,(x-\alpha_ {d})^{-1}]\] is dense in \(\mathcal{O}(C)\), by Proposition 4.1.9. Moreover, the condition \(h(X_{n+1})\to h(X_{n})\) is surjective guarantees that the centres of the holes in \(X_{n}\) can all be chosen to not lie in \(X_{n+1}\) so that \(\mathcal{O}_{\operatorname{rat}}(X_{n})\subset\mathcal{O}(X_{n+1})\). ### Drinfeld's upper half-plane In this section we study \(\Omega_{F}\) the _Drinfeld upper half plane_, which is a rigid \(F\)-analytic space whose underlying set consists of the \(\operatorname{Gal}(\overline{F}/F)\)-orbits in \(\Omega_{F}(\overline{F})=\mathbb{P}^{1}(\overline{F})\backslash\mathbb{P}^{1}(F)\). It is straightforward to see that for any finite extension \(L\) of \(F\), \(\Omega_{F}(L)\) can be identified with \(L\backslash F\) and we will often silently make this identification. The rigid space \(\Omega_{F}\) comes naturally equipped with an action of \(GL_{2}(F)\) by Mobius transformations: \[\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot z=\frac{az+b}{cz+d}\] and the same formula induces an action on each set \(\Omega_{F}(L)\). We recall from [6, SSI.1.2] the \(GL_{2}(F)\)-equivariant reduction map \(\lambda\colon\Omega_{F}(\mathbf{C})\to|\mathcal{BT}|\) from \(\Omega_{F}(\mathbf{C})=\mathbb{P}^{1}(\mathbf{C})\backslash\mathbb{P}^{1}(F)\) to the geometric realisation \(|\mathcal{BT}|\) of the Bruhat-Tits tree \(\mathcal{BT}\) associated with \(PGL_{2}(F)\). Since \(\lambda\circ\sigma=\lambda\) for any \(\sigma\in\operatorname{Gal}(\overline{F}/F)\), this map factors through the rigid \(F\)-analytic space \(\Omega_{F}\), giving us a map \[\lambda:\Omega_{F}\to|\mathcal{BT}|.\] Any point in \(\Omega_{F}\) is the \(\operatorname{Gal}(\overline{F}/F)\)-orbit \([z]\) of some \(z\in\Omega_{F}(\overline{F})\); then we have \(\lambda([z])=\lambda(z)\). We abuse notation and also call \(\lambda:\Omega_{F}\to|\mathcal{BT}|\) the _reduction map_. **Lemma 4.2.1**.: Let \(z\in\Omega_{F}(\overline{F})\). Then \(GL_{2}(F)_{z}\leqslant GL_{2}(F)_{[z]}\leqslant GL_{2}(F)_{\lambda([z])}\). 
Proof.: This is a consequence of the \(GL_{2}(F)\)-equivariance of \(z\mapsto[z]\) and \(\lambda\). **Proposition 4.2.2**.: Suppose that \(\mathcal{T}\) is a finite subtree of \(\mathcal{BT}\). Then \(\lambda^{-1}(|\mathcal{T}|)\) is an \(F\)-cheese contained in \(\Omega_{F}\). Proof.: Since the union of two non-disjoint \(F\)-cheeses is an \(F\)-cheese and \(\mathcal{T}\) is connected, by an induction on the number of edges of \(\mathcal{T}\), it suffices to prove the result when \(\mathcal{T}\) is a single vertex or has two vertices connected by an edge. Both of these cases can be deduced from the discussion in [6, SSI.2.3]. **Definition 4.2.3**.: For every finite subtree \(\mathcal{T}\) of \(\mathcal{BT}\), we define \[C_{\mathcal{T}}:=\lambda^{-1}(|\mathcal{T}|)\times_{F}K.\] Note that \(C_{\mathcal{T}}\) is a \(K\)-cheese contained in \(\Omega:=\Omega_{F}\times_{F}K\), by Proposition 4.2.2. **Definition 4.2.4**.: Suppose \(\mathcal{T}\) is a finite subtree of \(\mathcal{BT}\). * The _neighbourhood of_\(\mathcal{T}\) is the subset \(N(\mathcal{T})\) of the set of edges of \(\mathcal{BT}\) with precisely one vertex in \(\mathcal{T}\): \[N(\mathcal{T}):=\{(ss^{\prime})\in E(\mathcal{BT}):s\in\mathcal{T},s^{\prime} \not\in\mathcal{T}\}.\] * For \(e\in N(\mathcal{T})\) we write \(s_{\mathcal{T}}(e)\) to denote the vertex of \(e\) in \(\mathcal{T}\) and \(t_{\mathcal{T}}(e)\) to denote the vertex of \(e\) not in \(\mathcal{T}\); \[s_{\mathcal{T}}((ss^{\prime})):=s\text{ and }t_{\mathcal{T}}((ss^{\prime})):=s^{ \prime}\text{ for }s\in\mathcal{T},s^{\prime}\not\in\mathcal{T}\] **Lemma 4.2.5**.: Let \(\mathcal{T}^{\prime}\subseteq\mathcal{T}\) be finite subtrees of \(\mathcal{BT}\) and let \(e\in N(\mathcal{T})\). Then there is a unique \(f\in N(\mathcal{T}^{\prime})\) such that the unique path in \(\mathcal{BT}\) from \(t_{\mathcal{T}}(e)\) to \(t_{\mathcal{T}^{\prime}}(f)\) contains no vertices of \(\mathcal{T}^{\prime}\). Proof.: Let \(w\) be any vertex of \(\mathcal{T}^{\prime}\). Since \(\mathcal{BT}\) is a tree it contains a unique path from \(t_{\mathcal{T}}(e)\) to \(w\). Since \(w\) is a vertex of \(\mathcal{T}^{\prime}\) and \(t_{\mathcal{T}}(e)\) is not, there is precisely one edge \(f\) in this path contained in \(N(\mathcal{T}^{\prime})\). We can then truncate the path to a path from \(t_{\mathcal{T}}(e)\) to \(t_{\mathcal{T}^{\prime}}(f)\) that contains no vertices of \(\mathcal{T}^{\prime}\). If \(f^{\prime}\) is an element of \(N(\mathcal{T})\backslash\{f\}\) then the unique path from \(t_{\mathcal{T}^{\prime}}(f)\) to \(t_{\mathcal{T}^{\prime}}(f^{\prime})\) must pass through a vertex of \(\mathcal{T}^{\prime}\) so there is no path from \(t_{\mathcal{T}}(e)\) to \(t_{\mathcal{T}^{\prime}}(f^{\prime})\) that contains no vertices of \(\mathcal{T}^{\prime}\). **Definition 4.2.6**.: If \(\mathcal{T}^{\prime}\subseteq\mathcal{T}\) are finite subtrees of \(\mathcal{BT}\), then \[\iota_{\mathcal{T}^{\prime}}^{\mathcal{T}}\colon N(\mathcal{T})\to N( \mathcal{T}^{\prime})\] is the map that sends \(e\in N(\mathcal{T})\) to \(f\in N(\mathcal{T}^{\prime})\) given by Lemma 4.2.5. **Example 4.2.7**.: Suppose that \(\mathcal{S}\) is a subtree of \(\mathcal{BT}\) consisting of two vertices \(s\) and \(s^{\prime}\) and the single edge \((ss^{\prime})\), and \(\{s\}\) is the subtree of \(\mathcal{S}\) with \(s\) as its only vertex. Then there exist \(2q\) edges \(e_{1},\ldots,e_{q},f_{1},\ldots,f_{q}\in E(\mathcal{BT})\) such that: 1. 
\(N(\mathcal{S})=\{e_{1},\ldots,e_{q},f_{1},\ldots,f_{q}\}\), 2. \(s_{\mathcal{S}}(e_{i})=s\) and \(s_{\mathcal{S}}(f_{i})=s^{\prime}\) for each \(i=1,\ldots,q\), 3. \(N(\{s\})=\{e_{1},\ldots,e_{q},(ss^{\prime})\}\), and 4. \(\iota_{\{s\}}^{\mathcal{S}}(e_{i})=e_{i}\) and \(\iota_{\{s\}}^{\mathcal{S}}(f_{i})=(ss^{\prime})\) for each \(i=1,\ldots,q\). **Lemma 4.2.8**.: Suppose that \(\mathcal{T}_{1}\subseteq\mathcal{T}_{2}\subseteq\mathcal{T}_{3}\) are finite subtrees of \(\mathcal{BT}\). 1. \(\iota_{\mathcal{T}_{1}}^{\mathcal{T}_{3}}=\iota_{\mathcal{T}_{1}}^{\mathcal{ T}_{2}}\circ\iota_{\mathcal{T}_{2}}^{\mathcal{T}_{3}}\). 2. If \(e\in N(\mathcal{T}_{2})\) and \(s_{\mathcal{T}_{2}}(e)\in V(\mathcal{T}_{1})\), then \(\iota_{\mathcal{T}_{1}}^{\mathcal{T}_{2}}(e)=e\). 3. If \(e\in N(\mathcal{T}_{2})\) and \(s_{\mathcal{T}_{2}}(e)\not\in V(\mathcal{T}_{1})\), then \(\iota_{\mathcal{T}_{1}}^{\mathcal{T}_{2}}(e)\in E(\mathcal{T}_{2})\backslash E (\mathcal{T}_{1})\). Proof.: (a) If \(e\in N(\mathcal{T}_{3})\) then the path \(\mathcal{P}\) in \(\mathcal{BT}\) from \(t_{\mathcal{T}_{3}}(e)\) to \(t_{\mathcal{T}_{1}}\left(\iota_{\mathcal{T}_{1}}^{\mathcal{T}_{3}}(e)\right)\) that contains no vertices of \(\mathcal{T}_{1}\) can be decomposed as a union of two subpaths with a single vertex in common (and possibly no edges): one of these subpaths goes from \(t_{\mathcal{T}_{3}}(e)\) to the last vertex \(s\) in \(\mathcal{P}\) that does not lie in \(\mathcal{T}_{2}\) and the other goes from \(s\) to \(t_{\mathcal{T}_{1}}\left(\iota_{\mathcal{T}_{1}}^{\mathcal{T}_{3}}(e)\right)\). Then these paths show that \(\iota_{\mathcal{T}_{3}}^{\mathcal{T}_{2}}(e)\) is the unique element \(f\) of \(N(\mathcal{T}_{2})\) such that \(t_{\mathcal{T}_{2}}(f)=s\) and \(\iota_{\mathcal{T}_{1}}^{\mathcal{T}_{2}}(f)\) is \(\iota_{\mathcal{T}_{1}}^{\mathcal{T}_{3}}(e)\) as required. (b) Since \(\mathcal{T}_{1}\subseteq\mathcal{T}_{2}\), the condition \(s_{\mathcal{T}_{2}}(e)\in\mathcal{T}_{1}\) gives that \(e\in N(\mathcal{T}_{1})\) and the path from \(t_{\mathcal{T}_{2}}(e)\) to \(t_{\mathcal{T}_{1}}(e)=t_{\mathcal{T}_{2}}(e)\) has no edges and so contains no vertices of \(\mathcal{T}_{1}\). (c) First \(N(\mathcal{T}_{1})\cap E(\mathcal{T}_{1})=\emptyset\) so \(\iota_{\mathcal{T}_{1}}^{\mathcal{T}_{2}}(e)\not\in E(\mathcal{T}_{1})\). Since \(s_{1}:=s_{\mathcal{T}_{2}}(e)\in\mathcal{T}_{2}\) and \(s_{2}:=s_{\mathcal{T}_{1}}\left(\iota_{\mathcal{T}_{1}}^{\mathcal{T}_{2}}(e) \right)\in\mathcal{T}_{1}\subseteq\mathcal{T}_{2}\), the unique path in \(\mathcal{BT}\) from \(s_{1}\) to \(s_{2}\) lies in \(\mathcal{T}_{2}\). The condition \(s_{1}\not\in\mathcal{T}_{1}\) ensures this path contains at least one edge \(f:=\left(t_{\mathcal{T}_{1}}\left(\iota_{\mathcal{T}_{1}}^{\mathcal{T}_{2}}(e) \right)s_{2}\right)\). Moreover \(f\in N(\mathcal{T}_{1})\). Adding the edge \((t_{\mathcal{T}_{2}}(e)s_{1})\) to the start of the path and removing \(f\) from its end gives the path that shows that \(\iota_{\mathcal{T}_{1}}^{\mathcal{T}_{2}}(e)=f\). 
**Lemma 4.2.9**.: Suppose that \(\mathcal{S}\) and \(\mathcal{T}\) are finite subtrees of \(\mathcal{BT}\) such that \[E(\mathcal{S})=\{(ss^{\prime})\}\text{ and }V(\mathcal{S})\cap V(\mathcal{T})=\{s\}.\] There is a natural bijection \[\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}\times\iota_{\mathcal{T}}^{ \mathcal{S}\cup\mathcal{T}}:N(\mathcal{S}\cup\mathcal{T})\to N(\mathcal{S}) \underset{N(\mathcal{S}\cap\mathcal{T})}{\times}N(\mathcal{T})\] given by \(e\mapsto(\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(e),\iota_{\mathcal{T} }^{\mathcal{S}\cup\mathcal{T}}(e))\). Proof.: The map \(\xi:=\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}\times\iota_{\mathcal{T} }^{\mathcal{S}\cup\mathcal{T}}\) in the statement is well-defined, because \[\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}\cup\mathcal{T}}\circ\iota_{ \mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}=\iota_{\mathcal{S}\cap\mathcal{T}}^ {\mathcal{S}\cup\mathcal{T}}=\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}\cup \mathcal{T}}\circ\iota_{\mathcal{T}}^{\mathcal{S}\cup\mathcal{T}}\] by Lemma 4.2.8(a). Next we show that \(\xi\) is injective. To this end, suppose that \(e_{1},e_{2}\) are two elements of \(N(\mathcal{S}\cup\mathcal{T})\) such that \(\xi(e_{1})=\xi(e_{2})\). Let \(v_{i}=s_{\mathcal{S}\cup\mathcal{T}}(e_{i})\) for \(i=1,2\). Suppose first that both of \(v_{1},v_{2}\) lie in \(\mathcal{S}\). In this case, \(\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(e_{1})=e_{1}\) and \(\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(e_{2})=e_{2}\), by Lemma 4.2.8(b), and we deduce by looking at the first component of \(\xi(e_{1})=\xi(e_{2})\) that \(e_{1}=e_{2}\). The case where both \(v_{1},v_{2}\) lie in \(\mathcal{T}\) is entirely similar. Suppose for a contradiction that \(e_{1}\neq e_{2}\). Then without loss of generality, we can now assume that \(v_{1}\) lies in \(V(\mathcal{T})\backslash V(\mathcal{S})\) and \(v_{2}\) lies in \(V(\mathcal{S})\backslash V(\mathcal{T})\). Since \(\mathcal{S}\) is a single leaf with \(V(\mathcal{S})\cap V(\mathcal{T})=\{s\}\), this forces \(v_{2}=s^{\prime}\). Therefore since \(\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(e_{2})=e_{2}\) by Lemma 4.2.8(b), the only vertex of \(\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(e_{2})\) in \(\mathcal{S}\) is \(s^{\prime}\). On the other hand, because \(v_{1}\notin V(\mathcal{S})\), \(\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(e_{1})\in E(\mathcal{T})\) by Lemma 4.2.8(c). This contradicts \(\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(e_{2})=\iota_{\mathcal{S}}^{ \mathcal{S}\cup\mathcal{T}}(e_{1})\) because \(s^{\prime}\notin V(\mathcal{T})\). Finally we show that \(\xi\) is surjective. Suppose that \((e,f)\in N(\mathcal{S})\underset{N(\mathcal{S}\cap\mathcal{T})}{\times}N( \mathcal{T})\). We first consider the case where \(s_{\mathcal{T}}(f)\neq s\), so that \(t_{\mathcal{T}}(f)\not\in\mathcal{S}\cup\mathcal{T}\). It follows that \(f\in N(\mathcal{S}\cup\mathcal{T})\) and we claim \(\xi(f)=(e,f)\). That \(\iota_{\mathcal{T}}^{\mathcal{S}\cup\mathcal{T}}(f)=f\) follows from Lemma 4.2.8(b) because \(s_{\mathcal{T}}(f)\in\mathcal{T}\). Consider the following element \(g\) of \(N(\mathcal{S}\cap\mathcal{T})\): \[g:=\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}}(e)=\iota_{\mathcal{S}\cap \mathcal{T}}^{\mathcal{T}}(f).\] Since \(s_{\mathcal{T}}(f)\not\in\mathcal{S}\cap\mathcal{T}\), \(g\in E(\mathcal{T})\) by Lemma 4.2.8(c). In particular \(g\neq(ss^{\prime})\). 
Since \(g=\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}}(e)\), this implies that \(g=e\) by Example 4.2.7(d). Now \(s_{\mathcal{S}\cup\mathcal{T}}(f)\not\in\mathcal{S}\) so \(h:=\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(f)\in E(\mathcal{T})\backslash E(\mathcal{S})\) by Lemma 4.2.8(c) again. Hence \(\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}}(h)=h\) by Example 4.2.7(d). Using Lemma 4.2.8(a) several times, we now see that \[\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(f)=h=\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}}(h)=\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}}\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(f)=\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}\cup\mathcal{T}}(f)=\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{T}}\iota_{\mathcal{T}}^{\mathcal{S}\cup\mathcal{T}}(f)=g.\] Hence \(\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(f)=g=e\) as required. Next we consider the case where \(s_{\mathcal{T}}(f)=s\) so that, by Lemma 4.2.8(b), \(\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{T}}(f)=f\), and hence \(f=\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{T}}(f)=\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}}(e)\). This splits into two subcases. Suppose first that \(f=(ss^{\prime})\). Then \(\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}}(e)=f=(ss^{\prime})\) implies by Example 4.2.7(d) that \(s_{\mathcal{S}}(e)=s^{\prime}\). Therefore \(t_{\mathcal{S}}(e)\notin V(\mathcal{T})\) which means that \(e\in N(\mathcal{S}\cup\mathcal{T})\). Then \(\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(e)=e\) by Lemma 4.2.8(b) and \(\iota_{\mathcal{T}}^{\mathcal{S}\cup\mathcal{T}}(e)=(ss^{\prime})=f\) by Lemma 4.2.8(c), so \(\xi(e)=(e,f)\) as required. Finally, suppose that \(f\neq(ss^{\prime})\). Then \(t_{\mathcal{T}}(f)\notin V(\mathcal{S})\), so \(f\in N(\mathcal{S}\cup\mathcal{T})\). Then \(\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}}(e)=f\neq(ss^{\prime})\) implies that \(e=\iota_{\mathcal{S}\cap\mathcal{T}}^{\mathcal{S}}(e)=f\) by Example 4.2.7(d). Hence \(\iota_{\mathcal{T}}^{\mathcal{S}\cup\mathcal{T}}(f)=f\) and \(\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(f)=\iota_{\mathcal{S}}^{\mathcal{S}\cup\mathcal{T}}(e)=e\), and so \(\xi(f)=(e,f)\) as required. **Proposition 4.2.10**.: Let \(\mathcal{T}\) be a finite subtree of \(\mathcal{BT}\). Then there is a \(G_{\mathcal{T}}^{0}\)-equivariant bijection \[h_{\mathcal{T}}\colon N(\mathcal{T})\to h\left(C_{\mathcal{T}}\right)\] such that the following diagram is commutative for every subtree \(\mathcal{T}^{\prime}\) of \(\mathcal{T}\): \[\begin{array}{ccc}N(\mathcal{T})&\xrightarrow{\ h_{\mathcal{T}}\ }&h(C_{\mathcal{T}})\\ {\scriptstyle\iota_{\mathcal{T}^{\prime}}^{\mathcal{T}}}\big\downarrow&&\big\downarrow{\scriptstyle\iota_{C_{\mathcal{T}^{\prime}}}^{C_{\mathcal{T}}}}\\ N(\mathcal{T}^{\prime})&\xrightarrow{\ h_{\mathcal{T}^{\prime}}\ }&h(C_{\mathcal{T}^{\prime}})\end{array} \tag{13}\] Proof.: Suppose \(\mathcal{S}\) and \(\mathcal{T}\) are disjoint finite subtrees of \(\mathcal{BT}\). It follows from Proposition 4.2.2 that \(C_{\mathcal{S}}\) and \(C_{\mathcal{T}}\) are disjoint \(K\)-cheeses, so \(C_{\mathcal{S}}\) is contained in a unique hole of \(C_{\mathcal{T}}\). In particular, if \(e\in N(\mathcal{T})\), then \(C_{\{t_{\mathcal{T}}(e)\}}\) and \(C_{\mathcal{T}}\) are disjoint \(K\)-cheeses, so \(C_{\{t_{\mathcal{T}}(e)\}}\) is contained in a unique hole \(h_{\mathcal{T}}(e)\) of \(C_{\mathcal{T}}\). Since \(\lambda\) is \(G^{0}\)-equivariant, if \(g\in G^{0}_{\mathcal{T}}\) then \(t_{\mathcal{T}}(g\cdot e)=gt_{\mathcal{T}}(e)\) and so \[\lambda^{-1}(t_{\mathcal{T}}(ge))=g\lambda^{-1}(t_{\mathcal{T}}(e))\quad\text{and}\quad h_{\mathcal{T}}(ge)=gh_{\mathcal{T}}(e).\] Thus \(e\mapsto h_{\mathcal{T}}(e)\) defines a \(G^{0}_{\mathcal{T}}\)-equivariant function. 
Suppose that \(\mathcal{T}^{\prime}\subseteq\mathcal{T}\) is a subtree and that \(\iota_{\mathcal{T}^{\prime}}^{\mathcal{T}}(e)=f\). Then the path from \(t_{\mathcal{T}}(e)\) to \(t_{\mathcal{T}^{\prime}}(f)\) in \(\mathcal{BT}\) is a tree, \(\mathcal{S}\) say, that is disjoint from \(\mathcal{T}^{\prime}\). Then \(C_{\{t_{\mathcal{T}}(e)\}}\) and \(C_{\mathcal{S}}\) are both contained in the same hole, \(D\) say, of \(C_{\mathcal{T}^{\prime}}\). It follows that \(h_{\mathcal{T}^{\prime}}\circ\iota_{\mathcal{T}^{\prime}}^{\mathcal{T}}(e)=D=\iota_{C_{\mathcal{T}^{\prime}}}^{C_{\mathcal{T}}}\circ h_{\mathcal{T}}(e)\) and that the diagram (13) is commutative. To show that \(h_{\mathcal{T}}\) is always a bijection, we induct on the number of edges of \(\mathcal{T}\). If \(\mathcal{T}\) consists of a single vertex (no edges), or a single edge, then the result is a simple consequence of [6, I.2.3]. In the general case, we decompose \(\mathcal{T}\) as \(\mathcal{S}\cup\mathcal{T}^{\prime}\) where \(\mathcal{S}\) is a single leaf of \(\mathcal{T}\) and \(\mathcal{T}^{\prime}\) is \(\mathcal{T}\) with \(\mathcal{S}\) removed. Using the diagram (13) twice, we obtain the following commutative diagram: \[\begin{array}{ccc}N(\mathcal{T})&\longrightarrow&N(\mathcal{S})\underset{N(\mathcal{S}\cap\mathcal{T}^{\prime})}{\times}N(\mathcal{T}^{\prime})\\ {\scriptstyle h_{\mathcal{T}}}\big\downarrow&&\big\downarrow{\scriptstyle h_{\mathcal{S}}\times h_{\mathcal{T}^{\prime}}}\\ h(C_{\mathcal{T}})&\longrightarrow&h(C_{\mathcal{S}})\underset{h(C_{\mathcal{S}}\cap C_{\mathcal{T}^{\prime}})}{\times}h(C_{\mathcal{T}^{\prime}})\end{array}\] Now, the horizontal arrows in this diagram are bijections by Lemma 4.2.9 and Lemma 4.1.5 respectively. Since \(h_{\mathcal{S}}\times h_{\mathcal{T}^{\prime}}\) is a bijection by the induction hypothesis, it follows that \(h_{\mathcal{T}}\) is a bijection as well. **Remark 4.2.11**.: We note that the bijectivity of \(h_{\mathcal{T}}\) in Proposition 4.2.10 is more conceptually clear than our proof suggests. If \(D\) is in \(h(C_{\mathcal{T}})\) then \(\overline{\lambda(D\cap\Omega_{F})}\) is a connected component \(X_{D}\) of \(|\mathcal{BT}|\backslash|\mathcal{T}|\). There is precisely one edge \(e_{D}\) in \(N(\mathcal{T})\) such that the interior of \(|e_{D}|\) is contained in \(X_{D}\). Then the inverse of \(h_{\mathcal{T}}\) sends \(D\) to \(e_{D}\). However it is not straightforward to make this argument rigorous in the context of this paper. **Definition 4.2.12**.: Let \(s_{0}\) be the vertex of \(\mathcal{BT}\) fixed by \(GL_{2}(\mathcal{O}_{F})\) and let \(n\geqslant 0\). * \(\mathcal{T}_{n}\subset\mathcal{BT}\) is the subtree whose vertices have distance at most \(n\) from \(s_{0}\). * \(\Omega_{n}\) is the cheese \(\Omega_{n}:=C_{\mathcal{T}_{n}}\). **Remark 4.2.13**.: Since \(G^{0}_{\mathcal{T}_{n}}=G^{0}_{s_{0}}=GL_{2}(\mathcal{O}_{F})\) for all \(n\geqslant 0\), \(\Omega_{n}\) is \(GL_{2}(\mathcal{O}_{F})\)-stable for all \(n\geqslant 0\). **Remark 4.2.14**.: For any family \(\{\mathcal{T}_{j}\}_{j\in J}\) of finite subtrees of \(\mathcal{BT}\) such that \(\bigcup_{j\in J}|\mathcal{T}_{j}|=|\mathcal{BT}|\), the family of cheeses \(\{C_{\mathcal{T}_{j}}\}_{j\in J}\) forms an admissible cover of \(\Omega\). **Lemma 4.2.15**.: Let \(n\geqslant 0\). * \(GL_{2}(\mathcal{O}_{F})\) acts transitively on \(h(\Omega_{n})\). * The fibres of the maps \(\iota_{\Omega_{n}}^{\Omega_{n+1}}\colon h(\Omega_{n+1})\to h(\Omega_{n})\) all have size \(q\). Proof.: (a) By Proposition 4.2.10 and Remark 4.2.13, it suffices to prove that \(GL_{2}(\mathcal{O}_{F})\) acts transitively on \(N(\mathcal{T}_{n})\) for each \(n\geqslant 0\). But \(N(\mathcal{T}_{n})\) consists of all edges between vertices of distance \(n\) from \(s_{0}\) and vertices of distance \(n+1\) from \(s_{0}\), and each such edge is determined by its vertex of distance \(n+1\) from \(s_{0}\). The claim therefore holds because \(GL_{2}(\mathcal{O}_{F})\) acts transitively on the set of vertices of distance \(n+1\) from \(s_{0}\). 
(b) Note that \(|h(\Omega_{n})|=|N(\mathcal{T}_{n})|=(q+1)q^{n}\) since \(\mathcal{BT}\) is a \((q+1)\)-regular tree. The fibres of \(\iota_{\Omega_{n}}^{\Omega_{n+1}}\colon h(\Omega_{n+1})\to h(\Omega_{n})\) all have the same size, by part (a), and so each fibre has size \((q+1)q^{n+1}/\left((q+1)q^{n}\right)=q\). We introduce some other admissible covers of \(\Omega\) by \(K\)-cheeses, for later use. Recall that \(w=\begin{pmatrix}0&1\\ \pi_{F}&0\end{pmatrix}\in GL_{2}(F)\) and \(w\cdot s_{0}\) is a vertex of \(\mathcal{BT}\) adjacent to \(s_{0}\). **Definition 4.2.16**.: Let \(n\geqslant 0\). 1. Let \(e_{0}\) be the unique edge of \(\mathcal{BT}\) with vertices \(s_{0}\) and \(w\cdot s_{0}\). 2. Let \(\mathcal{S}_{n}\) be the subtree of \(\mathcal{BT}\) consisting of vertices a distance at most \(n\) from either \(s_{0}\) or \(ws_{0}\). 3. Let \(\Psi_{n}\) be the cheese \(\Psi_{n}:=C_{\mathcal{S}_{n}}\). **Lemma 4.2.17**.: For each \(n\geqslant 1\), 1. \(\Psi_{n}=\Omega_{n}\cup w\Omega_{n}\), and 2. \(\Psi_{n-1}=\Omega_{n}\cap w\Omega_{n}\). Proof.: Let \(n\geqslant 1\). It is clear that \(\mathcal{S}_{n}=\mathcal{T}_{n}\cup w\mathcal{T}_{n}\). We claim that \(\mathcal{S}_{n-1}=\mathcal{T}_{n}\cap w\mathcal{T}_{n}\). For the forward inclusion, because \(w^{2}\) acts trivially on \(\mathcal{BT}\), it is enough to show that \(\mathcal{T}_{n-1}\subseteq w\mathcal{T}_{n}\). Let \(d\) be the distance function on \(V(\mathcal{BT})\) and let \(x\in V(\mathcal{T}_{n-1})\). Then \(d(x,s_{0})\leqslant n-1\), so \(d(x,ws_{0})\leqslant d(x,s_{0})+d(s_{0},ws_{0})\leqslant(n-1)+1=n\) and hence \(x\in V(w\mathcal{T}_{n})\). For the reverse inclusion, it is enough to show that \(\mathcal{T}_{n}\cap w\mathcal{T}_{n}\subseteq\mathcal{S}_{n-1}\). Suppose that \(x\in V(\mathcal{T}_{n}\cap w\mathcal{T}_{n})\) so that \(d(x,s_{0})\leqslant n\) and \(d(x,ws_{0})\leqslant n\). By considering the unique path in \(\mathcal{BT}\) passing through \(x,s_{0}\) and \(ws_{0}\), we see that we must have either \(d(x,s_{0})\leqslant n-1\) or \(d(x,ws_{0})\leqslant n-1\), and hence \(x\in V(\mathcal{S}_{n-1})\). Both parts now follow easily. **Remark 4.2.18**.: Since, for each \(n\geqslant 0\), \(G^{0}_{\mathcal{S}_{n}}=G^{0}_{e_{0}}=I\) is the Iwahori subgroup from Notation 2.2.1(b), each cheese \(\Psi_{n}=C_{\mathcal{S}_{n}}\) is \(I\)-stable. **Lemma 4.2.19**.: Suppose that \(n\geqslant 0\). 1. \(h(\Psi_{n})\) has precisely two \(I\)-orbits, each of size \(q^{n+1}\). 2. The map \(\iota^{\Psi_{n+1}}_{\Psi_{n}}\colon h(\Psi_{n+1})\to h(\Psi_{n})\) is surjective, with all fibres of size \(q\). Proof.: (a) By Proposition 4.2.10 it suffices to show that \(N(\mathcal{S}_{n})\) has precisely two \(I\)-orbits each of size \(q^{n+1}\). But \(N(\mathcal{S}_{n})\) consists of those edges of \(\mathcal{S}_{n+1}\) that are not edges of \(\mathcal{S}_{n}\). These fall into those that connect vertices of distance \(n\) and \(n+1\) from \(s_{0}\) (and distance \(n+1\) and \(n+2\) from \(w\cdot s_{0}\)) and those that connect vertices of distance \(n\) and \(n+1\) from \(w\cdot s_{0}\) (and distance \(n+1\) and \(n+2\) from \(s_{0}\)). These two sets of edges in \(N(\mathcal{S}_{n})\) are its \(I\)-orbits. (b) Using Proposition 4.2.10 again, it suffices to prove the same thing about the fibres of the \(I\)-equivariant function \(N(\mathcal{S}_{n+1})\to N(\mathcal{S}_{n})\). This is straightforward to verify since \(\mathcal{BT}\) is \((q+1)\)-regular. **Lemma 4.2.20**.: The following collections of affinoid subdomains form admissible covers of \(\Omega\): 1. \(\{\Omega_{n}\}_{n\geqslant 0}\); 2. 
\(\{w\Omega_{n}\}_{n\geqslant 0}\); 3. \(\{\Psi_{n}\}_{n\geqslant 0}\). Proof.: Each part is an easy consequence of Remark 4.2.14. **Proposition 4.2.21**.: \(\Omega\) is a smooth, geometrically connected, quasi-Stein rigid \(K\)-analytic space. Proof.: We've seen that the chain \(\Omega_{0}\subseteq\Omega_{1}\subseteq\cdots\) is an admissible cover of \(\Omega\) by an increasing union of cheeses. Moreover the maps \(h(\Omega_{n+1})\to h(\Omega_{n})\) are all surjective by Lemma 4.2.15. Thus \(\Omega\) is a smooth, geometrically connected, quasi-Stein rigid \(K\)-analytic space by Proposition 4.1.12.

### Units, measures and flat connections on \(\Omega\)

Recall, for \(\varphi\in\operatorname{Aut}(\mathbb{P}^{1})\) and cheeses \(X\) and \(Y\) with \(\varphi(Y)\subseteq X\), the map \(\varphi_{Y}^{X}\colon h(X)\to h(Y)\) from Lemma 4.1.3 together with the notation \(D_{\infty}\) to denote the element of \(h(X)\) containing the point \(\infty\in\mathbb{P}^{1}(\mathbf{C})\). **Proposition 4.3.1**.: Let \(X=C(\alpha,\mathbf{s})\) and \(Y\) be cheeses and \(\varphi\in\operatorname{Aut}(\mathbb{P}^{1})\) with \(\varphi(Y)\subseteq X\). Then there is a commutative diagram \[\begin{array}{ccccccccc}1&\to&\mathcal{O}(X)^{\times\times}/K^{\times\times}&\to&\mathcal{O}(X)^{\times}/K^{\times}&\xrightarrow{\ \mu_{X}\ }&M_{0}(h(X),\mathbb{Z})&\to&0\\ &&\big\downarrow&&\big\downarrow&&\big\downarrow{\scriptstyle\varphi_{Y,*}^{X}}&&\\ 1&\to&\mathcal{O}(Y)^{\times\times}/K^{\times\times}&\to&\mathcal{O}(Y)^{\times}/K^{\times}&\xrightarrow{\ \mu_{Y}\ }&M_{0}(h(Y),\mathbb{Z})&\to&0\end{array}\] whose rows are short exact sequences of abelian groups and whose non-labelled vertical arrows are induced by the composite of the restriction \(\mathcal{O}(X)\to\mathcal{O}(\varphi(Y))\) and \(\varphi^{\sharp}\colon\mathcal{O}(\varphi(Y))\to\mathcal{O}(Y)\). The map \(\mu_{X}\) is characterised by \(\mu_{X}(x-\alpha)=\delta_{D}-\delta_{D_{\infty}}\) for \(D\in h(X)\) and \(\alpha\in D(K)\). Proof.: For each \(i=1,\dots,g:=g_{X}\), let \(D_{i}\in h(X)\) be the open disc containing \(\alpha_{i}\). Given \(u\in\mathcal{O}(X)^{\times}\), use Proposition 4.1.10 to find integers \(n_{1},\dots,n_{g}\) such that \[u\equiv(x-\alpha_{1})^{n_{1}}\cdots(x-\alpha_{g})^{n_{g}}\mod\ K^{\times}\cdot\mathcal{O}(X)^{\times\times}\] and define the measure \(\mu_{X}(u)\in M_{0}(h(X),\mathbb{Z})\) by \[\mu_{X}(u):=\sum_{i=1}^{g}n_{i}(\delta_{D_{i}}-\delta_{D_{\infty}}).\] The top row is then exact by Proposition 4.1.10. We note that \(\mu_{X}\) does not depend on the choice of the centres \(\alpha_{1},\dots,\alpha_{g}\) of the holes of the cheese \(X\). Since \(Y\) is also a cheese the bottom row is also exact. The commutativity of the left-hand square is clear. To see the right-hand square commutes it suffices by the argument just given to show that for all \(i=1,\cdots,g\), we have \[\varphi_{Y,*}^{X}\mu_{X}(x-\alpha_{i})=\mu_{Y}(\varphi^{\sharp}(x-\alpha_{i})).\] Now, \(\varphi_{Y,*}^{X}\mu_{X}(x-\alpha_{i})=\delta_{\varphi_{Y}^{X}(D_{i})}-\delta_{\varphi_{Y}^{X}(D_{\infty})}\), and \(\varphi^{\sharp}(x-\alpha_{i})\) is a rational function with divisor \((\varphi^{-1}(\alpha_{i}))-(\varphi^{-1}(\infty))\). So, since \(\varphi_{Y}^{X}(D_{i})\in h(Y)\) contains \(\varphi^{-1}(\alpha_{i})\in\varphi^{-1}(D_{i})\) and \(\varphi_{Y}^{X}(D_{\infty})\in h(Y)\) contains \(\varphi^{-1}(\infty)\in\varphi^{-1}(D_{\infty})\), we see that \[\mu_{Y}(\varphi^{\sharp}(x-\alpha_{i}))=\delta_{\varphi_{Y}^{X}(D_{i})}-\delta_{\varphi_{Y}^{X}(D_{\infty})}=\varphi_{Y,*}^{X}\mu_{X}(x-\alpha_{i}).\qed\] Because of Proposition 3.1.13, we are interested in the groups \(\frac{\mathcal{O}(X)^{\times}}{K^{\times}}\underset{\mathbb{Z}}{\otimes}\frac{\frac{1}{d}\mathbb{Z}}{\mathbb{Z}}\) for positive integers \(d\). After a preparatory Lemma, we will explain in Corollary 4.3.3 below how Proposition 4.3.1 helps us to calculate these groups. 
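For instance, for the annulus \(X=\operatorname{Sp}K\langle x,s/x\rangle\) of §4.1, with holes \(D_{1}\) and \(D_{\infty}\), writing a unit as \(u=\lambda x^{n}\varepsilon\) with \(\lambda\in K^{\times}\), \(n\in\mathbb{Z}\) and \(\varepsilon\in\mathcal{O}(X)^{\times\times}\) gives \(\mu_{X}(u)=n(\delta_{D_{1}}-\delta_{D_{\infty}})\). 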
**Lemma 4.3.2**.: Let \(\varpi:=p^{-\frac{1}{p-1}}\in\mathbb{R}_{>0}\) and \(X\) be a reduced \(K\)-affinoid variety. 1. If \(d\) is an integer such that \(p\nmid d\), then the \(d\)th power map \[(-)^{d}\colon\mathcal{O}(X)^{\times\times}\to\mathcal{O}(X)^{\times\times}\] is an isomorphism of topological groups. 2. If \(r\in(0,\varpi/p)\) then every element of \(\mathcal{O}(X)_{r}^{\times\times}\) has a \(p\)th root in \(\mathcal{O}(X)^{\times\times}\). Proof.: (a) If \(a\in\mathcal{O}(X)^{\circ\circ}\) then the binomial expansion \[(1+a)^{1/d}=\sum\limits_{n=0}^{\infty}\binom{1/d}{n}a^{n}\] converges to an element of \(\mathcal{O}(X)^{\times\times}\) because \(|a|<1\) and because \(\binom{1/d}{n}\in\mathbb{Z}_{p}\subset K^{\circ}\) for all \(n\geqslant 0\) as a consequence of the assumption that \(p\nmid d\). That the map \(a\mapsto\sum\limits_{n=0}^{\infty}\binom{1/d}{n}a^{n}\) is continuous is evident. (b) Similarly, if \(|a|\leqslant r<\varpi/p\), the binomial expansion \[(1+a)^{1/p}=\sum\limits_{n=0}^{\infty}\binom{1/p}{n}a^{n}\] converges to an element of \(\mathcal{O}(X)^{\times\times}\) since \[v_{p}\left(p^{n}\binom{1/p}{n}\right)=-v_{p}(n!)\geqslant-\frac{n}{p-1}\] so that for \(n\geqslant 1\) \[\left|\binom{1/p}{n}a^{n}\right|_{X}\leqslant(pr/\varpi)^{n}\] and \(pr/\varpi<1\). Thus \(\sum\limits_{n=0}^{\infty}\binom{1/p}{n}a^{n}\) is the required \(p\)th root of \(1+a\). **Corollary 4.3.3**.: Let \(X\) be a cheese and let \(d\) be an integer. 1. The map \(\mu_{X}\) induces a surjective homomorphism \[\mu_{X,d}:\frac{\mathcal{O}(X)^{\times}}{K^{\times}\mathcal{O}(X)^{\times d}}\twoheadrightarrow M_{0}\left(h(X),\mathbb{Z}/d\mathbb{Z}\right).\] 2. If \(G\to\operatorname{Aut}(\mathbb{P}^{1})_{X}\) is a group homomorphism, then \(\mu_{X,d}\) is \(G\)-equivariant. 3. If \(p\nmid d\) then \(\mu_{X,d}\) is an isomorphism. Proof.: (a) Proposition 4.3.1 gives us an exact sequence of abelian groups \[1\to\mathcal{O}(X)^{\times\times}/K^{\times\times}\to\mathcal{O}(X)^{\times}/K^{\times}\stackrel{{\mu_{X}}}{{\longrightarrow}}M_{0}(h(X),\mathbb{Z})\to 0.\] Tensoring this sequence with \(\mathbb{Z}/d\mathbb{Z}\) gives an exact sequence \[\frac{\mathcal{O}(X)^{\times\times}}{K^{\times\times}}\underset{\mathbb{Z}}{\otimes}\mathbb{Z}/d\mathbb{Z}\to\frac{\mathcal{O}(X)^{\times}}{K^{\times}}\underset{\mathbb{Z}}{\otimes}\mathbb{Z}/d\mathbb{Z}\overset{\mu_{X}\otimes 1}{\longrightarrow}M_{0}(h(X),\mathbb{Z})\underset{\mathbb{Z}}{\otimes}\mathbb{Z}/d\mathbb{Z}\to 0. \tag{14}\] The second term is \(\mathcal{O}(X)^{\times}/K^{\times}\mathcal{O}(X)^{\times d}\) and the third term is \(M_{0}(h(X),\mathbb{Z}/d\mathbb{Z})\) by Lemma 2.1.6. (b) This part follows easily from Proposition 4.3.1. (c) Since \(p\nmid d\), the first term in (14) vanishes by Lemma 4.3.2. **Corollary 4.3.4**.: Let \(X\) be a cheese, let \(d\) be an integer such that \(p\nmid d\), and suppose that \(G\to\operatorname{Aut}(\mathbb{P}^{1})_{X}\) is a group homomorphism. Then \[\mu_{X,d}\circ\theta_{d}\colon\operatorname{Con}(X)^{G}[d]\to M_{0}\left(h(X),\mathbb{Z}/d\mathbb{Z}\right)^{G}\] is an isomorphism. Proof.: This follows immediately from Proposition 3.1.13 and Corollary 4.3.3. We will now use Corollary 4.3.4 to investigate how the group \(\operatorname{Con}^{G}(X)[d]\) changes when we vary \(X\) and \(G\). More precisely, we have the following **Proposition 4.3.5**.: Let \(Y\subseteq X\) be cheeses such that \(\iota_{Y}^{X}:h(X)\to h(Y)\) is surjective and let \(d\geqslant 1\) be an integer such that \(p\nmid d\). 1.
Suppose that 1. each fibre of \(\iota_{Y}^{X}\colon h(X)\to h(Y)\) has size coprime to \(d\), and 2. the \(G\)-orbits in \(h(X)\) are unions of these fibres. Then the following restriction map is injective: \[\operatorname{Con}(X)[d]^{G}\hookrightarrow\operatorname{Con}(Y)[d].\] 2. Suppose that additionally to the assumptions in (a), 3. \(H\) is a closed subgroup of \(G_{Y}\), and 4. the restriction map \(\operatorname{Hom}(G,\mu_{d}(K))\to\operatorname{Hom}(H,\mu_{d}(K))\) is injective. Then the following restriction map is injective: \[\operatorname{Con}^{G}(X)[d]\hookrightarrow\operatorname{Con}^{H}(Y)[d].\] 3. Suppose that additionally to the assumptions in (b), 4. \(\iota_{Y}^{X}\colon h(X)\to h(Y)\) induces a bijection between the \(G\)-orbits in \(h(X)\) and the \(H\)-orbits in \(h(Y)\), 5. the map \(\operatorname{Hom}(G,\mu_{d}(K))\to\operatorname{Hom}(H,\mu_{d}(K))\) is surjective, and 6. \(\omega\colon\operatorname{Con}^{G}(X)[d]\to\operatorname{Con}(X)[d]^{G}\) is surjective. Then the following restriction map is an isomorphism: \[\operatorname{Con}^{G}(X)[d]\stackrel{{\cong}}{{\longrightarrow}} \operatorname{Con}^{H}(Y)[d].\] Proof.: (a) By Corollary 4.3.4 there is a commutative diagram (15) whose left-vertical arrow is restriction and whose horizontal arrows are isomorphisms. Thus it suffices to prove that \(\iota_{Y,*}^{X}\colon M_{0}\left(h(X),\mathbb{Z}/d\mathbb{Z}\right)^{G}\to M_{ 0}\left(h(Y),\mathbb{Z}/d\mathbb{Z}\right)\) is injective. Suppose that \(\nu\) is in the kernel. Then for \(D\in h(Y)\), \[0=\iota_{Y,*}^{X}\nu(\{D\})=\nu((\iota_{Y}^{X})^{-1}\{D\}).\] Since \(\iota_{Y}^{X}\) is surjective by assumption, we may choose some \(D^{\prime}\in h(X)\) such that \(\iota_{Y}^{X}(D^{\prime})=D\). Because \(\nu\) is \(G\)-invariant, assumption (ii) implies that \[\nu\left((\iota_{Y}^{X})^{-1}\{D\}\right)=|(\iota_{Y}^{X})^{-1}(D)|\cdot\nu( \{D^{\prime}\}).\] Then assumption (i) gives \(\nu(\{D^{\prime}\})=0\), so \(\nu=0\) as required. (b) Using Lemma 3.2.14 together with Proposition 4.1.11 and (iii), we have the commutative diagram (16) with exact rows, whose vertical arrows are given by restriction. Using (iv), part (a) and the Four Lemma, we see that the middle arrow is injective. (c) Assumption (v) implies that the right vertical map in (15) has image equal to \(M_{0}\left(h(Y),\mathbb{Z}/d\mathbb{Z}\right)^{H}\), so the right vertical arrow in (16) is an isomorphism. Using this together with (vii) gives that both of the rightmost horizontal arrows in diagram (16) are surjective. We can now use (vi) and the Five Lemma to finish the proof. The following technical Lemma is needed for the important Corollary 4.3.9 below. Recall the \(K\)-cheeses \(C_{\mathcal{T}}\) from Definition 4.2.3. **Lemma 4.3.6**.: Suppose that \(\mathcal{T}^{\prime}\subseteq\mathcal{T}\) are finite subtrees of \(\mathcal{BT}\) with \(N(\mathcal{T}^{\prime})\subseteq\mathcal{T}\). Then for every \(n\geqslant 1\) and \(f\in K^{\times}\mathcal{O}(C_{\mathcal{T}})_{|\pi_{F}|^{n}}^{\times\times}\), we have \(f|_{C_{\mathcal{T}^{\prime}}}\in K^{\times}\mathcal{O}(C_{\mathcal{T}^{\prime }})_{|\pi_{F}|^{n+1}}^{\times\times}\). Proof.: We claim first that for each \(a\in F\) and \(n\in\{\pm 1\}\), \[|(x-a)^{n}|_{C_{\mathcal{T}^{\prime}}}\leqslant|\pi_{F}|\,|(x-a)^{n}|_{C_{ \mathcal{T}}}\,. 
\tag{17}\] By a change of coordinate induced by an element of \(GL_{2}(F)\) we may reduce the proof of this claim to proving \(|\pi_{F}|^{-1}\leqslant|x|_{C_{\mathcal{T}}}\) in the particular case \(|x|_{C_{\mathcal{T}^{\prime}}}=1\). Now if \(|x|_{C_{\mathcal{T}^{\prime}}}=1\), then \(s_{0}\in\mathcal{T}^{\prime}\) and so, by hypothesis, \[\mathcal{T}_{1}\subseteq\mathcal{T}^{\prime}\cup N(\mathcal{T}^{\prime}) \subseteq\mathcal{T}.\] Then because \(\Omega_{1}=C_{\mathcal{T}_{1}}\), we have \(|x|_{C_{\mathcal{T}}}\geqslant|x|_{\Omega_{1}}\geqslant|\pi_{F}|^{-1}\) which proves the claim. Now suppose that \(f\in K^{\times}\mathcal{O}(C_{\mathcal{T}})_{|\pi_{F}|^{n}}^{\times\times}\) so that \(f=\lambda(1+h)\) for some \(\lambda\in K^{\times}\) and \(h\in\pi_{F}^{n}\mathcal{O}(C_{\mathcal{T}})^{\circ}\). We have to show that \(1+h\in K^{\times}\mathcal{O}(C_{\mathcal{T}^{\prime}})_{|\pi_{F}|^{n+1}}^{ \times\times}\). By Proposition 4.1.9, we can write \[1+h=(1+\lambda_{0})+\sum_{i=0}^{g}\sum_{j\geqslant 1}\lambda_{ij}\xi_{i}^{j}\] with \(\lambda_{0},\lambda_{ij}\in\pi_{F}^{n}\mathcal{O}_{K}\) and \(\xi_{0},\dots,\xi_{g}\) each of the form \(c(x-a)\) or \(\frac{c}{x-a}\) with \(a,c\in F\) and \(c\neq 0\) and \(|\xi_{i}|_{C_{\mathcal{T}}}=1\). Since \((1+\lambda_{0})\in K^{\times\times}\), by considering \((1+\lambda_{0})^{-1}(1+h)\) we may further assume that \(\lambda_{0}=0\) and then it suffices to prove that \(|h|_{C_{\mathcal{T}^{\prime}}}\leqslant|\pi_{F}^{n+1}|\). Now by (17), for all suitable \(i,j\) we have \[|\xi_{i}^{j}|_{C_{\mathcal{T}^{\prime}}}=|\xi_{i}|_{C_{\mathcal{T}^{\prime}}} ^{j}\leqslant|\pi_{F}|^{j}\leqslant|\pi_{F}|\] so the result follows by the ultrametric inequality. Recall the \(K\)-cheeses \(\Omega_{n}\) from Definition 4.2.12. **Corollary 4.3.7**.: Suppose that \(n,m\geqslant 0\). Then for all \(f\in K^{\times}\mathcal{O}(\Omega_{n+m})^{\times\times}\), \[f|_{\Omega_{n}}\in K^{\times}\mathcal{O}(\Omega_{n})_{|\pi_{F}|^{m}}^{\times \times}.\] Proof.: Since \(N(\mathcal{T}_{n+k})\subseteq\mathcal{T}_{n+k+1}\) for all \(n,k\geqslant 0\), this follows from Lemma 4.3.6 by a straightforward induction on \(m\) **Proposition 4.3.8**.: Write \(A:=GL_{2}(\mathcal{O}_{F})\). 1. For all \(n\geqslant 0\), the restriction map \[\operatorname{Con}(\Omega_{n+1})^{A}[p^{\prime}]\to\operatorname{Con}(\Omega_{ n})^{A}[p^{\prime}]\] is an isomorphism. These groups are cyclic of order \(q+1\). 2. There is \(m\geqslant 1\) such that the restriction maps \[\operatorname{Con}(\Omega_{m+n})^{A}[p]\to\operatorname{Con}(\Omega_{n})^{A}[p]\] are zero for all \(n\geqslant 0\). Proof.: (a) Suppose \(d\) is an integer coprime to \(p\) and that \(d\) is a multiple of \((q+1)=|h(\Omega_{0})|\). Then by Corollary 4.3.4 and Proposition 4.3.1, for each \(n\geqslant 1\), there is a commutative diagram whose horizontal maps are isomorphisms. Since \(A\) acts transitively on each \(h(\Omega_{n})\), by Lemma 4.2.15(a), we see by Proposition 2.1.8(b) that \(M_{0}(h(\Omega_{n}),\mathbb{Z}/d\mathbb{Z})^{A}\) is cyclic of order \(\gcd(d,|h(\Omega_{n})|)=q+1\) and generated by the image of \(\frac{d}{q+1}\Sigma_{h(\Omega_{n})}\). Moreover by Lemma 4.2.15(b) together with Proposition 2.1.8(c), the right-hand vertical map sends the image of \(\frac{d}{q+1}\Sigma_{h(\Omega_{n})}\) in \(M_{0}(h(\Omega_{n}),\mathbb{Z}/d\mathbb{Z})^{A}\) to the image of \(\frac{qd}{q+1}\Sigma_{h(\Omega_{n-1})}\) in \(M_{0}(h(\Omega_{n-1}),\mathbb{Z}/d\mathbb{Z})^{A}\). 
Since \(q\) is coprime to \(d\), it follows that the map is an isomorphism. Part (a) now follows easily. (b) We take \(m\geqslant 1\) such that \(|\pi_{F}^{m-1}|<\varpi\) and let \(N=n+m\). Suppose that \[[\mathscr{L}]\in\operatorname{Con}(\Omega_{N})^{A}[p].\] We will show that \([\mathscr{L}|_{\Omega_{n}}]=[\mathcal{O}]\in\operatorname{Con}(\Omega_{n})\). By Proposition 3.1.13 there is \(u\in\mathcal{O}(\Omega_{N})^{\times}\) such that \[\theta_{p}([\mathscr{L}])=uK^{\times}\mathcal{O}(\Omega_{N})^{\times p}\in\left(\frac{\mathcal{O}(\Omega_{N})^{\times}}{K^{\times}\mathcal{O}(\Omega_{N})^{\times p}}\right)^{A}.\] It suffices to show that \(u|_{\Omega_{n}}\in K^{\times}\mathcal{O}(\Omega_{n})^{\times p}\). Now \(\mu_{\Omega_{N},p}(u)\in M_{0}(h(\Omega_{N}),\mathbb{Z}/p\mathbb{Z})^{A}\) by Corollary 4.3.3(b), and by Proposition 2.1.8(b,c) and Lemma 4.2.15(b), the natural map induced by the inclusion \(\Omega_{N-1}\subset\Omega_{N}\) \[M_{0}(h(\Omega_{N}),\mathbb{Z}/p\mathbb{Z})^{A}\to M_{0}(h(\Omega_{N-1}),\mathbb{Z}/p\mathbb{Z})\] is zero. It follows, using Proposition 4.3.1, that there is \(v\in\mathcal{O}(\Omega_{N-1})^{\times}\) such that \(\mu_{\Omega_{N-1}}\left(u|_{\Omega_{N-1}}v^{p}\right)=0\). Writing \[w:=u|_{\Omega_{N-1}}v^{p},\] to prove \(u|_{\Omega_{n}}\in K^{\times}\mathcal{O}(\Omega_{n})^{\times p}\) it suffices to show that \(w|_{\Omega_{n}}\in K^{\times}\mathcal{O}(\Omega_{n})^{\times p}\). Since \(\mu_{\Omega_{N-1}}(w)=0\), Proposition 4.3.1 now implies that \(w\in K^{\times}\mathcal{O}(\Omega_{n+m-1})^{\times\times}\). Now by Corollary 4.3.7, \[w|_{\Omega_{n}}\in K^{\times}\mathcal{O}(\Omega_{n})_{|\pi_{F}|^{m-1}}^{\times\times}.\] Our assumption that \(|\pi_{F}^{m-1}|<\varpi\) now allows us to deduce from Lemma 4.3.2(b) that \(w|_{\Omega_{n}}\in K^{\times}\mathcal{O}(\Omega_{n})^{\times p}\) as required. We can now compute \(\operatorname{PicCon}(\Omega)^{GL_{2}(\mathcal{O}_{F})}_{\operatorname{tors}}\). **Corollary 4.3.9**.: The group \(\operatorname{PicCon}(\Omega)^{GL_{2}(\mathcal{O}_{F})}_{\operatorname{tors}}\) is cyclic of order \(q+1\). Proof.: By Proposition 4.2.21, Corollary 3.1.6, and Proposition 4.1.11 we have \[\operatorname{PicCon}(\Omega)\cong\varprojlim\operatorname{Con}(\Omega_{n}).\] Since each \(\Omega_{n}\) is \(A:=GL_{2}(\mathcal{O}_{F})\)-stable by Remark 4.2.13, and the functors taking \(A\)-invariants and taking the \(d\)-torsion subgroup each commute with limits, it follows that for each \(d\geqslant 1\) we have \[\operatorname{PicCon}(\Omega)^{A}[d]\cong\varprojlim\operatorname{Con}(\Omega_{n})^{A}[d].\] By Proposition 4.3.8(b), \(\varprojlim\operatorname{Con}(\Omega_{n})^{A}[p]=0\). So \(\operatorname{PicCon}(\Omega)^{A}\) has no \(p\)-torsion. By Proposition 4.3.8(a), we can see that for each \(d\) that is a multiple of \(q+1\), \(\varprojlim\operatorname{Con}(\Omega_{n})^{A}[d]\) is cyclic of order \(q+1\). The result follows. ### Proof of Theorem A We now return to the setting of §3.3, and start working towards our proof of Theorem A. Recall the cheeses \(\Omega_{n}\) from Definition 4.2.12(b) and the map \(\phi_{z}\) from Proposition 3.2.7. \(\Omega_{F,n}\) will denote the version of \(\Omega_{n}\) obtained when \(K=F\). **Theorem 4.4.1**.: Let \(L\) be an unramified quadratic extension of \(F\).
Then for every \(z\in\Omega_{F,0}(L)\) and every \(n\geqslant 0\), the map \[\phi_{z}\colon\operatorname{Con}^{GL_{2}(\mathcal{O}_{F})}(\Omega_{n})[p^{\prime}]\to\operatorname{Hom}(GL_{2}(\mathcal{O}_{F})_{z},K(z)^{\times})[p^{\prime}]\] is an isomorphism. Proof.: Note that because \(z\in\Omega_{F,0}(L)\), we may view it as a point of \(\Omega_{F,0}(K(z))=\Omega_{0}(K(z))\subseteq\Omega_{n}(K(z))\). Hence the map \(\phi_{z}\) from Proposition 3.2.7 makes sense in this setting. Write \(A:=\operatorname{GL}_{2}(\mathcal{O}_{F})\). By Proposition 3.2.14 together with the left exactness of the endofunctor \((-)[p^{\prime}]\) on abelian groups, there is an exact sequence \[0\to\operatorname{Hom}(A,K^{\times})[p^{\prime}]\to\operatorname{Con}^{A}(\Omega_{n})[p^{\prime}]\to\operatorname{Con}(\Omega_{n})^{A}[p^{\prime}].\] The group \(\operatorname{Con}(\Omega_{n})^{A}[p^{\prime}]\) is cyclic of order \(q+1\) by Proposition 4.3.8(a), whereas \(\operatorname{Hom}(A,K^{\times})[p^{\prime}]\) is cyclic of order \(q-1\) by Lemma 2.2.3(a). Thus, \[\left|\operatorname{Con}^{A}(\Omega_{n})[p^{\prime}]\right|\leqslant q^{2}-1.\] Since \(z\in\Omega_{F,0}(L)\subseteq\Omega_{F}(\overline{F})\) by assumption, we can apply Lemma 4.2.1 to see that \(A_{z}=G^{0}_{z}\). Also, \(z\in\Omega_{F,0}(L)\subseteq\Omega_{F}(L)=L\backslash F\) implies that \(F(z)=L\) is a quadratic extension of \(F\), so Lemma 2.2.8 can be applied to deduce that \(\operatorname{Hom}(A_{z},K(z)^{\times})[p^{\prime}]\) is cyclic of order \(q^{2}-1\). Now it suffices to show that the image of \(\phi_{z}\) contains a generator of this cyclic group. To this end we will construct an element \([\mathscr{L}]\) of \(\operatorname{Con}^{A}(\Omega_{n})[p^{\prime}]\) such that, in the notation of Lemma 2.2.8, \(\phi_{z}([\mathscr{L}])=\widehat{j_{z}}^{q^{n}}\). To do this, we will first construct a suitable unit \(u\in\mathcal{O}(\Omega_{n})^{\times}\), then an appropriate \(1\)-cocycle \(\alpha\in\mathcal{Z}^{A,\Omega_{n}}_{u,q+1,q-1}\) and then the required equivariant line bundle \(\mathscr{L}\) is given by an application of Lemma 3.3.4(a). Consider the function \(j\colon A\to\mathcal{O}(\Omega_{n})^{\times}\) given by \[j\left(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\right)=a-cx.\] We compute that \[\begin{aligned}j\left(\begin{pmatrix}a_{1}&b_{1}\\ c_{1}&d_{1}\end{pmatrix}\right)\,\begin{pmatrix}a_{1}&b_{1}\\ c_{1}&d_{1}\end{pmatrix}\cdot j\left(\begin{pmatrix}a_{2}&b_{2}\\ c_{2}&d_{2}\end{pmatrix}\right)&=(a_{1}-c_{1}x)\left(a_{2}-c_{2}\frac{d_{1}x-b_{1}}{a_{1}-c_{1}x}\right)\\ &=(a_{1}a_{2}+b_{1}c_{2})-(c_{1}a_{2}+d_{1}c_{2})x\\ &=j\left(\begin{pmatrix}a_{1}&b_{1}\\ c_{1}&d_{1}\end{pmatrix}\begin{pmatrix}a_{2}&b_{2}\\ c_{2}&d_{2}\end{pmatrix}\right)\end{aligned}\] and see that \(j\in Z^{1}(A,\mathcal{O}(\Omega_{n})^{\times})\). The reason for considering this \(1\)-cocycle \(j\) is that \[\mu_{\Omega_{n}}(j(g))=\delta_{gD_{\infty}}-\delta_{D_{\infty}}\quad\text{for all}\quad g\in A,\quad\text{and}\quad z\circ j|_{A_{z}}\equiv\widehat{j_{z}}\mod L^{\times\times}.\] Now we define \(\nu:=|h(\Omega_{n})|\delta_{D_{\infty}}-\Sigma_{\Omega_{n}}\in M_{0}(h(\Omega_{n}),\mathbb{Z})\). Applying Proposition 4.3.1, we find \(u\in\mathcal{O}(\Omega_{n})^{\times}\) such that \(\mu_{\Omega_{n}}(u)=\nu\).
Then we calculate that \[\delta_{A}(\nu)(g)=g\cdot\nu-\nu=|h(\Omega_{n})|(\delta_{gD_{\infty}}-\delta_ {D_{\infty}})\quad\text{for all}\quad g\in A.\] Therefore inside \(Z^{1}(A,M_{0}(h(\Omega_{n}),\mathbb{Z}))\) we have the equality \[\mu_{\Omega_{n}}\circ j^{|h(\Omega_{n})|}=\delta_{A}(\nu)=\mu_{\Omega_{n}} \circ\delta_{A}(u).\] Since \(|h(\Omega_{n})|=q^{n}(q+1)\), this means that \(j^{-q^{n}(q+1)}\delta_{A}(u)\) takes values in \(\ker\mu_{\Omega_{n}}\). Now Proposition 4.3.1 tells us that \(\ker\mu_{\Omega_{n}}=K^{\times}\cdot\mathcal{O}(\Omega_{n})^{\times\times}\). So we may rephrase this as saying that \[\pi_{T(\Omega_{n})}\circ(j^{-q^{n}(q+1)}\delta_{G}(u))\] takes values in \(K^{\times}/K^{\times\times}\). Since \(\Omega_{n}\) is geometrically connected, \(A\) is compact and every finite abelian \(p^{\prime}\)-quotient of \(A\) has exponent dividing \(q-1\), by Remark 2.2.4, we may apply Proposition 3.3.7 with \((d,e,u,\beta)=(q+1,q-1,u,j^{q^{n}})\) to deduce that there exists an \(\alpha\in\mathcal{Z}_{u,q+1,q-1}^{A,\Omega_{n}}\) such that \[\pi_{T(\Omega_{n})}\circ\alpha=\pi_{T(\Omega_{n})}\circ j^{q^{n}}. \tag{18}\] By Lemma 3.3.4(a), there is a \((q^{2}-1)\)-torsion \(A\)-equivariant line bundle with connection \(\mathscr{L}_{u,q+1}^{\alpha}\) on \(\Omega_{n}\), such that \(\phi_{\Omega_{n}}^{A}([\mathscr{L}_{u,q+1}^{\alpha}])=[\alpha]\) inside \(H^{1}(A,\mathcal{O}(\Omega_{n})^{\times})\). To see what \(\phi_{z}\) does to this \([\mathscr{L}_{u,q+1}^{\alpha}]\), we apply Proposition 3.3.2(b) to find that \[\phi_{z}([\mathscr{L}_{u,q+1}^{\alpha}])=z\circ(\operatorname{res}_{A_{z}}^{A }\phi_{\Omega_{n}}^{A}([\mathscr{L}_{u,q+1}^{\alpha}]))=z\circ\alpha|_{A_{z}}.\] Applying the functor \(T(-)\) from Notation 3.3.5 to the morphism of affinoid varieties \(z:\operatorname{Sp}L\hookrightarrow\Omega_{n}\) and using equation (18), we see that \[z\circ\alpha|_{A_{z}}\equiv z\circ j^{q^{n}}|_{A_{z}}\operatorname{mod}K(z)^ {\times\times}.\] But \(z\circ j\left(\begin{pmatrix}a&-cN(z)\\ c&a-c\operatorname{tr}(z)\end{pmatrix}\right)=a-cz\), so as \(z\circ\alpha|_{A_{z}}\) takes values in \(\mu_{q^{2}-1}(K(z)^{\times})\), we conclude that inside \(\operatorname{Hom}(A_{z},K(z)^{\times})[p^{\prime}]\) we have \[\phi_{z}([\mathscr{L}_{u,q+1}^{\alpha}])=z\circ\alpha|_{A_{z}}=\widehat{j_{z} }^{q^{n}}\] as claimed earlier. This is a generator because \(q^{n}\) is coprime to \(q^{2}-1\) Using our next Lemma, we will be able to use Theorem 4.4.1 to shed light on our main group of interest, namely \(\operatorname{PicCon}^{G^{0}}(\Omega)_{\operatorname{tors}}\). See Corollary 4.4.5 below for a description of the \(p^{\prime}\)-torsion part of \(\operatorname{PicCon}^{G_{0}}(\Omega)\). **Lemma 4.4.2**.: Let \(A=\operatorname{GL}_{2}(\mathcal{O}_{F})\) and \(B={}^{w}\operatorname{GL}_{2}(\mathcal{O}_{F})\). 1. \(\operatorname{PicCon}^{A}(\Omega)\quad\cong\quad\varprojlim\operatorname{Con }^{A}(\Omega_{n})\). 2. \(\operatorname{PicCon}^{G^{0}}(\Omega)\quad\cong\quad\operatorname{PicCon}^{A} (\Omega)\underset{\operatorname{PicCon}^{I}(\Omega)}{\times}\operatorname{ PicCon}^{B}(\Omega)\). Proof.: We note by Proposition 4.2.21 and Lemma 4.2.20(a), \(\Omega\) is a smooth, geometrically connected, quasi-Stein space with admissible cover \(\{\Omega_{n}\}\). Thus (a) follows from Lemma 3.2.16 together with Remark 4.2.13 and Proposition 4.1.11. (b) This follows from Proposition 3.2.15 and Theorem 2.2.2. Recall the \(K\)-cheeses \(\Psi_{n}\) from Definition 4.2.16. 
Since \(\operatorname{Pic}(\Psi_{n})=0\) by Proposition 4.1.11, there are restriction maps \(\operatorname{PicCon}^{I}(\Omega)\to\operatorname{Con}^{I}(\Psi_{n})\) for all \(n\geqslant 0\). **Corollary 4.4.3**.: The restriction map \(\operatorname{PicCon}^{I}(\Omega)[p^{\prime}]\to\operatorname{Con}^{I}(\Psi_{ 0})[p^{\prime}]\) is injective. Proof.: By Lemma 3.2.16 and Lemma 4.2.20(c), it suffices to show that the restriction map \(\operatorname{Con}^{I}(\Psi_{n+1})[p^{\prime}]\to\operatorname{Con}^{I}(\Psi_ {n})[p^{\prime}]\) is injective for all \(n\geqslant 1\). Fixing \(n\geqslant 1\), this is equivalent to \(\operatorname{Con}^{I}(\Psi_{n+1})[d]\to\operatorname{Con}^{I}(\Psi_{n})[d]\) being injective for each \(d\) coprime to \(p\). We will prove this using Proposition 4.3.5(b). Condition (i) of Proposition 4.3.5 follows from Lemma 4.2.19(c). Condition (ii) holds since the induced map on \(I\)-orbits \(h(\Psi_{n+1})/I\to h(\Psi_{n})/I\) is surjective and hence injective by Lemma 4.2.19(c),(a). Conditions (iii) and (iv) are trivial since in this case \(G=H=I\). Thus \(\operatorname{Con}^{I}(\Psi_{n})[d]\to\operatorname{Con}^{I}(\Psi_{n-1})[d]\) is injective. **Proposition 4.4.4**.: For every \([\mathscr{L}]\in\operatorname{PicCon}^{GL_{2}(\mathcal{O}_{F})}(\Omega)[p^{ \prime}]\) there is an integer \(k\) such that the restriction \(\mathscr{L}|_{I}\) satisfies \[[\mathscr{L}|_{I}]\cdot w[\mathscr{L}|_{I}]=[\mathcal{O}_{\widetilde{ \det}^{k}}]\quad\text{in}\quad\operatorname{PicCon}^{I}(\Omega).\] Proof.: We restrict \(\mathscr{L}|_{I}\) further to \(\Psi_{0}\), forming \([\mathscr{L}|_{I,\Psi_{0}}]\in\operatorname{Con}^{I}(\Psi_{0})[p^{\prime}]\). By Corollary 4.4.3, it suffices to show that inside in \(\operatorname{Con}^{I}(\Psi_{0})\) we have \[[\mathscr{L}|_{I,\Psi_{0}}]\cdot w[\mathscr{L}|_{I,\Psi_{0}}]=[\mathcal{O}_{ \widetilde{\det}^{k}}]\quad\text{for some}\quad k\in\frac{\mathbb{Z}}{(q-1) \mathbb{Z}}.\] We consider the exact sequence coming from Lemma 3.2.14 \[1\to\operatorname{Hom}(I,K^{\times})[p^{\prime}]\to\operatorname{Con}^{I}( \Psi_{0})[p^{\prime}]\overset{\cong}{\to}\operatorname{Con}(\Psi_{0})^{I}[p^ {\prime}]. \tag{19}\] Note that \(\omega([\mathscr{L}])\in\operatorname{PicCon}^{GL_{2}(\mathcal{O}_{F})}( \Omega)_{\operatorname{tors}}\) is killed by \(q+1\) by Corollary 4.3.9. Therefore the image \(\omega([\mathscr{L}|_{I,\Psi_{0}}])\) of this class in \(\operatorname{Con}(\Psi_{0})^{I}\) is also killed by \(q+1\). Since \(w\) normalises \(I\) and \(\Psi_{0}\) is \(\langle w,I\rangle\)-stable, Corollary 4.3.3 and Proposition 3.1.13 gives us an isomorphism of groups with \(\langle w\rangle\)-action \[\mu_{\Psi_{0},q+1}\circ\theta_{q+1}\colon\operatorname{Con}(\Psi_{0})^{I}[q+1 ]\overset{\cong}{\to}M_{0}\left(h(\Psi_{0}),\frac{\mathbb{Z}}{(q+1)\mathbb{ Z}}\right)^{I}.\] Next, \(h(\Psi_{0})\) has two \(I\)-orbits \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\) of size \(q\) by Lemma 4.2.19(a). Hence \(M_{0}\left(h(\Psi_{0}),\frac{\mathbb{Z}}{(q+1)\mathbb{Z}}\right)^{I}\) is generated by the image of \(\Sigma_{\mathcal{O}_{1}}-\Sigma_{\mathcal{O}_{2}}\). Since \(w\) swaps the two orbits, it acts on \(M_{0}\left(h(\Psi_{0}),\frac{\mathbb{Z}}{(q+1)\mathbb{Z}}\right)^{I}\) by negation. 
Hence \(w\) acts on \(\operatorname{Con}(\Psi_{0})^{I}[q+1]\) by inversion, so that \[\omega\left([\mathscr{L}|_{I,\Psi_{0}}]\cdot w[\mathscr{L}|_{I,\Psi_{0}}]\right)=\omega\left([\mathscr{L}|_{I,\Psi_{0}}]\right)\cdot w\omega\left([\mathscr{L}|_{I,\Psi_{0}}]\right)=[\mathcal{O}]\] is the trivial element of \(\operatorname{Con}(\Psi_{0})^{I}[q+1]\). The exact sequence (19) above now implies that \([\mathscr{L}|_{I,\Psi_{0}}]\cdot w[\mathscr{L}|_{I,\Psi_{0}}]=[\mathcal{O}_{\chi}]\) for some \(\chi\in\operatorname{Hom}(I,\mu_{p^{\prime}}(K))\). Finally, since \(w^{2}\in Z(GL_{2}(F))\) acts trivially on \(\Psi_{0}\) and \(\operatorname{Hom}(I,\mu_{p^{\prime}}(K))\), \[w\cdot([\mathscr{L}|_{I,\Psi_{0}}]\cdot w[\mathscr{L}|_{I,\Psi_{0}}])=[\mathscr{L}|_{I,\Psi_{0}}]\cdot w[\mathscr{L}|_{I,\Psi_{0}}].\] Hence \([\mathcal{O}_{w\chi}]=w\cdot[\mathcal{O}_{\chi}]=[\mathcal{O}_{\chi}]\), which implies that \[\chi\in\operatorname{Hom}(I,\mu_{p^{\prime}}(K))^{\langle w\rangle}.\] Now Lemma 2.2.3(c) completes the proof. **Corollary 4.4.5**.: The following restriction map is an isomorphism of groups: \[\operatorname{PicCon}^{G^{0}}(\Omega)[p^{\prime}]\to\operatorname{PicCon}^{GL_{2}(\mathcal{O}_{F})}(\Omega)[p^{\prime}].\] Proof.: Let \(A:=GL_{2}(\mathcal{O}_{F})\) and \(B={}^{w}A\). The commutative diagram whose maps are given by restriction is a pullback square by Lemma 4.4.2(b). Since taking \(p^{\prime}\)-torsion preserves limits in the category of abelian groups, it follows that the corresponding diagram of \(p^{\prime}\)-torsion subgroups is also a pullback square, and so the diagram is a pullback square as well. Since pullbacks preserve isomorphisms, it suffices to see that in the last diagram, we have \(\operatorname{im}q_{2}\subseteq\operatorname{im}q_{1}\) and that \(q_{1}\) is injective. We consider the commutative diagram whose rows are exact by Proposition 3.2.14. The left vertical map is injective by Lemma 2.2.3(a,b) and the right vertical map is an inclusion map. Now the injectivity of \(q_{2}\) follows from the Snake Lemma. Therefore \(q_{1}\) is also injective, because \(q_{2}(w[\mathscr{L}])=wq_{1}([\mathscr{L}])\) for every \([\mathscr{L}]\in\operatorname{PicCon}^{B}(\Omega)\). Finally, by Proposition 4.4.4, \(q_{2}([\mathscr{L}])=q_{1}(w[\mathscr{L}]^{-1}\otimes\mathcal{O}_{\widetilde{\operatorname{det}}^{k}})\) for some integer \(k\). Hence the image of \(q_{2}\) is contained in the image of \(q_{1}\). **Remark 4.4.6**.: One may wonder if it might be possible to strengthen the statement of Corollary 4.4.5 to give a similar description of _all_ torsion elements in \(\operatorname{PicCon}^{G^{0}}(\Omega)\). However when \(q=2\), the restriction map \[\operatorname{PicCon}^{G^{0}}(\Omega)_{\operatorname{tors}}\to\operatorname{PicCon}^{GL_{2}(\mathcal{O}_{F})}(\Omega)_{\operatorname{tors}}\] is not an isomorphism in general, because the homomorphism from the abelianization of \(GL_{2}(\mathcal{O}_{F})\) to the abelianization of \(G^{0}\) induced by inclusion has a kernel of order \(2\) and so in particular the restriction map \[\operatorname{Hom}(G^{0},K^{\times})_{\operatorname{tors}}\to\operatorname{Hom}(GL_{2}(\mathcal{O}_{F}),K^{\times})_{\operatorname{tors}}\] then may not be surjective. Next we pass to the limit as \(n\to\infty\) to deduce the consequences of Theorem 4.4.1 for the \(p^{\prime}\)-torsion part of our main group of interest, namely \(\operatorname{PicCon}^{G^{0}}(\Omega)\). First we recall the Sylow pro-\(p\) subgroup \(P_{z}\) of \(SL_{2}(F)_{z}\) from Lemma 2.2.5.
**Lemma 4.4.7**.: Suppose that \(K\) contains the quadratic unramified extension \(L\) of \(F\) and let \(z\in\Omega_{F,0}(L)\). Then the homomorphism \[\phi_{z}[p^{\prime}]\colon\operatorname{PicCon}^{G^{0}}(\Omega)[p^{\prime}]\quad\longrightarrow\quad\operatorname{Hom}(G^{0}_{z},K^{\times})[p^{\prime}]\] is an isomorphism. Moreover, every \(p^{\prime}\)-torsion character \(\chi:G^{0}_{z}\to K^{\times}\) kills \(P_{z}\). Proof.: By Lemma 4.4.2(a) and Corollary 4.4.5, restriction maps induce an isomorphism \[\operatorname{PicCon}^{G^{0}}(\Omega)[p^{\prime}]\stackrel{{\cong}}{{\longrightarrow}}\varprojlim\operatorname{Con}^{GL_{2}(\mathcal{O}_{F})}(\Omega_{n})[p^{\prime}].\] Using Proposition 3.3.2(d) together with Theorem 4.4.1, we deduce that the map \(\phi_{z}[p^{\prime}]\) in the statement of the Lemma is an isomorphism. The last statement holds because \(P_{z}\) is a (normal) pro-\(p\) subgroup of \(G^{0}_{z}\). With the last result in hand, it is natural to wonder about the \(p\)-torsion part of \(\operatorname{PicCon}^{G^{0}}(\Omega)\). The following description of this group does not require the full force of the methods employed in the proof of Theorem 4.4.1. **Lemma 4.4.8**.: Suppose that \(K\) contains the quadratic unramified extension \(L\) of \(F\) and let \(z\in\Omega_{F}(L)\). The homomorphism \[\phi_{z}[p^{\infty}]\colon\operatorname{PicCon}^{G^{0}}(\Omega)[p^{\infty}]\quad\longrightarrow\quad\operatorname{Hom}(G^{0}_{z},K^{\times})[p^{\infty}]\] is injective with image \(\operatorname{Hom}(G^{0}_{z}/P_{z},K^{\times})[p^{\infty}]\). Proof.: Since \(K\supseteq L\) and \(z\in\Omega_{F}(L)\) by assumption, we see that \(z\in\Omega(K)\). Hence the map \(\phi_{z}:\operatorname{Pic}^{G^{0}}(\Omega)\to\operatorname{Hom}(G^{0}_{z},K^{\times})\) exists by Proposition 3.2.7. Now consider the following triangle: \[\begin{array}{ccc}\operatorname{Hom}(G^{0},K^{\times})[p^{\infty}]&\longrightarrow&\operatorname{PicCon}^{G^{0}}(\Omega)[p^{\infty}]\\ &\searrow&\downarrow\\ &&\operatorname{Hom}(G^{0}_{z},K^{\times})[p^{\infty}]\end{array}\] Here, the horizontal map sends the character \(\chi\) to \([\mathcal{O}_{\chi}]\), the diagonal arrow \(\operatorname{res}\) on the left is restriction of characters, and the vertical arrow is \(\phi_{z}[p^{\infty}]\). The triangle is commutative by Proposition 3.3.2(c), and the horizontal arrow is an isomorphism by Proposition 3.2.14 and Corollary 4.3.9. Hence it suffices to show that \(\operatorname{res}\) is injective, and that its image is \(\operatorname{Hom}(G^{0}_{z}/P_{z},K^{\times})[p^{\infty}]\). Note that \(SL_{2}(F)\) is a perfect subgroup of \(G^{0}\) and \(K^{\times}\) is abelian. Hence \(\chi|_{SL_{2}(F)}=1\) for any character \(\chi:G^{0}\to K^{\times}\). In particular, \(\operatorname{res}(\chi)\) vanishes on the subgroup \(P_{z}\) of \(SL_{2}(F)\). Now if \(\chi:G^{0}\to K^{\times}\) is a character such that \(\operatorname{res}(\chi)=\chi|_{G^{0}_{z}}=1\), then Corollary 2.2.6 immediately implies that \(\chi=1\). Therefore \(\operatorname{res}\) is injective as required. **Proposition 4.4.9**.: Suppose that \(K\) contains the quadratic unramified extension \(L\) of \(F\). Then for all \(g\in GL_{2}(F)\) and \(z\in\Omega_{F}(L)\), there is a commutative diagram whose arrows are all isomorphisms of abelian groups. Proof.: The diagram commutes by Proposition 3.2.13(b), and its vertical arrows are isomorphisms with inverses \(g^{-1}\) and \(c^{*}_{g^{-1}}\) respectively. By Lemma 4.4.7 and Lemma 4.4.8, the top horizontal arrow is an isomorphism in the case when \(z\in\Omega_{F,0}(L)\).
But since \(L\) is quadratic over \(F\), \(GL_{2}(F)\) acts transitively on \(\Omega_{F}(L)=L\backslash F\), so we may choose \(g\in GL_{2}(F)\) such that \(g\cdot z\in\Omega_{F,0}(L)\) and then \(\phi_{g\cdot z}\) is an isomorphism. The commutativity of the diagram now ensures that \(\phi_{z}\) is always an isomorphism. We can finally give our proof of Theorem A. **Theorem 4.4.10**.: Suppose that \(K\) contains the quadratic unramified extension \(L\) of \(F\). Then there is an isomorphism of abelian groups \[\operatorname{PicCon}^{G^{0}}(\Omega)_{\operatorname{tors}}\to\operatorname{ Hom}(\mathcal{O}_{D}^{\times},K^{\times})_{\operatorname{tors}}\] that descends to a natural bijection \[\operatorname{PicCon}^{G^{0}}(\Omega)_{\operatorname{tors}}/G\to\operatorname{ Hom}(\mathcal{O}_{D}^{\times},K^{\times})_{\operatorname{tors}}/D^{\times}.\] Proof.: Choose \(z\in\Omega_{F}(L)\) as well as an \(F\)-algebra homomorphism \(\iota:L\hookrightarrow D\). By Lemma 2.2.5(d,b,c), \(j_{z}(P_{z})\) is the Sylow pro-\(p\) subgroup of \(\ker N_{L/F}\cap\mathcal{O}_{L}^{\times}\). In view of Definition 2.3.5, we see that \(j_{z}(P_{z})=P_{L}^{1}\). Now Proposition 4.4.9 together with Lemma 2.2.5(b) shows that \[j_{z}\circ\phi_{z}\colon\operatorname{PicCon}^{G^{0}}(\Omega)_{\operatorname{ tors}}\to\operatorname{Hom}(\mathcal{O}_{L}^{\times}/P_{L}^{1},K^{\times})_{ \operatorname{tors}}\] is an isomorphism. We can now post-compose \(j_{z}\circ\phi_{z}\) with the inverse of the isomorphism \(\overline{\varrho\circ\iota}^{*}\) from Corollary 2.3.7(a) to obtain the required isomorphism \[(\overline{\varrho\circ\iota}^{*})^{-1}\circ j_{z}\circ\phi_{z}:\operatorname {PicCon}^{G^{0}}(\Omega)_{\operatorname{tors}}\quad\stackrel{{ \cong}}{{\longrightarrow}}\quad\operatorname{Hom}(\mathcal{O}_{D}^{\times},K ^{\times})_{\operatorname{tors}}.\] Although this does depend on the choice of \(z\in\Omega(L)\) as well as the choice of the \(F\)-algebra embedding \(\iota:L\hookrightarrow D\), using Proposition 4.4.9, Remark 2.2.7 and Corollary 2.3.7(c) we see that it descends to a well-defined bijection \[\operatorname{PicCon}^{G^{0}}(\Omega)_{\operatorname{tors}}/G\quad \stackrel{{\cong}}{{\longrightarrow}}\quad\operatorname{Hom}( \mathcal{O}_{D}^{\times},K^{\times})_{\operatorname{tors}}/D^{\times}\] which does not depend on any of the choices.
2309.13566
Probing the non-Abelian fusion of a pair of Majorana zero modes
In this work, we perform real time simulations for probing the non-Abelian fusion of a pair of Majorana zero modes (MZMs). The nontrivial fusion outcomes can be either a vacuum, or an unpaired fermion, which reflect the underlying non-Abelian statistics. The two possible outcomes can cause different charge variations in the nearby probing quantum dot (QD), while the charge occupation in the dot is detected by a quantum point contact. In particular, we find that gradual fusion and gradual coupling of the MZMs to the QD (in nearly adiabatic switching-on limit) provide a simpler detection scheme than sudden coupling after fusion to infer the coexistence of two fusion outcomes, by measuring the occupation probability of the QD. For the scheme of sudden coupling (after fusion), we propose and analyze continuous weak measurement for the quantum oscillations of the QD occupancy. From the power spectrum of the measurement currents, one can identify the characteristic frequencies and infer thus the coexistence of the fusion outcomes.
Jing Bai, Qiongyao Wang, Luting Xu, Wei Feng, Xin-Qi Li
2023-09-24T06:43:19Z
http://arxiv.org/abs/2309.13566v2
# Probing the non-Abelian fusion of a pair of Majorana zero modes ###### Abstract In this work we perform real time simulations for probing the non-Abelian fusion of a pair of Majorana zero modes (MZMs). The nontrivial fusion outcomes can be either a vacuum, or an unpaired fermion, which reflect the underlying non-Abelian statistics. The two possible outcomes can cause different charge variations in the nearby probing quantum dot (QD), while the charge occupation in the dot is detected by a quantum point contact. In particular, we find that gradual fusion and gradual coupling of the MZMs to the QD (in nearly adiabatic switching-on limit) provide a simpler detection scheme than sudden coupling after fusion to infer the coexistence of two fusion outcomes, by measuring the occupation probability of the QD. For the scheme of sudden coupling (after fusion), we propose and analyze continuous weak measurement for the quantum oscillations of the QD occupancy. From the power spectrum of the measurement currents, one can identify the characteristic frequencies and infer thus the coexistence of the fusion outcomes. _Introduction_. -- The nonlocal nature of the Majorana zero modes (MZMs) and non-Abelian statistics obeyed provide an elegant paradigm of topological quantum computation [1; 2; 3; 4; 5; 6]. In the past decade, after great efforts, considerable progress has been achieved for realizing the MZMs in various experimental platforms. Yet, the main experimental evidences are largely associated with the zero-bias conductance peaks (see the recent review [7] and references therein), which cannot ultimately confirm the realization of MZMs (even a stable quantized conductance cannot also). Therefore, an essential milestone step is to identify the MZMs by probing the underlying non-Abelian statistics, via either braiding or fusion experiments. Braiding MZMs in real space can result in quantum state evolution in the manifold of highly degenerate ground states [8; 9; 10; 11], while fusing the MZMs can yield outcomes of either a vacuum, or an unpaired fermion (resulting in an extra charge) [12; 13; 14; 15; 16; 17]. The latter is owing to the fact that the MZMs essentially realize "Ising" non-Abelian anyons, which obey a particularly simple fusion rule as [12; 13] \[\gamma\times\gamma=I+\psi\,. \tag{1}\] This means that a pair of MZMs can either annihilate or combine into a fermion \(\psi\). These two "fusion channels" correspond to the regular fermion being empty or filled. The presence of multiple fusion channels is essentially related to non-Abelian statistics (actually is commonly used to define non-Abelian anyons). More specifically, there exist two types of fusion design [12; 13]. The "trivial" fusion corresponds to the fused pair of MZMs with a defined parity within the same pair. In this case, the outcome is deterministic; it leads to unchanged parity with no extra charge. Of more interest is the case of "nontrivial" fusion, where the fused pair of MZMs are from different pairs with parities (e.g. even parity) being defined in advance. In this case, the fusion yields probabilistic outcomes as shown above. While directly probing non-Abelian statistics of MZMs is a milestone towards topological quantum computation, probing fusion should be simpler than demonstrating braiding [12; 13]. However, so far there is not yet report of nontrivial fusion experiment [14]. 
The basic idea of probing fusion is bringing a pair of MZMs together to remove energy degeneracy between the two possible outcomes of fusion (i.e., \(I\) and \(\psi\)), owing to overlap of the two MZMs. Then, a measurement to distinguish the fermion parity can reveal the stochastic result being \(I\) or \(\psi\), with equal probability. This type of nontrivial fusion demonstration is actually equivalent to demonstrating the underlying non-Abelian statistics [12; 13]. In practice, demonstrating nontrivial fusion of MZMs should require preparation of initial pair states of MZMs with definite fermion parities and nonadiabatic moving when bringing the MZMs together to fuse. In this work, along the line proposed in Ref. [14] (as schematically shown here in Fig. 1), we consider fusing a pair of MZMs from two topological superconducting (TSC) wires (each wire accommodating two Figure 1: Schematic setup of probing the non-Abelian fusion of a pair of MZMs, say, \(\gamma_{2}\) and \(\gamma_{3}\), from different Majorana pairs with parities (e.g. even parity) being defined in advance. Based on the fusion rule \(\gamma\times\gamma=I+\psi\), the fused MZMs would yield probabilistic outcomes of vacuum \(I\) and a regular fermion \(\psi\). A quantum dot is introduced to couple to the fusing MZMs for probing the fusion outcomes, and a nearby point-contact detector is introduced to detect the charge occupation of the quantum dot. MZMs at the ends). This model setup can correspond to the platform of mini-gate controlled planar Josephson junctions [18; 19]. The two TSC segments can be formed from a single junction wire, by making them separated by a topologically trivial segment, via gating technology. A quantum dot is introduced to couple to the central part of the coupled wires, for use of probing the fusion outcomes when bringing the MZMs together to fuse at the central part. Moreover, a nearby point-contact (PC) detector is introduced to detect the charge occupation of the quantum dot. All the ingredients in this proposal are within the reach of nowadays state-of-the-art experiments [14]. For fusion experiments based on this proposal, one may encounter some practical complexities, such as the interplay of charge fluctuations in the quantum dot caused by the two fusion outcomes, which is relevant to the control of the dot energy level and its coupling strengths to the MZMs, and the effect on the dot occupation pattern caused by the speed of fusion and coupling of the MZMs to the quantum dot. Detection schemes accounting for these issues will be analyzed in this work, and are expected to be useful for future experiments. _Setup and Basic Consideration.--_ The setup proposed in Ref. [14] can be modeled as Fig. 1, where the two TSC quantum wires are formed by interrupting a single TSC wire at the center via mini-gate-voltage control. For each TSC wire, a pair of MZMs are emergent at the ends, i.e., (\(\gamma_{1},\gamma_{2}\)) in the left wire and (\(\gamma_{3},\gamma_{4}\)) in the right wire. The coupling between the central modes \(\gamma_{2}\) and \(\gamma_{3}\) is described by \(H^{\prime}_{M}=i\epsilon_{M}\gamma_{2}\gamma_{3}\), with the coupling energy \(\epsilon_{M}\) changeable when \(\gamma_{2}\) and \(\gamma_{3}\) are separated away in space by mini-gate-voltage control. Most naturally, one can combine (\(\gamma_{1},\gamma_{2}\)) as a regular fermion \(f_{12}\) with occupation \(n_{12}=0\) or 1, and (\(\gamma_{3},\gamma_{4}\)) as another regular fermion \(f_{34}\) with occupation \(n_{34}=0\) or 1. 
For fusion experiment, one can prepare the specific initial state \(|0_{12}0_{34}\rangle\) as proposed in Ref. [14]. That is, by means of mini-gate-voltage control, move \(\gamma_{2}\) and \(\gamma_{3}\) to the ends of the two wires, close to \(\gamma_{1}\) and \(\gamma_{4}\), respectively; then, empty the possible occupations of the regular fermions \(f_{12}\) and \(f_{34}\) by introducing tunnel-coupled side quantum dots and modulating the dot energies (while the quantum dots are also tunnel-coupled to outside reservoirs). Starting with \(|0_{12}0_{34}\rangle\), consider simultaneously moving \(\gamma_{2}\) and \(\gamma_{3}\) from the two terminal sides back to the central part to fuse (to couple each other such that \(\epsilon_{M}\neq 0\)), as shown in Fig. 1. For the final state, in the representation of \(n_{12}\) and \(n_{34}\) occupations, it is still \(|0_{12}0_{34}\rangle\). However, in the representation of \(n_{23}\) and \(n_{14}\), i.e., the occupations of the regular fermions \(f_{23}\) and \(f_{14}\) associated with the Majorana pairs (\(\gamma_{2},\gamma_{3}\)) and (\(\gamma_{1},\gamma_{4}\)), we can reexpress this state as (for derivation of this transformation rule, or the so-called _fusion rule_, see Appendix A) \[|0_{12}0_{34}\rangle=\frac{1}{\sqrt{2}}(|0_{23}0_{14}\rangle+i|1_{23}1_{14} \rangle)\,. \tag{2}\] We find that, in the new representation, the occupation of the \(f_{23}\) fermion can be empty or occupied, i.e., \(|0_{23}\rangle\) or \(|1_{23}\rangle\). This is nothing but the two possible outcomes \(I\) and \(\psi\) of the nontrivial fusion of Ising anyons, as shown by Eq. (1). The fusion coupling between the Majorana modes \(\gamma_{2}\) and \(\gamma_{3}\) would lift the energy degeneracy of the states \(|0_{23}\rangle\) and \(|1_{23}\rangle\), thus allowing to identify the fusion outcomes \(I\) and \(\psi\). Following Ref. [14], we consider to introduce a nearby quantum dot (QD) to couple to the central segment of the quantum wire, as shown in Fig. 1, where the Majorana modes \(\gamma_{2}\) and \(\gamma_{3}\) are located. The QD is assumed to have a single relevant energy level, described by \(H_{D}=\epsilon_{D}d^{\dagger}d\). We thus expect different charge occupation patterns of the QD, for the different fusion outcomes \(I\) and \(\psi\). In this context, it would be more convenient to describe the coupling between \((\gamma_{2},\gamma_{3})\) and the QD using the regular fermion \(f_{23}\) picture, as follows \[H^{\prime}_{DF}=(\lambda_{N}d^{\dagger}f_{23}+\lambda_{A}d^{\dagger}f_{23}^{ \dagger})+\mathrm{h.c.}\,. \tag{3}\] Here we have used the definition \(f_{23}=(\gamma_{2}+i\gamma_{3})/2\). Physically, the first term describes the usual normal tunneling process and the second term describes the Andreev process owing to Cooper pair splitting and recombination. The coupling amplitudes are associated with the couplings of \(\gamma_{2}\) and \(\gamma_{3}\) to the QD, say, \(\lambda_{2}\) and \(\lambda_{3}\) as shown in Fig. 1, as \(\lambda_{N,A}=\lambda_{2}\pm i\lambda_{3}\). Also following the proposal of Ref. [14], the charge fluctuation in the quantum dot (occupied or unoccupied) is measured by a point-contact (PC) detector, as schematically shown in Fig. 1. PC detectors with sensitivity at single electron level have been experimentally demonstrated and broadly applied in practice [20; 21; 22; 23]. 
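As a minimal numerical sketch (not part of the proposal of Ref. [14]; the Majorana phase conventions below are illustrative choices and affect only the relative phase in Eq. (2), not the probabilities), the fusion rule of Eq. (2) can be checked directly on the four-dimensional Fock space of the two wire fermions:

```python
import numpy as np

# Fock basis |n12 n34>, ordered as (0,0), (0,1), (1,0), (1,1)
a = np.array([[0, 1], [0, 0]], dtype=complex)   # single-mode annihilation operator
Z = np.diag([1.0, -1.0]).astype(complex)        # fermionic parity string
I2 = np.eye(2, dtype=complex)

f12 = np.kron(a, I2)    # annihilation operator of the (gamma_1, gamma_2) fermion
f34 = np.kron(Z, a)     # annihilation operator of the (gamma_3, gamma_4) fermion

# Majorana operators of the two wires (one possible sign convention)
g1, g2 = f12 + f12.conj().T, 1j * (f12.conj().T - f12)
g3, g4 = f34 + f34.conj().T, 1j * (f34.conj().T - f34)

# Fused fermions pairing Majoranas from *different* wires
f23 = (g2 + 1j * g3) / 2
f14 = (g1 + 1j * g4) / 2

psi0 = np.array([1, 0, 0, 0], dtype=complex)    # the prepared state |0_12 0_34>

n23 = f23.conj().T @ f23
n14 = f14.conj().T @ f14
print(np.real(psi0.conj() @ n23 @ psi0))        # 0.5: outcomes I and psi appear with equal weight
print(np.real(psi0.conj() @ n14 @ psi0))        # 0.5
print(np.real(psi0.conj() @ n23 @ n14 @ psi0))  # 0.5: n23 and n14 are perfectly correlated
```

The equal 1/2 weights and the perfect correlation between \(n_{23}\) and \(n_{14}\) are exactly the structure of Eq. (2).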
Actually, the measurement dynamics of a charge qubit by a PC detector has been a long standing theoretical problem and has attracted intensive interest in the community of quantum and mesoscopic physics [24; 25; 26; 27]. In this work, we will perform real-time simulations for probing the non-Abelian fusion of a pair of MZMs. In particular, within the scheme of continuous quantum weak measurement, we will carry out results of individual quantum trajectories and power spectrum of the measurement currents. The characteristic frequencies in the power spectrum indicate the quantum oscillations of charge transfer associated with the two outcomes of fusion. The coexistence of two characteristic frequencies should be a promising evidence for the non-Abelian fusion of MZMs. _QD occupations caused by the two fusion outcomes.--_ Let us consider first the detection scheme of switching on the coupling of the QD to the Majorana modes \(\gamma_{2}\) and \(\gamma_{3}\), after their fusion from the deterministic initial state \(|0_{12}0_{34}\rangle\). This can be realized by initially setting the QD energy level much higher than the final coupling energy \(\epsilon_{M}\) between \(\gamma_{2}\) and \(\gamma_{3}\). Then, during the moving and fusion process, the QD level is effectively decoupled with \(\gamma_{2}\) and \(\gamma_{3}\), owing to the large mismatch of energies. After fusion, switch on the coupling by lowering the QD energy level such that \(\epsilon_{D}=\epsilon_{M}\). For this scheme, the time dependent charge occupation in the dot is shown in Fig. 2 (the result of the green curve). To understand this result, we should notice the coexistence of two channels of charge transfer oscillations between the QD and the MZMs \(\gamma_{2}\) and \(\gamma_{3}\). One channel is governed by normal tunneling between the states \(|1_{23}0_{d}\rangle\) and \(|0_{23}1_{d}\rangle\), resulting in a quantum oscillation state as \(\alpha_{N}(t)|1_{23}0_{d}\rangle+\beta_{N}(t)|0_{23}1_{d}\rangle\) with the dot occupation probability \(p_{d}^{(N)}(t)=|\beta_{N}(t)|^{2}\) plotted in Fig. 2 by the full Rabi-type oscillating red curve. The other channel is governed by the Andreev process between \(|0_{23}0_{d}\rangle\) and \(|1_{23}1_{d}\rangle\), resulting in a quantum oscillation given by \(\alpha_{A}(t)|0_{23}0_{d}\rangle+\beta_{A}(t)|1_{23}1_{d}\rangle\), with the dot occupation probability \(p_{d}^{(A)}(t)=|\beta_{A}(t)|^{2}\) plotted in Fig. 2 by the smaller amplitude blue curve. These two channels are independent to each other. Thus the electron occupation in the dot is simply an equal probability weighted sum, based on Eq. (2), as \[p_{d}(t)=\left[|\beta_{N}(t)|^{2}+|\beta_{A}(t)|^{2}\right]/2\,. \tag{4}\] Actually, the quantum oscillations in the two channels have simple analytic solutions, with dot occupation probabilities given by \[|\beta_{N,A}(t)|^{2}=\left(|\lambda_{N,A}|^{2}/\Omega_{N,A}^{2}\right)\sin^{2 }\left(\Omega_{N,A}t\right)\,, \tag{5}\] where \(\Omega_{N,A}=\sqrt{\Delta_{N,A}^{2}+|\lambda_{N,A}|^{2}}\) and \(\Delta_{N,A}=|\epsilon_{D}\mp\epsilon_{M}|/2\). We then understand that, when \(\epsilon_{D}\simeq\epsilon_{M}>>|\lambda_{N,A}|\), the quantum oscillation associated with the fusion outcome \(\psi\) is dominant, while the Andreev process following the \(I\) outcome is largely suppressed. However, viewing that the coupling energy \(\epsilon_{M}\) between \(\gamma_{2}\) and \(\gamma_{3}\) is small (might be comparable with \(\lambda_{2}\) and \(\lambda_{3}\) in practice, see Fig. 
1), one may encounter the complexity that both channels coexist during detection of the charge variations in the quantum dot, having thus the result as shown in Fig. 2 by the green curve. Next, let us consider an alternative detection scheme of gradually coupling the QD with the Majorana modes \(\gamma_{2}\) and \(\gamma_{3}\). The initial state preparation and moving of the Majorana modes \(\gamma_{2}\) and \(\gamma_{3}\) are the same as above (the first scheme). However, we consider now initially setting the QD level in resonance with the final Majorana coupling energy, i.e., \(\epsilon_{D}=\epsilon_{M}\). Then, in this scheme, modulation of the QD energy level after Majorana fusion is not needed. When \(\gamma_{2}\) and \(\gamma_{3}\) are slowly brought close to each other, they also couple to the QD gradually. We may model the gradual coupling as follows \[\lambda_{2} =\lambda_{3}=\lambda\left[\frac{t}{\tau}\Theta(1-\frac{t}{\tau})+\Theta(\frac{t}{\tau}-1)\right]\,,\] \[\epsilon_{23} =\epsilon_{M}\left[\frac{t}{\tau}\Theta(1-\frac{t}{\tau})+\Theta(\frac{t}{\tau}-1)\right]\,, \tag{6}\] where \(\Theta(\cdots)\) is the step function. In this simple model, \(\tau\) is introduced to characterize the speed of moving \(\gamma_{2}\) and \(\gamma_{3}\). Here we assume that the coupling of \(\gamma_{2}\) and \(\gamma_{3}\) to the QD (nonzero \(\lambda_{2}\) and \(\lambda_{3}\)) and the coupling between them (nonzero \(\epsilon_{23}\)) are started at the same time. We also use it as the initial time for the later state evolution, which is associated with the fusion and detection. Before \(\gamma_{2}\), \(\gamma_{3}\) and the QD start to couple to each other, the moving of \(\gamma_{2}\) and \(\gamma_{3}\) can be performed at a different (faster) speed. However, the moving should satisfy the adiabatic condition determined by the energy gap of the TSC wire, which is much larger than the coupling energies \(\epsilon_{M}\), \(\lambda_{2}\) and \(\lambda_{3}\), and the dot energy \(\epsilon_{D}\). We may remark that the modeling, in terms of Eq. (6), might not be very accurate, but it captures the underlying physics and can predict valid behavior of electron occupation in the quantum dot, in comparison with more accurate simulation based on the more realistic lattice model. In Fig. 3 we show results of different coupling speeds, which are characterized by the parameter \(\tau\). For fast coupling, the result (green curve) is similar to that shown in Fig. 2. However, when the speed of coupling is decreased, the charge occupation pattern in the quantum dot becomes different. The most prominent feature is that the quantum oscillations tend to disappear (see the blue and red curves in Fig. 3). This can be understood as follows. In this scheme of fusion and detection, there exist also two charge transfer channels associated with, respectively, the fusion outcomes \(I\) and \(\psi\). Figure 2: Dot occupation probability \(p_{d}\) (green curve) associated with the detection scheme of sudden coupling of the MZMs after fusion to the probing QD. According to the fusion rule, Eq. (2), \(p_{d}\) is half of the sum of \(p_{d}^{(N)}\) (red curve, labeled by \(\psi\)) and \(p_{d}^{(A)}\) (blue curve, labeled by \(I\)). The meaning of \(p_{d}^{(N)}\) and \(p_{d}^{(A)}\) is referred to the main text. Parameters are assumed as \(\lambda_{2}=\lambda_{3}=\lambda=1\), and \(\epsilon_{D}=\epsilon_{M}=1.5\).
In this work, we use an arbitrary system of units, taking \(\lambda\) as the unit of energy and \(\lambda^{-1}\) as the unit of time. Figure 3: Dot occupation probability \(p_{d}\) associated with the detection scheme of gradual fusion and coupling to the probing QD, as modeled by Eq. (6). Results of different coupling speeds are shown for \(\tau=1\) (green curve), 10 (blue curve), and 80 (red curve), respectively. The prominent feature is that the quantum oscillations tend to disappear when decreasing the speed of coupling, i.e., when approaching the adiabatic limit of switching on the coupling. Parameters are the same as assumed in Fig. 2. However, in the limit of adiabatically switching on the coupling, in each channel the state will largely follow an instantaneous eigenstate. For instance, for the \(\psi\)-related dominant channel (governed by the normal tunneling between the states \(|1_{23}0_{d}\rangle\) and \(|0_{23}1_{d}\rangle\)), the state can be expressed also as \(\alpha_{N}(t)|1_{23}0_{d}\rangle+\beta_{N}(t)|0_{23}1_{d}\rangle\). Yet, in the adiabatic limit, the superposition coefficients in the instantaneous eigenstate do not reveal the feature of Rabi-type quantum oscillations. Actually, we can easily obtain \(|\beta_{N}(t\geq\tau)|^{2}=p_{d}^{(N)}=1/2\). Similarly, for the outcome-\(I\)-related channel, the instantaneous eigenstate can be expressed as \(\alpha_{A}(t)|0_{23}0_{d}\rangle+\beta_{A}(t)|1_{23}1_{d}\rangle\). In the adiabatic limit, \(|\beta_{A}(t)|^{2}\) does not oscillate with time and, when \(t\geq\tau\), we obtain \[p_{d}^{(A)}=|\beta_{A}|^{2}=\frac{(\Omega_{A}-\epsilon_{M})^{2}}{(\Omega_{A}-\epsilon_{M})^{2}+2|\lambda|^{2}}\,. \tag{7}\] Here, \(\Omega_{A}=\sqrt{\epsilon_{M}^{2}+2|\lambda|^{2}}\), under the conditions \(\epsilon_{D}=\epsilon_{M}\) and \(\lambda_{2}=\lambda_{3}=\lambda\). Based on the fusion rule Eq. (2), we expect the overall occupation probability of an electron in the QD to be \(p_{d}=(p_{d}^{(N)}+p_{d}^{(A)})/2\). Indeed, this is the asymptotic result observed in Fig. 3, in the adiabatic limit. The QD occupations predicted in Figs. 2 and 3 can be measured by using a charge-sensitive PC-detector as shown in Fig. 1. The standard method is performing the so-called single shot projective measurement to infer whether the QD is occupied by an electron or not. After a large number of ensemble measurements, the occupation probability can be obtained. However, the result in Fig. 2 (the overall pattern plotted by the green curve) does not very directly reveal the coexistence of the two fusion outcomes. Also, measuring this oscillation pattern and the subsequent analysis will be more complicated than handling the result from the adiabatic coupling as shown in Fig. 3. That is, importantly, measuring the final single constant occupation probability is much simpler than measuring the oscillation pattern in Fig. 2, while the result can more directly inform us of the coexistence of the two fusion outcomes, by using the formula \(p_{d}=(p_{d}^{(N)}+p_{d}^{(A)})/2\) and the result \(p_{d}^{(N)}=0.5\). Therefore, the second detection scheme proposed above is expected to be useful in practice, by adiabatically switching on the probe coupling.
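The occupation dynamics of this section (Figs. 2 and 3) can be cross-checked by directly integrating the two independent two-level problems discussed above. The following is a minimal sketch under the modeling of Eq. (6); the ramp time, time step, and parameter values are illustrative, and this is not the simulation code used for the figures.

```python
import numpy as np
from scipy.linalg import expm

lam, eps_D, eps_M = 1.0, 1.5, 1.5   # parameters of Figs. 2 and 3 (lambda = 1 sets the units)
tau = 80.0                          # ramp time of Eq. (6); a small tau recovers the sudden-coupling limit

def hamiltonians(t):
    s = min(t / tau, 1.0)                         # linear switch-on of Eq. (6)
    l2 = l3 = lam * s
    eM = eps_M * s
    lamN, lamA = l2 + 1j * l3, l2 - 1j * l3       # lambda_{N,A} = lambda_2 +/- i lambda_3
    dN, dA = abs(eps_D - eM) / 2, (eps_D + eM) / 2
    HN = np.array([[-dN, lamN], [np.conj(lamN), dN]])   # basis {|1_23 0_d>, |0_23 1_d>}
    HA = np.array([[-dA, lamA], [np.conj(lamA), dA]])   # basis {|0_23 0_d>, |1_23 1_d>}
    return HN, HA

dt, tmax = 0.01, 2 * tau
psiN = np.array([1.0, 0.0], dtype=complex)   # dot initially empty in both channels
psiA = np.array([1.0, 0.0], dtype=complex)
times, pd = [], []
t = 0.0
while t < tmax:
    HN, HA = hamiltonians(t)
    psiN, psiA = expm(-1j * HN * dt) @ psiN, expm(-1j * HA * dt) @ psiA
    t += dt
    times.append(t)
    pd.append(0.5 * (abs(psiN[1])**2 + abs(psiA[1])**2))   # Eq. (4): equal-weight average

print(pd[-1])   # for large tau this approaches (1/2 + p_d^(A))/2, with p_d^(A) given by Eq. (7)
```

A small value of `tau` reproduces the oscillatory pattern of Fig. 2, while a large value approaches the adiabatic plateau \(p_{d}=(1/2+p_{d}^{(A)})/2\) of Eq. (7), as in Fig. 3.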
_Continuous weak measurements.--_ Besides the usual strong (projective) measurements, as discussed above for measuring the QD occupation probability, continuous quantum weak measurement is an interesting and different type of choice, suitable in particular for measuring quantum oscillations. For instance, the problem of continuous weak measurement of charge qubit oscillations by a PC detector has attracted strong interest and intensive study [24; 25; 26]. In the following, we consider continuous weak measurement for the quantum oscillations displayed in Fig. 2 (by the green curve). Specifically, for the setup shown in Fig. 1, the PC detector is switched on (applied bias voltage) from the beginning of state evolution after fusion, owing to tunnel-coupling with the QD. The noisy output current in the PC detector can be expressed as [24; 25; 27] \[I_{c}(t)=n_{d,c}(t)+\frac{1}{\sqrt{4\kappa}}\,\xi(t)\,. \tag{8}\] This (rescaled) expression of current is valid up to a constant factor (with current dimension). Then, the first term is simply the quantum average occupation of an electron in the quantum dot, i.e., \(n_{d,c}(t)=\mathrm{Tr}[\hat{n}_{d}\rho_{c}(t)]\), with \(\rho_{c}(t)\) the PC-current-conditioned state of the measured system. The second term describes the deviation of the real noisy current from the quantum average occupation \(n_{d,c}(t)\), owing to classical events of random tunneling of electrons in the PC detector. The rate parameter \(\kappa\) characterizes the measurement strength and \(\xi(t)\) is the Gaussian white noise. For completeness, we also present here the quantum trajectory (QT) equation for the conditional state as [25] \[\dot{\rho}_{c}=\mathcal{L}\rho_{c}+\sqrt{\kappa}\mathcal{H}[\hat{n}_{d}]\rho_{c}\xi(t)\,. \tag{9}\] The first deterministic part is given by \(\mathcal{L}\rho_{c}=-i\,[H,\rho_{c}]+\kappa\mathcal{D}[\hat{n}_{d}]\rho_{c}\), with the Lindblad superoperator defined as \(\mathcal{D}[x]\rho_{c}=x\rho_{c}x^{\dagger}-\frac{1}{2}\{x^{\dagger}x,\rho_{c}\}\). The second noisy term stems from measurement backaction owing to information gain in the single realization of continuous weak measurement, while the superoperator is defined as \(\mathcal{H}[x]\rho_{c}=x\rho_{c}+\rho_{c}x^{\dagger}-\mathrm{Tr}\left[(x+x^{\dagger})\rho_{c}\right]\rho_{c}\). Jointly simulating the evolution of Eqs. (8) and (9) we can obtain \(\rho_{c}(t)\), \(n_{d,c}(t)\), and \(I_{c}(t)\). From Eq. (8), we understand that the measurement current does encode the information of the QD occupation. However, owing to measurement backaction, the measurement-current-conditioned occupation \(n_{d,c}(t)\) is different from the occupation probability \(p_{d}(t)\) shown in Fig. 2 (in the absence of measurement). Figure 4: Quantum trajectories of continuous weak measurement, associated with the detection scheme of sudden coupling of the MZMs after fusion to the probing QD. The measurement-current-conditioned occupation \(n_{d,c}(t)\) and the current \(\bar{I}_{c}(t)\) (low-pass-filtered with a time window \(T=1\)) are shown for measurement strengths \(\kappa=0.8\) in (a), 0.2 in (b), and 0.05 in (c), respectively. Other parameters are the same as in Fig. 2. For considerably weak measurement, \(n_{d,c}(t)\) should be quite close to \(p_{d}(t)\), yet the noisy term in Eq. (8) will hide the informational term, thus preventing us from inferring the dot occupation. In contrast, if increasing the measurement strength, the noisy term will decrease, but the measurement backaction will make \(n_{d,c}(t)\) more seriously deviate from the original result \(p_{d}(t)\). In Fig. 4, we show the results of \(n_{d,c}(t)\) and \(I_{c}(t)\), for a couple of measurement strengths. In addition to properly choosing the measurement strengths, we also applied the so-called low-pass-filtering technique.
That is, we averaged the current over a sliding time window of duration \(T\), \(\bar{I}_{c}(t)=\frac{1}{T}\int_{t-T/2}^{t+T/2}dI_{c}(\tau)\), which gives a smoothed current for better reflecting the dot occupation. However, even after making these efforts, we find that from the noisy output current \(I_{c}(t)\), it is hard to track the quantum oscillations of the QD occupation shown in Fig. 2. Actually, in continuous weak measurements, a useful technique is extracting information from the power spectrum of the measurement currents [24; 25; 26]. The steady-state current correlation function is obtained through the ensemble average \(S_{I}(\tau)={\rm E}[I_{c}(t+\tau)I_{c}(t)]-{\rm E}[I_{c}(t+\tau)]{\rm E}[I_{c }(t)]\), at long time limit (large \(t\) limit for achieving steady state). From the power spectrum, \(S_{I}(\omega)=2\int_{-\infty}^{\infty}S_{I}(\tau)e^{i\omega\tau}\), one can identify the characteristic frequencies and infer thus quantum coherent oscillations inside the system under measurement. For the problem under study, the goal is to identify the quantum oscillations shown in Fig. 2, which are associated with the two fusion outcomes. Based on the result of Eq. (8), it can be proved [24; 25] that \[S_{I}(\tau) = S_{d}(\tau)+\frac{1}{4\kappa}\delta(\tau)-\frac{1}{4}\,. \tag{10}\] In this result, the correlation function of the dot occupation is given by \(S_{d}(\tau)={\rm E}[n_{d,c}(t+\tau)n_{d,c}(t)]={\rm Tr}[\hat{n}_{d}e^{\mathcal{ L}|\tau|}(\hat{n}_{d}\rho_{st})]\), with \(\rho_{st}\) the reduced density matrix of steady state. Then, we know the structure of the current power spectrum as \(S_{I}(\omega)=S_{0}+S_{d}(\omega)\), with \(S_{0}\) the frequency-free background noise and \(S_{d}(\omega)\) the information-contained part. Based on the master equation \(\dot{\rho}=\mathcal{L}\rho\), which is the ensemble average of Eq. (9), using the so-called quantum regression theorem we obtain \[S_{d}^{(j)}(\omega) \tag{11}\] \[= \frac{2\kappa|\lambda_{j}|^{2}(16\Delta_{j}^{2}+\kappa^{2}+4 \omega^{2})}{\omega^{2}(16\Omega_{j}^{2}+\kappa^{2}-4\omega^{2})^{2}+16\kappa ^{2}(2|\lambda_{j}|^{2}-\omega^{2})^{2}}.\] Here we use \(j=N,A\) to denote the two charge transfer channels, say, the Andreev process and normal tunneling, with coupling amplitudes \(\lambda_{N,A}=\lambda_{2}\pm i\lambda_{3}\). Since the two channels are independent, we obtained the above results independently for each process. The overall spectrum \(S_{d}(\omega)\) is the weight-averaged sum of \(S_{d}^{(N)}(\omega)\) and \(S_{d}^{(A)}(\omega)\), i.e., \(S_{d}(\omega)=[S_{d}^{(N)}(\omega)+S_{d}^{(A)}(\omega)]/2\), owing to the fusion rule of Eq. (2). Moreover, under the condition of weak-coupling measurement, we can further approximate the result as \[S_{d}^{(j)}(\omega)\simeq\frac{\Delta_{j}^{2}}{4\Omega_{j}^{2} }\frac{\kappa R_{j}/2}{\omega^{2}+(\kappa R_{j}/2)^{2}}\] \[+\frac{|\lambda_{j}|^{2}}{8\Omega_{j}^{2}}\frac{\frac{\kappa}{2} (1-\frac{R_{j}}{2})}{(\omega-2\Omega_{j})^{2}+[\frac{\kappa}{2}(1-\frac{R_{j }}{2})]^{2}}\,. \tag{12}\] Here we have introduced \(R_{j}\equiv|\lambda_{j}|^{2}/\Omega_{j}^{2}\). From this standard Lorentzian form, one can extract the characteristic frequencies \(\Omega_{N}\) and \(\Omega_{A}\), which reflect in essence the quantum oscillations given by Eq. (5). In Fig. 5, we plot the result of \(2S_{d}(\omega)\) from numerically solving the full master equation, which includes the two charger transfer channels. 
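Before examining that comparison, note that Eqs. (11) and (12) can be evaluated directly. The sketch below does so for the two channels and averages them according to the fusion rule. It assumes \(\Omega_{j}=\sqrt{\Delta_{j}^{2}+|\lambda_{j}|^{2}}\) with \(\Delta_{N}=0\) and \(\Delta_{A}=\epsilon_{M}\), which is consistent with the special case \(\epsilon_{D}=\epsilon_{M}\), \(\lambda_{2}=\lambda_{3}=\lambda\) used in the text (for which \(\Omega_{A}=\sqrt{\epsilon_{M}^{2}+2|\lambda|^{2}}\)); the numerical parameter values are illustrative rather than those of Fig. 2.

```python
import numpy as np

def S_exact(w, Delta, lam2, Omega, kappa):
    """Eq. (11): exact QD-occupation spectrum of one charge-transfer channel (lam2 = |lambda_j|^2)."""
    num = 2.0 * kappa * lam2 * (16.0 * Delta**2 + kappa**2 + 4.0 * w**2)
    den = w**2 * (16.0 * Omega**2 + kappa**2 - 4.0 * w**2) ** 2 \
          + 16.0 * kappa**2 * (2.0 * lam2 - w**2) ** 2
    return num / den

def S_lorentz(w, Delta, lam2, Omega, kappa):
    """Eq. (12): weak-measurement Lorentzian approximation, with R = lam2 / Omega^2."""
    R = lam2 / Omega**2
    zero_peak = (Delta**2 / (4.0 * Omega**2)) * (kappa * R / 2.0) \
                / (w**2 + (kappa * R / 2.0) ** 2)
    side_peak = (lam2 / (8.0 * Omega**2)) * (kappa / 2.0) * (1.0 - R / 2.0) \
                / ((w - 2.0 * Omega) ** 2 + (kappa / 2.0 * (1.0 - R / 2.0)) ** 2)
    return zero_peak + side_peak

# illustrative parameters in units of lambda; eps_D = eps_M and lambda_2 = lambda_3 = lambda
lam, eps_M, kappa = 1.0, 1.0, 0.4
lam2 = 2.0 * lam**2                      # |lambda_{N,A}|^2 = |lambda_2 +/- i*lambda_3|^2
chans = {"N": (0.0, np.sqrt(lam2)),      # (Delta_j, Omega_j), assumed Omega_j = sqrt(Delta_j^2 + lam2)
         "A": (eps_M, np.sqrt(eps_M**2 + lam2))}

w = np.linspace(0.0, 6.0, 2000)
S_d     = 0.5 * sum(S_exact(w, D, lam2, O, kappa) for D, O in chans.values())
S_d_apx = 0.5 * sum(S_lorentz(w, D, lam2, O, kappa) for D, O in chans.values())
for name, (D, O) in chans.items():
    print(f"channel {name}: Lorentzian peak expected near omega = {2 * O:.3f}")
```

The two peaks near \(\omega=2\Omega_{N}\) and \(\omega=2\Omega_{A}\) are the signature of the two coexisting fusion outcomes discussed below.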
Figure 5: Characteristic frequency spectrum \(S_{d}(\omega)\), the Fourier transform of the QD-occupation correlation function, obtained from the output currents of continuous weak measurement associated with the detection scheme of sudden coupling of the MZMs after fusion to the probing QD. Theoretically, owing to the fusion rule Eq. (2), \(S_{d}(\omega)=[S_{d}^{(N)}(\omega)+S_{d}^{(A)}(\omega)]/2\), where \(S_{d}^{(N)}(\omega)\) and \(S_{d}^{(A)}(\omega)\) are the spectra related to the fusion outcomes \(\psi\) and \(I\). Exact results of Eq. (11) are displayed by the solid-red and solid-blue curves, while approximate results of the Lorentzian form Eq. (12) are plotted by the dashed-yellow and dashed-blue curves. \(S_{d}(\omega)\) is obtained from numerically solving the full master equation, which includes the charge-transfer channels arising from the two fusion outcomes. Satisfactory agreement between all these results is demonstrated. The coexistence of two Lorentzian peaks (at \(2\Omega_{N}\) and \(2\Omega_{A}\)) in \(S_{d}(\omega)\) indicates the appearance of the two fusion outcomes, predicted by Eq. (1). Measurement strength \(\kappa=0.4\) is assumed, while other parameters are the same as in Fig. 2.

We find that the numerically computed spectrum is indeed the sum of \(S_{d}^{(N)}(\omega)\) and \(S_{d}^{(A)}(\omega)\); in this plot, as a self-consistency check, we use their analytic forms given by Eq. (11). We also compare the results with the approximate Lorentzian-form solutions and find satisfactory agreement. Very importantly, the coexistence of two Lorentzian peaks (at \(2\Omega_{N}\) and \(2\Omega_{A}\)) in \(S_{d}(\omega)\) directly indicates the appearance of the two fusion outcomes predicted by Eq. (1). We remark that, within the scheme of continuous quantum weak measurement, inferring the intrinsic quantum oscillations from the power spectrum \(S_{I}(\omega)\) of the output current is a very useful approach, much simpler than performing ensembles of single-shot projective measurements of the dot occupation to obtain the probabilities shown in Figs. 2 and 3. This type of technique has been analyzed in detail in the context of charge-qubit measurements [24; 25; 26]. The present proposal is an extension along this line, which we hope can be employed to identify the non-Abelian fusion of MZMs through the two different characteristic frequencies of quantum oscillations associated with the two fusion outcomes.

_Summary.--_ We have analyzed two schemes for detecting the nontrivial fusion of a pair of MZMs. The two possible stochastic fusion outcomes reflect the non-Abelian statistics of the MZMs, whose experimental demonstration will be a milestone for ultimately identifying the MZMs and paving the way to topological quantum computation. One scheme, the most natural choice, is to switch on a sudden coupling of the fused MZMs to the probing QD, with the subsequent oscillating QD occupation monitored by a PC detector through continuous weak measurement. From the power spectrum of the measurement currents, one can identify two characteristic frequencies of quantum oscillations and thus infer the two fusion outcomes of the pair of MZMs. The other scheme is to switch on, almost adiabatically, both the gradual fusion coupling between the MZMs and their coupling to the probing QD.
With this type of detection scheme the QD occupation does not oscillate with time, which allows a simpler measurement of a single value of the occupation probability, from which the two outcomes of the nontrivial fusion can be inferred. We expect that both detection schemes analyzed in this work can be useful for future fusion experiments.

_Acknowledgements.--_ This work was supported by the NNSF of China (Grants No. 11974011 and No. 11904261).

## Appendix A Derivation of the Fusion Rule

In this Appendix let us prove the following transformation rule (fusion rule): \[|0_{12}0_{34}\rangle=\frac{1}{\sqrt{2}}(|0_{23}0_{14}\rangle+i|1_{23}1_{14}\rangle)\,.\] Under the constraint of fermion parity, we may first construct the transformation ansatz as \(|0_{12}0_{34}\rangle=a|0_{23}0_{14}\rangle+b|1_{23}1_{14}\rangle\). Then, we express the operator \(f_{12}=(\gamma_{1}+i\gamma_{2})/2\) in terms of the regular fermion operators \(f_{14}\) and \(f_{23}\) as \[f_{12}=\frac{1}{2}\left(f_{14}+f_{14}^{\dagger}+if_{23}+if_{23}^{\dagger}\right)\,. \tag{10}\] This result is obtained by simply associating the Majorana fermions \(\gamma_{1}\) and \(\gamma_{4}\) with the regular fermion \(f_{14}\), and \(\gamma_{2}\) and \(\gamma_{3}\) with \(f_{23}\), respectively. Thus we have \(\gamma_{1}=f_{14}+f_{14}^{\dagger}\) and \(\gamma_{2}=f_{23}+f_{23}^{\dagger}\). Further, acting with the annihilation operator \(f_{12}\) on both sides of the ansatz equation, we have \[0=a(|0_{23}1_{14}\rangle+i|1_{23}0_{14}\rangle)+b(i|0_{23}1_{14}\rangle-|1_{23}0_{14}\rangle)\,. \tag{11}\] During the algebra, one should notice the relative minus sign between \(f_{23}|1_{23}1_{14}\rangle=|0_{23}1_{14}\rangle\) and \(f_{14}|1_{23}1_{14}\rangle=-|1_{23}0_{14}\rangle\). From this result, we obtain \(ia-b=0\), and normalization gives \(a=1/\sqrt{2}\) (hence \(b=i/\sqrt{2}\)), which completes the proof of the transformation formula above. Applying the same method, one can derive all the transformation formulas between the two sets of basis states, \(|n_{12}n_{34}\rangle\) and \(|n_{23}n_{14}\rangle\).
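A quick numerical check of this transformation rule is sketched below. It uses a Jordan-Wigner representation chosen so that the signs quoted above (\(f_{23}|1_{23}1_{14}\rangle=|0_{23}1_{14}\rangle\), \(f_{14}|1_{23}1_{14}\rangle=-|1_{23}0_{14}\rangle\)) are reproduced, and verifies that \(f_{12}\) annihilates \((|0_{23}0_{14}\rangle+i|1_{23}1_{14}\rangle)/\sqrt{2}\).

```python
import numpy as np

# single-mode operators in the occupation basis (|0>, |1>)
a  = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation operator
sz = np.diag([1.0, -1.0])                  # parity factor used as Jordan-Wigner string
I2 = np.eye(2)

# two-mode ordering |n_23 n_14>; f_14 carries the string through mode (23)
f23 = np.kron(a, I2)
f14 = np.kron(sz, a)

ket = lambda n23, n14: np.kron(np.eye(2)[n23], np.eye(2)[n14])

# reproduce the signs used in the derivation above
assert np.allclose(f23 @ ket(1, 1),  ket(0, 1))
assert np.allclose(f14 @ ket(1, 1), -ket(1, 0))

# f_12 = (gamma_1 + i gamma_2)/2 with gamma_1 = f_14 + f_14^dag, gamma_2 = f_23 + f_23^dag
f12 = 0.5 * ((f14 + f14.T) + 1j * (f23 + f23.T))
psi = (ket(0, 0) + 1j * ket(1, 1)) / np.sqrt(2.0)
print(np.allclose(f12 @ psi, 0.0))  # True: consistent with the boxed transformation rule
```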
2303.17983
Integral constraints in multiple scales problems with a slowly varying microstructure
Asymptotic homogenisation is considered for problems with integral constraints imposed on a slowly-varying microstructure; an insulator with an array of perfectly dielectric inclusions of slowly varying size serves as a paradigm. Although it is well-known how to handle each of these effects (integral constraints, slowly-varying microstructure) independently within multiple scales analysis, additional care is needed when they are combined. Using the flux transport theorem, the multiple scales form of an integral constraint on a slowly varying domain is identified. The proposed form is applied to obtain a homogenised model for the electric potential in a dielectric composite, where the microstructure slowly varies and the integral constraint arises due to a statement of charge conservation. A comparison with multiple scales analysis of the problem with established approaches provides validation that the proposed form results in the correct homogenised model.
A. Kent, S. L. Waters, J. Oliver, S. J. Chapman
2023-03-31T11:41:11Z
http://arxiv.org/abs/2303.17983v1
# Integral Constraints in Multiple Scales Problems with a Slowly Varying Microstructure ###### Abstract Asymptotic homogenisation is considered for problems with integral constraints imposed on a slowly-varying microstructure; an insulator with an array of perfectly dielectric inclusions of slowly varying size serves as a paradigm. Although it is well-known how to handle each of these effects (integral constraints, slowly-varying microstructure) independently within multiple scales analysis, additional care is needed when they are combined. Using the flux transport theorem, the multiple scales form of an integral constraint on a slowly varying domain is identified. The proposed form is applied to obtain a homogenised model for the electric potential in a dielectric composite, where the microstructure slowly varies and the integral constraint arises due to a statement of charge conservation. A comparison with multiple scales analysis of the problem with established approaches provides validation that the proposed form results in the correct homogenised model. **Keywords:** asymptotic homogenisation; multiple scales; integral constraints; microstructural variation; perfect dielectric. **Mathematics subject classification:** 35B27, 78M40, 34E13. ## 1 Introduction Homogenisation via multiscale asymptotics is one coarse graining method that can be used to derive the effective properties of composite media [13]. Typically used for periodic microstructure, example applications of the technique include modelling flow in porous media, wave propagation in poroelastic materials, filtration and decontamination processes [4, 7, 13, 14]. The result of the homogenisation process is the reduction of a problem posed on a complicated domain, or with rapidly varying coefficients, to two simpler problems: one 'cell problem' describing the microscale variation; and a second 'homogenised model' describing the macroscale variation of variables across the whole domain. The technique can be extended to problems with a slowly varying geometry, albeit at the cost of having a cell problem which varies with the macroscale [8]. A mapping depending on the slow spatial scale can be applied to transform a heterogeneous microstructure to an exactly periodic reference configuration [12, 16]. Standard homogenisation can be performed in this reference configuration before inverting the mapping to obtain the homogenised equations featuring spatially-dependent coefficients which reflect microstructural variation. A similar approach can be used to treat microstructures with temporal and spatiotemporal variations. Examples of problems using a prescribed mapping include [3, 6, 18] and the mapping can be coupled to the macroscale variables [15]. When the domain is locally periodic and the unit cell has fixed size, transformation to a reference configuration is no longer required as the slow variable features as a parameter in the microscale problem [8]. Examples of this approach are found in [7, 11, 17]. When considering problems with a slowly varying domain, care must be taken in converting Neumann and Robin boundary conditions on microscopic inclusions into multiple scales form. Typically, a level set function is introduced to define the boundary of the inclusion, with the expansion of the normal to the boundary derived by writing the gradient of the level set function in multiple scales form [3, 11, 18]. A second extension of the standard method allows for homogenisation of problems featuring integral constraints [5]. 
These constraints generally appear as conservation conditions, for example, of charge or momentum, with applications in modelling nematic crystals, radiation in porous media and bubbly liquids [2, 5, 19]. Unlike standard multiple scales, where the macroscale coordinate can be assumed to take a constant value within a given unit cell, it is crucial to account for the small variation in macroscale coordinate along the integration path, since this variation causes a change in flux which affects the parameters in the homogenised model. In the present work we aim to combine these two extensions, developing an understanding of how to write integral constraints on a slowly varying domain in multiple scales form. Although this seems like a routine task, we will see that in fact that the answer is not obvious a priori. We use as a paradigm the problem of the electric potential in an insulator interspersed with a periodic array of perfectly dielectric inclusions of slowly varying size. This problem has the advantage that the perfectly dielectric limit can also be taken after a standard homogenisation procedure, so that we know what the homogenised model should be. Not all integral constraint problems can be recast in this way. ## 2 Paradigm Problem We consider the electric potential \(\phi\) in a dielectric material, which satisfies Poisson's equation \[\nabla\cdot(\varepsilon\nabla\phi)=-\rho, \tag{1}\] where \(\varepsilon\) is the permittivity and \(\rho\) is the charge density (which we suppose is given). We consider a material composite comprising an insulator \(\Omega_{\mathrm{e}}\) of constant permittivity \(\varepsilon_{\mathrm{e}}\) with an array of inclusions \(\Omega_{\mathrm{i}}\) of constant permittivity \(\varepsilon_{\mathrm{i}}\). At the boundary between the two regions \[\left[\mathbf{n}\cdot(\varepsilon\nabla\phi)\right]^{\mathrm{e} }_{\mathrm{i}} =0, \tag{2}\] \[\left[\phi\right]^{\mathrm{e}}_{\mathrm{i}} =0, \tag{3}\] where \(\mathbf{n}\) is the (outward-facing) normal to the boundary of the inclusion, and \([\cdot]^{\mathrm{e}}_{\mathrm{i}}\) represents the jump in the enclosed quantity across the interface. We suppose that the centres of the inclusions are arranged on a regular cubic lattice of side \(\delta\), and that the radius of each inclusion \(\delta a(\mathbf{x})\) varies slowly with (macroscopic) position (see Fig. 1). In the limit \(\varepsilon_{\mathrm{i}}\rightarrow\infty\) the inclusions are perfectly dielectric and the model becomes \[\nabla\cdot(\varepsilon_{\mathrm{e}}\nabla\phi) =-\rho\qquad\text{ in }\Omega_{\mathrm{e}}, \tag{4}\] \[\nabla\phi =0\qquad\text{ in }\Omega_{\mathrm{i}}, \tag{5}\] with boundary condition \[\left[\phi\right]_{\mathrm{i}}^{\mathrm{e}}=0. \tag{6}\] The potential \(\phi\) is constant on each inclusion, but may take different values on different inclusions. To close the problem we need to integrate (1) over each inclusion and use (2) to give the integral constraint \[\int_{\partial\Omega_{\mathrm{i}}}\varepsilon_{\mathrm{e}}\,\mathbf{n}\cdot \nabla\phi\big{|}_{\mathrm{e}}\,\,\mathrm{d}S=-\int_{\Omega_{\mathrm{i}}}\rho \,\mathrm{d}\mathbf{x}, \tag{7}\] where \(\Omega_{\mathrm{i}}\) is an individual inclusion. We will approach the limit \(\varepsilon_{\mathrm{i}}\to\infty\) in two different ways. We will first homogenise (1)-(3) following [3], before taking the limit \(\varepsilon_{\mathrm{i}}\to\infty\) in the homogenised model. 
We will then homogenise (4)-(7) directly, which will require us to determine how to cast (7) in multiple-scales form when the domain \(\Omega_{\mathrm{i}}\) is a function of (slow) position. ### Standard Multiple Scales We introduce the fast scale \(\mathbf{X}=\mathbf{x}/\delta\), and suppose that \(\phi=\phi(\mathbf{x},\mathbf{X})\), treating \(\mathbf{x}\) and \(\mathbf{X}\) as independent, with derivatives transforming according to the chain rule \[\nabla\to\nabla_{\mathbf{x}}+\frac{1}{\delta}\nabla_{\mathbf{X}}. \tag{8}\] We remove the indeterminancy that this introduces by imposing that \(\phi\) is \(\mathbf{1}\)-periodic in \(\mathbf{X}\). To describe the inclusions we introduce the function \[h(\mathbf{x},\mathbf{X})=|\mathbf{X}-\lfloor\mathbf{X}\rfloor|-a(\mathbf{x}), \tag{9}\] where \(\lfloor\mathbf{X}\rfloor\) represents the integer part of each component of \(\mathbf{X}\). This function is \(\mathbf{1}\)-periodic in \(\mathbf{X}\), and the level set \(h=0\) defines the boundary of the inclusion. The normal to the Figure 1: Schematic of the 2D composite. _Perfectly dielectric inclusions shown in grey lie on a periodic array within an insulator. The inclusions have a radius \(a(x)\) which slowly varies across the domain._ inclusion can then be written in multiple scales form as \[\mathbf{n}=\frac{\nabla h}{|\nabla h|}=\frac{\nabla_{\mathbf{X}}h+\delta\nabla_{ \mathbf{x}}h}{|\nabla_{\mathbf{X}}h+\delta\nabla_{\mathbf{x}}h|}=\mathbf{n}_{0} +\delta\mathbf{n}_{1}+O(\delta^{2}), \tag{10}\] where \[\mathbf{n}_{0}=\frac{\nabla_{\mathbf{X}}h}{|\nabla_{\mathbf{X}}h|},\qquad \mathbf{n}_{1}=\frac{\nabla_{\mathbf{x}}h}{|\nabla_{\mathbf{X}}h|}-\frac{( \nabla_{\mathbf{x}}h\cdot\nabla_{\mathbf{X}}h)\nabla_{\mathbf{X}}h}{|\nabla_{ \mathbf{X}}h|^{3}}. \tag{11}\] Within a given unit cell \(D\) we denote the region occupied by the inclusion as \(D_{\mathrm{i}}(\mathbf{x})\) and that occupied by the insulator as \(D_{\mathrm{e}}(\mathbf{x})\), that is \[D_{\mathrm{i}}=\{\mathbf{X}\in D:h(\mathbf{x},\mathbf{X})<0\},\qquad D_{ \mathrm{e}}=\{\mathbf{X}\in D:h(\mathbf{x},\mathbf{X})>0\}.\] Substituting (8) and (10) into (1)-(3), expanding \[\phi\sim\phi_{0}(\mathbf{x},\mathbf{X})+\delta\phi_{1}(\mathbf{x},\mathbf{X })+\cdots, \tag{12}\] and equating coefficients of \(\delta\), we find that at leading-order \[\nabla_{\mathbf{X}}\cdot(\varepsilon\nabla_{\mathbf{X}}\phi_{0}) =0, \tag{13}\] \[=0,\] (14) \[=0, \tag{15}\] with \(\phi_{0}\)\(\mathbf{1}\)-periodic in \(\mathbf{X}\). Thus, \(\phi_{0}\) is constant on the fast scale, so that \(\phi_{0}=\phi_{0}(\mathbf{x})\). At next order, we find \[\nabla_{\mathbf{X}}\cdot(\varepsilon\nabla_{\mathbf{X}}\phi_{1}) =0, \tag{16}\] \[=0,\] (17) \[=0, \tag{18}\] with \(\phi_{1}\)\(\mathbf{1}\)-periodic in \(\mathbf{X}\), where we have used the fast scale independence of the leading-order potential. The solution is \[\phi_{1}=\mathbf{\Psi}\cdot\nabla_{\mathbf{x}}\phi_{0}+\overline{\phi}_{1}, \tag{19}\] where \(\overline{\phi}_{1}\) is independent of \(\mathbf{X}\) and \(\mathbf{\Psi}\) satisfies the cell problem \[\nabla_{\mathbf{X}}\cdot(\varepsilon\nabla_{\mathbf{X}}\mathbf{ \Psi}) =0, \tag{20}\] \[=0,\] (21) \[=0, \tag{22}\] with \(\mathbf{\Psi}\)\(\mathbf{1}\)-periodic in \(\mathbf{X}\), where \(I\) is the identity matrix, and uniqueness is achieved by imposing zero mean, for example. 
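To make the cell problem (20)-(22) concrete, the following is a rough finite-volume sketch that solves it for a circular inclusion on a periodic unit cell and then evaluates the effective permittivity \(\int_{D}\varepsilon(I+\nabla_{\mathbf{X}}\boldsymbol{\Psi})\,\mathrm{d}\mathbf{X}\), which will appear in the homogenised model below (Eq. (31)). The grid resolution, inclusion radius, and permittivity contrast are made-up illustration values; harmonic face averaging and pinning one unknown (to remove the constant null space) are standard numerical choices, not something prescribed in this paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 64                                    # cells per side of the unit cell (illustrative)
h = 1.0 / N
eps_e, eps_i, a = 1.0, 50.0, 0.3          # insulator/inclusion permittivity and radius (made up)

# cell-centred permittivity: circular inclusion centred in the unit cell
xc = (np.arange(N) + 0.5) * h
X, Y = np.meshgrid(xc, xc, indexing="ij")
eps = np.where((X - 0.5) ** 2 + (Y - 0.5) ** 2 < a ** 2, eps_i, eps_e)

def face_eps(field, axis):
    """Harmonic average of the permittivity across the face in the +axis direction (periodic)."""
    nb = np.roll(field, -1, axis=axis)
    return 2.0 * field * nb / (field + nb)

fx, fy = face_eps(eps, 0), face_eps(eps, 1)
idx = np.arange(N * N).reshape(N, N)

# assemble the periodic finite-volume operator:  (A psi)_c = sum_faces eps_f (psi_nb - psi_c)
rows, cols, vals = [], [], []
for axis, f in ((0, fx), (1, fy)):
    nb = np.roll(idx, -1, axis=axis)
    for r, c in ((idx, nb), (nb, idx)):
        rows += [r.ravel(), r.ravel()]
        cols += [c.ravel(), r.ravel()]
        vals += [f.ravel(), -f.ravel()]
A = sp.csr_matrix((np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
                  shape=(N * N, N * N)).tolil()
A[0, :] = 0.0
A[0, 0] = 1.0                             # pin one value: removes the constant null space
A = A.tocsc()

eps_eff = np.zeros((2, 2))
for j in range(2):
    fj, axis = (fx, 0) if j == 0 else (fy, 1)
    # forcing -div_X(eps e_j): per cell this is -h * (eps on + face minus eps on - face)
    b = -h * (fj - np.roll(fj, 1, axis=axis)).ravel()
    b[0] = 0.0
    psi = spla.spsolve(A, b).reshape(N, N)
    # column j of the effective permittivity from face fluxes eps_f (dpsi/dX + e_j)
    qx = fx * ((np.roll(psi, -1, axis=0) - psi) / h + (1.0 if j == 0 else 0.0))
    qy = fy * ((np.roll(psi, -1, axis=1) - psi) / h + (1.0 if j == 1 else 0.0))
    eps_eff[0, j], eps_eff[1, j] = qx.mean(), qy.mean()

print(np.round(eps_eff, 4))
```

For this symmetric geometry the computed tensor should come out approximately isotropic and larger than \(\varepsilon_{\mathrm{e}}\), and it is the quantity that the homogenised model of the next subsections inherits as a coefficient.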
Finally, equating coefficients of \(\delta^{2}\), we find \[\nabla_{\mathbf{X}}\cdot(\varepsilon(\nabla_{\mathbf{X}}\phi_{2} +\nabla_{\mathbf{x}}\phi_{1}))+\nabla_{\mathbf{x}}\cdot(\varepsilon(\nabla_{ \mathbf{X}}\phi_{1}+\nabla_{\mathbf{x}}\phi_{0})) =-\rho, \tag{23}\] \[=0,\] (24) \[=0. \tag{25}\] Integrating (23) over the unit cell \(D\), applying the divergence theorem to terms involving the fast divergence, and using (24) we find \[\int_{\partial D_{\mathrm{e}}}\varepsilon_{\mathrm{e}}\left( \nabla_{\mathbf{X}}\phi_{1}+\nabla_{\mathbf{x}}\phi_{0}\right)\cdot\mathbf{n} _{1}\mathrm{d}S_{\mathbf{X}}-\int_{\partial D_{\mathrm{i}}}\varepsilon_{ \mathrm{i}}\left(\nabla_{\mathbf{X}}\phi_{1}+\nabla_{\mathbf{x}}\phi_{0} \right)\cdot\mathbf{n}_{1}\mathrm{d}S_{\mathbf{X}}\\ +\int_{D_{\mathrm{e}}}\nabla_{\mathbf{x}}\cdot\left(\varepsilon_{ \mathrm{e}}(\nabla_{\mathbf{X}}\phi_{1}+\nabla_{\mathbf{x}}\phi_{0})\right) \mathrm{d}\mathbf{X}+\int_{D_{\mathrm{i}}}\nabla_{\mathbf{x}}\cdot\left( \varepsilon_{\mathrm{i}}(\nabla_{\mathbf{X}}\phi_{1}+\nabla_{\mathbf{x}}\phi_ {0})\right)\mathrm{d}\mathbf{X}=-\rho_{\mathrm{eff}}, \tag{26}\] where \(\partial D_{\rm i}\) and \(\partial D_{\rm e}\) denote the interior and exterior of the inclusion boundary in the unit cell respectively, and the effective charge is given by \[\rho_{\rm eff}=\int_{D}\rho\,{\rm d}{\bf X}. \tag{27}\] Taking the slow divergence outside the integral using the Reynolds transport theorem, we find \[\int_{\partial D_{\rm e}}\varepsilon_{\rm e}\left(\nabla_{\bf X} \phi_{1}+\nabla_{\bf x}\phi_{0}\right)\cdot{\bf n}_{1}\,{\rm d}S_{\bf X}-\int_ {\partial D_{\rm i}}\varepsilon_{\rm i}\left(\nabla_{\bf X}\phi_{1}+\nabla_{ \bf x}\phi_{0}\right)\cdot{\bf n}_{1}\,{\rm d}S_{\bf X}\\ +\int_{\partial D_{\rm e}}\varepsilon_{\rm e}\left(\nabla_{\bf X} \phi_{1}+\nabla_{\bf x}\phi_{0}\right)\cdot{\bf V}\cdot{\bf n}_{0}\,{\rm d}S_{ \bf X}-\int_{\partial D_{\rm i}}\varepsilon_{\rm i}\left(\nabla_{\bf X}\phi_{ 1}+\nabla_{\bf x}\phi_{0}\right)\cdot{\bf V}\cdot{\bf n}_{0}\,{\rm d}S_{\bf X} \\ +\nabla_{\bf x}\cdot\int_{D_{\rm e}}\left(\varepsilon_{\rm e}( \nabla_{\bf X}\phi_{1}+\nabla_{\bf x}\phi_{0})\right)\,{\rm d}{\bf X}+\nabla_ {\bf x}\cdot\int_{D_{\rm i}}\left(\varepsilon_{\rm i}(\nabla_{\bf X}\phi_{1}+ \nabla_{\bf x}\phi_{0})\right)\,{\rm d}{\bf X}=-\rho_{\rm eff}, \tag{28}\] where the matrix \({\bf V}\) is the "velocity" of the boundary, i.e. the derivative of position on the boundary with respect to \({\bf x}\). Differentiating the equation \(h=0\) with respect to \({\bf x}\) gives \[{\bf V}\cdot\nabla_{\bf X}h+\nabla_{\bf x}h=0,\] so that \[{\bf V}\cdot{\bf n}_{0}=-\frac{\nabla_{\bf x}h}{|\nabla_{\bf X}h|},\qquad{\bf V }\cdot{\bf n}_{0}+{\bf n}_{1}=-\frac{(\nabla_{\bf x}h\cdot\nabla_{\bf X}h) \nabla_{\bf X}h}{|\nabla_{\bf X}h|^{3}}=-\frac{(\nabla_{\bf x}h\cdot\nabla_{ \bf X}h)}{|\nabla_{\bf X}h|^{2}}{\bf n}_{0}.\] Thus, using (17), the surface integrals cancel in (28), leaving \[\nabla_{\bf x}\cdot\int_{D_{\rm e}}\left(\varepsilon_{\rm e}(\nabla_{\bf X} \phi_{1}+\nabla_{\bf x}\phi_{0})\right)\,{\rm d}{\bf X}+\nabla_{\bf x}\cdot \int_{D_{\rm i}}\left(\varepsilon_{\rm i}(\nabla_{\bf X}\phi_{1}+\nabla_{\bf x }\phi_{0})\right)\,{\rm d}{\bf X}=-\rho_{\rm eff}. 
\tag{29}\] Substituting (19), gives, finally, the homogenised problem \[\nabla_{\bf x}\cdot\left(\boldsymbol{\varepsilon}_{\rm eff}\nabla_{\bf x} \phi_{0}\right)=-\rho_{\rm eff}, \tag{30}\] where the effective permittivity \(\boldsymbol{\varepsilon}_{\rm eff}\) is given by \[\boldsymbol{\varepsilon}_{\rm eff}=\int_{D}\varepsilon\left(I+\nabla_{\bf X} \boldsymbol{\Psi}\right)\,{\rm d}{\bf X}. \tag{31}\] #### 2.1.1 The limit \(\varepsilon_{\rm i}\to\infty\) As \(\varepsilon_{\rm i}\to\infty\) in the cell problem (20)-(22) we find \[\nabla_{\bf X}^{2}\boldsymbol{\Psi} =0\quad\text{in}\quad D, \tag{32}\] \[{\bf n}_{0}\cdot(\nabla_{\bf X}\boldsymbol{\Psi}+I) =0\quad\text{on}\quad\partial D_{\rm i},\] (33) \[\left[\boldsymbol{\Psi}\right]_{\rm i}^{\rm e} =0. \tag{34}\] Thus \(\boldsymbol{\Psi}=-{\bf X}+\) constant in \(D_{\rm i}\), where the constant must be chosen so that \(\boldsymbol{\Psi}\) has zero mean. In the effective permittivity (31) this gives zero times infinity in the inclusion, so we must manipulate this expression into something more suitable before we take the limit. Switching to index notation, using (21), the divergence theorem, and (32), we find \[\varepsilon_{\mathrm{eff}\,ij} = \int_{D}\varepsilon\delta_{ij}\,\mathrm{d}\mathbf{X}+\int_{D} \varepsilon\frac{\partial}{\partial X_{k}}\left(X_{j}\frac{\partial\Psi_{i}}{ \partial X_{k}}\right)\,\mathrm{d}\mathbf{X} \tag{35}\] \[= \int_{D_{\mathrm{i}}}\varepsilon_{\mathrm{i}}\delta_{ij}\, \mathrm{d}\mathbf{X}+\int_{D_{\mathrm{e}}}\varepsilon_{\mathrm{e}}\delta_{ij} \,\mathrm{d}\mathbf{X}+\int_{\partial D_{\mathrm{i}}}\varepsilon_{\mathrm{i}}X _{j}\frac{\partial\Psi_{i}}{\partial X_{k}}n_{k}\,\mathrm{d}S_{\mathbf{X}}\] \[\quad-\int_{\partial D_{\mathrm{e}}}\varepsilon_{\mathrm{e}}X_{j} \frac{\partial\Psi_{i}}{\partial X_{k}}n_{k}\,\mathrm{d}S_{\mathbf{X}}+\int_{ \partial D}\varepsilon_{\mathrm{e}}X_{j}\frac{\partial\Psi_{i}}{\partial X_{k }}n_{k}\,\mathrm{d}S_{\mathbf{X}}\] \[= \int_{D_{\mathrm{i}}}\varepsilon_{\mathrm{i}}\delta_{ij}\, \mathrm{d}\mathbf{X}+\int_{D_{\mathrm{e}}}\varepsilon_{\mathrm{e}}\delta_{ij} \,\mathrm{d}\mathbf{X}-\int_{\partial D_{\mathrm{i}}}\varepsilon_{\mathrm{i}}X _{j}n_{i}\,\mathrm{d}S_{\mathbf{X}}\] \[\quad+\int_{\partial D_{\mathrm{e}}}\varepsilon_{\mathrm{e}}X_{j }n_{i}\,\mathrm{d}S_{\mathbf{X}}+\int_{\partial D}\varepsilon_{\mathrm{e}}X_{ j}\frac{\partial\Psi_{i}}{\partial X_{k}}n_{k}\,\mathrm{d}S_{\mathbf{X}}\] \[= \varepsilon_{\mathrm{e}}\left(\delta_{ij}+\int_{\partial D}X_{j} \frac{\partial\Psi_{i}}{\partial X_{k}}n_{k}\,\mathrm{d}S_{\mathbf{X}}\right).\] We can now safely take the limit \(\varepsilon_{\mathrm{i}}\to\infty\). ### Multiple Scales with Integral Constraints We now apply the method of multiple scales directly to the problem (4)-(7), hoping to retrieve (30) with (35). Substituting (8) into (4)-(6), expanding as in (12), and equating coefficients of \(\delta\) we find that at leading-order \[\nabla_{\mathbf{X}}\cdot(\varepsilon_{\mathrm{e}}\nabla_{\mathbf{ X}}\phi_{0}) =0\quad\mathrm{in}\quad D_{\mathrm{e}}, \tag{36}\] \[\nabla_{\mathbf{X}}\phi_{0} =\mathbf{0}\quad\mathrm{in}\quad D_{\mathrm{i}},\] (37) \[\left[\phi_{0}\right]_{\mathrm{i}}^{\mathrm{e}} =0, \tag{38}\] with \(\phi_{0}\)\(\mathbf{1}\)-periodic in \(\mathbf{X}\). Thus, as before, \(\phi_{0}=\phi_{0}(\mathbf{x})\). 
At first-order we find \[\nabla_{\mathbf{X}}\cdot(\varepsilon_{\mathrm{e}}\nabla_{\mathbf{ X}}\phi_{1}) =0\quad\mathrm{in}\quad D_{\mathrm{e}}, \tag{39}\] \[\nabla_{\mathbf{X}}\phi_{1}+\nabla_{\mathbf{x}}\phi_{0} =\mathbf{0}\quad\mathrm{in}\quad D_{\mathrm{i}},\] (40) \[\left[\phi_{1}\right]_{\mathrm{i}}^{\mathrm{e}} =0, \tag{41}\] with \(\phi_{1}\)\(\mathbf{1}\)-periodic in \(\mathbf{X}\). As in Section 2.1, the solution is \(\phi_{1}=\boldsymbol{\Psi}\cdot\nabla_{\mathbf{x}}\phi_{0}+\overline{\phi}_{1}\) where \(\overline{\phi}_{1}\) is independent of \(\mathbf{X}\) and \[\nabla_{\mathbf{X}}\cdot(\varepsilon_{\mathrm{e}}\nabla_{\mathbf{ X}}\boldsymbol{\Psi}) =\mathbf{0}\quad\mathrm{in}\quad D_{\mathrm{e}}, \tag{42}\] \[\nabla_{\mathbf{X}}\boldsymbol{\Psi}+I =0\quad\mathrm{in}\quad D_{\mathrm{i}},\] (43) \[\left[\boldsymbol{\Psi}\right]_{\mathrm{i}}^{\mathrm{e}} =\mathbf{0}, \tag{44}\] with \(\boldsymbol{\Psi}\)\(\mathbf{1}\)-periodic in \(\mathbf{X}\), and we impose \[\int_{D}\boldsymbol{\Psi}\,\mathrm{d}\mathbf{X}=\mathbf{0}. \tag{45}\] Equating coefficients of \(\delta^{2}\) we find \[\nabla_{\mathbf{X}}\cdot(\varepsilon_{\mathrm{e}}(\nabla_{ \mathbf{X}}\phi_{2}+\nabla_{\mathbf{x}}\phi_{1}))+\nabla_{\mathbf{x}}\cdot( \varepsilon_{\mathrm{e}}(\nabla_{\mathbf{X}}\phi_{1}+\nabla_{\mathbf{x}}\phi_{ 0})) =-\rho\quad\mathrm{in}\quad D_{\mathrm{e}}, \tag{46}\] \[\nabla_{\mathbf{X}}\phi_{2}+\nabla_{\mathbf{x}}\phi_{1} =\mathbf{0}\quad\mathrm{in}\quad D_{\mathrm{i}},\] (47) \[\left[\phi_{2}\right]_{\mathrm{i}}^{\mathrm{e}} =0. \tag{48}\] Integrating (46) over the exterior region and applying the divergence theorem to the first term, gives \[-\int_{\partial D_{\rm e}}\varepsilon_{\rm e}(\nabla_{\bf X}\phi_{2}+\nabla_{\bf x }\phi_{1})\cdot{\bf n}_{0}\,{\rm d}S_{\bf X}+\int_{D_{\rm e}}\nabla_{\bf x} \cdot(\varepsilon_{\rm e}(\nabla_{\bf X}\phi_{1}+\nabla_{\bf x}\phi_{0}))\,\,{ \rm d}{\bf X}=-\int_{D_{\rm e}}\rho\,{\rm d}{\bf X}, \tag{49}\] where the integral over the exterior boundary of the unit cell vanishes due to periodicity. To evaluate the surface integral in (49) we need to use the integral constraint (7). #### 2.2.1 Dealing with the integral As discussed in [5], it seems natural to write (7) in multiple scales form as \[\delta^{2}\int_{D_{\rm e}}\varepsilon_{\rm e}\,{\bf n}\cdot\left(\nabla_{\bf x }\phi+\frac{1}{\delta}\nabla_{\bf X}\phi\right)\,{\rm d}S_{\bf X}=-\delta^{3} \int_{\Omega_{i}}\rho\,{\rm d}{\bf X},\] but this is incorrect, as it neglects the small variation in the slow variation \({\bf x}\) around the boundary of the inclusion, which turns out to be crucial. Writing \({\bf Q}=\nabla\phi\), the approach taken in [5] was to recognise that on the interface \({\bf x}=\hat{\bf x}+\delta{\bf X}\), where \(\hat{\bf x}\) is the position of the bottom left corner of the unit cell and \({\bf X}\in[0,1]^{3}\), expanding \[{\bf Q}({\bf x},{\bf X})={\bf Q}(\hat{\bf x}+\delta{\bf X},{\bf X})={\bf Q}( \hat{\bf x},{\bf X})+\delta{\bf X}\cdot\nabla_{\bf x}{\bf Q}(\hat{\bf x},{\bf X })+\cdots \tag{50}\] in the integrand of (7). But how should we proceed when the domain and the normal, as well as the integrand, depend on the slow variable \({\bf x}\)? _i.) 
The naive approach_ An initial attempt to write the integral constraint on a slowly varying domain in multiple scales form may be to combine the normal expansion (10) with the expansion of the integrand given in (50), writing \[\int_{\partial\Omega_{\rm i}}{\bf Q}\cdot{\bf n}dS\to\delta^{2}\int_{\partial \Omega_{\rm i}}\left({\bf Q}_{0}+\delta({\bf Q}_{1}+{\bf X}\cdot\nabla_{\bf x }{\bf Q})+...\right)\cdot({\bf n}_{0}+\delta{\bf n}_{1}+...)\,{\rm d}S_{\bf X}. \tag{51}\] We will now highlight the issues that arise if the form (51) is used. Writing (46)-(47) in terms of the flux \({\bf Q}\), and integrating over the insulating region, we find \[\int_{D_{\rm e}}\nabla_{\bf X}\cdot{\bf Q}_{1}\,{\rm d}{\bf X}+\int_{D_{\rm e }}\nabla_{\bf x}\cdot{\bf Q}_{0}\,{\rm d}{\bf X}=0. \tag{52}\] Applying the divergence theorem to the first term and using the integral constraint, we find \[\int_{\partial D_{\rm e}}{\bf Q}_{0}\cdot{\bf n}_{1}{\rm d}S_{\bf X}+\int_{ \partial D_{\rm e}}{\bf X}\cdot\nabla_{\bf x}{\bf Q}_{0}\cdot{\bf n}_{0}{\rm d }S_{\bf X}+\int_{D_{\rm e}}\nabla_{\bf x}\cdot{\bf Q}_{0}{\rm d}{\bf X}=-\int _{D_{\rm e}}\rho{\rm d}{\bf X}-\int_{D_{\rm i}}\rho{\rm d}{\bf X}. \tag{53}\] Applying the flux transport theorem to the second term and Reynolds transport theorem to the final term of (53), we obtain \[\int_{\partial D_{\rm e}}{\bf Q}_{0}\cdot{\bf n}_{1}\,{\rm d}S_{\bf X}+\nabla _{\bf x}\cdot(\varepsilon_{\rm eff}\nabla_{\bf x}\phi_{0})=-\rho_{\rm eff}, \tag{54}\] after simplification, where we have defined the effective charge as in (27). Thus, we have the same homogenised problem as obtained with standard approaches (30), save the presence of an additional boundary term. _ii.) The correct approach_ In identifying the multiple scales form of integral constraints on a periodic domain, the integrand was expanded about fixed slow position as in (50). When the microstructure slowly varies, we must also expand the boundary position about a fixed slow position, _i.e._ we look to expand \[\int_{\partial\Omega}\mathbf{Q}\cdot\mathbf{n}\,\mathrm{d}S=\delta^{2}\int_{ \partial\Omega(\hat{\mathbf{x}}+\delta\mathbf{X})}\mathbf{Q}(\hat{\mathbf{x}}+ \delta\mathbf{X},\mathbf{X})\cdot\mathbf{n}\,\mathrm{d}S_{\mathbf{X}} \tag{55}\] For generality, we assume that the surface \(\partial\Omega\) is open as shown in figure 2. Expanding the integrand as in [5], we find \[\int_{\partial\Omega}\mathbf{Q}\cdot\mathbf{n}\,\mathrm{d}S=\delta^{2}\int_{ \partial\Omega(\hat{\mathbf{x}}+\delta\mathbf{X})}\big{(}\mathbf{Q}(\hat{\mathbf{ x}},\mathbf{X})+\delta\mathbf{X}\cdot\nabla_{\mathbf{x}}\mathbf{Q}(\hat{\mathbf{x}}, \mathbf{X}))\big{)}\cdot\mathbf{n}\,\mathrm{d}S_{\mathbf{X}}+\cdots. \tag{56}\] To project the boundary onto that at \(\hat{\mathbf{x}}\), we take a similar approach to heuristic derivations of the flux transport theorem, see for example [10]. 
We apply the divergence theorem to the volume \(\Omega_{\delta}\) swept out by the surface as we move from \(\hat{\mathbf{x}}\) to \(\hat{\mathbf{x}}+\delta\mathbf{X}\) (illustrated schematically in figure 2), writing \[\begin{split}\int_{\partial\Omega(\hat{\mathbf{x}}+\delta \mathbf{X})}\mathbf{Q}(\hat{\mathbf{x}},\mathbf{X})\cdot\mathbf{n}\,\mathrm{d}S_{ \mathbf{X}}=\int_{\Omega_{\delta}(\hat{\mathbf{x}})}\nabla_{\mathbf{X}}\cdot& \mathbf{Q}(\hat{\mathbf{x}},\mathbf{X})\mathrm{d}\mathbf{X}+\int_{ \partial\Omega(\hat{\mathbf{x}})}\mathbf{Q}(\hat{\mathbf{x}},\mathbf{X})\cdot \mathbf{n}\,\mathrm{d}S_{\mathbf{X}}\\ &-\int_{\partial\Omega_{\delta}(\hat{\mathbf{x}})}\mathbf{Q}(\hat{ \mathbf{x}},\mathbf{X})\cdot\mathbf{n}\,\mathrm{d}S_{\mathbf{X}},\end{split} \tag{57}\] where \(\partial\Omega_{\delta}\) is the volume enclosed by the dashed lines in figure 2. In the limit \(\delta\to 0\), we can write the volume and surface elements as \(\mathrm{d}\mathbf{X}=\delta\mathbf{X}\cdot\mathbf{V}\cdot\mathbf{n}\,dS_{X}\) and \(\mathbf{n}\mathrm{d}S_{\mathbf{X}}=-\delta\mathbf{X}\cdot\mathbf{V}\times d\mathbf{r}\) Figure 2: Schematic of the open surface \(\partial\Omega\) changing with slow coordinate. respectively, where \(\mathbf{V}=\nabla_{\mathbf{x}}\mathbf{R}^{b}\) is the'velocity' of the boundary for positions \(\mathbf{R}^{b}\) on \(\partial\Omega(\hat{\mathbf{x}})\) and \(\mathrm{d}\mathbf{r}\) is the line element of \(\Gamma(\hat{\mathbf{x}})\). Substituting the form for the volume element of \(\Omega_{\delta}\) and area element of \(\partial\Omega_{\delta}\) into (57), we have \[\begin{split}\int_{\partial\Omega(\hat{\mathbf{x}}+\delta\mathbf{ X})}\mathbf{Q}(\hat{\mathbf{x}},\mathbf{X})\cdot\mathbf{n}\,\mathrm{d}S_{\mathbf{X}}& =\int_{\partial\Omega(\hat{\mathbf{x}})}\delta(\nabla_{\mathbf{X}} \cdot\mathbf{Q})\mathbf{X}\cdot\mathbf{V}\cdot\mathbf{n}\,\mathrm{d}S_{\mathbf{X}}\\ &\quad+\int_{\partial\Omega(\hat{\mathbf{x}})}\mathbf{Q}\cdot \mathbf{n}\,\mathrm{d}S_{\mathbf{X}}+\int_{\Gamma(\hat{\mathbf{x}})}\delta\mathbf{ Q}\cdot(\mathbf{X}\cdot\mathbf{V})\times\mathrm{d}\mathbf{r},\end{split} \tag{58}\] Combining (58) with (56), we obtain the multiple scales form for integral constraints on a slowly varying domain \[\begin{split}\int_{\partial\Omega}\mathbf{Q}\cdot\mathbf{n}\, \mathrm{d}S\to\delta^{2}\int_{\partial\Omega}\left(\mathbf{Q}+\delta\mathbf{X }\cdot\nabla_{\mathbf{x}}\mathbf{Q}+\delta(\nabla_{\mathbf{X}}\cdot\mathbf{Q} )\mathbf{X}\cdot\mathbf{V}\right)\cdot\mathbf{n}\,\mathrm{d}S_{\mathbf{X}}\\ +\delta^{2}\int_{\Gamma}\delta\mathbf{Q}\cdot(\mathbf{X}\cdot\mathbf{V}) \times\mathrm{d}\mathbf{r}.\end{split} \tag{59}\] In (59), \(\mathbf{n}\) is the normal to the boundary at fixed \(\hat{\mathbf{x}}\); in the example of inclusions with a slowly varying radius this is given by \(\mathbf{n}_{0}\). Note that, unlike in Section 2.1 when approximating (2), and perhaps counter-intuitively, we do not need to expand the normal to introduce \(\mathbf{n}_{1}\), or to apply the operator \(\mathbf{X}\cdot\nabla_{\mathbf{x}}\) to \(\mathbf{n}_{0}\): the perturbation to the normal is already accounted for by the term involving \(\mathbf{V}\). Thus, an expansion of the normal will only appear in (59) when the function defining the boundary through its level sets is a function of \(\delta\). 
Thus, in multiple scales form, the integral constraint (7) is \[\int_{\partial D_{\mathrm{a}}}\left(\mathbf{Q}+\delta\mathbf{X}\cdot\nabla_{ \mathbf{x}}\mathbf{Q}+\delta(\nabla_{\mathbf{X}}\cdot\mathbf{Q})\mathbf{X} \cdot\mathbf{V}\right)\cdot\mathbf{n}_{0}\,\mathrm{d}S_{\mathbf{X}}=-\delta \int_{D_{\mathrm{i}}}\rho\,\mathrm{d}\mathbf{X}, \tag{60}\] where \[\mathbf{Q}=\frac{1}{\delta}\nabla_{\mathbf{X}}\phi+\nabla_{\mathbf{x}}\phi\] and the surface integral is over the exterior surface of the inclusion. Using the expansion (12) in (60) we find at leading-order that \[\int_{\partial D_{\mathrm{a}}}\nabla_{\mathbf{X}}\phi_{0}\cdot\mathbf{n}_{0} \,\mathrm{d}S_{\mathbf{X}}=0, \tag{61}\] consistent with \(\phi_{0}=\phi_{0}(\mathbf{x})\). At first-order we find \[\int_{\partial D_{\mathrm{a}}}\mathbf{Q}_{0}\cdot\mathbf{n}_{0}\,\mathrm{d}S_{ \mathbf{X}}=0,\qquad\qquad\mathbf{Q}_{0}=\nabla_{\mathbf{X}}\phi_{1}+\nabla_{ \mathbf{x}}\phi_{0}, \tag{62}\] which is consistent with (39). Finally, equating coefficients of \(\delta^{2}\), and noting that \(\nabla_{\mathbf{X}}\cdot\mathbf{Q}_{0}=0\), we obtain \[\int_{\partial D_{\mathrm{a}}}\left(\mathbf{Q}_{1}+\mathbf{X}\cdot\nabla_{ \mathbf{x}}\mathbf{Q}_{0}\right)\cdot\mathbf{n}_{0}\,\mathrm{d}S_{\mathbf{X}}= -\int_{D_{\mathrm{i}}}\rho\,\mathrm{d}\mathbf{X},\qquad\qquad\mathbf{Q}_{1}= \nabla_{\mathbf{X}}\phi_{2}+\nabla_{\mathbf{x}}\phi_{1}. \tag{63}\] Substituting into (49) gives \[\int_{\partial D_{\mathrm{a}}}\varepsilon_{\mathrm{e}}\mathbf{X}\cdot\nabla_{ \mathbf{x}}\mathbf{Q}_{0}\cdot\mathbf{n}_{0}\,\mathrm{d}S_{\mathbf{X}}+\int_{D_ {\mathrm{a}}}\nabla_{\mathbf{x}}\cdot\left(\varepsilon_{\mathrm{e}}\mathbf{Q} _{0}\right)\,\mathrm{d}\mathbf{X}=-\rho_{\mathrm{eff}}, \tag{64}\] where \[\rho_{\rm eff}=\int_{D}\rho\,{\rm d}{\bf X} \tag{65}\] is the effective charge as before. Using the transport theorem to take the slow derivatives outside the integral gives \[\int_{\partial D_{\rm i}}({\bf X}\cdot\nabla_{\bf x}){\bf Q}_{0} \cdot{\bf n}_{0}\,{\rm d}S_{\bf X} = \int_{\partial D_{\rm i}}X_{i}\frac{\partial Q_{0,j}}{\partial x_{i }}n_{0,j}\,{\rm d}S_{\bf X}\] \[= \frac{\partial}{\partial x_{i}}\int_{\partial D_{\rm i}}X_{i}Q_{0,j}n_{0,j}\,{\rm d}S_{\bf X}-\int_{\partial D_{\rm i}}Q_{0,i}V_{ij}n_{0,j}\,{ \rm d}S_{\bf X}\] \[= \nabla_{\bf x}\cdot\int_{\partial D_{\rm i}}{\bf X}({\bf Q}_{0} \cdot{\bf n}_{0})\,{\rm d}S_{\bf X}-\int_{\partial D_{\rm i}}{\bf Q}_{0} \cdot{\bf V}\cdot{\bf n}_{0}\,{\rm d}S_{\bf X},\] while \[\int_{D_{\rm e}}\nabla_{\bf x}\cdot(\varepsilon_{\rm e}{\bf Q}_{0})\,\,{\rm d }{\bf X} = \nabla_{\bf x}\cdot\int_{D_{\rm e}}\varepsilon_{\rm e}{\bf Q}_{0 }\,{\rm d}{\bf X}+\int_{\partial D_{\rm i}}{\bf Q}_{0}\cdot{\bf V}\cdot{\bf n }_{0}\,{\rm d}S_{\bf X}\] (since \({\bf n}_{0}\) is the outward normal to \(D_{\rm i}\)). Thus the two surface integrals cancel. Simplifying now as we did to obtain (35) we find that (64) becomes \[\nabla_{\bf x}\cdot(\varepsilon_{\rm eff}\nabla_{\bf x}\phi_{0})=-\rho_{\rm eff}, \tag{66}\] where the effective permittivity tensor is \[\varepsilon_{{\rm eff}\,ij}=\varepsilon_{\rm e}\left(\delta_{ij}+\int_{ \partial D}X_{i}\frac{\partial\Psi_{j}}{\partial X_{k}}n_{k}\,{\rm d}S_{\bf X}\right) \tag{67}\] in agreement with (35). ## 3 Paradigm Problem - Another Limit In section 2 we illustrated how to treat the multiple scales problem with integral constraints considered in [5] when the domain slowly varies. In this example, the divergence of the flux in the proposed multiple scales form (60) vanishes. 
Here we construct a problem where this term is non-zero by considering the limit of large charge density, rescaling (1)-(7) as follows. We consider \[\nabla\cdot(\varepsilon\nabla\phi)=-\frac{\rho}{\delta}, \tag{68}\] where \(\rho=O(1)\). The boundary conditions (2) and (3) remain unchanged. In the limit of perfectly dielectric inclusions, where \(\varepsilon_{\rm i}\to\infty\), we have \[\nabla\cdot(\varepsilon_{\rm e}\nabla\phi)=-\frac{\rho}{\delta} \qquad\mbox{ in }\Omega_{\rm e}, \tag{69}\] \[\nabla\phi=0\qquad\mbox{ in }\Omega_{\rm i}, \tag{70}\] with continuity (6) at the inclusion boundary. The rescaled integral constraint becomes \[\int_{\partial\Omega_{\rm i}}\varepsilon_{\rm e}\,{\bf n}\cdot\nabla\phi|_{ \rm e}\,\,{\rm d}S=-\frac{1}{\delta}\int_{\Omega_{\rm i}}\rho\,{\rm d}{\bf x}. \tag{71}\] We perform a similar analysis to section 2: first we take the limit \(\varepsilon_{\rm i}\to\infty\) in the standard multiple scales problem before comparing with the problem formulated with an integral condition. ### Standard Multiple Scales We substitute the multiple scales expansion (12) into (68) and compare coefficients at each order of \(\delta\). At leading-order, we find \(\phi_{0}\) independent of the fast scale. At next order, we have \[\nabla_{\mathbf{X}}\cdot(\varepsilon\nabla_{\mathbf{X}}\phi_{1}) =-\rho\quad\text{in}\quad D, \tag{72}\] \[=0,\] (73) \[=0, \tag{74}\] with \(\phi_{1}\)**1**-periodic in \(\mathbf{X}.\) Writing \[\phi_{1}=\boldsymbol{\Psi}\cdot\nabla_{\mathbf{x}}\phi_{0}+\xi+ \overline{\phi}_{1}, \tag{75}\] with \(\overline{\phi}_{1}=\overline{\phi}_{1}(\mathbf{x}),\) we obtain two microscale problems. We find \(\boldsymbol{\Psi}\) satisfies (20)-(22). The microscale problem for \(\xi\) is \[\nabla_{\mathbf{X}}\cdot(\varepsilon\nabla_{\mathbf{X}}\xi) =-\rho\quad\text{in}\quad D, \tag{76}\] \[=0,\] (77) \[=0, \tag{78}\] with \[\int_{D}\xi\,d\mathbf{X}=0. \tag{79}\] Equating coefficients of \(\delta^{2}\), we find \[\nabla_{\mathbf{X}}\cdot(\varepsilon(\nabla_{\mathbf{X}}\phi_{2} +\nabla_{\mathbf{x}}\phi_{1}))+\nabla_{\mathbf{x}}\cdot(\varepsilon(\nabla_{ \mathbf{X}}\phi_{1}+\nabla_{\mathbf{x}}\phi_{0})) =0\quad\text{in}\quad D, \tag{80}\] \[=0,\] (81) \[=0. \tag{82}\] We integrate (80) over the unit cell \(D\), applying the divergence theorem to terms involving the fast divergence, the Reynolds transport theorem and use (81). Following similar analysis to section 2.1, we obtain the homogenised problem \[\nabla_{\mathbf{x}}\cdot(\boldsymbol{\varepsilon}_{\text{eff}} \nabla_{\mathbf{x}}\phi_{0})=-\rho_{\text{eff}}, \tag{83}\] with effective permittivity \(\boldsymbol{\varepsilon}_{\text{eff}}\) given by (31) and effective charge density, \[\rho_{\text{eff}}=\nabla_{\mathbf{x}}\cdot\int_{D}\varepsilon \nabla_{\mathbf{X}}\xi d\mathbf{X}. \tag{84}\] #### 3.1.1 Taking the limit \(\varepsilon_{\text{i}}\to\infty\) In the limit of perfectly dielectric inclusions, the effective permittivity takes the form (35). In the limit \(\varepsilon_{\text{i}}\to\infty,\) the cell problem for \(\xi\) becomes \[\nabla_{\mathbf{X}}\cdot(\varepsilon_{\text{e}}\nabla_{\mathbf{X }}\xi) =-\rho\quad\text{in}\quad D_{\text{e}}, \tag{85}\] \[\nabla_{\mathbf{X}}^{2}\xi =0\quad\text{in}\quad D_{\text{i}},\] (86) \[\mathbf{n}_{0}\cdot\nabla_{\mathbf{X}}\xi =0\quad\text{on}\quad\partial D_{\text{i}},\] (87) \[=0. 
\tag{88}\] We switch to index notation to establish the form of the effective charge \(\rho_{\rm eff}\) in the limit \(\varepsilon_{\rm i}\to\infty\), \[\begin{split}\rho_{\rm eff}&=\frac{\partial}{ \partial x_{i}}\int_{D}\varepsilon\frac{\partial\xi}{\partial X_{i}}\mathrm{d} \mathbf{X}\\ &=\frac{\partial}{\partial x_{i}}\int_{D}\varepsilon\frac{ \partial}{\partial X_{j}}\left(X_{i}\frac{\partial\xi}{\partial X_{j}}\right) \mathrm{d}\mathbf{X}-\frac{\partial}{\partial x_{i}}\int_{D}\varepsilon X_{i} \frac{\partial^{2}\xi}{\partial X_{j}\partial X_{j}}\mathrm{d}\mathbf{X}\\ &=\frac{\partial}{\partial x_{i}}\int_{\partial D}\varepsilon_{ \rm e}X_{i}\frac{\partial\xi}{\partial X_{j}}n_{j}\mathrm{d}S-\frac{\partial}{ \partial x_{i}}\int_{D}\varepsilon X_{i}\frac{\partial^{2}\xi}{\partial X_{j} \partial X_{j}}\mathrm{d}\mathbf{X}\\ &=\frac{\partial}{\partial x_{i}}\int_{\partial D}\varepsilon_{ \rm e}X_{i}\frac{\partial\xi}{\partial X_{j}}n_{j}\mathrm{d}S+\frac{\partial }{\partial x_{i}}\int_{D_{e}}X_{i}\rho\mathrm{d}\mathbf{X},\end{split} \tag{89}\] where we have used (77) in going from the second to third line and (85)-(85) in the third to fourth line. ### Multiple Scales with Integral Constraints In this section we treat (69)-(71) directly, writing (71) in multiple scales form using (59). Substituting (12) into (69)-(71) written in multiple scales form, we find that \(\phi_{0}=\phi_{0}(\mathbf{x})\) at leading-order. At first-order we find \[\nabla_{\mathbf{X}}\cdot(\varepsilon_{\rm e}\nabla_{\mathbf{X}} \phi_{1}) =-\rho\quad\text{in}\quad D_{\rm e}, \tag{90}\] \[\nabla_{\mathbf{X}}\phi_{1}+\nabla_{\mathbf{x}}\phi_{0} =\mathbf{0}\quad\text{in}\quad D_{\rm i},\] (91) \[\left[\phi_{1}\right]_{\rm i}^{\rm e} =0,\] (92) \[\int_{\partial D_{\rm e}}\varepsilon_{\rm e}(\nabla_{\mathbf{X}} \phi_{1}+\nabla_{\mathbf{x}}\phi_{0})\!\cdot\!\mathbf{n}_{0}\mathrm{d}S_{ \mathbf{X}} =-\int_{D_{\rm i}}\rho\mathrm{d}\mathbf{X}, \tag{93}\] with \(\phi_{1}\)\(\mathbf{1}\)-periodic in \(\mathbf{X}\). As in Section 3.1, we write \(\phi_{1}=\mathbf{\Psi}\!\cdot\!\nabla_{\mathbf{x}}\phi_{0}+\xi\!+\!\overline{ \phi}_{1}\) where \(\overline{\phi}_{1}=\overline{\phi}_{1}(\mathbf{x})\) and \(\mathbf{\Psi}\) is the solution to (42)-(44). The second cell function \(\xi\) satisfies \[\nabla_{\mathbf{X}}\cdot(\varepsilon_{\rm e}\nabla_{\mathbf{X}} \xi) =-\rho\quad\text{in}\quad D_{\rm e}, \tag{94}\] \[\nabla_{\mathbf{X}}\xi =\mathbf{0}\quad\text{in}\quad D_{\rm i},\] (95) \[\left[\xi\right]_{\rm i}^{\rm e} =0, \tag{96}\] with \[\int_{D}\xi\mathrm{d}\mathbf{X}=0. \tag{97}\] Equating coefficients at next order, we have \[\nabla_{\mathbf{X}}\cdot(\varepsilon_{\rm e}(\nabla_{\mathbf{X}} \phi_{2}+\nabla_{\mathbf{x}}\phi_{1}))+\nabla_{\mathbf{x}}\cdot(\varepsilon_{ \rm e}(\nabla_{\mathbf{X}}\phi_{1}+\nabla_{\mathbf{x}}\phi_{0})) =0\quad\text{in}\quad D_{\rm e}, \tag{98}\] \[\nabla_{\mathbf{X}}\phi_{2}+\nabla_{\mathbf{x}}\phi_{1} =\mathbf{0}\quad\text{in}\quad D_{\rm i},\] (99) \[\left[\phi_{2}\right]_{\rm i}^{\rm e} =0,\] (100) \[\int_{\partial D_{\rm e}}\varepsilon_{\rm e}(\nabla_{\mathbf{X}} \phi_{2}+\nabla_{\mathbf{x}}\phi_{1})\!\cdot\!\mathbf{n}_{0}\mathrm{d}S_{ \mathbf{X}}+\int_{\partial D_{\rm e}}\varepsilon_{\rm e}\mathbf{X}\!\cdot\! 
\nabla_{\mathbf{x}}(\nabla_{\mathbf{X}}\phi_{1}\!+\!\nabla_{\mathbf{x}}\phi_{0} )\!\cdot\!\mathbf{n}_{0}\mathrm{d}S_{\mathbf{X}}\] \[+\int_{\partial D_{\rm e}}\varepsilon_{\rm e}\nabla_{\mathbf{X}} \!\cdot\!(\nabla_{\mathbf{X}}\phi_{1}+\nabla_{\mathbf{x}}\phi_{0})\mathbf{X} \!\cdot\!\mathbf{V}\!\cdot\!\mathbf{n}_{0}\mathrm{d}S_{\mathbf{X}} =0. \tag{101}\] Integrating (46) over the exterior region, applying the divergence theorem to the first term and substituting (101), we have \[\int_{\partial D_{\rm e}}\varepsilon_{\rm e}{\bf X}\!\cdot\!\nabla_{ \bf x}(\nabla_{\bf X}\phi_{1} +\nabla_{\bf x}\phi_{0})\!\cdot\!{\bf n}_{0}{\rm d}S_{\bf X}+\int_{ \partial D_{\rm e}}\varepsilon_{\rm e}\nabla_{\bf X}\!\cdot\!\left(\nabla_{\bf X }\phi_{1}+\nabla_{\bf x}\phi_{0}\right)\!{\bf X}\!\cdot\!{\bf V}\!\cdot\!{\bf n }_{0}{\rm d}S_{\bf X}\] \[+\int_{D_{\rm e}}\nabla_{\bf x}\cdot\left(\varepsilon_{\rm e}( \nabla_{\bf X}\phi_{1}+\nabla_{\bf x}\phi_{0})\right)\,{\rm d}{\bf X}=0.\] Applying the transport theorem to the first and final integrals gives \[\nabla_{\bf x}\!\cdot\!\int_{\partial D_{\rm e}}\varepsilon_{\rm e }{\bf X}(\nabla_{\bf X}\phi_{1} +\nabla_{\bf x}\phi_{0})\!\cdot\!{\bf n}_{0}{\rm d}S_{\bf X}-\int_{ \partial D_{\rm e}}\varepsilon_{\rm e}\nabla_{\bf X}\!\cdot\!\left({\bf X}( \nabla_{\bf X}\phi_{1}+\nabla_{\bf x}\phi_{0})\right)\!\cdot\!{\bf V}\!\cdot\! {\bf n}_{0}{\rm d}S_{\bf X}\] \[+\int_{\partial D_{\rm e}}\varepsilon_{\rm e}\nabla_{\bf X}\! \cdot\!\left(\nabla_{\bf X}\phi_{1} +\nabla_{\bf x}\phi_{0}\right)\!{\bf X}\!\cdot\!{\bf V}\!\cdot\!{\bf n}_{0}{ \rm d}S_{\bf X}+\nabla_{\bf x}\cdot\int_{D_{\rm e}}\left(\varepsilon_{\rm e}( \nabla_{\bf X}\phi_{1}+\nabla_{\bf x}\phi_{0})\right)\,{\rm d}{\bf X}\] \[+\int_{\partial D_{\rm e}}\varepsilon_{\rm e}(\nabla_{\bf X}\phi_ {1}+\nabla_{\bf x}\phi_{0})\!\cdot\!{\bf V}\!\cdot\!{\bf n}_{0}{\rm d}S_{\bf X }=0.\] Expanding the divergence in the second integral, we find some of the boundary terms cancel, leaving \[\nabla_{\bf x}\!\cdot\int_{\partial D_{\rm e}}\varepsilon_{\rm e}{\bf X}( \nabla_{\bf X}\phi_{1}+\nabla_{\bf x}\phi_{0})\!\cdot\!{\bf n}_{0}{\rm d}S_{ \bf X}+\nabla_{\bf x}\cdot\int_{D_{\rm e}}\left(\varepsilon_{\rm e}(\nabla_{ \bf X}\phi_{1}+\nabla_{\bf x}\phi_{0})\right)\,{\rm d}{\bf X}=0. \tag{102}\] We substitute (75), using the divergence theorem to take the first integral into the exterior region and use (94)-(96) to obtain \[\nabla_{\bf x}\cdot\left(\varepsilon_{\rm eff}\nabla_{\bf x}\phi_{0}\right)=- \rho_{\rm eff} \tag{103}\] where \[\varepsilon_{\rm effij}=\varepsilon_{\rm e}\left(\delta_{ij}+\int_{\partial D }X_{i}\frac{\partial\Psi_{j}}{\partial X_{k}}{\bf n}_{0k}{\rm d}S_{\bf X} \right). \tag{104}\] The effective charge is given by \[\rho_{\rm eff}=\nabla_{\bf x}\!\cdot\left(\int_{D_{\rm e}}\rho{\bf X}{\rm d}{ \bf X}+\int_{\partial D}\varepsilon_{\rm e}{\bf X}\nabla_{\bf X}\xi\cdot{\bf n }_{0}{\rm d}S_{\bf X}\right). \tag{105}\] Thus, we have recovered (83) in the limit of perfectly dielectric inclusions, confirming the need for the divergence term present in (60). ## 4 Discussion We have outlined how to combine the extension to the standard theory of multiple scales which deals with a slowly varying microstructure with that which deals with integral constraints. Our main result is equation (59), which shows how to write an integral constraint in multiple scales form when the (fast) domain of the integral is a function of the slow scale. 
Essentially the rest of the manuscript is a justification of this equation, showing that it leads to the correct homogenised model for an example in which that model can be identified using a more standard approach. Some problems involving integral constraints, especially those in which different physics holds in the inclusions, do not arise as a limit of a more standard problem, and such an approach is not available. These problems can be handled using equation (59).

## Acknowledgements

A. K. thanks the BBSRC for support under grant BB/M011224/1.
2309.12678
QAL-BP: An Augmented Lagrangian Quantum Approach for Bin Packing
The bin packing is a well-known NP-Hard problem in the domain of artificial intelligence, posing significant challenges in finding efficient solutions. Conversely, recent advancements in quantum technologies have shown promising potential for achieving substantial computational speedup, particularly in certain problem classes, such as combinatorial optimization. In this study, we introduce QAL-BP, a novel Quadratic Unconstrained Binary Optimization (QUBO) formulation designed specifically for bin packing and suitable for quantum computation. QAL-BP utilizes the Augmented Lagrangian method to incorporate the bin packing constraints into the objective function while also facilitating an analytical estimation of heuristic, but empirically robust, penalty multipliers. This approach leads to a more versatile and generalizable model that eliminates the need for empirically calculating instance-dependent Lagrangian coefficients, a requirement commonly encountered in alternative QUBO formulations for similar problems. To assess the effectiveness of our proposed approach, we conduct experiments on a set of bin packing instances using a real Quantum Annealing device. Additionally, we compare the results with those obtained from two different classical solvers, namely simulated annealing and Gurobi. The experimental findings not only confirm the correctness of the proposed formulation but also demonstrate the potential of quantum computation in effectively solving the bin packing problem, particularly as more reliable quantum technology becomes available.
Lorenzo Cellini, Antonio Macaluso, Michele Lombardi
2023-09-22T07:37:20Z
http://arxiv.org/abs/2309.12678v2
# QAL-BP: An Augmented Lagrangian Quantum Approach for Bin Packing Problem ###### Abstract The bin packing is a well-known NP-Hard problem in the domain of artificial intelligence, posing significant challenges in finding efficient solutions. Conversely, recent advancements in quantum technologies have shown promising potential for achieving substantial computational speedup, particularly in certain problem classes, such as combinatorial optimization. In this study, we introduce QAL-BP, a novel Quadratic Unconstrained Binary Optimization (QUBO) formulation designed specifically for bin packing and suitable for quantum computation. QAL-BP utilizes the augmented Lagrangian method to incorporate the bin packing constraints into the objective function while also facilitating an analytical estimation of heuristic, but empirically robust, penalty multipliers. This approach leads to a more versatile and generalizable model that eliminates the need for empirically calculating instance-dependent Lagrangian coefficients, a requirement commonly encountered in alternative QUBO formulations for similar problems. To assess the effectiveness of our proposed approach, we conduct experiments on a set of bin-packing instances using a real Quantum Annealing device. Additionally, we compare the results with those obtained from two different classical solvers, namely simulated annealing and Gurobi. The experimental findings not only confirm the correctness of the proposed formulation but also demonstrate the potential of quantum computation in effectively solving the bin-packing problem, particularly as more reliable quantum technology becomes available. ## Introduction Bin packing is a well-established [1] combinatorial optimization problem with wide-ranging applications in domains such as logistics, resource allocation, and scheduling. Its primary objective is to minimize the number of fixed-capacity bins required to pack a set of items, each with a distinct size. Despite extensive research efforts, bin packing remains a challenging problem due to the exponential growth of solution possibilities as the number of items and bins increases [2]. On the other hand, Quantum Computing has recently emerged as a promising alternative to solving various AI problems, including coalition formation in multi-agent systems [3, 4, 5] and supervised learning [6, 7], though a real practical quantum advantage has yet to be found considering near-term quantum technology. The standard approach in quantum computing involves reformulating the original problem as a Quadratic Unconstrained Binary Optimization (QUBO) problem and employing quantum annealers (QAs) or parametrized quantum circuits, such as QAOA [8], to find the optimal solution. These approaches possess distinctive strengths and weaknesses. QAOA, for example, enables theoretical solutions to any QUBO problem with arbitrary precision by increasing the depth of the associated quantum circuit. Nonetheless, QAs are specifically designed to identify the lower energy state of an Ising Hamiltonian representing the original QUBO problem and are better suited for tackling larger problems in terms of the number of QUBO variables. In case of constraint optimization problems, the main drawback of the reformulation as a QUBO consists of associating a penalty term to the constraints and including them in the objective function. 
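As a small illustration of what this penalty reformulation looks like, and of why the multiplier matters, the sketch below converts a toy equality-constrained binary problem into a QUBO by adding \(P(\sum_i x_i-1)^2\) to the objective and enumerates all bitstrings: with a small \(P\) the unconstrained minimum violates the constraint, while a sufficiently large \(P\) restores feasibility. The toy costs and the two values of \(P\) are arbitrary choices for illustration only.

```python
import itertools
import numpy as np

c = np.array([3.0, -1.0, 2.0, -2.0])      # toy linear costs: minimize c.x subject to sum(x) = 1

def qubo_matrix(P):
    """QUBO for c.x + P*(sum(x) - 1)^2, written as x^T Q x + constant."""
    n = len(c)
    Q = P * np.ones((n, n))                # cross terms from (sum x)^2
    np.fill_diagonal(Q, c + P - 2.0 * P)   # using x_i^2 = x_i, diagonal gets c_i + P - 2P
    return Q, P                            # constant term P * 1^2

def brute_force_min(Q, const):
    best_val, best_x = None, None
    for bits in itertools.product([0, 1], repeat=Q.shape[0]):
        x = np.array(bits)
        val = x @ Q @ x + const
        if best_val is None or val < best_val:
            best_val, best_x = val, x
    return best_x, best_val

for P in (0.5, 10.0):
    x, val = brute_force_min(*qubo_matrix(P))
    print(f"P = {P:>4}:  x* = {x}  objective = {val:.2f}  feasible = {x.sum() == 1}")
```

In realistic instances this feasibility threshold is not known in advance, which is precisely the issue discussed next.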
This approach requires empirical estimation of the penalty terms, which translates to running the QUBO solver (QA or QAOA) multiple times before achieving a feasible solution and poses several limitations especially when considering large problem instances and the imperfections of near-term quantum technology. This paper presents a novel method for solving the bin packing problem using quantum computation. Specifically, we introduce an analytical heuristic approach for estimating penalty multipliers based on the augmented Lagrangian framework [9] which allows obtaining a complete QUBO formulation without requiring empirical, instance-based parametrization. To demonstrate the effectiveness of our proposed method, we conduct experiments using a real quantum annealer and compare the results with two different state-of-the-art classical baselines. ## Problem Formulation The bin packing problem (BPP) is a classic optimization task that involves packing objects of different sizes into containers, or bins, with a limited capacity. The goal is to minimize the number of bins needed to pack all the objects. A mathematical formulation of the bin packing problem can be expressed as follows: given a set of \(n\) items of given integer size (or weight) \(w_{j}\) (\(j=1,\ldots,n\)) the goal is to pack them into the minimum number of identical bins of integer capacity \(C\). Let \(m\) be any upper bound on the solution value and let introduce \(y_{i},x_{ij}\) two sets of binary variables such that: \(y_{i}(i=1,\ldots,m)\) takes the value 1 if and only if bin \(i\) is used in the solution and \(x_{ij}(i=1,\ldots,m;j=1,\ldots,n)\) takes the value 1 if and only if item \(j\) is packed into bin \(i\). A commonly adopted Integer Linear Programming (ILP) problem formulation is the following [10]: \[\operatorname*{arg\,min}_{x,y} \sum_{i=1}^{m}y_{i}\] (1) s.t. \[\sum_{i=1}^{m}x_{ij}=1 \forall j=1,\ldots,n \tag{2}\] \[\sum_{j=1}^{n}w_{j}x_{ij}\leq Cy_{i} \forall i=1,\ldots,m\] (3) \[x_{ij}\in\{0,1\} \forall i=1,\ldots,m,\ \forall j=1,\ldots,n\] \[y_{i}\in\{0,1\} \forall i=1,\ldots,m\] A practical variant of significant interest is the _online_ bin packing problem. In this scenario, items of varying sizes are observed sequentially, and the decision maker must determine whether to select and pack the currently observed item or let it pass. Each decision is made without the ability to recall previous decisions. In contrast, the _offline_ bin packing problem allows for rearranging the items in an attempt to achieve a better packing arrangement when additional items arrive. However, this approach necessitates additional storage to hold the items that need to be rearranged. ## Related Works Classical algorithms for solving the BPP rely on Linear Programming relaxations and dynamic programming [11, 12]. However, as the number of items increases, the problem becomes intractable, and even for medium-sized instances, the optimal solution cannot be computed within a reasonable time frame. For this reason, several approximation algorithms and heuristics approaches can be adopted, such as simulated annealing [13, 14], Tabu search [15], population based algorithms [16], evolutionary and genetic heuristics [17, 18, 19, 20, 21] with hyper-heuristics [22, 23, 24, 25], variable neighborhood search meta-heuristics [26] and ad-hoc crafted heuristics [27, 28, 29]. In addition various alternative reformulations of the BPP have been proposed to improve the computational performance, such as pseudo-polynomial models [30, 31, 32]. 
While these approaches offer more efficient problem formulations and enable the implementation of solutions that do not scale exponentially with the input size, they suffer from the drawback of the number of variables depending on both the number of items and the bin capacity. More recently, the adoption of quantum computing has been explored for solving BPP. Existing quantum solutions involve reformulating the original problem as a QUBO problem and leveraging gate-based quantum computers or quantum annealing. At the time of writing this paper, two end-to-end QUBO models have been proposed for BPP, namely the Pseudo-Polynomial formulation [33] and the Unbalanced Penalization approach [34]. Alternatively, another existing approach [35] addresses the BPP through a hybrid approach, using quantum annealing to solve the sub-problem of filling a single bin, with the chance of reaching a sub-optimal solution. Pseudo-Polynomial QUBO formulationThe Pseudo-Polynomial QUBO formulation for the BPP [33] is defined by three sets of binary variables. These variables are employed to represent the placement of weights in bins, indicate whether bins are empty or not, and specify the filling levels of the bins. The corresponding Hamiltonian of the QUBO formulation consists of two weighted components, necessitating empirical estimation for the weights for each problem instance to avoid unfeasible solutions. The primary objective of the Hamiltonian is to minimize the number of used bins, which constitutes the classical objective function. The constraints Hamiltonian comprises three components. Firstly, it enforces the condition that each bin must be filled up to a unique level while ensuring that unused bins remain unfilled. Secondly, it guarantees that every item is allocated to a bin. Lastly, the third component penalizes configurations in which bins are overfilled, thereby violating the capacity constraint. Additionally, an extra term is introduced to account for considerations regarding only non-empty bins. A significant drawback of this formulation is its limited scalability concerning the number of binary variables. Although the introduction of slack variables enables the formulation to be pseudo-polynomial, it results in the addition of \(nC\) binary variables, rendering the formulation dependent on the specific problem instance due to the influence of bin capacity on the variable count. This scalability issue becomes particularly pertinent in the context of modern Quantum Processing Units (QPUs), which face restrictions in handling relatively small problem instances owing to qubit topology and connectivity constraints [36]. Consequently, even for small problem instances, a pseudo-polynomial Hamiltonian may become intractable when implemented on such quantum platforms. Moreover, achieving a well-balanced weight assignment for each term in the Hamiltonian is of paramount importance to effectively minimize the number of used bins while satisfactorily adhering to the defined constraints. This necessitates running the same problem instance multiple times using various hyperparameter sets in order to optimize the formulation's performance. Unbalanced penalization formulationAn alternative QUBO model for the BPP [34] introduces an inequality constraint \(g(x)=\sum_{i}l_{i}x_{i}-C\leq 0\) whose violation can be penalized using the exponential function \(e^{g(x)}\). 
To ensure a valid QUBO model, the exponential function is expanded up to its second-order Taylor's term, resulting in the approximation \(e^{g(x)}\approx 1+g(x)+\frac{1}{2}g(x)^{2}\). Despite its improved efficiency compared to the pseudo-polynomial QUBO formulation, this model has several limitations. Firstly, it necessitates the estimation of lambda parameters specific to each problem instance. This implies running the quantum algorithm multiple times to obtain feasible solutions for a single problem instance. Secondly, the model's performance is evaluated on a limited set of problem instances, raising concerns about its generalizability to other instances. The model's scalability across instances with varying numbers of items is not demonstrated; the evaluation is restricted to randomly generated instances with the same number of items. Furthermore, the experimental testing of the solution relies on QAOA, which poses restrictions on the number of problem variables due to the challenges associated with simulating even small-scale quantum systems. ### Contribution In this work, we presented QAL-BP (_Quantum Augmented Lagrangian method for Bin Packing_), a novel QUBO formulation for the BPP based on the Augmented Lagrangian method. QAL-BP is an end-to-end QUBO for the BPP that enables efficient scaling of logical qubits and the analytical estimation of the Lagrangian penalty terms. Specifically, we establish a connection between QUBO models and augmented Lagrangian methods, leveraging advancements in both fields and fostering potential future synergies. The proposed formulation offers several advantages. Firstly, it exhibits independence of the number of variables from the bin capacity. This eliminates the need for introducing slack variables, which typically increase the number of logical qubits and render quantum solutions infeasible for execution on real quantum hardware. Secondly, by employing an augmented Lagrangian formulation, we can analytically determine the Lagrangian penalty terms for a specific class of instances without the requirement of running the quantum solution multiple times, as is often necessary for alternative quantum approaches. Through experiments conducted on a real quantum annealing device, we demonstrate the effectiveness of our proposed approach. Thirdly, we proceed to compare the performance of QAL-BP with state-of-the-art classical approaches. The results demonstrate that QAL-BP consistently yields feasible solutions, and in most cases, it leads to the global minimum. To the best of our knowledge, this marks the first instance of an end-to-end analytical quantum solution for the BPP that has been rigorously tested across a diverse set of problem instances, displaying superior performance in comparison to existing quantum solutions. Furthermore, our results indicate promising potential concerning state-of-the-art classical solvers, particularly when more reliable quantum devices become available. ### Methods #### QAL-BP: Quantum Augmented Lagrangian method for Bin Packing Augmented Lagrangian methods are a class of algorithms used to solve constrained optimization problems by incorporating the constraints into the objective function through penalty terms. 
Consider a constrained minimization problem of the form: \[\min\,f(\mathbf{x}) \tag{4}\] \[\text{s.t.}\,\,c_{i}(\mathbf{x})=\mathbf{b}\,\,\forall i\in\mathcal{D}\] where \(\mathbf{x}\) is a candidate solution, \(c_{i}(\mathbf{x})=\mathbf{b}\) is a set of equality constraints and \(\mathcal{D}\) is the set of constraint indices. The augmented Lagrangian method consists of defining an unconstrained problem of the form: \[\min\,\Phi(\mathbf{x})=f(\mathbf{x})+\sum_{i\in\mathcal{D}}\rho_{i}(c_{i}(\mathbf{x})-\mathbf{b})^{2}+\sum_{i\in\mathcal{D}}\lambda_{i}(c_{i}(\mathbf{x})-\mathbf{b}) \tag{5}\] where \(\lambda_{i}\) are the Lagrange multipliers and \(\rho_{i}\) the penalty coefficients, for \(i=1,\ldots,|\mathcal{D}|\). In practice, when using the augmented Lagrangian approach, it is common to introduce additional constraints that do not alter the set of feasible solutions but aid in faster convergence of the solver. For the BPP, we introduce the following additional constraints: \[\sum_{j=1}^{n}y_{i}x_{ij}=\sum_{j=1}^{n}x_{ij}\quad\forall i=1,\ldots,m \tag{6}\] This constraint implies that if a bin \(i\) is not used (\(y_{i}=0\)), it cannot contain any items (\(\sum_{j=1}^{n}x_{ij}=0\)). Therefore, the Quantum Augmented Lagrangian method for Bin Packing (QAL-BP) embeds the constraints using the augmented Lagrangian method as follows: \[\operatorname{argmin}_{x,y} \delta\sum_{i=1}^{m}y_{i} \tag{7}\] \[+\sum_{i=1}^{m}\lambda_{i}\left(\sum_{j=1}^{n}w_{j}x_{ij}-c_{i}y_{i}\right)\] (8) \[+\sum_{i=1}^{m}\rho_{i}\left(\sum_{j=1}^{n}w_{j}x_{ij}-c_{i}y_{i}\right)^{2}\] (9) \[+\theta\sum_{j=1}^{n}\left(\sum_{i=1}^{m}x_{ij}-1\right)^{2}\] (10) \[+\gamma\sum_{i=1}^{m}\left(1-y_{i}\right)\sum_{j=1}^{n}x_{ij}\] (11) \[\text{s.t.} x_{ij}\in\{0,1\}\qquad\forall i=1,\ldots,m,\ \forall j=1,\ldots,n\] \[y_{i}\in\{0,1\}\qquad\forall i=1,\ldots,m\] \[\delta,\lambda,\rho,\theta,\gamma\geq 0\] The penalties (8) and (9) represent the augmented Lagrangian expansion of (3). These terms impose a penalty of \(\lambda_{i}s_{i}+\rho_{i}s_{i}^{2}\), where \(s_{i}=\sum_{j=1}^{n}w_{j}x_{ij}-c_{i}y_{i}\) is the (signed) capacity violation of bin \(i\), for infeasible configurations, while providing a negative reward to the solver for feasible configurations. Accurate estimation of the \(\lambda_{i}\) and \(\rho_{i}\) values is crucial for correctly modeling the solution space. Similarly, in Eq. (10), \(\theta\) represents a penalty for not placing an item \(j\), and contributes a penalty of \(\theta(k-1)^{2}\) when item \(j\) is placed \(k\) times. Notably, this penalty term is not an augmented Lagrangian expansion of (2), but rather a pure squared penalty. This is because we do not wish to reward the solver when an item is not placed at all. Finally, the term (11) represents the penalty associated with the redundant constraints (6): it imposes a penalty of \(\gamma\,|\mathcal{J}|\) when a set of items \(\mathcal{J}\) is assigned to bin \(i\) without setting the corresponding \(y_{i}\) to 1. It is important to note that the standard augmented Lagrangian approach typically transforms inequality constraints into equality constraints using slack variables, which are then incorporated into the Lagrangian as shown in (5). However, the proposed QUBO formulation in this study does not involve slack variables but directly utilizes the capacity constants \(c_{i}\). This aspect provides a significant advantage over the pseudo-polynomial approach [33].
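To make the expansion concrete, the objective (7)–(11) can be assembled directly as a QUBO dictionary over the binary variables \(y_{i}\) and \(x_{ij}\). The sketch below is illustrative rather than the authors' implementation; it assumes a single capacity \(C\) shared by all bins (so \(c_{i}=C\)) and scalar multipliers, as in the experiments reported later, and it uses the identity \(b^{2}=b\) for binary variables when expanding the squares.

```python
from collections import defaultdict

def qal_bp_qubo(weights, capacity, delta, lam, rho, theta, gamma):
    """Assemble the QAL-BP objective (Eqs. 7-11) as a QUBO dict {(u, v): coeff}."""
    n = len(weights)
    m = n  # worst-case number of bins
    Q = defaultdict(float)

    def add(u, v, coeff):
        Q[tuple(sorted((u, v)))] += coeff

    for i in range(m):
        y = ("y", i)
        add(y, y, delta)                                  # delta * y_i          (Eq. 7)
        add(y, y, -lam * capacity)                        # lambda * (-C y_i)    (Eq. 8)
        add(y, y, rho * capacity ** 2)                    # rho * C^2 y_i        (Eq. 9)
        for j in range(n):
            x = ("x", i, j)
            add(x, x, lam * weights[j])                   # lambda * w_j x_ij    (Eq. 8)
            add(x, x, rho * weights[j] ** 2)              # rho * w_j^2 x_ij     (Eq. 9)
            add(x, y, -2 * rho * capacity * weights[j])   # -2 rho C w_j x_ij y_i (Eq. 9)
            add(x, x, gamma)                              # gamma * x_ij         (Eq. 11)
            add(x, y, -gamma)                             # -gamma * y_i x_ij    (Eq. 11)
            for k in range(j + 1, n):
                add(x, ("x", i, k), 2 * rho * weights[j] * weights[k])  # (Eq. 9)
    for j in range(n):                                    # theta*(sum_i x_ij - 1)^2 (Eq. 10)
        for i in range(m):
            add(("x", i, j), ("x", i, j), -theta)         # x^2 - 2x -> -x for binaries
            for k in range(i + 1, m):
                add(("x", i, j), ("x", k, j), 2 * theta)
    return dict(Q)
```

The constant offsets dropped in the expansion (e.g., the \(+\theta\) per item from Eq. (10)) only shift the energy and do not change which configuration minimizes the QUBO.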
#### Penalties estimation When incorporating constraints into the objective function, the estimation of penalty multipliers is typically carried out by testing a large set of parameters, which requires running the algorithm multiple times with different parameter values to find a feasible solution for the specific problem instance and dramatically increases the time needed to find good solutions. Given the QAL-BP formulation and its corresponding set of constraints, we propose an analytical estimation of the penalty multipliers. The conditions are designed based on approximate worst-case reasoning, aiming to yield optimal or slightly sub-optimal solutions for most instances of the BPP. The following is a set of heuristic conditions that consider each penalty multiplier individually, with the exception of the pair \(\lambda,\rho\). Considering the \(i\)-th bin, the corresponding augmented Lagrangian term is given by \(\lambda_{i}\big(\sum_{j}w_{j}x_{ij}-c_{i}y_{i}\big)+\rho_{i}\big(\sum_{j}w_{j}x_{ij}-c_{i}y_{i}\big)^{2}\). When \(y_{i}=0\), i.e., the bin is not marked as used, filling it by even the smallest possible amount should be at least as expensive as opening the bin: \[\lambda_{i}(w_{min}-0)+\rho_{i}(w_{min}-0)^{2}\geq 1, \tag{12}\] where \(w_{min}\) is the smallest item weight, i.e. \(w_{min}=\min\{w_{j}\}\). If this condition is satisfied, using more capacity makes the solver choose to set \(y_{i}=1\). On the contrary, if \(y_{i}=0\) and no capacity is used, the Lagrangian term vanishes and no penalty is incurred. Let us now consider the case where \(y_{i}=1\), meaning that bin \(i\) is included in the candidate solution. In this case, exceeding the capacity by any amount should be at least as expensive as using one more bin, i.e.: \[\lambda_{i}(c_{i}+w_{min}-c_{i})+\rho_{i}(c_{i}+w_{min}-c_{i})^{2}\geq 1 \tag{13}\] which conveniently is the same condition as Eq. (12). Lastly, it is necessary to identify a solution space that contains only feasible solutions. In this case, the Lagrangian term needs to provide a positive reward (i.e., negative cost) if the constraint is satisfied. It is also necessary that such a reward be small enough that it does not provide an incentive for using another bin, i.e.: \[\lambda_{i}(-\frac{c_{i}}{2})+\rho_{i}(-\frac{c_{i}}{2})^{2}\geq 0 \tag{14}\] In essence, here we are fitting a quadratic function using the conditions (12) and (14) to approximate the solution space while excluding unfeasible solutions. We can therefore obtain values for \(\lambda_{i}\) and \(\rho_{i}\) by imposing both conditions at their least restrictive values, i.e., at equality: \[w_{min}\lambda_{i}+w_{min}^{2}\rho_{i} = 1; \tag{15}\] \[-\frac{c_{i}}{2}\lambda_{i}+\frac{c_{i}^{2}}{4}\rho_{i} = 0, \tag{16}\] which leads to an analytical formulation of the form: \[\lambda_{i} = \frac{c_{i}}{w_{min}\left(2w_{min}+c_{i}\right)} \tag{17}\] \[\rho_{i} = \lambda_{i}\frac{2}{c_{i}}=\frac{2}{w_{min}\left(2w_{min}+c_{i}\right)} \tag{18}\] The next step is to calibrate \(\theta_{j}\). The abstract Lagrangian term associated with item \(j\) is \(\theta_{j}(p_{j}-1)^{2}\), where \(p_{j}\in\mathbb{N}\) is the number of times item \(j\) has been assigned to a bin. Since at this stage only the assignment is considered and not the capacities, it is possible to drop the index \(j\) when defining the \(\theta\) parameter. Moreover, we want to force the solver to assign every item to exactly one bin, so the penalty should increase when \(p_{j}\neq 1\). Thus, in case \(p_{j}=1\), the following condition holds: \[\theta(1-1)^{2}\geq-1.
\tag{19}\] In case \(p_{j}=0\), i.e., item \(j\) is not assigned to any bin, then the associated penalty needs to be greater than the cost of opening a new bin, i.e., \[\theta\geq 2. \tag{20}\] Also, the parameter \(\gamma\) needs to be set. The abstract Lagrangian term associated with the \(\gamma\) term is \(\gamma(1-y_{i})k_{i}\), which comes into play only when \(y_{i}=0\) and \(k_{i}\neq 0\) by adding \(k_{i}\) times the penalty \(\gamma\). We want the minimum penalty to be at least equal to the cost of opening a new bin, i.e., \[\gamma\geq 1. \tag{21}\] The final parameter to be estimated is \(\delta\). Although it is not strictly a penalty term since it serves as a multiplier of the objective function, including it is beneficial for controlling the other parameters. One purpose of \(\delta\) is to prevent issues that may arise when working with extremely small numbers. Another reason for incorporating this multiplier is to address the undesirable behavior of the model in certain item configurations. In some cases, the model may favor configurations where one or more bins are slightly overfilled due to the high cost associated with opening a new bin. Naturally, this behavior is contingent on the specific combination of instance weights relative to the bin capacity, as well as their number. To rectify this behavior, the following requirements must be met: \[\delta\leq\lambda s_{min}+\rho s_{min}^{2}, \tag{22}\] where \(s_{min}\) is the minimum capacity that can be exceeded. Thus, the cost of opening a new bin must be less than the cost of overfilling an already open bin, of the smallest possible amount, i.e., \(s_{min}\geq 1\). ### Model analysis When solving QUBO problems with quantum computing, the number of binary variables of the problem formulation corresponds to the number of logical qubits to use in the quantum computer. Therefore, having an efficient formulation that minimizes the number of variables without limiting the range of possible solutions is crucial for the adoption of quantum approaches. In terms of variable count, the QAL-BP model is more efficient than the pseudo-polynomial one[33], and equivalent, with respect to the _Unbalanced penalization formulation[34]_. Specifically, for a given problem instance \(BPP(n,C)\), the total number of variables is equal to the number of bins \(m\) plus \(n\times m\) decision variables representing the assignment of a specific item to a specific bin. Thus, in the worst case, \(m=n\), resulting in \(n(n+1)\) binary variables. Furthermore, from a methodological point of view, the QAL-BP approach offers a twofold advantage. Firstly, the number of binary variables is not affected by bin capacities and item weights, as observed in the pseudo-polynomial formulation[37]. Secondly, the reduced number of variables enables the execution of the QUBO problem using a smaller set of logical qubits, rendering it suitable for current QPUs. Fig. 1 shows a comparison of the performance, in terms of the number of variables, between the QAL-BP and the pseudo-polynomial formulation. Figure 1: Comparative analysis of variable growth in the Pseudo-Polynomial and Augmented Lagrangian models concerning the number of items and bin capacity. Three distinct values of bin capacity (C) are explored. The continuous dark red line represents the upper limit for QUBO problems represented by fully connected graph that can be mapped in the D-Wave Advantage Quantum Processing Unit (QPU) equipped with 5640 qubits. 
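The closed-form rules (17)–(22) above are straightforward to evaluate in code. The following is a small sketch, assuming a common capacity \(C\) for all bins and \(s_{min}=1\):

```python
def qal_bp_multipliers(weights, capacity, s_min=1):
    """Penalty multipliers from Eqs. (17)-(22) for a common bin capacity."""
    w_min = min(weights)
    lam = capacity / (w_min * (2 * w_min + capacity))   # Eq. (17)
    rho = 2.0 / (w_min * (2 * w_min + capacity))        # Eq. (18)
    theta = 2.0                                          # Eq. (20)
    gamma = 1.0                                          # Eq. (21)
    delta_max = lam * s_min + rho * s_min ** 2           # upper bound from Eq. (22)
    return lam, rho, theta, gamma, delta_max

# For the instances used in the experiments (w_min = 4, C = 10) this gives
# lam ~= 0.1389, rho ~= 0.0278 and delta <= 0.1667, consistent with the
# values delta = 0.15, lam = 0.1389, rho = 0.0278 reported below.
print(qal_bp_multipliers([4, 8, 6], 10))
```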
## Evaluation ### Experimental Settings Data.The experiments are performed on a set of eight classes of randomly generated instances, ranging from 3 to 10 items, with corresponding weights ranging from 4 to 10 and a fixed bin capacity equal to 10. This experimental choice allows finding the best compromise between exploring problem instances of different sizes and taking into account the limitations of modern QPUs, which are restricted to non-sparse QUBO problems with up to 180 binary variables [38]. In particular, five different problem instances are generated for each fixed number of items by varying the weights, and each is solved using the different approaches. The generated instances are shown in Table 1. Solving methods.The quantum solver employed in this study is the _D-Wave Advantage 4.1_, featuring a total of 5640 physical qubits. For the purpose of comparison, two classical solvers are also utilized. The first is _simulated annealing_, which is considered the classical counterpart of quantum annealing from a methodological perspective. Both quantum annealing and simulated annealing are available in the D-Wave Python library, enabling a thorough assessment of the correctness of the QAL-BP formulation without encountering any errors inherent to real quantum hardware. In addition, we also solve the ILP formulation from Eq. (1)-(3) via the Gurobi optimizer [40], as a representative of state-of-the-art solutions that rely on the branch-and-bound technique to efficiently find the optimal solution. As previously mentioned, the rationale behind choosing quantum annealing over alternative quantum approaches, such as QAOA, is to enable a direct performance comparison between the most potent and dependable current quantum technology and the state-of-the-art classical optimizer. This comparison aims to demonstrate the capability of current quantum computation in solving the BPP relative to the best available classical solution. It is important to recognize that quantum hardware is still in its nascent stage, making such a comparative analysis critical in assessing the advancements and potential of quantum computing in tackling combinatorial optimization problems like the BPP. Metrics.In order to assess the performance of QAL-BP, we employ three distinct metrics: _Time-To-Solution_ (TTS), _solution quality_, and _probability_. TTS represents the time required by the solver to produce the final solution, measured in microseconds (\(\mu\)s). For quantum annealing (QA), the _qpu_sampling_time_1 is considered, which is independent of possible connection delays with the cloud and possible recalibrations. Differently from QA, simulated annealing (SA) and Gurobi run locally. Thus the TTS of SA is the time to run the annealing function, while for Gurobi, TTS is calculated as the time required to obtain the solution for a given problem instance. Footnote 1: docs.dwavesys.com/docs/latest/c_qpu_timing.html The _solution quality_ is defined as the number of bins utilized in a given solution: the lower the number, the better the solution. The _probability_ measures how likely the minimum-energy solution is to be a feasible one. In particular, this probability is computed, for each class of instances, as the ratio between the number of instances for which the minimum represents a feasible solution and the total number of instances in that class.
This metric is only meaningful for simulated and quantum annealing and allows assessing the performance of current quantum technology in returning the solution for the QAL-BP formulation. Models parametersAccording to the analytical penalty estimation previously described, the multipliers are determined as follows: \(\delta=0.15;\ \lambda=0.1389;\ \rho=0.0278;\ \theta=2;\ \gamma=1\). ### Results Figure 2 illustrates the experimental results concerning TTS. The performance of Simulated Annealing (SA) deteriorates rapidly with the number of items. Comparing QA with Gurobi, the latter consistently outperforms the former. However, it is important to note that the asymptotic behavior of these two methods appears to be equivalent. Due to the limitations in running larger problem instances, a conclusive assessment of performance \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Seed & N. items & Weights & Lower bound & Instance name \\ \hline \hline & 3 & [4, 8, 6] & 2 & (3, 23) \\ & 4 & [8, 5, 4, 8] & 3 & (4, 23) \\ & 5 & [4, 4, 8, 8, 9] & 3 & (5, 23) \\ 23 & 6 & [7, 5, 5, 5, 4, 9] & 4 & (6, 23) \\ & 7 & [9, 7, 8, 6, 9, 6, 7] & 5 & (7, 23) \\ & 8 & [4, 5, 7, 5, 6, 4, 6, 4] & 4 & (8, 23) \\ & 9 & [7, 6, 8, 4, 8, 4, 9, 6, 4] & 6 & (9, 23) \\ & 10 & [5, 8, 6, 7, 10, 9, 4, 10, 7, 4] & 7 & (10, 23) \\ \hline & 3 & [4, 8, 6] & 2 & (3, 42) \\ & 4 & [7, 7, 10, 4] & 3 & (4, 42) \\ & 5 & [8, 5, 4, 7, 10] & 4 & (5, 42) \\ & 6 & [9, 9, 9, 9, 7, 4] & 5 & (6, 42) \\ 42 & 7 & [9, 7, 7, 6, 5, 10, 9] & 5 & (7,42) \\ & 8 & [8, 6, 9, 7, 7, 7, 5, 4] & 5 & (8, 42) \\ & 9 & [7, 10, 4, 10, 9, 5, 8, 5, 9] & 7 & (9, 42) \\ & 10 & [8, 6, 4, 10, 7, 10, 8, 9, 9, 5] & 7 & (10, 42) \\ \hline & 3 & [4, 8, 8] & 2 & (3, 123) \\ & 4 & [4, 10, 5, 5] & 3 & (4, 123) \\ & 5 & [5, 6, 5, 6, 9] & 3 & (5, 123) \\ 123 & 6 & [7, 10, 7, 5, 9, 9] & 5 & (6, 123) \\ & 7 & [10, 10, 4, 7, 5, 5, 5] & 5 & (7, 123) \\ & 8 & [9, 9, 5, 6, 9, 5, 8, 7] & 6 & (8, 123) \\ & 9 & [10, 9, 5, 9, 9, 5, 7, 9, 5] & 7 & (9, 123) \\ & 10 & [5, 5, 4, 7, 4, 8, 6, 5, 6, 4] & 5 & (10, 123) \\ \hline & 3 & [8, 6, 4] & 2 & (3, 90) \\ & 4 & [8, 5, 7, 6] & 3 & (4, 90) \\ & 5 & [6, 7, 8, 7, 4] & 3 & (5, 90) \\ 90 & 6 & [7, 8, 9, 9, 10, 6] & 5 & (6, 90) \\ & 7 & [6, 4, 4, 4, 8, 9, 6] & 4 & (7, 90) \\ & 8 & [7, 10, 8, 8, 8, 5, 5, 8] & 6 & (8, 90) \\ & 9 & [9, 6, 4, 10, 10, 5, 4, 4, 6] & 6 & (9, 90) \\ & 10 & [9, 6, 8, 7, 8, 10, 9, 6, 9, 10] & 8 & (10, 90) \\ \hline & 3 & [5, 8, 6] & 2 & (3, 510) \\ & 4 & [7, 9, 5, 5] & 3 & (4, 510) \\ & 5 & [6, 10, 4, 9, 4] & 3 & (5, 510) \\ & 6 & [5, 5, 9, 10, 8, 6] & 4 & (6, 510) \\ & 7 & [9, 7, 9, 4, 10, 10, 8] & 6 & (7, 510) \\ & 8 & [9, 10, 8, 9, 4, 4, 9, 5] & 6 & (8, 510) \\ & 9 & [5, 9, 10, 9, 7, 8, 4, 10, 6] & 7 & (9, 510) \\ & 10 & [10, 5, 9, 5, 8, 9, 7, 4, 6, 9] & 7 & (10, 510) \\ \hline \end{tabular} \end{table} Table 1: The selected set of bin packing instances serves as a testbed to evaluate all models under consideration. Each instance in the set is characterized by specific parameters, which are organized into columns for easy reference and analysis. The first column denotes the seed of the random generator utilized in generating the instances. The second column indicates the number of items that need to be placed within the bins. The third column comprises an array representing the weight of each individual item. The fourth column provides the \(L_{1}\) lower bound for the given instance[39]. Lastly, the last column assigns a unique label to facilitate identification and reference to each specific instance. cannot be made. 
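For reference, the pipeline that produces the solutions scored below can be sketched end to end: the QAL-BP QUBO is sampled (here with D-Wave's simulated-annealing sampler; on hardware, an `EmbeddingComposite(DWaveSampler())` wrapper is used instead), and the lowest-energy sample is decoded and checked for feasibility, which underlies the _solution quality_ and _probability_ metrics. This is an illustrative sketch rather than the exact experimental code.

```python
import dimod
import neal  # D-Wave's simulated-annealing sampler (dwave-neal)

def solve_and_score(weights, capacity, Q):
    """Sample a QAL-BP QUBO and check the best sample for feasibility."""
    bqm = dimod.BinaryQuadraticModel.from_qubo(Q)
    # On the QPU one would instead use:
    #   from dwave.system import DWaveSampler, EmbeddingComposite
    #   sampler = EmbeddingComposite(DWaveSampler())
    sampler = neal.SimulatedAnnealingSampler()
    best = sampler.sample(bqm, num_reads=1000).first.sample

    n = len(weights)
    bins = {i: [j for j in range(n) if best.get(("x", i, j), 0) == 1]
            for i in range(n)}
    used = [i for i in range(n) if best.get(("y", i), 0) == 1]

    each_item_once = all(sum(j in items for items in bins.values()) == 1
                         for j in range(n))
    no_overfill = all(sum(weights[j] for j in bins[i]) <= capacity for i in used)
    consistent = all(not bins[i] for i in range(n) if i not in used)
    feasible = each_item_once and no_overfill and consistent
    return len(used), feasible  # solution quality and feasibility flag
```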
Figure 2: TTS comparison between Quantum Annealing (QA), Simulated Annealing (SA) and Gurobi. Nevertheless, quantum annealing's ability to scale with a runtime linear in the input size has been observed in other contexts, such as coalition formation in multi-agent systems [3, 4]. On the other hand, Gurobi, relying on an exact branch-and-bound approach, is guaranteed to have exponential worst-case complexity, since the bin packing problem is NP-hard. Considering these aspects, the results from QA suggest the potential for outperforming the state-of-the-art classical solver Gurobi once more reliable quantum technology becomes available. This highlights the promise of quantum annealing in achieving superior performance in solving combinatorial optimization problems compared to classical approaches when leveraging advancements in quantum hardware. In terms of _solution quality_, QA demonstrates proficiency in finding the correct solution only for small problem instances, while encountering challenges as the problem size increases. Figures 3 and 4 respectively present the number of used bins in a given solution (referred to as _solution quality_) and the _probability_ of that solution being feasible. To gain a comprehensive understanding of QA's performance, these two plots must be jointly analyzed. In Fig. 3, Gurobi solutions serve as a benchmark, representing the correct minimum number of bins. The results obtained by SA confirm the correctness of the QAL-BP formulation in identifying only feasible solutions for any problem instance and, in most cases, the best solution of QAL-BP corresponds to the global optimum of the optimization problem. The only exceptions are observed in instances \((8,23)\), \((10,123)\), and \((9,510)\), where the solutions returned by SA, although feasible, are sub-optimal. With regard to the results obtained by QA, it is observed that for relatively larger instances (instances with more than 7 items), QA tends to select a higher number of bins compared to Gurobi and SA. Specifically, in the instances \((10,23)\), \((8,123)\), \((10,23)\), and \((9,90)\), QA utilizes a higher number of bins compared to SA and Gurobi, leading to sub-optimal solutions. Additionally, in \((7,23)\), \((9,23)\), and \((10,42)\), the solutions returned by QA are not feasible, as the number of bins is lower than the number of bins selected by Gurobi. This implies that in these cases the bins are overfilled beyond their capacity, resulting in infeasible solutions. Again, this limitation is attributed to the current constraints of quantum hardware when dealing with large problem instances. A similar situation occurs in the case of \((9,510)\), where QA achieves a different solution compared to SA, but this solution is inconsistent with the QAL-BP formulation. To further validate the impediments of current QA hardware in correctly finding the solutions for the QAL-BP formulation, Fig. 4 presents the probability of obtaining a feasible solution as the average over the different problem instances of the same size. For instances with more than 7 items, the probability of obtaining a feasible solution significantly decreases for QA, while it remains consistently at 1 for SA. In summary, the experimental results demonstrate that the QAL-BP formulation allows for the accurate analytical estimation of penalty parameters, leading to the global optimum in almost all cases. However, as the problem size increases, QA's performance diminishes due to limitations in current quantum hardware.
To fully leverage the potential of QA in effectively tackling larger BPP instances, further advancements in quantum technology and solver optimization are imperative. Figure 4: Probability of being a feasible solution for the minimum obtained using annealing-based approaches (simulated and quantum). Figure 3: Comparison of the number of bins of the best solution found by Gurobi, Simulated and Quantum annealing solvers. ## Conclusion This paper introduces QAL-BP, a novel quantum formulation based on the augmented Lagrangian method for efficiently solving the Bin Packing Problem (BPP) using quantum annealing. QAL-BP offers an analytical estimation of penalty terms in the model for a specific class of problem instances, eliminating the need for recursive approximation methods to empirically estimate Lagrangian multipliers. This enhancement amplifies the generalizability of our approach to diverse input instances and improves efficiency by reducing the number of QUBO variables compared to alternative quantum formulations. We demonstrated the effectiveness of our approach by solving larger problem instances than any previous QUBO formulation for the BPP. Additionally, we present the first experimental comparison of classical and quantum solutions for the BPP, validating that QAL-BP is an analytically correct QUBO formulation obviating the need for empirical estimation of penalty terms. Nevertheless, while our implementation on a quantum annealer does not outperform the state-of-the-art classical solver Gurobi, its TTS exhibits efficient scaling as the problem size increases, considering the current limitations of available quantum technology. However, several limitations and challenges remain. Firstly, the generalizability of our model to generic BPP instances or other combinatorial optimization problems requires further investigation. Secondly, the limited number of qubits on current quantum annealers poses a significant challenge, restricting the size of problem instances that can be effectively solved. Consequently, testing our model on larger instances and evaluating scalability across a wider range of inputs is currently unattainable. Furthermore, noise and errors in the quantum annealer significantly impact the quality of provided solutions, particularly evident when dealing with larger problems, as demonstrated in experimental results compared to simulated annealing. To address these challenges, future research will explore advanced quantum hardware with improved qubit accuracy and a greater qubit count. Another promising avenue involves investigating hybrid quantum annealing approaches that leverage classical and quantum methods in tandem, facilitating the solution of larger problem sizes beyond the capabilities of current QPUs. These endeavors are critical to further harnessing the potential of quantum computing in combinatorial optimization problems and propelling the field forward.
2309.15117
Generating Visual Scenes from Touch
An emerging line of work has sought to generate plausible imagery from touch. Existing approaches, however, tackle only narrow aspects of the visuo-tactile synthesis problem, and lag significantly behind the quality of cross-modal synthesis methods in other domains. We draw on recent advances in latent diffusion to create a model for synthesizing images from tactile signals (and vice versa) and apply it to a number of visuo-tactile synthesis tasks. Using this model, we significantly outperform prior work on the tactile-driven stylization problem, i.e., manipulating an image to match a touch signal, and we are the first to successfully generate images from touch without additional sources of information about the scene. We also successfully use our model to address two novel synthesis problems: generating images that do not contain the touch sensor or the hand holding it, and estimating an image's shading from its reflectance and touch.
Fengyu Yang, Jiacheng Zhang, Andrew Owens
2023-09-26T17:59:52Z
http://arxiv.org/abs/2309.15117v1
# Generating Visual Scenes from Touch ###### Abstract An emerging line of work has sought to generate plausible imagery from touch. Existing approaches, however, tackle only narrow aspects of the visuo-tactile synthesis problem, and lag significantly behind the quality of cross-modal synthesis methods in other domains. We draw on recent advances in latent diffusion to create a model for synthesizing images from tactile signals (and vice versa) and apply it to a number of visuo-tactile synthesis tasks. Using this model, we significantly outperform prior work on the tactile-driven stylization problem, i.e., manipulating an image to match a touch signal, and we are the first to successfully generate images from touch without additional sources of information about the scene. We also successfully use our model to address two novel synthesis problems: generating images that do not contain the touch sensor or the hand holding it, and estimating an image's shading from its reflectance and touch. Project Page: [https://fredfyyang.github.io/vision-from-touch/](https://fredfyyang.github.io/vision-from-touch/) ## 1 Introduction Humans rely crucially on cross-modal associations between sight and touch to physically interact with the world [58]. For example, our sense of sight tells us how the ground in front of us will feel when we place our feet on it, while our sense of touch conveys the likely visual appearance of an unseen object from a brief contact. Translating between these modalities requires an understanding of physical and material properties. Models trained to solve this problem must learn, for instance, to associate rapid changes in shading with rough microgeometry, and smooth textures with soft surfaces. Touch is arguably the most important sensory modality for humans [48, 43, 40], due to its role in basic survival [40, 9, 23] and physical interaction. Yet touch sensing has received comparably little attention in multimodal learning. An emerging line of work has addressed the problem of translating touch to sight, such as by learning joint embeddings [64, 39], manipulating visual styles to match a tactile signal [64], or adding a plausible imagery of a robotic arm to an existing photo of a scene [38]. While these tasks each capture important parts of the cross-modal prediction problem, each currently requires a separate, special-purpose method. Existing methods also lag significantly behind those of other areas of multimodal perception, which provide general-purpose methods for cross-modal synthe sis, and can translate between modalities without the aid of extra conditional information. In this paper, we generate plausible images of natural scenes from touch (and vice versa), drawing on recent advances in diffusion models [51, 12, 21, 22, 45]. We adapt latent diffusion models to a variety of visuo-tactile synthesis problems. Our proposed framework obtains strong results on several novel synthesis problems, and unifies many previously studied visuo-tactile synthesis tasks. First, we study the problem of generating images from touch (and vice versa). We address the task of generating images from touch without any image-based conditioning, where we are the first method to successfully generate images for natural scenes (Fig. 1a). We also address the task of adding an arm to a photo of an existing scene, where we significantly outperform prior work [38]. 
Second, we address the recently proposed _tactile-driven image stylization_ task,, the problem of manipulating an image to match a given touch signal [64] (Fig. 1b), using an approach based on guided image synthesis [44]. Our approach obtains results that are higher fidelity and that match the tactile signal significantly more closely than those of prior work. It also provides the ability to control the amount of image content preserved from the input image. Finally, we show that we can augment our model with additional conditional information. Taking inspiration from the classic problem of intrinsic image decomposition [41, 3], we perform _tactile-driven shading estimation_, predicting an image after conditioning on reflectance and touch (Fig. 1c). Since changes in tactile microgeometry often manifest as changes in shading (, the information missing from reflectance), this tests the model's ability to link the two signals. We also use segmentation masks to create "hand-less" images that contain the object being pressed but not the tactile sensor or arm that pressed it. We demonstrate our framework's effectiveness using natural scenes from the _Touch and Go_ dataset [64], a collection of egocentric videos that capture a wide variety of materials and objects using GelSight [28], and using robot-collected data from _VisGel_[38]. ## 2 Related Work Cross-modal synthesis with diffusion models.Diffusion models have recently become a favored generative model family due to their ability to produce high-quality samples. However, one major concern for diffusion models is their slow inference speed due to the iterative generation process on high dimensional data. Recently, latent diffusion [51] addressed this drawback by working on a compressed latent space of lower dimensionality, which allows diffusion models to work on more extensive tasks with accelerating the speed. These models have demonstrated remarkable success in tasks such as image synthesis [12, 21, 22, 45], super-resolution [54], and image editing [57, 44, 8]. Additionally, the advancements in multimodal learning [25, 27, 16] have enabled diffusion models to be utilized for cross-modal synthesis tasks. For vision-language generation, diffusion models have been studied for text-to-image synthesis [1, 29, 46, 50, 53], text-to-speech generation [7, 31, 35], text-to-3D generation [42, 55]. In addition, diffusion models also show promising results in audio synthesis including text-to-audio generation [56], waveform generation [32, 18, 6]. In this work, we are the first to employ diffusion model on real-world visual-tactile data, exploring the possibility of utilizing tactile data as a prompt for image synthesis. In concurrent work, Higuera [20] used diffusion to simulate tactile data, which they used to train a braille classifier. Tactile sensing.Early touch sensors recorded simple, low-dimensional sensory signals, such as measures of force, vibration, and temperature [33, 34, 10]. Beginning with GelSight [65, 28], researchers proposed a variety of vision-based tactile sensors, which convert the deformation of an illuminated membrane using a camera, thereby providing detailed information about shape and material properties [59, 36]. We focus on these sensors, particularly using GelSight, since it is widely used applications [38, 4], and available in visuo-tactile datasets [15, 17, 64]. Crucially, these sensors produce images as output, allowing us to use the same network architectures for both images and touch [66]. 
Other work proposes collocated vision and touch sensors [62, 5]. Cross-modal models for vision and touch.Li [38] used a GAN [24] to translate between tactile signals and images, using a dataset acquired by a robot. In contrast, they require conditioning their touch-to-image model on another photo from the same scene. This is a task that amounts to adding an arm grasping the correct object (given several possible choices), rather than generating an object that could have plausibly led to a touch signal according to its physical properties. It is not straightforward to adapt their method to the other touch-to-image synthesis problems we address without major modifications. Yang [64] proposed a visuo-tactile dataset and used a GAN to restyle images to match a touch signal. Their approach only learns a limited number of visual styles, and cannot be straightforwardly adopt extra conditional information (such as reflectance) or be applied to unconditional cross-modal translation tasks. Other work has learned multimodal visuo-tactile embeddings [64, 39]. Other work learns to associate touch and sight for servoing and manipulation [5]. ## 3 Method Our goal is to translate touch to vision (and vision to touch) using a generative model. We will do this using a model based on latent diffusion [51]. We will use this model to solve a number of tasks, including: 1) cross-modal visual-tactile synthesis, 2) tactile-driven image stylization, and 3) tactile-driven shading estimation. ### Cross-Modal Synthesis of Vision and Touch We now describe our framework for cross-modal synthesis. First, we describe a contrastive visuo-tactile model, which we use to perform conditional generation. Second, we describe our cross-modal latent diffusion model. #### 3.1.1 Contrastive Visuo-tactile Pretraining (CVTP) Following other work in cross-modal synthesis [49, 51], we provide conditional information to our generation models through multimodal embeddings via contrastive learning [63, 60, 14, 67]. Our embedding-learning approach resembles that of Yang [64] and contrastive multiview coding [60]. A key difference is that we incorporate temporal information into our visual and tactile representations. Touching an object is a dynamic process, and the information we obtain varies over time, from the moment when the tactile sensor begins touching the object, to the point when the sensor has reached it maximum deformation. Adding temporal cues provides information about material properties that may be hard to perceive from a single sample, such as the hardness or softness of a surface [66, 26]. Given the visual and tactile datasets \(X_{I}\) and \(X_{T}\), which consist of \(N\) synchronized visual-tactile frames \(\{\mathbf{x}_{I}^{i},\mathbf{x}_{T}^{i}\}_{i=1}^{N}\), we denote the video clip sampled at time \(i\) with the window size \(w=2C+1\), \(v_{I}^{i}=\{\mathbf{x}_{I}^{i-C},...,\mathbf{x}_{I}^{i},...,\mathbf{x}_{I}^{ i+C}\}\) and the corresponding tactile clip \(v_{I}^{t}=\{\mathbf{x}_{I}^{i-C},...,\mathbf{x}_{I}^{i},...,\mathbf{x}_{I}^{ i+C}\}\). We denote examples taken from the same visual-tactile recording \(\{v_{I}^{i},v_{T}^{i}\}\) as positives, and samples from different visual-tactile video pair \(\{v_{I}^{i},v_{T}^{j}\}\) as negatives. Our goal is to jointly learn temporal visual \(z_{I}=E_{\phi_{I}}(v_{I})\) and tactile \(z_{T}=E_{\phi_{T}}(v_{T})\) encoder. We use a 2D ResNet as the architecture for both encoders. 
For easy comparison to static models, we incorporate temporal information into the model via early fusion (concatenating channel-wise). Then we maximize the probability of finding the corresponding visuo-tactile video pair in a memory bank containing \(K\) samples using InfoNCE [47] loss: \[\mathcal{L}_{i}^{V_{I},V_{T}}=-\mathrm{log}\frac{\mathrm{exp}(E_{\phi_{I}}(v_ {I}^{i})\cdot E_{\phi_{T}}(v_{T}^{i})/\tau)}{\sum_{j=1}^{K}\mathrm{exp}(E_{ \phi_{I}}(v_{I}^{i})\cdot E_{\phi_{T}}(v_{T}^{j})/\tau)} \tag{1}\] where \(\tau\) is a small constant. Analogously, we get a symmetric objective \(\mathcal{L}^{V_{T},V_{I}}\) and minimize: \[\mathcal{L}_{\text{CVTP}}=\mathcal{L}^{V_{I},V_{T}}+\mathcal{L}^{V_{T},V_{I}}. \tag{2}\] #### 3.1.2 Touch-conditioned Image Generation We now describe the tactile-to-image generation model (an image-to-touch model can be formulated in an analogous way). Our approach follows Rombach [51], which translates language to images, but with a variety of extensions specific to the visuo-tactile synthesis problem. Given a visuo-tactile image pair \(\{\mathbf{x}_{I},\mathbf{x}_{T}\}\in\mathbb{R}^{H\times W\times 3}\), our goal is to generate an image \(\widetilde{\mathbf{x}}_{I}\) from tactile input \(\mathbf{x}_{T}\). We encode the input \(\mathbf{x}\) into a latent representation \(\mathbf{z}=\mathcal{E}(\mathbf{x})\in\mathbb{R}^{h\times w\times 3}\). A decoder \(\mathcal{D}\) will reconstruct the image \(\hat{x}=\mathcal{D}(\mathbf{z})\) Figure 2: **Touch-to-image model**. We use a latent diffusion model to generate an image of a scene from touch. The touch signal is represented using multiple frames of video from a GelSight sensor. The model uses a segmentation mask to optionally generate only the scene content containing the pressed object (, without a hand or touch sensor). We also optionally condition on reflectance from a scene, in which case the model’s generation task requires it to estimate shading. from the code. The latent dimension \(h\times w\) is smaller than the image dimension \(H\times W\). Training.We train a touch-to-vision diffusion generation in the latent space \(\mathbf{z}_{I}=\mathcal{E}(\mathbf{x}_{I})\). Diffusion models learn to generate images by recursively denoising from a normal distribution to the desired data distribution. Specifically, given our latent representation \(\mathbf{z}_{I}\), we uniformly sample a diffusion step \(t\in\{1,...,T\}\) and obtain the corresponding noisy image \(\mathbf{z}_{I}^{t}\) by iteratively adding Gaussian noise with a variance schedule. We use a U-Net [52] network \(\epsilon_{\theta}\) as our denoising model, which is conditioned on the tactile representation encoded through the tactile encoder \(E_{\phi_{T}}\) trained in Section 3.1.1. We minimize: \[L(\theta,\phi)=\mathbb{E}_{\mathbf{z}_{I},\mathbf{c},\epsilon,t}\left[\| \epsilon_{t}-\epsilon_{\theta}(\mathbf{z}_{I}^{t},t,E_{\phi_{T}}(\mathbf{v}_{ T}))\|_{2}^{2}\right], \tag{3}\] where \(\epsilon_{t}\) is the added noise at time \(t\), and \(\mathbf{v}_{T}\) is the tactile example. The denoising network \(\epsilon_{\theta}\) and the tactile encoder \(E_{\phi_{T}}\) are jointly trained. 
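A compact sketch of the contrastive objective in Eqs. (1)–(2), with clips encoded by channel-wise early fusion, is shown below. This is an illustrative PyTorch reimplementation based on the description above, not the actual implementation; for simplicity, the other items in the batch stand in for the memory bank of negatives.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

class ClipEncoder(torch.nn.Module):
    """ResNet-18 over a clip of 2C+1 RGB frames fused along the channel axis."""
    def __init__(self, num_frames=5, dim=512):
        super().__init__()
        net = resnet18(weights=None)
        net.conv1 = torch.nn.Conv2d(3 * num_frames, 64, kernel_size=7,
                                    stride=2, padding=3, bias=False)
        net.fc = torch.nn.Linear(net.fc.in_features, dim)
        self.net = net

    def forward(self, clip):                      # clip: (B, 3*num_frames, H, W)
        return F.normalize(self.net(clip), dim=-1)

def cvtp_loss(z_img, z_touch, tau=0.07):
    """Symmetric InfoNCE of Eqs. (1)-(2), with in-batch negatives."""
    logits = z_img @ z_touch.t() / tau            # (B, B) cosine-similarity matrix
    labels = torch.arange(z_img.size(0), device=z_img.device)
    return F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)

# usage sketch: synchronized 5-frame visual and tactile clips
enc_img, enc_touch = ClipEncoder(), ClipEncoder()
imgs = torch.randn(8, 15, 224, 224)
touches = torch.randn(8, 15, 224, 224)
loss = cvtp_loss(enc_img(imgs), enc_touch(touches))
```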
Inference.At test time, we first sample noise \(\widetilde{\mathbf{z}}_{I}^{T}\sim\mathcal{N}(0,1)\) at time \(T\), and then use the trained diffusion model to iteratively predict the noise \(\widetilde{\epsilon}_{t}\), resulting in a denoised latent representation \(\widetilde{\mathbf{z}}_{I}^{t}=\widetilde{\mathbf{z}}_{I}^{t+1}-\widetilde{ \epsilon}_{t+1}\) from \(t\in\{T-1,...,0\}\). Following [51, 12], we use classifier-free guidance to trade off between sample quality and diversity in the conditional generation, computing the noise as: \[\widetilde{\epsilon}_{t}=\epsilon_{\theta}(\widetilde{\mathbf{z}}_{I}^{t},t, \emptyset)+s\cdot\left(\epsilon_{\theta}(\widetilde{\mathbf{z}}_{I}^{t},t,E_{ \phi_{T}}(\mathbf{v}_{T}))-\epsilon_{\theta}(\widetilde{\mathbf{z}}_{I}^{t}, t,\emptyset)\right), \tag{4}\] where \(\emptyset\) denotes a zero-filled conditional example (for unconditional generation), and \(s\) is the guidance scale. Finally, we convert the latent representation \(\widetilde{\mathbf{z}}_{I}^{0}\) to an image \(\widetilde{\mathbf{x}}_{I}=\mathcal{D}(\widetilde{\mathbf{x}}_{I}^{0})\in \mathbb{R}^{H\times W\times 3}\). ### Visuo-Tactile Synthesis Models So far, we have presented models for translating between touch and images (and vice versa). We now describe several visuo-tactile synthesis models that we build on this diffusion framework. #### 3.2.1 Generating realistic images without hands One of the challenges of dealing with visuo-tactile data is that the tactile sensor typically occludes the object that is being touched (Fig. 3). Generated images will therefore contain the sensor, and potentially the arm that held it. This is not always desirable, as a major goal of touch sensing is to generate images of objects or materials that could have plausibly led to a given touch signal. We address this problem for the natural scenes from the _Touch and Go_ dataset [64], which contain visible human hands and Gel-Sight sensors [65]. To generate images containing only objects that yield a given tactile signal (without hands or touch sensors), we only compute the loss for pixels that do not overlap with hands during the training, thereby depriving the model of supervision for hand pixels. We first generate hand segmentation masks for the visual image \(\mathbf{m}_{I}=\mathcal{S}(\mathbf{x}_{I})\) and obtain the downsampled mask \(\mathbf{z}_{m}\) of the same spatial dimension of the image latent representation. For this, we use the off-the-shelf hand segmentation model from Darkhalil et al. [11], which is a modified model from PointRend [30] instance segmentation designed specifically for segmenting hands. We then mask the diffusion loss (Eq. 6) to be: \[\mathbb{E}_{\mathbf{z}_{m},\mathbf{z}_{I},\mathbf{c},\epsilon,t}\left[\| \mathbf{z}_{m}\odot\left(\epsilon_{t}-\epsilon_{\theta}(\mathbf{z}_{I}^{t},t,E _{\phi_{T}}(\mathbf{v}_{T}))\right)\|_{2}^{2}\right], \tag{5}\] where \(\mathbf{z}_{m}\) indicates whether a pixel overlaps with a hand, and \(\odot\) denotes pointwise multiplication. #### 3.2.2 Tactile-driven Image Stylization Tactile-driven image stylization [64] aims to manipulate the visual appearance of an object so that it looks more consistent with a given touch signal. Previous work posed the problem of editing the visual style of an image while preserving its structure [64, 37]. 
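Both the guidance step of Eq. (4) and the hand-masked objective of Eq. (5) above reduce to a few lines; the following schematic sketch illustrates them before we turn to the stylization procedure (here `eps_model` is a stand-in for the latent-diffusion U-Net, not the actual component):

```python
import torch

def guided_noise(eps_model, z_t, t, cond, scale=7.5):
    """Classifier-free guidance (Eq. 4): blend unconditional and touch-conditioned noise."""
    eps_uncond = eps_model(z_t, t, torch.zeros_like(cond))   # zero-filled condition
    eps_cond = eps_model(z_t, t, cond)
    return eps_uncond + scale * (eps_cond - eps_uncond)

def hand_masked_loss(noise_pred, noise_true, hand_mask):
    """Hand-masked denoising loss (Eq. 5): hand/sensor pixels receive no supervision.
    `hand_mask` is 1 where a hand or sensor is present (downsampled to latent size)."""
    keep = 1.0 - hand_mask
    return (keep * (noise_true - noise_pred)).pow(2).mean()
```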
Given an input image \(\mathbf{x}_{I}\) and a desired tactile signal \(\mathbf{x}_{T}^{\prime}\) (obtained from a different scene), our goal is to manipulate \(\mathbf{x}_{I}\) so that it appears to "feels" more like \(\mathbf{x}_{T}^{\prime}\). We adapt the approach of Meng _et al_. [44]. We first compute the noisy latent representation \(z_{I}^{N}\) at time \(0\leq N\leq T\), where \(T\) denotes the total number of denoising steps. We then conduct the denoising process for \(z_{I}^{N}\) from time step \(N\) to 0 conditioned on \(\mathbf{x}_{T}^{\prime}\). This allows for fine-grained control over the amount of content preserved from the input image, via the parameter \(N\). We analyze the choice of \(N\) at Sec. 4.6. #### 3.2.3 Tactile-driven Shading Estimation Touch conveys a great deal of information about a surface's microgeometry [28]. Much of this information can also be perceived through _shading_ cues: intensity variations due to light interacting with surface orientation for objects with Lambertian material properties. Following classic work in intrinsic image decomposition [2, 19, 3], we assume that the image can be factorized into reflectance and shading for each pixel, i.e., we can write our image \(\mathbf{x}_{I}=\mathbf{x}_{R}\odot\mathbf{x}_{S}\) where the two terms in the product are the per-pixel reflectance and shading. Figure 3: **Visuo-tactile datasets**. For our experiments, we evaluate our model on natural scenes from _Touch and Go_[64] and robot-collected data from _VisGel_[38]. We propose a model that deals with inferring shading from touch. Given an image's estimated reflectance map \(\mathbf{x}_{R}\), along with a touch signal \(\mathbf{x}_{T}\), we reconstruct the original image \(\mathbf{x}_{I}\). This is a task that requires inferring the shading, since it is the component that is missing from the input. By formulating the problem so that we predict the original image, we can easily reuse the latent encoder/decoder from natural images. We address this task by modifying our network so that it also takes reflectance as input (Eq. 6). We first estimate reflectance using the intrinsic image decomposition model of Liu [41] and downsample it to the same dimensions as the latent space. We then concatenate the downsampled reflectance \(\mathbf{z}_{R}\) to the noisy representation \(\mathbf{z}_{I}^{\mathbf{\prime}}\) as the input for each denoising step. Thus we modify the loss function (Eq. 6) as the following: \[L(\theta,\phi)=\mathbb{E}_{\mathbf{z}_{I},\mathbf{c},\epsilon,t}\left[\| \epsilon_{t}-\epsilon_{\theta}(\mathbf{z}_{I}^{\mathbf{\prime}}\otimes\mathbf{ z}_{R},t,E_{\phi_{T}}(\mathbf{v}_{T}))\|_{2}^{2}\right], \tag{6}\] where \(\otimes\) denotes concatenation. ## 4 Results We evaluate our cross-modal synthesis models through qualitative and quantitative experiments on natural scenes and robot-collected data. ### Implementation details Contrastive visuo-tactile model.Following [64], we use ResNet-18 as the backbone of contrastive model, and train on _Touch and Go_[64]. This model is trained using SGD for 240 epochs with the learning rate of \(0.1\) and weight decay of \(10^{-4}\). The ResNet takes 5 reference frames as input using early fusion (concatenated channel-wise) and we take the feature embedding from the last layer of the feature and map it to 512 dimensions. Following prior work [60], we use \(\tau=0.07\) and use a memory bank with 16,385 examples. Visuo-tactile diffusion model.We base our latent diffusion model on Stable Diffusion [51]. 
We use the Adam optimizer with the base learning rate of \(2\times 10^{-6}\). Models are all trained with 30 iterations using the above learning rate policy. We train our model with the batch size of 96 on 4 RTX A40 GPUs. The conditional model is finetuned along with the diffusion model. We use the frozen, pretrained VQ-GAN [13] to obtain our latent representation, with the spatial dimension of 64\(\times\)64. During the inference, we conduct denoising process for 200 steps and set the guidance scale \(s=7.5\). ### Experimental Setup Dataset.We conduct our experiments on two real-world visuo-tactile datasets: * _Touch and Go_** dataset.** The _Touch and Go_ dataset is a recent, real-world visuo-tactile dataset in which humans probe a variety of objects in both indoor and outdoor scenes. There are 13,900 touches from roughly 4000 different object instances and 20 material categories. Since this is the only available dataset with zoomed-in images and clearly visible materials, we use it for all three tasks. * _VisGel_** dataset.** The _VisGel_ dataset contains synchronized videos of a robot arm equipped with a GelSight sensor interacting with 195 household objects. The dataset includes 195 objects from a wide range of indoor Figure 4: **Tactile-driven Image Stylization. _(Top)_ We restyle the input image using the given touch signal (reference image from scene provided for clarity). We compare our approach to Yang et al. [64]. Our approach generates images with higher quality matching more closely to the given tactile signal. _(Bottom)_ We show more examples of the manipulated images. Please see supplement for more examples.** scenes of food items, tools, kitchen items, to fabrics and stationery. In total, the dataset contains 12k touches and around 3M frames. Evaluation metrics.We use several quantitative metrics to evaluate the quality of our generated images or tactile signals. We use **Frechet Inception Distance (FID)**, which compares the distribution of real and generated image activations using trained network. Following Yang [64] and CLIP [49], we take the cosine similarity between our learned visual and tactile embeddings for the generated images and conditioned tactile signals, a metric we call **Contrastive Visuo-Tactile Pre-Training (CVTP)**. A higher score indicates a better correlation between touch and images. It is worth noting that the CVTP metric only takes one frame of touch input. Following [64], we measure **Material Classification Consistency**: we use the material classifier from Yang [64] to categorize the predicted and ground truth images, and measure the rate at which they agree. Finally, following [16], we evaluate standard **Structural Similarity Index Measure (SSIM)** and **Peak Signal to Noise Ratio (PSNR)**[61] metrics. 
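For concreteness, the CVTP score reduces to the cosine similarity between the learned visual and tactile embeddings of a generated image and the touch signal it was conditioned on, averaged over the test set. A minimal sketch, assuming the encoders of Sec. 3.1.1 and inputs shaped to match them:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def cvtp_score(enc_img, enc_touch, generated_images, touch_inputs):
    """Mean cosine similarity between embeddings of generated images and the
    touch signals they were conditioned on (higher is better)."""
    z_i = F.normalize(enc_img(generated_images), dim=-1)
    z_t = F.normalize(enc_touch(touch_inputs), dim=-1)
    return (z_i * z_t).sum(dim=-1).mean().item()
```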
### Cross-modal Generation We perform cross-modal generation, _i.e._, generating an image from touch and vice versa, on both in-the-wild _Touch_ \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**Touch \(\rightarrow\) Image**} & \multicolumn{2}{c}{**Image \(\rightarrow\) Touch**} \\ \cline{2-5} & \multicolumn{2}{c}{CVTP (\(\uparrow\))Material(\(\uparrow\))FID(\(\downarrow\))SSIM(\(\uparrow\))PSNR(\(\uparrow\))} \\ \hline Pix2Pix [24] & 0.08 & 0.15 & 136.4 & 0.43 & 14.3 \\ VisGel [38] & 0.07 & 0.15 & 128.3 & 0.45 & 15.0 \\ Ours w/ hands & **0.12** & 0.22 & **48.7** & **0.50** & **15.4** \\ Ours w/o hands & **0.12** & **0.24** & 81.5 & **0.50** & **15.4** \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation of cross-modal generation on _Touch and Go_. Figure 5: **Visuo-tactile Cross Generation on _Touch and Go_ dataset. _(Top)_ We compare our approach to state-of-the-art method Visgel [38]. _(Bottom)_ We show more results of our generated images with and without hands. In both case our approach is able to generate realistic images with high fidelity.** _and Go_ dataset and robot-collected dataset _VisGel_. For straightforward comparison to prior work [38], on _VisGel_ we provide a _reference_ photo of the scene as an input to the model. Thus, successfully predicting the ground truth image amounts to inserting imagery of the robotic arm to the correct location in the scene. For _Touch and Go_, we do not condition the model on a visual input: instead, we simply translate one modality to the other. For evaluation metrics, we use CVTP, material classification consistency, and FID score for touch-to-image generation and SSIM and PSNR for image-to-touch generation. For _VisGel_ dataset we leverage SSIM and PSNR as the evaluation metric for both tasks. We only use CVTP, material classification consistency and FID only on touch-to-image generation task on _Touch and Go_, since these evaluation metrics rely on a pretrained neural network from datasets of natural images, which may not generalize well on a different modality or to robot-collected data. We compare our model to the prior state-of-the-art visuotactile generation method [38], which is adapted from pix2pix [24] and is specifically designed to bridge the large domain gap between modalities by adding a reference image and temporal condition. As it is not possible to find a reference image in the natural image dataset, we remove the reference image while keeping everything else the same. We show quantitative results for both tasks on _Touch and Go_ and _VisGel_ in Table 1 and Table 2 respectively. Our methods outperform existing state-of-the-art methods by a large margin for all evaluation metrics. We note that the variation of our model that removes hands from images obtains a worse FID score compared to those with hands, due to the discrepancy of hands between the original dataset and our generated images. Interestingly, the presence of hands does not does not affect the performance of CVTP and material classification consistency. We provide qualitative results from both models in Figure 5 (_bottom_). ### Tactile-Driven Image Stylization Following [64], we evaluate the performance of tactile-driven image stylization on _Touch and Go_[64] using CVTP and material classification metrics. We also calculate the FID score between the set of generated images and the set of real images associated with the given tactile signals, which measures the fidelity of the output. 
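The stylized images evaluated here follow the partial-denoising recipe of Sec. 3.2.2. In simplified form the sampling loop looks as follows; this is a schematic, DDIM-style sketch that reuses the `guided_noise` helper sketched earlier, whereas the actual system relies on the Stable Diffusion sampler and scheduler:

```python
import torch

@torch.no_grad()
def stylize(encoder, decoder, eps_model, alphas_bar, image, touch_emb,
            start_step, scale=7.5):
    """Tactile-driven stylization (Sec. 3.2.2): noise the image latent to step
    `start_step` = N < T, then denoise it conditioned on the target touch signal.
    Larger N discards more of the input image; N = T recovers pure generation."""
    z = encoder(image)
    noise = torch.randn_like(z)
    a_N = alphas_bar[start_step]
    z_t = a_N.sqrt() * z + (1 - a_N).sqrt() * noise              # forward-noise to step N

    for t in range(start_step, 0, -1):
        a_t, a_prev = alphas_bar[t], alphas_bar[t - 1]
        eps = guided_noise(eps_model, z_t, t, touch_emb, scale)  # Eq. (4)
        x0 = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()         # predicted clean latent
        z_t = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps     # DDIM step (eta = 0)
    return decoder(z_t)
```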
We compare our model to a modified version of CycleGAN [68] and the state-of-the-art method of Yang et al. [64]. From the quantitative comparisons in Table 3, our method demonstrates a significant improvement over existing methods. We also show qualitative comparisons in Figure 3, where the generated images more closely match the tactile signal, and we are able to generate styles that existing methods fail to capture.

\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Evaluation Metrics**} \\ \cline{2-4} & CVTP (\(\uparrow\)) & Material (\(\uparrow\)) & FID (\(\downarrow\)) \\ \hline CycleGAN [68] & 0.09 & 0.15 & 24.6 \\ Yang et al. [64] & 0.10 & 0.20 & 22.5 \\ Ours & **0.13** & **0.22** & **15.8** \\ \hline \hline \end{tabular} \end{table} Table 3: Quantitative results of tactile-driven image stylization.

Figure 6: **Visuo-tactile Cross Generation on _VisGel_ dataset. _(Top)_ We compare our approach to the state-of-the-art method VisGel [38]. _(Bottom)_ Our approach is able to generate robotic hands touching reasonable locations of objects given the same reference image but different tactile signals.**

### Tactile-driven Shading Estimation

We hypothesize that the tactile signal conveys information about the microgeometry of an image, and thus allows a model to produce more accurate images than a reflectance-to-image model that does not have access to touch. We evaluated both models on _Touch and Go_ (Table 4) and found that adding touch indeed improves performance on all evaluation metrics. We also show qualitative comparisons in Figure 7. We found that tactile signals are especially informative for predicting the roughness and smoothness of Lambertian surfaces, such as bricks.

### Analysis

**Importance of temporal information.** We first study the effect of adding multiple GelSight frames to the contrastive visuo-tactile embedding (Figure 9). We compare our method with unconditional generation and material-class-conditional generation on _Touch and Go_. We found that conditioned generation provides a large improvement in performance compared to unconditional generation. We also observed that generation conditioned on the pretrained model is significantly better than that without pre-training. Interestingly, the model conditioned on the material class outperforms the variant of the model that only observes a single GelSight frame, suggesting that perceiving a touch signal from only a single moment in time may be less informative than the material category. Providing the model with additional frames significantly improves the model, with the 5-frame model obtaining the overall best performance.

**Controllable Image Stylization.** Our method allows us to control the amount of image content that is preserved from the original image by changing the denoising starting point \(N\) (Sec. 3.2.2) [44]. From Figure 8, we observe that if we select a larger \(N\), the generated image changes more drastically: the visual appearance is altered to match the tactile signal, at the cost of the original image structure. In the extreme case where \(N=T\), the manipulated result is equal to the touch-to-image generation result, while a small \(N\) results in little overall change. We empirically found that selecting \(N=T/2\) obtains a good trade-off between these factors.

## 5 Conclusion

We proposed a visuo-tactile diffusion model that unifies previous cross-modal synthesis tasks, and allows us to address novel problems.
We are the first to generate realistic images in natural scenes from touch (and vice versa) without any image-based conditioning. We also show the ability to generate realistic "hand-less" images and solve a novel tactile-driven shading estimation task. Finally, we obtain significantly more realistic results on the tactile-driven stylization task than prior work. We see our work as a step toward integrating the fields of tactile sensing and generative modeling.

\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Reflectance \(\rightarrow\) Image**} \\ \cline{2-4} & SSIM(\(\uparrow\)) & PSNR(\(\uparrow\)) & FID(\(\downarrow\)) \\ \hline Touch Only & 0.27 & 11.6 & 48.7 \\ Reflectance Only & 0.46 & 14.5 & 40.7 \\ Reflectance + Touch & **0.48** & **15.4** & **36.9** \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative results for tactile-driven shading estimation.

Figure 7: **Tactile-driven shading estimation. We compare our approach to a model without a tactile signal (only reflectance), finding that the tactile-driven model better captures subtle material properties, such as roughness.**

Figure 8: **Controlling the amount of preserved image content. Manipulated images of tactile-driven image stylization using different values of \(N\).**

Figure 9: **Effect of different types of tactile conditioning.**

**Limitations.** Since our work has applications in creating fake imagery, a potential issue is that it could be used to create disinformation. Also, as touch mainly conveys material properties and microgeometry, the generated image will often differ semantically from the ground truth.

**Acknowledgements.** We thank Chao Feng, Ziyang Chen and Shaokai Wu for the helpful discussions and help with visualizations. This work was supported in part by Cisco Systems.
2309.07513
RecycleNet: Latent Feature Recycling Leads to Iterative Decision Refinement
Despite the remarkable success of deep learning systems over the last decade, a key difference still remains between neural network and human decision-making: As humans, we cannot only form a decision on the spot, but also ponder, revisiting an initial guess from different angles, distilling relevant information, arriving at a better decision. Here, we propose RecycleNet, a latent feature recycling method, instilling the pondering capability for neural networks to refine initial decisions over a number of recycling steps, where outputs are fed back into earlier network layers in an iterative fashion. This approach makes minimal assumptions about the neural network architecture and thus can be implemented in a wide variety of contexts. Using medical image segmentation as the evaluation environment, we show that latent feature recycling enables the network to iteratively refine initial predictions even beyond the iterations seen during training, converging towards an improved decision. We evaluate this across a variety of segmentation benchmarks and show consistent improvements even compared with top-performing segmentation methods. This allows trading increased computation time for improved performance, which can be beneficial, especially for safety-critical applications.
Gregor Koehler, Tassilo Wald, Constantin Ulrich, David Zimmerer, Paul F. Jaeger, Jörg K. H. Franke, Simon Kohl, Fabian Isensee, Klaus H. Maier-Hein
2023-09-14T08:30:02Z
http://arxiv.org/abs/2309.07513v1
# RecycleNet: Latent Feature Recycling Leads to Iterative Decision Refinement ###### Abstract Despite the remarkable success of deep learning systems over the last decade, a key difference still remains between neural network and human decision-making: As humans, we can not only form a decision on the spot, but also ponder, revisiting an initial guess from different angles, distilling relevant information, arriving at a better decision. Here, we propose RecycleNet, a latent feature recycling method, instilling the pondering capability for neural networks to refine initial decisions over a number of recycling steps, where outputs are fed back into earlier network layers in an iterative fashion. This approach makes minimal assumptions about the neural network architecture and thus can be implemented in a wide variety of contexts. Using medical image segmentation as the evaluation environment, we show that latent feature recycling enables the network to iteratively refine initial predictions even beyond the iterations seen during training, converging towards an improved decision. We evaluate this across a variety of segmentation benchmarks and show consistent improvements even compared with top-performing segmentation methods. This allows trading increased computation time for improved performance, which can be beneficial, especially for safety-critical applications. ## 1 Introduction Over the past decade, the field of computer vision has witnessed an unprecedented paradigm shift due to the advent and proliferation of deep learning algorithms. Neural networks have become the de facto standard for a variety of tasks, excelling in their ability to solve previously impossible tasks across many domains and modalities. One of the most intriguing distinctions between human cognition and neural networks, however, is the former's capacity for iterative decision-making - a skill that is still lacking from most recent artificial systems. Humans exhibit the innate ability to dynamically revisit and revise their initial decisions, evaluating their options from multiple perspectives, and continuously improving their decision based on new information. This process underlies an inherent characteristic of human decision-making - that of iterative refinement. It allows humans to evolve their understanding over time, improving the quality of decisions, especially in complex, non-deterministic scenarios. By stark contrast, conventional deep learning architectures have typically operated in a one-shot, feed-forward manner, lacking the property of iterative revision. In this work, we seek to bridge this gap, to bring neural networks a step closer to the iterative decision-making process that characterizes human cognition. We introduce RecycleNet, a simple approach to deep learning that leverages the concept of latent feature recycling, enabling neural networks to refine their initial predictions over a series of itera tive steps. The advantage of RecycleNet lies in its universal applicability - it makes minimal assumptions about the underlying architecture and is easily adaptable across a wide array of contexts. Medical image segmentation, with its critical implications for diagnosis and treatment in healthcare, serves as an excellent testing ground for our approach. The task's complexity and the inherent noise in medical imaging data pose formidable challenges that demand robust, reliable, and refined predictions. 
Through our evaluation, we demonstrate that RecycleNet exhibits the remarkable property of refining its predictions iteratively, even beyond the iterations witnessed during training. The results clearly outperform the state-of-the-art segmentation methods across a range of segmentation benchmarks, demonstrating the promise of our approach. At the heart of RecycleNet lies a trade-off - the opportunity to exchange increased computational time for a significant improvement in performance. For safety-critical applications, where the stakes are high, and the margin for error is virtually non-existent, this trade-off could be an important step in enhancing the reliability and precision of decision-making processes in neural networks. ## 2 Related Work Neural networks' inability to refine initial predictions has been addressed in approaches which introduce varying degrees of additional complexity. Refinement modules:One straight-forward way to refine initial network outputs makes use of additional modules. In the context of image segmentation, these refinement modules typically act on segmentation maps [15] or features close to the segmentation layers [4] and make use of additional layers to refine a main network's outputs. This naturally introduces complexity by introducing additional parameters to the original network. In contrast, our suggested technique of latent feature recycling operates without requiring any additional parameters, thereby ensuring a more seamless integration in situations where the cost of extra parameters is prohibitive. Recurrent Learning:Another natural approach to refining initial network outputs is to cast refinement as a recurrent learning problem, with refinement steps as the temporal axis. To render typical computer vision network architectures as Recurrent Neural Networks, either the whole network [19] or key parts of the network [18] are adapted. Alternatively, the recurrent learning can also be done on just the segmentation outputs [12]. While closely related to the proposed latent feature recycling, these approaches interfere strongly with the network architecture and due to their recurrent network formulation, come with substantial memory costs during training, which quickly become prohibitive for example in the context of medical image segmentation, where large 3D receptive fields are required to capture all relevant context for the task. Multi-stage approaches:To alleviate the memory costs connected to Recurrent Neural Networks, multi-stage approaches can be employed to refine previous stage segmentations [16] or learn based on the error feedback coming from the previous iteration [3]. While this does not come at the cost of substantially increased memory demands during training, multi-stage approaches often require multiple complete training cycles. Comparatively, our proposed method requires less training time since the additional forward passes are performed on a single sample. Additional loss terms:The ability to refine predictions has also been explored by repeatedly applying a given stateful network architecture and learning a stopping criterion using an additional loss term together with the task loss [2]. This, however, requires a careful balancing between loss terms which is not present in the proposed method. Output recycling:Recently, Jumper et al. [9] have proposed a technique that uses output structures in the context of protein structure prediction to refine initial guesses without additional modules. 
They re-use outputs from certain transformer blocks [17] of the architecture over multiple iterations to refine the predicted structures. While this is closely related to our proposed technique, we show that recycling not only a part of the network architecture, but a whole convolutional segmentation network, leads to refined predictions. Additionally, recycling features instead of outputs allows integrating this mechanism in a flexible way without special requirements w.r.t. the network architecture. Crucially, we also introduce a robust training schedule for recycling and demonstrate its importance both for reliable results across datasets and for an emergent convergence property where segmentation performance increases monotonically when increasing the number of cycles during inference.

Figure 1: Schematic overview of the proposed U-Net feature recycling. \(n\) depicts the number of recycling cycles, where the features close to the network's decoder are fed back into early encoder features. The letters I, R and O refer to the input projection, recycling module and output projection, as described in Algorithm 1.

## 3 RecycleNet

Our proposed method, referred to as RecycleNet, relies on increasing the number of forward passes through a large part of the network (referred to as cycles), both during training and inference. To instill the capability to refine initial decisions over a number of such cycles, features close to the output are fed back into early layers of the neural network via a simple addition operation. Figure 1 depicts a schematic of the proposed latent feature recycling process.

A given neural network architecture (here the U-Net [14]) can be partitioned into three disjoint parts: the input projection **I**, the recycling module **R** and the output projection **O** (see Fig. 1). The recycling module **R** is not an additional module, but rather refers to the part of the architecture where features should be recycled. The recycling process, where recycled features are summed onto earlier feature representations, is repeated \(N_{c}\) times during training, where \(N_{c}\) is sampled uniformly from a predefined range (see Section 3.1). After the recycling process, the output projection produces the final output. However, each individual cycle can also be projected to a meaningful prediction, allowing for introspection and ensembling. The recycling mechanism is described in Algorithm 1. We note that while it is, in general, possible to use gradients accumulated for more than one cycle, we find this to be impractical due to the memory demands involved. Instead, we only use gradients for the last iteration.

Reusing the recycling features \(r\) can be achieved in various ways. We propose a simple addition of normalized recycling features to the input projection, similar to the standard practice of adding position encodings in the context of Language Models [17]: \[R(z,r)=R(z+\mathrm{norm}(r)) \tag{1}\] This approach requires the feature dimensions after the input projection to match the dimensions after the recycling module. This property is typically fulfilled for attention-based transformer architectures, as well as U-Nets [17]. In architectures where this is not the case, other conditioning mechanisms, e.g. using projection layers, can be applied.

We propose the integration of latent feature recycling in the context of the U-Net [14], a network architecture which is ubiquitous in medical image segmentation.
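For illustration, the training step of Algorithm 1 together with the feature addition of Eq. (1) can be sketched in PyTorch as below; `I`, `R`, `O` and `norm` follow our notation, and the snippet is a minimal sketch rather than the exact implementation used in our experiments.

```python
import torch

def recycling_training_step(I, R, O, norm, x, y, loss_fn, n_max):
    """One training step with latent feature recycling (cf. Algorithm 1)."""
    z = I(x)                                      # input projection
    n_c = int(torch.randint(1, n_max + 1, (1,)))  # sample the number of cycles
    r = torch.zeros_like(z)                       # recycled features start at zero
    for i in range(n_c):
        if i < n_c - 1:
            with torch.no_grad():                 # gradients only for the last cycle
                r = R(z + norm(r))                # Eq. (1)
        else:
            r = R(z + norm(r))
    y_hat = O(r)                                  # output projection
    return loss_fn(y_hat, y)
```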
As depicted schematically in Figure 1, we propose U-Net feature recycling by reusing features close to the output of the network's decoder at earlier layers, e.g. after the encoder's first convolutional block. This enables the network to revisit the features from which previous predictions would be computed, thus instilling the capability to iteratively refine early decision hypotheses over a number of cycles.

```
Input: maximum number of cycles N_max, model input x, input projection I,
       recycling module R, output projection O

 1  Project input into recycling feature space: z = I(x)
 2  Sample number of cycles: N_c = RandInt(1, N_max)
 3  Initialize recycling features as zeros: r = 0
 4  for all cycles i in [1, ..., N_c] do
 5      if i < N_c then
 6          r = r.detach()        # gradients only for last cycle
 7      r = R(z, r)               # 1 cycle
 8  end for
 9  Project to output: y_hat = O(r)
10  return loss(y_hat, y)
```
**Algorithm 1** Latent Feature Recycling: Training

### Robust Training Schedule

Feature recycling introduces a single new hyperparameter to an existing neural network training, the number of cycles \(N_{c}\). This hyperparameter determines how many shots the network gets to refine initial predictions during training. At the beginning of the training, the initial predictions might be unreliable to the extent that there is little value in refining them step by step. To combat this, we introduce a robust recycling schedule, where during an initial warm-up phase, only a single cycle (no recycling) is used, therefore defaulting to standard network training during this period. Over time, we incrementally increase the range of possible cycles to allow for more and more refinement steps. As the number of cycles during training is not deterministic, the network is incentivized to distill useful information for each next recycling step into the recycled features. This helps learning an iterative refinement, even when only using gradients from the last cycle.

## 4 Experiments and Results

In the following, we present the experiments and results based on the proposed U-Net feature recycling in the context of challenging medical image segmentation datasets. To reliably compare the proposed method with a strong baseline, we compare to the widely used nnU-Net and implement the proposed method in the same well-tested data pipeline and training framework [6].

### Datasets and Evaluation

To demonstrate the general effectiveness of the proposed U-Net latent feature recycling for medical image segmentation, we test the proposed method on a range of established segmentation tasks covering various dataset sizes, segmentation targets and task difficulties. These datasets include the Kidney Tumor Segmentation (KiTS 2019) dataset [5], the Liver Tumor Segmentation task of the Medical Segmentation Decathlon [1], the Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) challenge [11] and the large-scale Abdominal Multi-Organ Benchmark (AMOS) for versatile medical image segmentation [8]. To also test the proposed method in the context of MRI tasks, we additionally include two tasks of the Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge [10]. To assess segmentation performance, we use the average foreground Dice coefficient (DSC). It measures the overlap between predicted segmentation and ground truth and is popular in medical image segmentation for its ability to handle imbalanced datasets.
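As a reference, a minimal sketch of the average foreground Dice computation on integer label maps is shown below; it is a generic illustration, not the exact evaluation code of the nnU-Net framework that we use.

```python
import numpy as np

def mean_foreground_dice(pred, gt, num_classes):
    """Average foreground Dice between integer label maps (class 0 = background)."""
    scores = []
    for c in range(1, num_classes):
        p, g = (pred == c), (gt == c)
        denom = p.sum() + g.sum()
        if denom == 0:
            continue  # class absent in both prediction and ground truth
        scores.append(2.0 * np.logical_and(p, g).sum() / denom)
    return float(np.mean(scores)) if scores else float("nan")
```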
### Baseline Models To ensure a fair comparison, we implement all models in the nnU-Net framework [6], which is regarded as the state-of-the-art framework for medical image segmentation and serves as the basis for numerous successful segmentation challenge contributions [7]. Naturally, the default nnU-Net is also included as a baseline. We also test against the best-performing model proposed in Wang, Yu et al. [18] which still has a manageable memory demand when applied in the medical image segmentation domain, referred to as DRU. As the DRU was not originally tested in the context of 3D segmentation, we base our implementation on the authors' code and implement the model within the nnU-Net framework. This allows to make fair comparisons w.r.t. performance, memory demands and training time requirements. ### RecycleNet We implement the proposed RecycleNet as an adaptation of nnU-Net, making use of the same, well-tested, 3D full resolution U-Net architectures. For each dataset, we adopt the same feature recycling strategy, recycling the feature maps before the last convolutional layer and feed them back into the encoder after the first convolutional stage, see Section 3. Through this, we can keep the automatic configuration property offered by nnU-Net and roll out the same proposed recycling method for any given segmentation task. For a fair comparison between the proposed method and the baselines, we use the preprocessing pipeline of nnU-Net [6] for all experiments and evaluate on public leaderboards when possible. Where not possible, we employ an identical 5-fold cross validation. We evaluate single full resolution 3D models without ensembling or postprocessing. As the recycling training schedule, we make use of the schedule proposed in Section 3.1, starting with 1 cycle for the first 200 epochs, then gradually increasing the range from which the number of cycles is sampled by 1 every 200 epochs, to a maximum of 3. During inference, we increase the number of cycles to a maximum of 7 cycles to benefit from the property discussed in Section 4.4 and always report metrics using the maximum number of cycles. Table 1 shows the results comparing RecycleNet to the nnU-Net and DRU baselines on 5-fold cross validation as well as public leaderboard held-out test sets, where possible. We show clear performance improvements compared to both baselines across all evaluation datasets but the liver tumor segmentation dataset, where the DRU [18] shows the best cross validation score. We note that although the differences between the individual methods seem small, they can be regarded as substantial improvements in the saturated performance domain that is medical image segmentation. We use a fixed recycling schedule and a fixed number of cycles during inference (determined based on the BTCV cross-validation, employed on all datasets), leaving all nnU-Net training hyperparameters untouched. We suspect that further improvements are possible by fine-tuning the training and recycling schedule on a given target dataset. From these results, we conclude that the proposed latent feature recycling represents an effective way to instill iterative refinement capabilities for even strong image segmentation models. The cost for this performance surplus is an increased training memory demand, as well as increased training and inference time (see Section 4.5). 
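The recycling schedule used above can be summarized in a few lines (a sketch with the hyperparameters stated in this section; epoch counting is illustrative):

```python
import random

def sample_num_cycles(epoch, step=200, max_cycles=3):
    """Recycling schedule during training: one cycle for the first `step` epochs,
    then the upper end of the sampling range grows by one every `step` epochs,
    capped at `max_cycles`."""
    upper = min(1 + epoch // step, max_cycles)
    return random.randint(1, upper)  # N_c sampled uniformly from [1, upper]
```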
### Prediction Convergence of RecycleNet To test the iterative refinement capabilities of the proposed method, we investigate predictions over the number of cycles, both quantitatively and qualitatively. Figure 2 shows the performance in terms of average Dice score in a 5-fold cross validation on the BTCV dataset. We observe a monotonous performance increase when increasing the number of recycling cycles during inference, asymptotically converging towards a saturation DSC value. This suggests an interesting property arising from the introduction of feature recycling. We refer to this property as prediction convergence, where the model naturally con verges towards a refined prediction. Surprisingly, even a single cycle can yield improvements compared with the baseline, as shown in Figure 2. However, this observation is not consistent across datasets. We also note that the network learns to refine predictions even beyond the number of cycles seen during training, while still improving on prior predictions. This property is also reinforced when qualitatively inspecting the segmentation predictions the proposed method creates for single samples, as shown for samples of two segmentation tasks in Figure 3. We see an increased prediction quality beyond the 3 cycles seen during training for both samples. ### Memory and Run Time In this section, we analyze the memory and training epoch time surplus of RecycleNet and the Recurrent U-Net baseline [18]. Since we implemented both in the nnU-Net framework [6], a fair comparison with an identical base network architecture, epoch definition, and underlying hardware can be ensured. Training memory:Figure 4 shows the memory consumption during training on the BTCV dataset w.r.t. the number of iterations. We notice a steep increase in memory consumption when training a Recurrent U-Net on medical image segmentation tasks. This increase stems from two sources: the additional recurrent module employed in the bottleneck of the network architecture and the recurrent training with backpropagation through time. In the context of 3D medical image segmentation, where the network architectures rely on large 3D inputs which lead to a large number of feature activations stored in memory for backpropagation, the latter contributes significantly to the total memory consumption. This leads to more than doubled memory requirements compared to a standard nnU-Net on this task. From this, we conclude that a recurrent U-Net does not efficiently scale to a larger number of recurrent time steps during training. \begin{table} \begin{tabular}{l|c|c||c|c||c|c||c||c||c} \multirow{2}{*}{**Model**} & \multicolumn{2}{c||}{**KiTS**} & \multicolumn{2}{c||}{**BTCV**} & \multicolumn{2}{c||}{**CHAOS**} & \multicolumn{2}{c||}{**AMOS**} & \multicolumn{1}{c}{**Liver T.**} \\ \cline{2-10} & **CV** & **Test** & **CV** & **Test** & **CV(T5)** & **T3** & **T5** & **CV** & **Test** & **CV** \\ \hline nnU-Net & 89.29 & 89.04 & 82.96 & 87.21 & 94.77 & **93.49** & 91.47 & 88.58 & 90.68 & 78,84 \\ DRU [18] & 89.58 & & 82.99 & & 91.62 & & & 88.62 & & **80.36** \\ RecycleNet & **90.26** & **89.11** & **83.75** & **87.80** & **94.92** & 93.48 & **91.85** & **88.77** & **90.82** & 79.88 \\ \end{tabular} \end{table} Table 1: Average DSC scores for 5-fold cross-validation (CV) and public leaderboard held-out test sets (Test). We compare the vanilla nnU-Net [6] with the proposed method and don’t use post-processing on either model’s predictions for a fair comparison. 
T3 and T5 refer to the test sets of two MRI tasks of the CHAOS challenge, selected to test the proposed method on a different modality than CT.

Figure 2: Evaluation of the 5-fold cross validation performance of the proposed method on the BTCV dataset over the number of cycles during inference. The dotted red line represents the default nnU-Net performance, the yellow to orange bars represent the Recycling nnU-Net's performance for different numbers of cycles during inference.

The proposed RecycleNet also shows a memory surplus during training, but due to the formulation described in Algorithm 1, the memory demands saturate and don't grow for more than two recycling cycles. In the experiments on the BTCV dataset, we see an increase in memory consumption of roughly 30%. This surplus varies depending on the dataset at hand. We notice that future implementations should in principle be able to further reduce this memory surplus.

Figure 4: Memory consumption during training on the BTCV dataset. We show the memory consumption for the proposed RecycleNet, as well as the Recurrent U-Net implemented in the nnU-Net framework. The memory consumption during training is displayed w.r.t. the number of iterations (recycling cycles or recurrent timesteps). While the proposed RecycleNet (blue) does come at the cost of a memory surplus, this surplus stays constant irrespective of the number of recycling cycles. Due to the recurrent nature of the Recurrent U-Net [18] (light brown), the memory costs quickly become prohibitively large when increasing the number of recurrent timesteps during training.

Figure 3: Qualitative showcase of refined segmentations for the liver tumor label in the Liver Tumor Segmentation Task [1] (left) and the gallbladder label in the BTCV dataset [11] (right). The initial prediction (light yellow) is iteratively refined (shades of yellow), converging towards an improved segmentation (brown). The predictions move closer to the ground truth label (blue) when increasing the number of cycles. Intermediate cycles are omitted for visual clarity.

Training epoch time: Considering the average training epoch times, we again report numbers on the BTCV dataset as a benchmark. Due to the 3D nature of medical image segmentation tasks, we note that data loading and augmentation can often become bottlenecks during training. However, the nnU-Net framework, along with current hardware, enables alleviating this bottleneck, thus allowing for an unbiased measure of realistic epoch time changes stemming from the compared methods. While a typical nnU-Net epoch (defined as 250 mini-batch updates with 2 samples each) on the BTCV dataset takes 74 seconds, the Recurrent U-Net trained using 2 recurrent timesteps requires 165 seconds on average for each epoch. This is more than a 100% increase in training time. The proposed RecycleNet, however, comes with a much lower training time increase. While the later stage of the proposed recycling schedule leads to an epoch time increase of roughly 30% during training (up to 96 seconds compared to nnU-Net's 74 seconds), this epoch time increase is non-existent during the first 200 epochs, where the recycling schedule only uses 1 cycle. Between epochs 200 and 400 of the proposed schedule, we measure an epoch time increase of roughly 15%. When training for 1000 epochs, this leads to a total training time increase of about 20%, with an average epoch time of 89.4 seconds. In terms of memory and training time surplus, we see a non-negligible increase for the proposed RecycleNet.
This increase, however, is much lower compared to the Recurrent U-Net baseline, rendering the RecycleNet much more accessible in the context of 3D medical image segmentation. We note that the inference time is also increased, since for each sample, \(N_{c}\) forward passes have to be computed. However, in the context of time-consuming preprocessing and resampling, these additional forward passes only result in a minor inference time increase (roughly by a factor of 2 when increasing the number of cycles by a factor of 7). ### Ablation of Training Schedules In this section, we investigate how different recycling training schedules affect the performance in the case of multi-organ segmentation on the BTCV dataset. Figure 5 shows differences in DSC score when compared with nnU-Net [6] in a 5-fold cross-validation. We show that while even using a fixed number of cycles during training (blue) can increase segmentation performance w.r.t. the baseline, this approach does not generalize well to a larger number of cycles during inference. Incrementally increasing the number of cycles over the course of the training process (green) does lead to a similar convergence property as discussed in Section 4.4 when increasing the number of inference cycles beyond the maximum training cycles. However, due to the missing stochasticity in the number of cycles seen during training, the network forgets how to handle lower recycling numbers and performs poorly in those cases during inference. To ensure the observed convergence property described in Section 4.4, both the sampling and the incremental schedule components are important to achieve a reliable and strong prediction accuracy out of the box when confronted with unseen segmentation tasks. This quality is crucial for a wide applicability in the context of medical image segmentation. Figure 5: Ablation of different recycling training schedules on the BTCV dataset. The difference in DSC (compared with nnU-Net) is shown w.r.t. the number of cycles used during inference. A static training schedule of 2 cycles is shown in blue, while an increasing training schedule (without sampling) is shown in green. The proposed schedule (see Section 3.1), is marked and displayed in red. ## 5 Conclusion In this work, we propose a novel method for instilling iterative decision refinement capabilities into neural networks in a process we call latent feature recycling. This approach relies on minimal assumptions w.r.t. the network architecture and naturally leads to performance improvements, showing a convergence towards refined decisions. We demonstrate these capabilities using medical image segmentation as a showcase. In this context, latent feature recycling can improve on even strong models with a simple and robust recycling schedule. We observe that this schedule leads to iterative refinement capabilities, allowing to trade inference time for improved performance. These refinement capabilities even extend to a larger number of cycles than seen during training. We leave it for future work to explore the limits of this phenomenon for applications where even the smallest improvements in performance are worthwhile. As this approach is not limited to U-Nets or medical image segmentation, we expect further work to leverage latent feature recycling on a variety of neural network architectures and tasks. We note, however, that the proposed method comes with an additional computational cost through increased numbers of forward passes during training and inference. 
Compared with other iterative refinement approaches, we observe a much lower memory and training time surplus, making the RecycleNet a promising candidate e.g. in safety-critical applications where the additional memory and time requirements are not the limiting factor. However, this increased inference time potentially limits the application in time-critical applications. The proposed method also bears a resemblance to latent diffusion models [13], in that an iterative refinement process takes place in latent space, leading to a convergent refinement property during inference. Contrary to latent diffusion models, latent feature recycling covers multiple refinement steps also during training, whereas latent diffusion models don't allow multistep feedback during training. We leave it for future work to explore whether such multistep feedback is also beneficial in the denoising diffusion process. Acknowledgment:This work was in part supported by the Helmholtz Association under the joint research school HIDSS4Health - Helmholtz Information and Data Science School for Health. Part of this work was funded by Helmholtz Imaging, a platform of the Helmholtz Incubator on Information and Data Science.
2301.13344
Anomalous compressible mode generation by global frame projections of pure Alfven mode
Alfven wave is the single most important physical phenomenon of magneto-hydrodynamic turbulence and has far-reaching impact to almost all studies related to astrophysical magnetic field. Yet the restoration of the Alfven wave fluctuations from a given magnetic field, aka the local Alfven wave problem, is never properly addressed in literature albeit its importance. Previous works model the Alfven wave fluctuation as the perturbation along a straight-line, constant magnetic field. However, Lazarian & Pogosyan (2012) suggested that the decomposition of Alfven wave along a straight line, aka. the global frame decomposition, has a factor of discrepancy to the true local Alfven wave fluctuation. Here we provide a geometric interpretation on how the local Alfven wave is related to the global frame through the use of vector frame formulation. We prove both analytically and numerically that the local frame Alfven wave is an orthogonal transformation of that of the global frame and related by the local Alfvenic Mach number. In other words, when we observe Alfven wave in the global frame of reference, some of the Alfven wave will be mistaken as compressible waves. The importance of frame choices have a far-reaching impact to the analytical studies of MHD turbulence. Combining the frame formalism and the new techniques we can have accurate measurement to some of the fundamental turbulence properties like the inclination angle of mean magnetic field relative to the line of sight.
Ka Ho Yuen, Huirong Yan, Alex Lazarian
2023-01-31T00:28:35Z
http://arxiv.org/abs/2301.13344v1
# Anomalous compressible mode generation by global frame projections of pure Alfven mode ###### Abstract Alfven wave is the single most important physical phenomenon of magneto-hydrodynamic turbulence and has far-reaching impact to almost all studies related to astrophysical magnetic field. Yet the restoration of the Alfven wave fluctuations from a given magnetic field, aka the local Alfven wave problem, is never properly addressed in literature albeit its importance. Previous works model the Alfven wave fluctuation as the perturbation along a straight-line, constant magnetic field. However, Lazarian & Pogosyan (2012) suggested that the decomposition of Alfven wave along a straight line, aka. the global frame decomposition, has a factor of discrepancy to the true local Alfven wave fluctuation. Here we provide a geometric interpretation on how the local Alfven wave is related to the global frame through the use of vector frame formulation. We prove both analytically and numerically that the local frame Alfven wave is an orthogonal transformation of that of the global frame and related by the local Alfvenic Mach number. In other words, when we observe Alfven wave in the global frame of reference, some of the Alfven wave will be mistaken as compressible waves. The importance of frame choices have a far-reaching impact to the analytical studies of MHD turbulence. Combining the frame formalism and the new techniques we can have accurate measurement to some of the fundamental turbulence properties like the inclination angle of mean magnetic field relative to the line of sight. keywords: turbulence - ISM: magnetic fields - ISM: structure -- galaxies: ISM ## 1 Introduction Turbulence is ubiquitous in astrophysical environment and the interstellar gases are permeated by turbulent magnetic fields. Magneto-hydrodynamic (MHD) turbulence plays a very important role in various astrophysical phenomena (see Armstrong et al. (1995); Chepurnov et al. (2010); Biskamp (2003); Elmegreen & Scalo (2004); McKee & Ostriker (2007)), including star formation (see McKee & Ostriker (2007); Fisel et al. (2016)), propagation and acceleration of cosmic rays (see Chandran (2000); Yan & Lazarian (2002); Farmer & Goldreich (2004); Lazarian (2016)), as well as regulating heat and mass transport between different ISM phases (Green (1993); Deshpande et al. (2000); Dickey et al. (2001); Lazarian & Pogosyan (2004, 2006); Khalil et al. (2006); Begum et al. (2006); Padoan et al. (2006) see Draine (2009, 2011) for the list of the phases). MHD turbulence is usually highly compressible, and has been thoughtfully studied by a number of authors in the community (e.g. Kowal et al. (2007)). However, the compressibility of the turbulence adds additional difficulty in the understanding of how the three fundamental MHD modes (namely Alfven, slow and fast modes) would behave in various astrophysical phenomena, each carrying different spectra and anisotropies. For instance, it is believed that the Alfven mode plays a central role in making the cold neutral media aligned with the magnetic field (Lazarian et al., 2018) and controls the transport of heat and particles across magnetic fields (Narayan & Medvedev, 2001; Lazarian, 2006; Yan & Lazarian, 2008; Maiti et al., 2021). In comparison, fast modes play an important role in the scattering and acceleration of cosmic rays (Yan & Lazarian, 2002, 2004; Cho & Lazarian, 2005; Lazarian & Pogosyan, 2008; Brunetti & Lazarian, 2007). 
The modes composition strongly depends on the way of driving Makwana & Yan (2020). It is therefore essential to have a handy way in decomposing the three fundamental MHD modes in numerical analysis. A notable development is the statistical mode decomposition developed by Cho & Lazarian (2002, 2003, latter hereafter CL03), which allows one to obtain the realization of the three fundamental MHD modes in numerical simulations by considering a perturbation along a locally strong magnetic field direction. The realization of MHD modes allowed the community to validate the theory of MHD turbulence (Gol dreich & Sridhar (1995) hereafter GS95, see also Lazarian & Vishniac (1999); Cho & Vishniac (2000); Maron & Goldreich (2001); Lithwick & Goldreich (2001); Cho & Lazarian (2002, 2003)) through numerical simulations. In particular, the scaling relation of compressible modes were first verified through the realization of MHD modes using the mode decomposition algorithm developed by CL03. The realization of MHD modes also excites the development of different techniques in studying MHD turbulence in observations, including the Velocity Gradient Technique (VGT, Yuen & Lazarian (2017, 2018)) which uses the anisotropy of different modes in retrieving the magnetic field directions in spectroscopic data, and also the Synchrotron Polarization Analysis (SPA, Zhang et al. (2020)) which utilizes the properties of the projected statistics in predicting the dominance of Alfven or compressible modes in observational synchrotron data, as well as detailed analysis of solar wind turbulence (e.g. Zhao et al., 2021, 2022). However, Goldreich & Sridhar (1995) model of MHD turbulence is of centre importance in the modern theory of MHD turbulence. The latter is employs the concept of "local frame of reference" that was added to the theory later (Lazarian & Vishniac, 1999; Cho & Vishniac, 2000). This means that the eddies, which are usually elliptical in shape, are aligned to the local magnetic field rather than the mean magnetic field. As pointed out by Kowal & Lazarian (2010), the decomposition of CL03 is a global frame decomposition, as opposed to the local frame MHD theory stressed in the works that followed the original GS95 study (Lazarian & Vishniac, 1999). As described in Fig.1, when one considers a different volume, the realization of the three fundamental modes will be different due to the change of the mean magnetic field directions under the CL03 decomposition algorithm. The difficulty of obtaining the statistics of three modes in a localized manner has been attempted, including abandoning the realization of modes but focusing on the structure functions Beresnyak et al. (2005), decomposing the MHD quantities into linear combination of fundamental localized patches before performing the CL03 decomposition (Kowal & Lazarian, 2010), or the introduction of the frame changing parameters in the framework of correlation functions (Lazarian & Pogosyan, 2012). Yet, how the three fundamental modes are realized in the local system of reference is still an unsolved problem for numerical community. In this paper, we explore how the Alfven and compressible modes in the local system of reference are expressed globally. In SS2 we review the CL03 method and its possible improvements. In particular, in SS3 we discuss about the generation of "compressible waves signature" due to the wrong choice of local frame of reference. 
From SS4 to SS5, we describe a few applications that utilize the concept of Alfven leakage, namely the applications of the Synchrotron Polarization Analysis Technique to regimes with strong Faraday rotation (SS4) and the determination of the line of sight angle \(\gamma\) (SS5). In SS6 we discuss about the possible impacts of our method and the caveats of our work. In SS7 we conclude our paper. ## 2 Mode Decomposition ### Review of the MHD mode decomposition methods In this section we review the underlying assumptions of the mode decomposition method as introduced by CL03 and the development since then. In CL03 they consider a volume \(d\Omega\) in which the perturbation of magnetic field is small compared to the mean field \(\delta B(d\Omega)<\langle B\rangle\), so does the density fluctuations \(\delta\rho/\langle\rho\rangle<1\). Fig.1 shows how the volume \(d\Omega\) is selected. Readers should be careful that once the volume is selected the mean magnetic field direction \(\hat{\lambda}\) is also defined respectively. In this scenario, the small perturbation in the presence of a strong mean magnetic field will provide a linearized set of MHD equations in which three non-trivial eigenvectors would be found. In this localized box, the Alfven, slow and fast mode eigenvectors are1: Footnote 1: However, recent publication from Gan et al. (2022) suggests that a significant portion of the projected spectral powers are not in the form of propagating waves, but fluctuations with miniature frequencies. The nature of the non-wave fluctuations as dubbed in Gan et al. (2022) requires further clarifications from the theory of MHD turbulence. See Beresnyak & Lazarian (2019); Fu et al. (2022); Schekochihin (2022). \[\begin{split}\zeta_{A}(\hat{\mathbf{k}},\hat{\lambda})& \propto\hat{\mathbf{k}}\times\hat{\lambda}\\ \zeta_{S}(\hat{\mathbf{k}},\hat{\lambda})&\propto(-1+ \alpha-\sqrt{D})(\hat{\mathbf{k}}\cdot\hat{\lambda})\hat{\lambda}\\ &+(1+\alpha-\sqrt{D})(\hat{\lambda}\times(\mathbf{k}\times\hat{ \lambda}))\\ \zeta_{F}(\hat{\mathbf{k}},\hat{\lambda})&\propto(-1+ \alpha+\sqrt{D})(\mathbf{k}\cdot\hat{\lambda})\hat{\lambda}\\ &+(1+\alpha+\sqrt{D})(\hat{\lambda}\times(\mathbf{k}\times\hat{ \lambda}))\end{split} \tag{1}\] where \(\alpha=\beta\gamma/2\), \(D=(1+\alpha)^{2}-4\alpha\cos^{2}\theta_{\lambda}\), \(\cos\theta_{\lambda}=\hat{\mathbf{k}}\cdot\hat{\lambda}\), plasma \(\beta\equiv P_{gas}/P_{mag}\) measures the compressibility and \(\gamma=\partial P/\partial\rho\) is the polytropic index of the adiabatic equation of state. The presence of \(\hat{\mathbf{k}}\) suggests that the direction of the three mode vectors are changing as \(\mathbf{k}\) changes. In this scenario, the perturbed quantities, say for the velocity fluctuations \(\mathbf{v}_{1}=\mathbf{v}-\langle\mathbf{v}\rangle\) can be written as: \[\begin{split}\mathbf{v}_{1}(\mathbf{r})=\int d^{3}\mathbf{k}e^{ \imath\mathbf{k}\cdot\mathbf{r}}\sum_{X\in A,S,F}F_{0,X}(\mathbf{k})F_{1,X}( \mathbf{k},\hat{\lambda})C_{X}\zeta_{X}(\hat{\mathbf{k}},\hat{\lambda})\end{split} \tag{2}\] for some power spectrum \(E_{v}(k)=F_{0}^{2}=k^{-n/2}\)(Yuen et al., 2022), some anisotropy weighting function \(F_{1}\) and the mode constants \(C_{X}\) denoting the relative weight of the three modes. Notice that the decomposition (Eq.(1)) is a **global** decomposition method since the magnetic field fluctuations _within the volume \(d\Omega\) is not considered_ when computing the eigenvectors of the three modes. 
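For concreteness, Eq. (1) can be evaluated numerically for a single wavevector as sketched below (a schematic NumPy illustration; normalization conventions and the degenerate geometry \(\hat{\mathbf{k}}\parallel\hat{\lambda}\) are not handled):

```python
import numpy as np

def mode_unit_vectors(k_hat, lam_hat, beta, gamma=5.0 / 3.0):
    """Alfven, slow and fast displacement unit vectors of Eq. (1) for a unit
    wavevector k_hat and the global mean-field direction lam_hat."""
    alpha = beta * gamma / 2.0
    cos_t = np.dot(k_hat, lam_hat)
    D = (1.0 + alpha) ** 2 - 4.0 * alpha * cos_t ** 2
    k_perp = np.cross(lam_hat, np.cross(k_hat, lam_hat))  # lambda x (k x lambda)
    zeta_A = np.cross(k_hat, lam_hat)
    zeta_S = (-1 + alpha - np.sqrt(D)) * cos_t * lam_hat + (1 + alpha - np.sqrt(D)) * k_perp
    zeta_F = (-1 + alpha + np.sqrt(D)) * cos_t * lam_hat + (1 + alpha + np.sqrt(D)) * k_perp
    return [v / np.linalg.norm(v) for v in (zeta_A, zeta_S, zeta_F)]

# A Fourier amplitude v_k is then decomposed as, e.g.,
# c_A, c_S, c_F = (np.dot(v_k, z) for z in mode_unit_vectors(k_hat, lam_hat, beta))
```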
One of the most important consequences of performing global decomposition is the loss of the GS95 scaling for small \(k\). In fact, Cho & Lazarian (2002) (see also Lazarian & Pogosyan (2012)) pointed out that in the global system of reference the anisotropic scaling is scale independent, meaning that the elongation of turbulence eddies is fixed and does not change as the eddies cascade. Another important consequence of the concept of \(d\Omega\) is that, when one changes the sampling volume, e.g. from the volume in blue to that of yellow or orange in Fig.1, the weighting of the three modes will also change due to the change of the mean direction of magnetic field from volume to volume. Notice that the selection of the volume \(d\Omega\) has to fulfill the conditions as assumed in CL03: Both the sonic and Alfven Mach number within the volume must be smaller than unity. Notice that if the volume is smaller than the volume defined by the correlation length of the turbulence, the dispersion of the turbulence observables will be scaled as a function of distance according to their respective turbulence statistics, meaning that if \(\rho,v\) follows GS95, \[\delta\rho^{2}(\mathbf{r}) \propto r^{2/3} \tag{3}\] \[\delta v^{2}(\mathbf{r}) \propto r^{2/3}\] To address the issue of the locality, the community has explored a number of ways to include the local fluctuations of magnetic field during the calculation of statistics of MHD modes. For instance, one of the most notable ways of obtaining the statistics of MHD modes is to compute the local structure functions Beresnyak et al. (2005). The mathematical expression of the 3D structure function of the turbulence variable \(v\) in the local frame of reference is given by: \[SF\{\mathbf{v}\}(\mathbf{r})=\Big{\langle}\left((\mathbf{v}(\mathbf{r}^{ \prime}+\mathbf{r})-\mathbf{v}(\mathbf{r}^{\prime}))\cdot\hat{\lambda}( \mathbf{r},\mathbf{r}^{\prime})\right)^{2}\Big{\rangle}_{\boldsymbol{r}^{ \prime}} \tag{4}\] where \[\hat{\lambda}(\mathbf{r},\mathbf{r}^{\prime})=\frac{\mathbf{B}(\mathbf{r}^{ \prime}+\mathbf{r})+\mathbf{B}(\mathbf{r}^{\prime})}{|\mathbf{B}(\mathbf{r}^{ \prime}+\mathbf{r})+\mathbf{B}(\mathbf{r}^{\prime})|} \tag{5}\] The anisotropy computed throughout this manner is scale dependent and exhibit the GS95 scaling \(r_{\parallel}\propto r_{\perp}^{2/3}\). The use of the local structure function correctly recovers the GS95 statistics, yet it is not possible to obtain the realization of the modes in this manner, meaning that further study of the features of the modes, e.g. how does the mode look like when projected, are prohibited when using the structure functions. Another important way of improving the CL03 is to rewrite the turbulence variables into the linear sum of small localized patches through the wavelet transform (Kowal and Lazarian, 2010). Physically, they are looking for specific functional forms obeying the orthogonality requirement and represent the volumes as shown in Fig.1.By considering the set of orthogonal wavelets \(\phi\), one can write the velocity field \(\mathbf{v}(\mathbf{r})\) as: \[\tilde{\mathbf{v}}(\mathbf{w};a)=a^{-3/2}\int d^{3}x\phi(\frac{\mathbf{r}- \mathbf{w}}{a})\mathbf{v}(\mathbf{r}) \tag{6}\] where \(\phi\) is the set of wavelet functions. For (Kowal and Lazarian, 2010) they select the 12-tap Daubechies wavelet. To perform mode decomposition, they first convert the velocity field into the linear combinations of wavelets, and then proceed with the procedures of CL03 for the wavelet transformed variable. 
The contributions for all wavelets for a specific mode are added before the inverse Fourier transform takes place. Notice that the wavelet functions is simply a mathematical construction that may contain spatially dispatched regions (e.g. D4 and D12 of the Daubechies wavelet), for which taking the statistical calculations within the wavelet might not be physically justifiable. Nevertheless, the improved mode decomposition method proposed by Kowal and Lazarian (2010) still retrieve seemingly the correct GS95 statistics from the simulations. ### The locality of the mode decomposition problem due to magnetic field wandering effect In realistic MHD simulations the magnetic field lines are fluctuating within any selected volume, which is named wandering effect as in CL03. However the mode decomposition algorithm available in the community had not considered any of these wandering effect, which makes the estimation of modes be rather unrealistic for larger \(M_{A}\). To model the additional effect when decomposing the modes in the global frame of reference, Lazarian and Pogosyan (2012) model the _magnetic field_ correlation function as the linear combination of the isotropic tensor \(\hat{T}_{E}\) and the axis-symmetric tensor \(\hat{T}_{F}\)(See Yan and Lazarian, 2004 also our Appendix for a detailed mathematical construction). In general the direct tensor of the magnetic field in the Fourier space at a given wavevector \(\mathbf{k}\) in the local frame of reference can be written as: \[\hat{H}_{i}\hat{H}_{j}\propto E(\mathbf{k})\hat{T}_{E,ij}+F(\mathbf{k})\hat{T} _{F,ij} \tag{7}\] The transformation from the local to global frame as modelled by LP12 is: \[\hat{T}_{E,local}\rightarrow\hat{T}_{E,global} \tag{8}\] \[\hat{T}_{F,local}\to W_{I}\hat{T}_{E,global}+W_{L}\hat{T}_{F,global}\] where \(W_{I,L}\) are two modelling constants that are functions of \(M_{A}\) and also \(\mathbf{k}\). ## 3 Alfven leakage ### Description of the problem Since \(\hat{T}_{E}=\hat{T}_{C}+\hat{T}_{A}\) and \(\hat{T}_{F}=\hat{T}_{C}\) (See the Appendix A2) one can actually write the magnetic field in the Fourier space in the global frame of reference using vector notations due to the orthogonality of the base vectors: \[\tilde{H}(\mathbf{k})=C_{C}\zeta_{C}+C_{A}\zeta_{A} \tag{9}\] where \[\zeta_{C} =\frac{(\hat{\mathbf{k}}\times(\hat{\mathbf{k}}\times\hat{ \lambda}))_{i}}{|\hat{\mathbf{k}}\times\hat{\lambda}|} \tag{10}\] \[\zeta_{A} =\frac{(\hat{\mathbf{k}}\times\hat{\lambda})_{i}}{|\hat{\mathbf{k} }\times\hat{\lambda}|}\] (See Appendix). The modelling of LP12 simply means that, assuming if in the local frame of reference the magnetic field only has the Alfven component \(\tilde{H}(\mathbf{k})=\sqrt{E(\mathbf{k})}C_{0}\zeta_{A}\), then the transformation from local to global reference frame simply means a vector projection: \[\tilde{H}(\mathbf{k})=\sqrt{E(\mathbf{k})}C_{0}\zeta_{A}\xrightarrow{\text{ global}}\sqrt{E(\mathbf{k})}(C_{C}\zeta_{C}+C_{A}\zeta_{A}) \tag{11}\] where \(C_{C}=C_{0}\sin(\theta_{f})\), \(C_{A}=C_{0}\cos(\theta_{f})\) for some \(\theta_{f}\) that we will explore in the coming subsection. For Alfven waves, \(E(\mathbf{k})=-F(\mathbf{k})\), and the relation between \(C_{C,A}\) and \(W_{I,L}\) can be easily derived: \[C_{A}^{2} =1-W_{I} \tag{12}\] \[C_{C}^{2} =1-W_{I}-W_{L}\] Notice that when \(M_{A}\ll 1\), there is no magnetic field wandering effect. In this limit, \(C_{A}(M_{A}\to 0)=1\), \(C_{C}(M_{A}\to 0)=0\). The vector notation (cf. 
Eq.(9)) allows us to think of this problem _geometrically_ and relates to a very important physical effect that is mentioned in both Yuen et al. (2018) and more recently Yuen and Lazarian (2020): the **Alfven leakage** effect. Alfven leakage describes the effect that the locally Alfven component of the turbulent variables is projected as a linear combination of Alfven and non-Alfven components in the presence of a curved magnetic field averaged over the selected volume (the **local mean magnetic field**). In Yuen and Lazarian (2020) we consider the effect that gravitational forces create extra compression of the Alfven waves, and thus some of the Alfven waves from self-gravitating systems are transferred into compressible components. In fact, in the presence of non-trivial magnetic field structures, field lines in any volume are non-trivially bent due to magnetic field wandering (Yuen and Lazarian, 2020). A similar effect has been considered in Maron and Goldreich (2001) but was corrected by Cho and Vishniac (2000) as the "rotation effect", by recognizing that the global frame anisotropy axis ratio is a function of \(M_{A}\). However, the actual relation between the local and global frame of reference has yet to be explored.

Figure 1: The concept of mode decomposition in CL03: (1) By selecting a volume \(d\Omega\), a local mean magnetic field direction \(\hat{\lambda}\) is defined for later decomposition. (2) All the wavevectors \(\mathbf{k}\) contained in this volume \(d\Omega\) are used for the decomposition. (3) For each \(\mathbf{k}\) there is a local reference frame (see Fig.A1) that decomposes the magnetic field into the three eigenmodes. The change of the selected volume \(d\Omega\) will lead to a different mean field vector \(\hat{\lambda}\). As a result, the local decomposition result is a function of both \(\hat{\lambda}\) and the wavevector \(\mathbf{k}\).

Figure 2: An illustrative figure showing how the _Alfven Leakage_ phenomenon happens during the mode decomposition process. Here the red line represents the B-field line, and point X is the origin of the volume \(d\Omega\), which is represented by the dashed orange circle. The vector \(\hat{\lambda}\) represents the mean field averaged over \(d\Omega\). For a given point within \(d\Omega\), assuming the k-vector points inside the plane, the local Alfven wave unit vector (green) makes an angle with the Alfven wave unit vector defined by \(\hat{\lambda}\). This effect will be stronger if the magnetic field fluctuations within \(d\Omega\) are larger, and vice versa.

Fig.2 gives a pictorial illustration of how the Alfven leakage happens during mode decomposition. For a given magnetic field \(H_{i}(\mathbf{r})\), its Fourier transform \(\tilde{H}_{i}(\mathbf{k})\) can be written as the linear sum of Alfven and compressible components. To simplify our argument, we consider a pure Alfven wave B-field in the local frame of reference, for which \(\tilde{H}_{i}\) is simply proportional to the local Alfven vector \(\zeta_{A,local}\), which is represented by the green vector. However, if the volume \(d\Omega\) is selected (the orange dashed circle), the mean field is defined in \(d\Omega\). In this case, if we consider a pure Alfven mode magnetic field in the local frame of reference as we show in Fig.2, the projection of the Alfven mode in the global frame of reference will be a linear combination of Alfven and compressible modes as written in Fig.2.
By selecting a k-vector that points into the plane, the local Alfven wave unit vector (green), which is defined by the cross product between the k-vector and the _local_ magnetic field direction, makes an angle with the Alfven wave unit vector defined by \(\hat{k}\times\hat{\lambda}\); we will name that angle \(\delta\theta_{d\Omega}\). The projection effect causes artificial compressible modes to be detected within \(d\Omega\). It is apparent that if the magnetic field line is aligned with the mean field vector \(\hat{\lambda}\) in \(d\Omega\), then there are no artificial compressible modes due to the projection effect. We call this effect _Alfven Leakage_. This effect is artificial and does not involve any change of the cascade laws or anisotropies.

### Modelling of the Alfven leakage problem

We can further model the leakage phenomenon through the use of \(\delta\theta_{d\Omega}\). From Fig. 2 we see that \(\delta\theta_{d\Omega}\) is a measure of the _average_ magnetic field angle dispersion in the selected volume. We can postulate that the dispersion of angles is related to the Alfven Mach number measured within \(d\Omega\):

\[\delta\theta_{d\Omega}\sim M_{A,d\Omega} \tag{13}\]

Notice that the dispersion of the magnetic field angle can scale up to \(M_{A}\approx 2\), as shown in the appendix of Lazarian et al. (2022b). Notice also that, according to the definition of the Alfven Mach number and the self-similarity of the turbulence cascade, any localized calculation of a statistical observable in a turbulent system can be approximated by a scaling relation through the observable's structure function. For the Alfven Mach number, the corresponding observable is \(\delta B/B\), so that

\[\begin{split} M_{A,d\Omega}&\approx\left(\frac{1}{2B^{2}}\langle\left(\delta B(\mathbf{r}+\mathbf{r}^{\prime})-\delta B(\mathbf{r}^{\prime})\right)^{2}\rangle_{\mathbf{r}^{\prime}}\right)^{1/2}\\ &\sim M_{A,global}\left(\frac{r}{L_{inj}}\right)^{1/3}\end{split} \tag{14}\]

where the last relation comes from the fact that the magnetic field fluctuation is small and Kolmogorov-like: \(\langle\delta B^{2}\rangle\sim r^{2/3}\). We can then relate \(C_{C,A}\) to Eqs. (13) and (14).

### Numerical tests

We can perform a very simple numerical test to illustrate the behavior of the Alfven leakage, which can be done by the following steps:

1. We select the local frame vectors \(\zeta_{A,C}\), which can be _approximated_ by selecting a very small volume in the numerical simulations and using Eq. 9. We shall call this volume 1.
2. We then perform the Alfven wave decomposition using \(\zeta_{A}\) from the simulations so that we can set \(C_{A,local}=1\) in all our cases (see Fig. 1).
3. We then select a larger volume with size \(\sigma\). According to §2, if the effect of Alfven leakage is indeed present, then \(C_{C}=\sin M_{A}\sim\sin(C\sigma^{1/3})\), where \(C=M_{A,global}L^{-1/3}\). In this new volume, another pair of \(\zeta_{A,C}\) is defined. We shall call this volume 2.
4. We plot the quantity \(C_{C}^{2}=1-(\langle\hat{B}_{Alf,1}\cdot\hat{B}_{Alf,2}\rangle^{2})\). Notice that we only compare the directions of \(B\) within volume 1, since the comparison is only meaningful there.
5. There are three possible outcomes of this test:
   1. If there is no such leakage effect, \(C_{C}\) should be a constant zero, as we have already removed all compressible components in the previous step.
   2. If there is indeed a non-zero \(C_{C}\) but our model in the previous section is incorrect, then there should be no dependence of the form \(C_{C}\sim\sin(C\sigma^{1/3})\).
   3. If the Alfven leakage effect indeed exists, we expect \(C_{C}\sim\sin(C\sigma^{1/3})\).

Fig. 3 shows how \(C_{C}\) behaves as a function of the size of the volume \(\sigma\); we plot only the regime in which \(\sigma\) is not in the dissipation range. Notice that here we intentionally pick simulation cubes from various numerical codes and with different conditions to show that the leakage effect is universal and rather independent of the choice of MHD code. For each figure, we draw the predicted proportionality \(C_{C}^{2}\sim\sin^{2}(C\sigma^{1/3})\) as the red dashed curve. As we can see from these three subplots, our prediction \(C_{C}^{2}\sim\sin^{2}(C\sigma^{1/3})\), which came analytically from the previous sections and not from a fitting algorithm, follows the trend reasonably well. This indicates that (1) the Alfven leakage effect actually exists even in incompressible-mode turbulence, (2) the leakage is smaller when one goes to smaller scales, and (3) our postulate that \(\delta\theta_{d\Omega}\sim M_{A,local}\) is a good approximation. These three consequences indicate that the mode decomposition as proposed by CL03 requires an additional update addressing the contribution of large-scale magnetic field wandering to the relative composition of modes within the volume. As a direct consequence of this section, the \(W_{I,L}\) constants introduced in Lazarian and Pogosyan (2012), expressed as functions of \(M_{A,local}\) (\(\ll 1\)) through Eq. (8), are:

\[\begin{split} W_{I}&=1-C_{A}^{2}\propto M_{A}^{2}\\ W_{L}&=C_{A}^{2}-C_{C}^{2}\propto 1-2M_{A}^{2} \end{split} \tag{15}\]

## 4 Application (I): Advancement of the SPA Technique

### Review on the SPA technique

In Zhang et al. (2020) the authors discussed a novel technique called SPA for obtaining the modes from synchrotron emission maps. They argue that, since the tensor structures of Alfven and compressible waves enter the synthesis of the Stokes parameters differently, the signature of the dominance of the modes is left in the "signature parameter". Let us first recap the formulation of Zhang et al. (2020) and their main results (see the Method section of Zhang et al. 2020). To start with, the authors consider the emissivity of the synchrotron emission _under a locally defined reference frame_:

\[\begin{split}\epsilon_{xx}&=(I+Q)/2\\ &=B_{0,\perp}^{2}\cos^{2}\phi_{s}+2B_{0,\perp}\cos\phi_{s}B_{i}\hat{e}_{xi}+(B_{i}\hat{e}_{xi})^{2}\end{split} \tag{16}\]

where our symbols follow the notation of Zhang et al. (2020) and we employ the Einstein summation convention. Very importantly, the angle \(\phi_{s}\) is the angle between the polarization vector and the currently defined x-axis of the local magnetic reference frame. In Zhang et al. (2020) they select a small area and compute the change of \(\epsilon_{xx}\) as the reference frame rotates. They recognize that the variance of \(\epsilon_{xx}\) contains factors that could reflect the relative contributions of MHD modes.
The variance of the emissivity contains the linear term (the first term) and the quadratic term depending on the power of the tensor \(\hat{T}\): \[\begin{split}& s_{xx}=\langle\epsilon_{s}^{2}\rangle\propto(2B_{0, \perp}\cos\phi_{s})^{2}\int d\mathbf{k}F^{2}(\mathbf{k})e^{i\mathbf{k}\cdot \mathbf{r}}\hat{e}_{xi}\hat{e}_{xi}\hat{T}_{ij}(\mathbf{\hat{k}})\\ &+2\left(\int d\mathbf{k}F^{2}(\mathbf{k})\hat{e}_{xi}\hat{e}_{xi }\hat{T}_{ij}(\mathbf{\hat{k}})\right)^{2}\end{split} \tag{17}\] Zhang et al. (2020) pointed out that the linear term, namely the **signature parameter**: \[s_{xx}(\phi_{s})\propto(2\cos\phi_{s})^{2}\int d\mathbf{k}F^{2}(\mathbf{k})e ^{i\mathbf{k}\cdot\mathbf{r}}\hat{e}_{xi}\hat{e}_{xi}\hat{T}_{ij}(\mathbf{\hat {k}}) \tag{18}\] can be expressed in the following format with some constants \(a_{xx},b_{xx}\) defined according to MHD theory (See Zhang et al. 2020 for details): \[s_{xx}(\phi_{s})=(a_{xx}\sin^{2}\phi_{s}+b_{xx})\cos^{2}\phi_{s},\quad\phi_{s} \in[0,\pi] \tag{19}\] where the classification parameter \(r_{xx}\) is defined as \[r_{xx}=\frac{a_{xx}}{b_{xx}} \tag{20}\] In practice, we need to compute the parameter \[s_{xx,tot}=\frac{Var(\epsilon_{xx})}{4\langle\epsilon_{xx}\rangle} \tag{21}\] This term contains both the "linear" term (Eq.19) and the "quadratic" term as defined in Zhang et al. (2020). \begin{table} \begin{tabular}{c c c c c} Model & \(M_{S}\) & \(M_{A}\) & \(\beta=2M_{A}^{2}/M_{C}^{2}\) & Resolution \\ \hline \hline **ZEUS-MP Simulations** & & & & \\ Ms0.92Ma0.09 & 0.92 & 0.09 & 0.02 & \(480^{3}\) \\ Ms0.98Ma0.32 & 0.98 & 0.32 & 0.22 & \(480^{3}\) \\ Ms0.93Ma0.94 & 0.93 & 0.94 & 2.0 & \(480^{3}\) \\ huge-0 & 6.17 & 0.22 & 0.0025 & \(792^{3}\) \\ huge-1 & 5.65 & 0.42 & 0.011 & \(792^{3}\) \\ huge-2 & 5.81 & 0.61 & 0.022 & \(792^{3}\) \\ huge-3 & 5.66 & 0.82 & 0.042 & \(792^{3}\) \\ huge-4 & 5.62 & 1.01 & 0.065 & \(792^{3}\) \\ huge-5 & 5.63 & 1.19 & 0.089 & \(792^{3}\) \\ huge-6 & 5.70 & 1.38 & 0.12 & \(792^{3}\) \\ huge-7 & 5.56 & 1.55 & 0.16 & \(792^{3}\) \\ huge-8 & 5.50 & 1.67 & 0.18 & \(792^{3}\) \\ huge-9 & 5.39 & 1.71 & 0.20 & \(792^{3}\) \\ e5r2 & 0.13 & 1.57 & 292 & \(1200^{3}\) \\ e5r3 & 0.61 & 0.52 & 1.45 & \(1200^{3}\) \\ e6r3 & 5.45 & 0.24 & 0.0039 & \(1200^{3}\) \\ \hline ### Alternative ways to separate the Alfven and compressible modes As discussed in LP12, two-point turbulence statistics contains three types of contributions: The spectrum which measures the cascade as a function of scales, the anisotropy which records whether there is a preferred direction for the cascade to happen, and the tensor structures which records the projection effect of the mode components. From the discussion above, the SPA technique actually did not consider two-point statistics. In particular, Eq.(21) is a one-point statistics, in which the anisotropy of the observable does not play a role in the value of the output. Since all scales are summed up when computing the mean and variances, the spectrum also do not play an important role for Eq.(21). As a result, a natural guess on how the SPA technique works is the projection effect from the tensor structures. In this scenario using the vector formulation (See Appendix) we can understand quantitatively better how the SPA technique works. Notice that, in the case of one-point statistics, the 2D (i.e. the average operator above) and 3D statistics (which we will consider later below) should be the same. 
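Before moving to that quantitative treatment, it may help to see the SPA recipe of Eqs. (16)-(21) spelled out numerically. The sketch below generates toy Stokes maps from a synthetic plane-of-sky field, scans the frame angle \(\phi_{s}\), computes \(s_{xx,tot}\) of Eq. (21), and performs a rough least-squares fit of the signature model of Eq. (19). The synthetic maps are not from an MHD simulation, and the constant offset in the fit is our own simplification meant to crudely absorb the quadratic term of Eq. (17).

```python
import numpy as np

rng = np.random.default_rng(0)

# toy plane-of-sky field in a small patch: mean field along x plus fluctuations
n = 64
Bx = 1.0 + 0.3 * rng.standard_normal((n, n))
By = 0.3 * rng.standard_normal((n, n))

# Stokes maps for the alpha = 3 synchrotron case
I = Bx**2 + By**2
Q = Bx**2 - By**2
U = 2.0 * Bx * By

def s_xx_tot(phi):
    """Normalised variance of eps_xx = (I + Q')/2 in a frame rotated by phi (cf. Eq. 21)."""
    Q_rot = Q * np.cos(2 * phi) + U * np.sin(2 * phi)   # one common rotation convention
    eps = 0.5 * (I + Q_rot)
    return eps.var() / (4.0 * eps.mean())

phis = np.linspace(0.0, np.pi, 90, endpoint=False)
s = np.array([s_xx_tot(p) for p in phis])

# least-squares fit: s(phi) ~ a sin^2(phi) cos^2(phi) + b cos^2(phi) + const,
# the constant being our crude stand-in for the quadratic term of Eq. (17)
design = np.column_stack([np.sin(phis)**2 * np.cos(phis)**2,
                          np.cos(phis)**2,
                          np.ones_like(phis)])
(a_xx, b_xx, _), *_ = np.linalg.lstsq(design, s, rcond=None)
print(f"a_xx = {a_xx:.4f},  b_xx = {b_xx:.4f},  r_xx = a_xx / b_xx = {a_xx / b_xx:.3f}")
```

In practice Zhang et al. (2020) treat the quadratic term explicitly rather than absorbing it into a constant; the fit above is only meant to show the shape of the rotation scan and how \(r_{xx}\) is read off from it.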
Let us consider a 3D magnetic field line written as sum of the mean and perturbed contribution in a selected volume \(d\Omega\) with \(M_{A,d\Omega}\ll 1\): \[\mathbf{H}_{i}(\mathbf{r})=\langle\mathbf{H}_{i}\rangle+\int d^{3}ke^{i \mathbf{k}\cdot\mathbf{r}}\sum_{X\in\text{any frame}}C_{X}(\hat{\mathbf{k}}, \hat{\lambda})\hat{\zeta}_{X}(\hat{\mathbf{k}},\hat{\lambda}) \tag{22}\] From now on we are going to choose the frame to be the PCA frame (See Appendix A.2, Fig.A1), and assuming the line of sight direction is at the z-axis and the magnetic field in the plane of sky defines the x-axis. The x-component magnetic field dispersion, which is just the mean value of the emissivity subtracted by a constant (cf.Eq.(16)), is given by: \[\langle\delta H_{x}^{2}\rangle=2\pi\int d^{3}k\sum_{X\in C,A}C_{X}^{2}(\hat{ \zeta}_{X}\cdot\hat{x})^{2} \tag{23}\] Notice that the only difference between the compressible and the Alfven component can be observed when we expand the dot product for the above equation: \[\begin{split}\hat{\zeta}_{A}\cdot\hat{x}&=\frac{( \hat{\lambda}\cdot\hat{z})(\hat{k}\cdot\hat{y})}{|\hat{k}\times\hat{\lambda}|} \\ \hat{\zeta}_{C}\cdot\hat{x}&=-(\hat{\lambda}\cdot \hat{x})\frac{1-(\hat{k}\cdot\hat{x})^{2}}{|\hat{k}\times\hat{\lambda}|}\end{split} \tag{24}\] We can model Eq.(23) via the frame definition of \(\phi_{s}\) in Eq.(16), where the frame angle \(\phi_{s}=0\) when the projection of magnetic field is along the x-axis: \[\langle\delta H_{x}^{2}\rangle=A_{xx}\cos^{2}\gamma+B_{xx}\sin^{2}\gamma\cos^ {2}\phi_{s} \tag{25}\] where \(\cos\gamma=\hat{\lambda}\cdot\hat{z}\) is the line of sight angle, and \(A_{xx},B_{xx}\) are: \[\begin{split} A_{xx}&=2\pi\int d^{3}kC_{A,obs}^{2} \left(\frac{(\hat{k}\cdot\hat{y})}{|\hat{k}\times\hat{\lambda}|}\right)^{2}\\ B_{xx}&=2\pi\int d^{3}kC_{C,obs}^{2}\left(\frac{1-( \hat{k}\cdot\hat{x})^{2}}{|\hat{k}\times\hat{\lambda}|}\right)^{2}\end{split} \tag{26}\] The factors within the bracket of each equation above are the _geometric factors_ as discussed in LP12. Here we consider the general case of the leakage, which applies to both Alfven and compressible modes (See SS3), i.e. the observed amplitudes of Alfven and compressible modes \(C_{A,obs},C_{C,obs}\) undergo an orthogonal rotation of angle \(M_{A}<1\) (See Eq.13): \[\begin{split} C_{A,obs}&\approx C_{A}\cos M_{A}-C_ {C}\sin M_{A}\\ C_{C,obs}&\approx C_{A}\sin M_{A}+C_{C}\cos M_{A} \end{split} \tag{27}\] The expressions inside the brackets of \(A_{xx},B_{xx}\) are the geometric factors that considered in both LP12 and Zhang et al. (2020). We can see from Eq.(25) that the contributions of Alfven and compressible modes are separated when one considers the frame rotation even for \(\langle\epsilon\rangle\). It is not necessary to compute Eq.(21) in extracting the contributions of the modes. Moreover, we can now quantify the contributions of modes via Eq.(26) by using the modes for \(C_{A,C}\) by simply comparing the values of \(A_{xx}\) and \(B_{xx}\) while analyzing the observed synchrotron emission. In particular, if \(M_{A}\) is small and there is no compressible mode, then \(B_{xx}=0\). i.e. that contribution of the Alfven mode to \(\langle\epsilon\rangle\) is frame independent (i.e. rotating the x-y plane does not alter the result) since \((\hat{\lambda}\cdot\hat{z})\) cannot be changed due to frame rotation, while that for compressible mode is a frame dependent quantity since \(\hat{\lambda}\cdot\hat{x}\) is a function of the reference frame. 
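A brute-force way to evaluate averages like Eq. (23) is to draw a cloud of wavevectors, attach to each a polarisation direction \(\hat{\zeta}_{A}\) or \(\hat{\zeta}_{C}\) and an amplitude, and accumulate \(|C_{X}|^{2}(\hat{\zeta}_{X}\cdot\hat{x})^{2}\). The sketch below does this for an assumed Alfven/compressible power split and an assumed viewing angle \(\gamma\); it only illustrates how the geometric projections enter the x-component dispersion, not a calibrated prediction of either mode's contribution.

```python
import numpy as np

rng = np.random.default_rng(5)

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# viewing geometry: LOS along z, mean field in the x-z plane at angle gamma from the LOS
gamma = np.deg2rad(60.0)
lam = np.array([np.sin(gamma), 0.0, np.cos(gamma)])
x_hat = np.array([1.0, 0.0, 0.0])

# a cloud of wavevectors with a steep (Kolmogorov-like) amplitude spectrum
n_modes = 20000
k_hat = unit(rng.standard_normal((n_modes, 3)))
k_mag = rng.uniform(1.0, 32.0, n_modes)
amp2 = k_mag ** (-11.0 / 3.0)                 # |C|^2 per mode, arbitrary normalisation

zeta_A = unit(np.cross(k_hat, lam))
zeta_C = unit(np.cross(k_hat, np.cross(k_hat, lam)))

f_alfven = 0.7                                # assumed Alfven fraction of the power
dHx2 = np.sum(amp2 * (f_alfven * (zeta_A @ x_hat)**2
                      + (1.0 - f_alfven) * (zeta_C @ x_hat)**2))
dHx2 /= np.sum(amp2)
print(f"<dH_x^2> / total power = {dHx2:.3f} for gamma = 60 deg, f_A = {f_alfven}")
```

Repeating the sum with \(\hat{y}\) in place of \(\hat{x}\) gives the component that enters \(I-Q\), which is the quantity exploited in Section 5.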
### The SPA technique for synchrotron emissions with significant Faraday Rotation In Zhang et al. (2020) they study the effects of Faraday rotation to the SPA technique. Pictorially the Faraday depolarization effects shields information up to a certain distance along the line of sight. This distance has been adequately discussed in Lazarian and Pogosyan (2012) in the presence of Figure 4: An illustration in showing how the observed features of the synchrotron intensities are related to the different weighting of spectrum, anisotropy and frame vectors (See Tab.A1) if a strong guided field is given. From the top left: from a \(k^{-11/3}\) spectrum plus the Alfven frame vector; top right: a \(k^{-11/3}\) spectrum with the Alfven anisotropic factor and also the Alfven frame vector mimicking the Alfven mode; lower left: a \(k^{-11/3}\) spectrum with the compressible frame vector, mimicking the fast mode; lower right: a \(k^{-11/3}\) spectrum with the slow mode (See Tab.A1) anisotropic factor and also the compressible frame vector, mimicking the slow mode. It is very apparent that both the anisotropy factor and also the frame vector contributes to the observed anisotropy in 2D observables. From synthetic simulations assuming \(M_{A}=0.1\) such that the leakage effect is small. galactic MHD turbulence and is called the Faraday screening effect (Lazarian and Yuen, 2018). Qualitatively, the SPA technique can only determine the mode fraction before the Faraday screen. However, we would like to perform the analysis based on the formalism of Lazarian and Yuen (2018). In general, the synchrotron emission depends both on the distribution of relativistic electrons \[N_{e}(\mathcal{E})d\mathcal{E}\sim\mathcal{E}^{o}d\mathcal{E}, \tag{28}\] with intensity of the synchrotron emission being \[I_{sync}(\mathbf{X})\propto\int dzB_{\perp}^{\gamma}(\mathbf{x}) \tag{29}\] where \(\mathbf{X}=(x,y)\) is the 2D position of sky (POS) vector and \(B_{\perp}=\sqrt{B_{2}^{2}+B_{y}^{2}}\) being the magnitude of the magnetic field perpendicular to the LOS \(z\). In general, \(\gamma=0.5(\alpha+1)\) is a fractional power, which was a serious problem that was successfully addressed in LP12. LP12 proves that the statistics of \(I(\alpha)\) is similar to that of \(I(\alpha=3)\). Therefore it suffices to discuss the statistical properties of the case \(\alpha=3\). Per Lazarian and Pogosyan (2012), Synchrotron complex polarization function _with Faraday rotation_ is given by: \[P_{synch}(\mathbf{R})=\int dz\epsilon_{synch}\rho_{rel}B^{2}e^{2i\left(a( \mathbf{R},z)+C\lambda^{2}\Phi(R,z)\right)} \tag{30}\] where \(\epsilon_{synch}\) is the emissivity of synchrotron radiation, \[\Phi(R,z)=\int_{\infty}^{z}dz^{\prime}(4\pi)^{-1/2}\rho_{thermal}(\mathbf{R}, z)B_{z}(\mathbf{R},z)\text{rad m}^{-2} \tag{31}\] is the Faraday Rotation Measure 2. Notice that \(\rho_{rel}\) is the relativistic electron density, while \(\rho_{thermal}\) is the thermal electron density. The C-factor \(\approx\) 0.81 (Lee et.al, 2016). The projected magnetic field orientation is then given by: Footnote 2: It is usually more convenient to use \(H_{z}=B_{z}/\sqrt{4\pi}\) for analysis. \[\theta_{B}=\frac{\pi}{2}+\frac{1}{2}\tan_{2}^{-1}(\frac{U}{Q}) \tag{32}\] where \(\tan_{2}^{-1}\) is the 2-argument arc-tangent function. For frequencies lower than \(O(1GHz)\), the amplitude of the Faraday Rotation measure will exceed \(2\pi\). 
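The wavelength dependence described here can be illustrated directly from Eqs. (30)-(31). The sketch below integrates the complex polarisation along one synthetic line of sight for a few wavelengths; the density, field and emissivity profiles are random toy arrays (with the relativistic-electron factor absorbed into the emissivity), so the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# one synthetic line of sight (arbitrary units)
nz, dz = 256, 1.0
n_e   = 0.1 * (1.0 + 0.3 * rng.standard_normal(nz))    # thermal electron density
B_z   = 1.0 + 0.5 * rng.standard_normal(nz)            # line-of-sight field
emis  = 1.0 + 0.5 * rng.standard_normal(nz)**2         # synchrotron emissivity (~ B_perp^2)
theta = 0.1 * rng.standard_normal(nz)                   # intrinsic polarisation angle

C_FARADAY = 0.81   # the C-factor quoted above

# cumulative rotation measure Phi(z) ~ int n_e B_z dz' (cf. Eq. 31, schematic limits)
Phi = np.cumsum(n_e * B_z) * dz / np.sqrt(4.0 * np.pi)

def P_synch(lam):
    """Complex polarisation of Eq. (30) at wavelength lam (arbitrary units)."""
    phase = 2.0 * (theta + C_FARADAY * lam**2 * Phi)
    return np.sum(emis * np.exp(1j * phase)) * dz

P0 = abs(P_synch(0.0))
for lam in (0.05, 0.3, 1.0):
    depol = abs(P_synch(lam)) / P0
    print(f"lambda = {lam:4.2f}:  |P|/|P(lambda->0)| = {depol:.3f}, "
          f"max Faraday phase = {2 * C_FARADAY * lam**2 * abs(Phi).max():8.2f} rad")
```

Once the accumulated phase becomes comparable to \(2\pi\), the sum decorrelates and the far side of the slab no longer contributes coherently, which is the Faraday-screening picture discussed next.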
The physical picture of synchrotron polarization with Faraday rotation can be understood as follows: photons that pass through a section of the ISM experience a certain amount of phase shift. If this phase shift exceeds \(2\pi\), all information coming from the source is completely lost. Therefore an important concept called **Faraday screening** emerges, which indicates the maximal line-of-sight distance that the observed synchrotron emission can probe in the presence of a line-of-sight magnetic field. In the case of sub-Alfvenic turbulence, the source term \(P_{i}=\rho_{rel}\exp(2i\theta(\mathbf{R},z))\) is dominated by the mean field rather than the fluctuating one. The two regimes, (1) strong and (2) weak Faraday rotation, depend on whether the ratio \(L_{eff}/L\) is smaller (strong) or larger (weak) than unity:

\[\frac{L_{eff}}{L}\sim\frac{1}{\lambda^{2}L}\frac{1}{\phi} \tag{33}\]

where \(\phi=\max(\sqrt{2}\sigma_{\phi},\tilde{\Phi})\), with \(\sigma_{\phi}\) the dispersion of the random magnetic field. The difference between the two regimes is that the Faraday rotation and the emission happen together in the former regime (\(\phi=\sqrt{2}\sigma_{\phi}\)), while in the latter the Faraday rotation happens after the emission of the polarization. We shall name the two regimes "Variance-driven Faraday Rotation" (VFR) and "Mean-field Faraday Rotation" (MFR), respectively. Notice that both regimes have been considered in Zhang et al. (2020).

Fig. 5 shows how VFR and MFR could change the value of \(r_{xx}\). For this plot we _intentionally_ show values of \(r_{xx}\) that are not typically considered in the previous literature (see, e.g., Zhang et al., 2020, where \(r_{xx}\in[-1,1]\)). This allows us to better characterize whether the value of \(r_{xx}\) comes from the effect of compressibility or from Faraday rotation. We can observe from Fig. 5 that there are two new regimes of \(\lambda\) that could make \(r_{xx}\) fluctuate well beyond the values previously considered in Zhang et al. (2020). From Fig. 5 we classify the ranges of values of \(\lambda\), via the fluctuations of \(r_{xx}\), into three regimes: the "weak" regime corresponds to the case where \(r_{xx}\) is small (\(\in[-1,1]\) as in Zhang et al., 2020); the intermediate regime corresponds to the case where \(r_{xx}\) starts to grow exponentially; and the strong regime corresponds to the case where \(r_{xx}\) basically loses track of the compressibility. We can see that the SPA technique obviously does not work in the strong regime. However, an interesting question is whether the SPA technique actually works in the intermediate regime, which will be the subject of future studies.

Figure 5: A figure showing how the value of \(r_{xx}\) varies as a function of \(\lambda\) in the presence of Faraday rotation \(\propto\lambda^{2}\int dz\rho B_{z}\), for both Variance-driven Faraday Rotation (VFR) and Mean-field-driven Faraday Rotation (MFR).

## 5 Application (II): A self-consistent line of sight angle tracing method via structure functions of \(I+Q\) and \(I-Q\)

The second application that we deliver in this paper is the retrieval of the mean global line-of-sight angle \(\gamma\). In the case of synchrotron/dust polarization, we have adequate information to estimate \(\gamma\) by considering the structure functions of both \(I+Q\propto\int dzB_{x}^{2}\) and \(I-Q\propto\int dzB_{y}^{2}\).
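As a concrete starting point for this section, the sketch below builds the two maps from a toy three-dimensional field: anisotropic Gaussian random fluctuations (a stand-in for an MHD cube, not a simulation) on top of a mean field along x, integrated along the z line of sight, matching the frame assumption made in the next paragraph.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64

def anisotropic_grf(n, stretch=3.0):
    """Toy 3D Gaussian random field whose power drops faster along k_x,
    so real-space structures are elongated along x (a stand-in for sub-Alfvenic
    fluctuations, not an MHD simulation)."""
    k = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = (stretch * kx)**2 + ky**2 + kz**2
    k2[0, 0, 0] = np.inf                      # remove the mean
    amp = k2 ** (-11.0 / 12.0)                # amplitude ~ k^{-11/6}, power ~ k^{-11/3}
    field = np.fft.ifftn(amp * np.exp(2j * np.pi * rng.random((n, n, n)))).real
    return field / field.std()

# mean field along x plus anisotropic fluctuations; line of sight along z
Bx = 1.0 + 0.2 * anisotropic_grf(n)
By = 0.2 * anisotropic_grf(n)

I_plus_Q  = (Bx**2).sum(axis=2)    # I+Q ~ integral of Bx^2 along the LOS
I_minus_Q = (By**2).sum(axis=2)    # I-Q ~ integral of By^2 along the LOS
print("map shapes:", I_plus_Q.shape, I_minus_Q.shape)
```

These maps are what the structure-function analysis below operates on; in this toy configuration, with the mean field in the plane of the sky, the \(I+Q\) map is dominated by the mean-field term while \(I-Q\) carries only fluctuations.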
For the following subsections, we will assume that the global mean field within the sampling area is \(\parallel\) to x-axis. One could always rotate the frame in Stokes parameter space to have the above condition satisfied. ### Why \(\gamma\) is encoded in the statistics of \(I+Q\) and \(I-Q\)? The essence of on why \(\gamma\) is encoded in \(I+Q\) and \(I-Q\) is based on the fact that tensor formulation (c.f. Eq.2) contains different expressions for observables that \(\parallel B\) and \(\perp B\). In Fig. 6, we present a set of figures showing the anisotropy of \(I+Q\) and \(I-Q\) for both A and F type contributions (c.f. Fig.A1). We present two extreme cases for \(\gamma\) in Fig.63 that is sufficient to illustrate the differences of behaviors for the anisotropy of A and F type fluctuations. The left group of figures in Fig.6 shows the case when \(\gamma=89^{o}\), while the group of figures on the right shows the case when \(\gamma=9^{o}\). We can observe from Fig.6 a few interesting phenomena which is not covered in previous anisotropy studies: Footnote 3: Notice that the projection of pure Alfven wave fluctuations when \(\gamma\) is exactly \(90^{o}\) will vanish, see Lazarian et al. (2022b) for the analysis. 1. [label=()] 2. The anisotropies of A and F type tensor do not necessarily align with the mean magnetic field direction. We discussed this effect already from Fig.4. The reason behind is that both the anisotropy and tensor contribution are anisotropic (c.f. SS4.2). However, the direction of anisotropy for the tensor contribution (with the Alfven leakage in effect) does not necessary be parallel to B-field and is a function of \(\gamma\). Notice that the change of anisotropy is highly tied with the \(\gamma\) value (See Fig.7) 3. For the case of pure Alfven fluctuations, the anisotropy is more or less parallel to magnetic field for \(I+Q\), while \(\perp B\) for \(I-Q\). Yet, the compressible mode does not carry the same trend as its Alfven counterpart: When \(\gamma\approx 9^{o}\), the F-type anisotropy for \(I+Q\) is actually \(\perp B\), while that for \(I-Q\) is \(\parallel B\). In contrast, when \(\gamma\approx 89^{o}\) the F-type anisotropy varies very similarly to that of the Alfven counterpart. 4. The measurement of relative anisotropies between \(I+Q\) and \(I-Q\) allows us to characterize the \(\gamma\) value. From Fig.6 we can see that if we consider the anisotropies of \(I+Q\) and \(I-Q\) at \(\gamma\approx 89^{o}\), \(I+Q\) tends to be parallel to magnetic field, while that for \(I-Q\) tends to be perpendicular to magnetic field. We utilize the formulation in Appendix B that the minor-to-major axis ratio \(l_{\perp}/l_{\parallel}=\sqrt{1-\epsilon^{2}}\), which the eccentricity \(\epsilon\) is related to the quadropole-to-monopole ratio \(|D_{2}/D_{0}|\) via Eq.B3. The quadropole-to-monopole ratio is the key parameter in parametrizing the anisotropy in previous literature (Lazarian and Pogosyan, 2012, 2016; Kandel et al., 2016, 2017; Lazarian et al., 2022b). ### Formalism via SS3 We will start from the parameters \(I+Q\) and \(I-Q\) in which we will assume the projected mean field is right now along the \(x\) direction 4. For the case of \(I+Q\), we adopt the structure function expression from Eq.(E20) of Lazarian et al. 
(2022b): Footnote 4: For a general magnetic field configuration, one could always consider the combination \(I+(Q\cos(2\phi_{pol})-U\sin(2\phi_{pol}))\), where we perform an inverse orthogonal transform with twice of the polarization angle \(2\phi_{pol}=\tan_{2}^{2}U/Q\) for this analysis. \[\begin{split} D_{I+Q}(\mathbf{R})&=\langle(B_{x}( \mathbf{R}+\mathbf{R}^{\prime})-B_{x}(\mathbf{R}^{\prime}))^{2}\rangle_{ \mathbf{R}^{\prime}}\\ &=\frac{1}{2\pi^{2}}\!\int\!\!d^{2}K\left(1-e^{i\mathbf{K}\cdot \mathbf{R}}\right)\times\\ &\left[A(K,\sin\gamma\cos\phi_{K})\frac{\cos^{2}\gamma\sin^{2} \phi_{K}}{1-\sin^{2}\gamma\cos^{2}\phi_{K}}+\right.\\ &\left.F(K,\sin\gamma\cos\phi_{K})\frac{\sin^{2}\gamma\sin^{4} \phi_{K}}{1-\sin^{2}\gamma\cos^{2}\phi_{K}}\right]\end{split} \tag{34}\] where those factors are simply the expressions of \(\zeta_{A}\zeta_{A}\) and \(\zeta_{F}\zeta_{F}\) in the global frame of reference (i.e. after leakage). The main takeaway here is, This D factor depends on the following form \[D_{I+Q}(\mathbf{R})\sim\bar{A}(\mathbf{R})\cos^{2}\gamma+\bar{F}(\mathbf{R}) \sin^{2}\gamma \tag{35}\] Similarly, the structure function for \(I-Q\) can be also modelled similarly as: \[D_{I-Q}(\mathbf{R})\sim\bar{A}(\mathbf{R})\sin^{2}\gamma+\bar{F}(\mathbf{R}) \cos^{2}\gamma \tag{36}\] Based on Fig.6 we can see that the construction: \[\bar{y}=\frac{\text{Anisotropy}(D_{I+Q})}{\text{Anisotropy}(D_{I-Q})} \tag{37}\] contains the information on \(\gamma\). Here we take the convention that Anisotropy\((D)>1\) when the anisotropy of structure function is parallel to the global magnetic field direction, and vice versa. In particular, from Fig.6 we expect that \(\bar{y}_{A}>1\) for all \(\gamma\), while that for F-type contribution changes from smaller than 1 to greater than 1. Detecting the value of \(\bar{y}\) for Figure 6: A set of figures showing how the orientation of anisotropy for \(I+Q\) and \(I-Q\) is related to the line of sight angle \(\gamma\) for pure A (Alfven) and F (compressible, see Lazarian and Pogosyan, 2012) type tensor. The key difference between the case of \(\gamma\rightarrow\pi/2\) (correspond to the case when \(B\perp\) LOS) and \(\gamma\to 0\) is that, the anisotropies of \(I+Q\) and \(I-Q\) for pure A and F tensors are similar for the former case, while for the latter case the anisotropies of pure A and F tensors are exactly opposite. compressible modes (in global frame of reference) detected in observation is the key to extract the value of \(\gamma\). The key reason why we consider the ratio of structure functions instead of individual quantity is because, from our expressions _in the global frame of reference_, the structure function of some observables carries factors on spectrum, anisotropy and tensors. For the case of structure functions of \(I+Q\) and \(I-Q\), their only difference is coming from the tensor factor as spectrum and anisotropy factors are fixed once the turbulence is set-up. To proceed with our analysis, we consider the multipole expansion up to quadrupole (See Appendix SSB for the condition for the expansion. In particularly, the expansion is valid only for \(M_{A}\sim 0.5-1.0\).) 
Formally we can express the anisotropy function that we defined above with the monopole and quadrupole coefficients \(D_{0},|D_{2}|\): \[\text{Anisotropy}_{M_{A}\in[0.5,1]}\approx\text{sign}(\text{Anisotropy}) \times\frac{D_{0}-|D_{2}|}{D_{0}+|D_{2}|} \tag{38}\] Recall from the previous discussion that the factors \(d_{0,2}\) can be literally written as the spectrum, anisotropy and the tensor contribution, and the first two contributions are cancelling out under our treatment, we can formally write \(y\), which is the quadrupole approximation of \(\bar{y}\) to be (c.f. Eq.(114) of Lazarian et al., 2022b): \[\begin{split} y&=\frac{\text{Anisotropy}(I+Q)}{ \text{Anisotropy}(I-Q)}\\ &=\left(\frac{D_{0}-|D_{2}|}{D_{0}+|D_{2}|}\right)_{I+Q}\left( \frac{D_{0}-|D_{2}|}{D_{0}+|D_{2}|}\right)_{I-Q}^{-1}\end{split} \tag{39}\] where we notice that under our current configurations, Figure 7: A set of visualizations showing how the structure function of a certain variable \(D(\mathbf{R})\) can be visually decomposed as the linear combination of the multipole moments \(D_{n}\), and how the multipole moments should be physically correlated to the relative angle between the line of sight and mean magnetic field \(\gamma\). The multipole moments collects the relative weight on the shapes that are specifically defined with the angular function \(\exp(in\theta)\). In particular, \(D_{0}\) records the weights of the isotropic components of the structure functions, while \(D_{2}\) records the first order directionless anisotropy. Since empirically structure functions are mostly elliptical-like, \(|D_{2}|\) must be non zero. modern turbulence theory predicts that the observed anisotropy would be a function of \(\gamma\). When \(\bar{B}\parallel\text{LOS}\), then the structure function should be isotropic. While \(\bar{B}\perp\text{LOS}\), the structure function should be anisotropic. Therefore under the framework of multipole moments, the absolute amplitude of \(D_{2}\) should be a function of \(\gamma\). \(Q\sim{\cal L}_{s}b_{s}^{2}\) and \(I-Q\sim{\cal L}_{s}b_{s}^{2}\) where \(L_{s}\) is the length of the integration. Keeping only the tensor term, we will have an expression that is purely based on \(W_{I,L}\) in Eq.15, and also functions of \(\gamma\) (See Eqs.35 and 36). Fig.8 shows how numerically the factor \(y\) depends on the line of sight angle \(\gamma\) for Alfven mode (black) and the compressible mode (green). We notice that the qualitative phenomenon happened in Fig.6 is exactly described by \(y\) for compressible modes: \(y<1\) for \(\gamma\to 90^{\circ}\), while \(y>1\) for \(\gamma\to 0^{\circ}\). We recognize that there are fluctuations in terms of the variation of \(y\) relative to \(\gamma\) for the compressible case. Surprisingly, the Alfven mode \(y\) also exhibits some interesting properties that we can exploit in obtaining \(\gamma\) in observation. Notice that \(y\) for Alfven mode stays \(<1\) from what we observe in Fig.6, we see that the Alfven mode's \(y\) has very similar trend when \(\gamma\gtrapprox 55^{\circ}\), but when \(\gamma\lessapprox 55^{\circ}\), the Alfven mode \(y\)-value went exactly opposite to that of compressible mode. Moreover, we observe that the change of values of \(y\) as a function of \(\gamma\) is more or less monotonic if we consider \(\gamma\lessapprox 55^{\circ}\) and \(\gamma\gtrapprox 55^{\circ}\). Notice that the modes that we are talking about here are all in the global frame of reference. 
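A minimal numerical version of Eqs. (38)-(39) is sketched below: compute the two-dimensional structure function of a map, read off its monopole \(D_{0}\) and quadrupole \(|D_{2}|\) on a ring of fixed radius, and form the ratio \(y\). The input maps here are toy anisotropic random fields (elongated along x and along y, respectively), not observations or simulations, and the sign convention of Eq. (38), which flags whether the anisotropy is parallel or perpendicular to the mean field, is not tracked in this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def structure_function_2d(f):
    """D(R) = <(f(X+R) - f(X))^2> via FFT autocorrelation (periodic boundaries)."""
    F = np.fft.fft2(f - f.mean())
    corr = np.fft.ifft2(np.abs(F)**2).real / f.size
    return 2.0 * (corr[0, 0] - corr)

def d0_d2(D, radius, n_angles=360):
    """Monopole and quadrupole of D on a ring (cosine-series convention)."""
    ang = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    ix = np.rint(radius * np.cos(ang)).astype(int) % D.shape[0]
    iy = np.rint(radius * np.sin(ang)).astype(int) % D.shape[1]
    ring = D[ix, iy]
    D0 = ring.mean()
    D2 = 2.0 * np.abs(np.mean(ring * np.exp(-2j * ang)))
    return D0, D2

def toy_map(n, stretch_x):
    """2D Gaussian random field elongated along x for stretch_x > 1 (toy data only)."""
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = (stretch_x * kx)**2 + ky**2
    k2[0, 0] = np.inf
    return np.fft.ifft2(k2 ** (-11.0 / 12.0)
                        * np.exp(2j * np.pi * rng.random((n, n)))).real

maps = {"I+Q-like": toy_map(128, 3.0), "I-Q-like": toy_map(128, 1.0 / 3.0)}
aniso = {}
for name, m in maps.items():
    D0, D2 = d0_d2(structure_function_2d(m), radius=10)
    aniso[name] = (D0 - D2) / (D0 + D2)       # unsigned version of Eq. (38)
    print(f"{name}:  D0 = {D0:.3e}, |D2| = {D2:.3e}, anisotropy = {aniso[name]:.3f}")
print("y =", aniso["I+Q-like"] / aniso["I-Q-like"])
```

In an actual application the direction of the quadrupole relative to the projected mean field would also be recorded, since it carries the sign information used in Eq. (38).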
To see whether the trend that we observed in Fig.8 is robust, we select some of the numerical cubes from Tab.1 and to plot \(y\) as a function of \(\gamma\) for both \(A\) and \(F\) type contribution and plot it as Fig.9. The selected numerical cubes cover a wide range of sonic and Alfvenic Mach numbers. We can see from Fig.9 that the trends of the two curves are very similar to that of Fig.8. Furthermore, the exact values of \(y\) are also very similar across different turbulent conditions. Originally, the formalism of \(A\) and \(F\) type tensor applies only for \(M_{s,A}<1\). However, we perform the calculation of \(y\) also for supersonic sub-Alfvenic simulations, which is closer to the environment of molecular clouds (See, e.g. Draine, 2011) and still observe the same trend. We therefore conclude that the \(\bar{y}\) parameter tracers \(\gamma\). In fact, we observe from Fig.9 that when the plasma \(\beta\propto M_{A}^{2}/M_{s}^{2}\) is smaller, it is easier to recover the trend that we see in Fig.8. At last, we provide the empirical formula (units in degrees) for the case of low \(\beta\) (\(\beta<1\)). For \(\gamma<40^{\circ}\) \[y(F) \sim 1.2-\gamma/40\times 0.4\] \[y(A) \sim 0.4+\gamma/40\times 0.2 \tag{40}\] for \(\gamma>40\) degrees \[y(F) \sim 0.8-(\gamma-40)/50\times 0.2\] \[y(A) \sim 0.8 \tag{41}\] The full study on how the y-parameter can be applied to situation with different mixture of driving will be discussed in Malik et al. (in prep). ## 6 Discussion ### The importance of Alfven leakage for mode decomposition The analysis of turbulence properties generally from observations requires the consideration of the local-to-global frame problem, which is modelled as the "magnetic field wandering problem". While the theory of MHD turbulence is well-established, how the local scaling laws are projected globally is still mysterious, despite models have been proposed from both Lazarian and Pogosyan (2012) and Lazarian et al. (2020). Here, we propose the first physical model in explaining how the wandering of magnetic field happens when projected along the line of sight, and how we could utilize the magnetic field wandering in deducing a number of important physical quantities such as the line of sight angle and also the mode fractions. The problem of the local-to-global frame transition in theoretical MHD turbulence studies have puzzled the community for a while. While the anisotropic scaling \(k_{\parallel}\sim k_{\parallel}^{2/3}\) is well motivated from the simple constant energy cascade and critical balance condition (GS95), we cannot retrieve the local scaling from the global frame of reference. In fact, the global correlation function usually gives a constant scaling rather than a geometrically-driven, size-dependent scaling as predicted by GS95. Before the availability of MHD simulations (e.g. Cho and Lazarian, 2003; Beresnyak et al., 2005), it is not yet possible to validate the GS95 relation even from numerical simulations. The more puzzling effect comes when \(M_{A}\) is very large. Traditionally the numerical test on GS95 are done in small \(M_{A}\) systems and in small scales. However as we see from the previous sections, in moderate and small \(k\) the Alfven mode acquired from the Cho and Lazarian (2003) decomposition method contains non-negligible contributions along the \(\hat{\zeta}_{C}\) vector, indicating the _presence of anomalous compressive wavevector even after Alfven mode decomposition_. 
The only plausible reason why this happens is that the mode decomposition method of Cho and Lazarian (2003) is performed in a global frame of reference. As a result, when we are looking at small scales, the mean field is, on average, not very different from the local field. Yet for larger scales the mean field is very different from the local field, so that anomalous compressible terms exist even when the data are supposed to be "Alfven modes" according to Cho & Lazarian (2003). We named this effect "Alfven leakage" in the previous section, since it happens even for Alfven waves as long as the Alfvenic Mach number is not zero. In this paper, we further show that the Alfven leakage effect is a global function of \(M_{A}\). In fact, the presence of the leakage effect suggests that the mode decomposition method of Cho & Lazarian (2003) should be subject to a correction term for moderate and small \(k\). However, since most of the calculations of Cho & Lazarian (2003) are done at small scales, i.e. large \(k\), the results of their work are not affected.

Figure 8: A figure showing the characterization of the relative anisotropy index (\(y=\text{anisotropy}(I+Q)/\text{anisotropy}(I-Q)\)) as a function of the line of sight angle \(\gamma\). As we outlined in Fig. 6, the relative anisotropies for A and F type fluctuations differ when \(\gamma\) differs. For the case of \(\gamma\to 90\), we expect the anisotropies of the A and F type tensors to fluctuate in the same way, which is illustrated as the light blue box in the figure. However, in the small-\(\gamma\) limit, the anisotropies of the A and F type tensors go in completely opposite directions, which is highlighted by the red box in the figure. We denote these two regimes the "compressible" and the "Alfven" regime, respectively. From numerical simulation "e5r2" (see Tab. 1).

### The importance of tensor forms to the SPA technique and general turbulence studies

The novel SPA technique (Zhang et al. 2020) utilizes the fact that the tensor projections contribute differently for Alfven and compressible modes in order to identify them in observations. This work further strengthens their argument through the use of the Alfven leakage picture and suggests a few important improvements to their method. For instance, it is not necessary to compute the parameter \(s_{xx}\) as in Eq. 21 to distinguish the modes. The tensor properties are encoded in the Stokes parameters, and thus ignoring the tensor contribution would make dramatically different predictions in astrophysical applications. One very important factor that is accounted for by Zhang et al. (2020) is the use of one-point statistics under Stokes frame transformation. Traditional statistical turbulence studies usually utilize multi-point statistics, since these are either directly related to the spectra (e.g. two-point) or are used to validate scaling relations for higher-order structure functions (e.g. the Kolmogorov 4/5 law). The reason why single-point statistics were not useful before is that the spectrum and anisotropy have been the main characteristics of turbulence studies for the past 60 years. However, how the tensor projection affects the geometry of the structures for each of the turbulence variables has not really been explored. Tensor forms of turbulence modes were not much explored beyond the physics of cosmic rays (Schlickeiser, 2002; Yan & Lazarian, 2002, 2004).
In fact, the previous anisotropy analysis also did not consider what is the statistics of a single component of a 3D turbulence, i.e.tensor projection, after projection along the line of sight. While the series of papers by Lazarian & Pogosyan started to consider how the single component statistics works, not until recently did both numerically (e.g. Lazarian et al., 2018) and observationally (Zhang et al., 2020) found the effect of tensors to be that important during single component projection. The anisotropy of projected fast modes with the direction opposite to the Alfvenic anisotropy was shown in Lazarian & Pogosyan (2012). In fact, one of the most common belief that is circulating in the earlier studies of MHD turbulence theory (e.g. the discussion section of Lazarian et al. (2017)) is the presumption that the projection of the observables (e.g. velocities, magnetic field) from fast modes will be isotropic since the fast modes in 3D are. This is empirically proven wrong by Lazarian et al. (2018) through the development of velocity gradient and also utilized through the development of SPA in Zhang et al. (2020). In fact, in a number of astrophysical applications, the ob Figure 9: A set of figures showing the universality of our finding (\(y\) as function of the line of sight angle \(\gamma\) in degrees) in Fig.8 in 6 numerical simulations from Tab.1, which covers a large range of value of \(M_{s,A}\). servables are constructed through not all three directions of velocities or magnetic field, but just some of them. For instance, the Davis-Chandrasekhar-Fermi (DCF) technique utilizes both the line of sight velocity dispersion and the plane of sky polarization angle dispersion to estimate the magnetic field strength through the use of Alfven relation (See also Cho and Yoo, 2016). However, as found in Lazarian et al. (2022), the direction of the velocity and magnetic field fluctuations as collected in DCF technique are exactly perpendicular to each other. Moreover, in this work we also show both analytically and numerically that the tensor term contains anisotropy and can be dominant as long as the \(\gamma\) fulfills some conditions. As a result, one should not ignore the contributions of the tensor term in studying the properties of MHD turbulence. ### The use of high pass filter? In MHD turbulence studies, there are a few length scales that determine whether the underlying turbulence is hydrodynamic or GS95-like. It is a general phenomenon that for 3D saturated turbulence the small scale fluctuations are GS95 like. However for both sub-Alfvenic and super-Alfvenic there exist a transition scale that the turbulence becomes non-GS95 like. For instance, for the sub-Alfvenic case there exist the transition from weak to strong turbulence (Cho and Lazarian, 2003; Makwana and Yan, 2020) at the length scale \(LM_{A}^{2}\), while for the super-Alfvenic case above the scale \(LM_{A}^{-3}\) the turbulence is hydrodynamic. This might suggest that the removal of large scale fluctuations could allow observers to obtain the desired GS95 statistics with the use of high pass filters. However upon projection the high pass filter in 2D acts a little bit differently compared to 3D. Fundamentally the high pass filter (HPF) in 3D serves as the high frequency extractor. As noticed in Lazarian et al. (2020), HPF in 2D acts as a lower bound of the HPF in 3D, i.e. if we explicitly want \(K=\sqrt{k_{x}^{2}+k_{y}^{2}}>K_{0}\), this will automatically apply to \(k=\sqrt{k_{x}^{2}+k_{y}^{2}+k_{z}^{2}}\leq K>K_{0}\). 
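The difference between the two filtering orderings can be reproduced with a toy experiment, sketched below: a random 3D field (standing in for the magnetic field, not an MHD cube) is either high-pass filtered in 3D before the Stokes-like integrand is formed and projected, or the projected 2D map is filtered afterwards. Applying the 3D filter to the field components before forming the quadratic integrand is our assumption of what "3D filtering" means here; for a purely linear observable the two orderings would coincide.

```python
import numpy as np

rng = np.random.default_rng(4)
n, K0 = 64, 0.1                      # grid size and high-pass cutoff (cycles per cell)

k1 = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
k3 = np.sqrt(kx**2 + ky**2 + kz**2)

def grf3d():
    """Toy 3D Gaussian random field with a steep spectrum, unit variance."""
    amp = np.where(k3 > 0, k3, np.inf) ** (-11.0 / 6.0)
    f = np.fft.ifftn(amp * np.exp(2j * np.pi * rng.random((n, n, n)))).real
    return f / f.std()

def hpf3d(f):
    """Remove all 3D modes with |k| < K0."""
    F = np.fft.fftn(f)
    F[k3 < K0] = 0.0
    return np.fft.ifftn(F).real

def hpf2d(m):
    """Remove all 2D modes with K = sqrt(kx^2 + ky^2) < K0."""
    kX, kY = np.meshgrid(k1, k1, indexing="ij")
    M = np.fft.fft2(m)
    M[np.sqrt(kX**2 + kY**2) < K0] = 0.0
    return np.fft.ifft2(M).real

Bx, By = 1.0 + grf3d(), grf3d()      # order-unity fluctuations, i.e. M_A not small

# projection after 3D filtering of the field components
Q_filter_then_project = (hpf3d(Bx)**2 - hpf3d(By)**2).sum(axis=2)
# filtering after projection of the Stokes-like map
Q_project_then_filter = hpf2d((Bx**2 - By**2).sum(axis=2))

r = np.corrcoef(Q_filter_then_project.ravel(), Q_project_then_filter.ravel())[0, 1]
print(f"point-wise correlation between the two Q maps: {r:.3f}")
```

For this toy field the two maps are visibly different, which is the qualitative point of Fig. 10; how different they are depends on the fluctuation level and on exactly which quantity is filtered in 3D.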
However since the sampling of turbulence statistics upon projection is not statistically complete, meaning that the wavevectors with \(k>K_{0}\) but \(K<K_{0}\) is not sampled, it is hard to determine whether we will obtain back the same turbulence spectrum anisotropy just by inspection here since we did have additional knowledge on how the LOS direction is related to the inclination angle. More importantly, if we are considering the case when \(M_{A}\) is not small, the randomness of the magnetic field fluctuation will make the filtering in 2D in Stokes parameters being completely different from that of the 3D. Fig.10 shows an example on how different the Stokes Q look like. On the left of Fig.10 we perform filtering after projection (i.e. 2D), while on the right it is the projection after 3D filtering. We can see that, while the statistical anisotropies for the two maps are roughly the same, the differences of the features are prominent. ## 7 Conclusion In this paper, we introduce a vector-based framework in explaining the strength and the limitation of the recently introduced techniques, namely SPA, CFA and VGT. In particular, due to the use of the vector framework, we recognize that in the presence of curved magnetic field Alfven waves will be seen as the linear combination of Alfven and compressible waves, which is named "Alfven leakage". In short, 1. We recognize a straightforward transformation from the local to global reference frame through the Alfven leakage model. (Fig.3). Moreover, the projection parameters \(W_{I,L}\) that are introduced in LP12 are derived in an alternative way in the picture of Alfven leakage. (Eq.(8), Fig.3) 2. The SPA technique, which allows the identifications of the dominance of the Alfven and compressible waves in observed synchrotron emissions, is the result of the one-point statistics. The Alfven wave contribution is frame independent while that for compressible waves are frame dependent. As a result, the quantitative contribution of Alfven and compressible waves can be separated observationally (See Eq.(23)). 3. We suggest that the SPA technique is also applicable to slightly Faraday rotated regime. (SS4.3). 4. Based on the formulation of the Alfven leakage, we discover a new \(\gamma\) tracing method that utilize the anisotropy fraction of \(I+Q\) and \(I-Q\) in observations. We test the method in numerical simulations and see universality of trends across a wide range of turbulence parameters. (SS5, Fig.8). The expression of the vector frame formulation allows us to visually understand and analyze the statistics of MHD turbulence. Together with the theoretical establishment of the Lazarian and Pogosyan series, how the turbulence statistics are imprinted into observables will be better understood by observers. Appendix A The mathematical description on vector and tensor formulations in MHD statistical turbulence theory For our analysis in this paper, we need to review some of the required mathematical tools for the descriptions of the MHD turbulence. The reason why we need them is because some of the frame representations are advantageous in some situations. Here we will first review the concept of the global and local frame of reference, the leakage of modes due to the Yuen and Lazarian (2020) of local magnetic field, and also the mathematical establishments that are scattered in different Figure 10: An illustration of the features of the Stokes Q after 2D (left) and 3D (right) filtering. 
One can see that there is a significant difference in terms of the structures of the features. literature. The unified approach that we use in this paper will lead to establishment of an analysis framework in understanding how the modes should behave in observations. ### Global and local frame of reference The first important concept is the use of the local frame of reference when computing the structure function of the turbulence variable. The mathematical expression of the 3D structure function of the turbulence variable \(v\) in the local frame of reference is given by: \[SF\{\mathbf{v}\}(\mathbf{r})=\Big{\langle}\left((\mathbf{v}(\mathbf{r}^{\prime} +\mathbf{r})-\mathbf{v}(\mathbf{r}^{\prime}))\cdot\frac{\mathbf{B}(\mathbf{r} ^{\prime}+\mathbf{r})+\mathbf{B}(\mathbf{r}^{\prime})}{|\mathbf{B}(\mathbf{r} ^{\prime}+\mathbf{r})+\mathbf{B}(\mathbf{r}^{\prime})|}\right)^{2}\Big{\rangle} \tag{11}\] where in small \(\mathbf{r}\), the separation of the three eigenmodes (Alfven, Fast, Slow) will give the correct spectrum and anisotropy as predicted in GS95 and LV99. In particular, the anisotropy will be scale dependent when observed locally through the 3D structure functions. Table 1 summarizes the spectral slopes and anisotropies that we expect from the local structure functions. However, we cannot deduce the expressions from Tab.1 due to the restriction of the local-to-global reference frame transformation, which is the main topic of the current paper. A more common method in computing the structure function is by simply computing the simplistic structure function below, assuming \(V_{i}(\mathbf{R})=\int dz\hat{\mathbf{z}}\cdot\mathbf{v}(\mathbf{r})\): \[SF\{V_{i}\}(\mathbf{R})=\langle(V_{i}(\mathbf{R}^{\prime}+\mathbf{R})-V_{i}( \mathbf{R}^{\prime}))^{2}\rangle \tag{12}\] which the spectrum and anisotropy that is observed from this variable could be different from what the local expressions. In particular,the anisotropy in the global frame of reference becomes scale independent, meaning that there is no particular advantage in probing the anisotropy in smaller scale in actual observations, aside from the standard \(LM_{A}^{-3}\) scale. ### Tensor representation In the global frame of reference, the spectral tensor for different modes can be represented by the sum of the three linearly independent spectral tensors \(T_{P,C,A}\), which is given by (Lazarian and Pogosyan (2012), cf Yan and Lazarian, 2004): \[T_{P,ij} = \hat{k}_{i}\hat{k}_{j}\] \[T_{C,ij} = \frac{(\hat{\mathbf{k}}\times(\hat{\mathbf{k}}\times\hat{\lambda }))_{i}(\hat{\mathbf{k}}\times(\hat{\mathbf{k}}\times\lambda))_{j}}{|\hat{ \mathbf{k}}\times\hat{\lambda}|^{2}}\] \[= \frac{(\lambda_{i}-(\hat{k}\cdot\hat{\lambda})k_{i})(\lambda_{j}- (\hat{k}\cdot\hat{\lambda})k_{j})}{|\hat{\mathbf{k}}\times\hat{\lambda}|^{2}} \tag{13}\] \[T_{A,ij} = I_{ij}-T_{P,ij}-T_{C,ij}\] \[= \frac{(\hat{\mathbf{k}}\times\hat{\lambda})_{i}(\hat{\mathbf{k}} \times\lambda)_{j}}{|\hat{\mathbf{k}}\times\hat{\lambda}|^{2}}\] Notice that for Alfven mode \(v_{A,i}T_{P,ij}=0\) since \(\nabla\cdot v_{A,i}=0\). Notice that \(T_{C}+T_{A}\) is isotropic. ### The ASF (CL03) frame with respect to the PCA frame For the actual numerical analysis, the realization of the individual MHD modes in the local frame of reference is not achievable since obtaining the modes requires the perturbation theory to start with. In this case, the expressions of the modes are given in Fourier space by evaluating the perturbation along a locally averaged mean field. 
In that case, for each \(\mathbf{k}\in\mathcal{R}^{3}\), we can locally define the eigenvectors for the three modes \(\hat{\zeta}_{A,S,F}\) given by Eq.1. Notice that the A(lfven)-S(low)-F(ast) frame is a simple rotation of the "magnetic frame" along \(\hat{\zeta}_{A}\) given by the three eigenvectors \((\hat{\lambda},\hat{\mathbf{k}}\times\hat{\lambda},\hat{\lambda}\times(\hat{ \mathbf{k}}\times\hat{\lambda}))\) by an angle \(\phi\): \[\tan\phi=\frac{2\alpha\cos^{2}\theta_{\lambda}-(\alpha+1+\sqrt{D})}{2\alpha \cos^{2}\theta_{\lambda}}\tan\theta_{\lambda} \tag{14}\] The "magnetic field" is simply given by an additional rotation of \(\tan\theta_{\lambda}\) from the P(otential)-C(ompressible)-A(lfven) frame \((\hat{\zeta}_{P}=\hat{\mathbf{k}},\hat{\zeta}_{A}=\hat{\mathbf{k}}\times \hat{\lambda},\hat{\zeta}_{C}=\hat{\mathbf{k}}\times(\hat{\mathbf{k}}\times \hat{\lambda}))\). The PCA frame has its special advantage since the sampling of \(\mathbf{k}\) is usually complete in \(d\Omega_{h}\). That means we have the freedom to fix \(\mathbf{k}\) despite other unit vectors are changing. From the tensor product we can always write the arbitrary vector in the Fourier space as : \[\zeta_{i}(\mathbf{k})=C_{P}\hat{k}_{i}+C_{C}\frac{(\hat{\mathbf{k}}\times(\hat {\mathbf{k}}\times\hat{\lambda}))_{i}}{|\hat{\mathbf{k}}\times\hat{\lambda}| }+C_{A}\frac{(\hat{\mathbf{k}}\times\hat{\lambda})_{i}}{|\hat{\mathbf{k}} \times\hat{\lambda}|} \tag{15}\] which we will name the unit vector \(\zeta_{P,C,A}\) now From Cho and Lazarian (2003), in the global frame of reference the Alfven, slow and fast mode eigenvectors are: \[\zeta_{A} \propto \hat{\mathbf{k}}\times\hat{\lambda}\] \[\zeta_{S} \propto (-1+\alpha-\sqrt{D})(\mathbf{k}\cdot\hat{\lambda})\hat{\lambda}+( 1+\alpha-\sqrt{D})(\hat{\lambda}\times(\mathbf{k}\times\hat{\lambda}))\] \[\zeta_{F} \propto (-1+\alpha+\sqrt{D})(\mathbf{k}\cdot\hat{\lambda})\hat{\lambda}+( 1+\alpha+\sqrt{D})(\hat{\lambda}\times(\mathbf{k}\times\hat{\lambda}))\] where \(\alpha=\beta\gamma/2\), \(D=(1+\alpha)^{2}-4\alpha\cos^{2}\theta\), \(\cos\theta=\hat{\mathbf{k}}\cdot\hat{\lambda}\). We recognize that there is a frame rotation between the vector \(\zeta_{P,C}\) and \(\zeta_{S,F}\): \[\begin{bmatrix}\zeta_{S}\\ \zeta_{F}\end{bmatrix}=\frac{-1}{2\cos 2\theta\sqrt{D}}\mathbf{L}(\alpha,\theta) \mathbf{R_{0}}(\theta)\begin{bmatrix}\zeta_{P}\\ \zeta_{C}\end{bmatrix} \tag{17}\] where \(\mathbf{R_{0}}(\theta)\) is the standard two-dimensional rotation matrix, the factor beforehand is just for normalization and: \[\mathbf{L}(\alpha,\theta)=\begin{bmatrix}(-1+\alpha-\sqrt{D})\cos\theta&(1+ \alpha-\sqrt{D})\sin\theta\\ (-1+\alpha+\sqrt{D})\cos\theta&(1+\alpha+\sqrt{D})\sin\theta\end{bmatrix} \tag{18}\] Then we can rewrite the tensors by \[T_{S/F}=\zeta_{S/F}\otimes\zeta_{S/F} \tag{19}\] Notice that \(T_{ij}\zeta_{j}=\zeta_{i}\) if \(T_{ij}=\zeta_{i}\otimes\zeta_{j}\). ### Frenet-Serret frame From Yuen and Lazarian (2020), the Frenet-Serret frame of the the magnetic fields lines would be: \[\frac{d\hat{t}}{ds} =\qquad+\kappa\hat{n} \tag{10}\] \[\frac{d\hat{n}}{ds} =-\kappa\hat{t}\qquad\quad+\tau\hat{b}\] \[\frac{d\hat{b}}{ds} =\qquad\quad-\tau\hat{n}\] Here \(\hat{t}=\hat{\lambda}\), representing the tangent vector of the magnetic field line. \((\hat{t},\hat{n},\hat{b})\) forms a complete orthogonal set independent of the choice of \(k\). 
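The frame relations quoted above are easy to check numerically. The sketch below implements the CL03 eigenvector expressions for \(\zeta_{A,S,F}\) for an arbitrary wavevector, mean-field direction and plasma \(\beta\) (all chosen arbitrarily here), and verifies their mutual orthogonality together with the expected low-\(\beta\) limits.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def cl03_eigenvectors(k_hat, lam_hat, beta, gamma_ad=5.0 / 3.0):
    """Alfven/slow/fast displacement unit vectors from the CL03 expressions above."""
    alpha = 0.5 * beta * gamma_ad
    cos_t = np.dot(k_hat, lam_hat)
    sqrtD = np.sqrt((1.0 + alpha)**2 - 4.0 * alpha * cos_t**2)
    perp = np.cross(lam_hat, np.cross(k_hat, lam_hat))      # lambda x (k x lambda)
    zeta_A = unit(np.cross(k_hat, lam_hat))
    zeta_S = unit((-1.0 + alpha - sqrtD) * cos_t * lam_hat + (1.0 + alpha - sqrtD) * perp)
    zeta_F = unit((-1.0 + alpha + sqrtD) * cos_t * lam_hat + (1.0 + alpha + sqrtD) * perp)
    return zeta_A, zeta_S, zeta_F

k_hat = unit(np.array([0.4, 0.3, 0.87]))     # arbitrary wavevector direction
lam_hat = np.array([0.0, 0.0, 1.0])          # mean-field direction
zA, zS, zF = cl03_eigenvectors(k_hat, lam_hat, beta=0.1)   # low-beta example

print("zeta_A . zeta_S =", round(np.dot(zA, zS), 6))            # ~0: mutually orthogonal
print("zeta_S . zeta_F =", round(np.dot(zS, zF), 6))            # ~0
print("|zeta_S . lambda| =", round(abs(np.dot(zS, lam_hat)), 3))  # ~1: slow displacement ~ along B at low beta
print("|zeta_F . lambda| =", round(abs(np.dot(zF, lam_hat)), 3))  # ~0: fast displacement ~ perpendicular to B at low beta
```

Because the slow/fast pair is an angle- and \(\beta\)-dependent rotation of the \(\zeta_{P,C}\) pair, the same routine can be used to build the rotation matrix of the frame transformation discussed next.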
Notice that for mode decomposition, the "mean" field is selected before selecting (Fourier transforming into) \(\mathbf{k}\), thus we can treat \(\lambda\) as k-independent and uses its own position vector \(\mathbf{r}_{\lambda}\)). Notice that the unit vector \(\hat{n}\) can be expressed as the linear combination of \(\hat{\zeta}_{A}\) and \(\hat{\lambda}\times\hat{\zeta}_{A}\) in the magnetic frame * **The relation between the tensor representation (Lazarian and Pogosyan 2012) and vector representation (this work)** In the local frame of reference, the Alfven mode magnetic field is given by simply: \[\mathbf{H}_{A}(\mathbf{r})=\int d^{3}kC(k)\hat{\zeta}_{A}(k) \tag{11}\] where \(C\) contains the isotropic and anisotropic factors from its spectrum. However as we move from the local frame to the global frame, the actual Alfven wave magnetic field realization will contain both compressible and Alfven wave contribution (here we simply pick an arbitrary \(\mathbf{k}\)): \[\tilde{H}_{A}(\mathbf{k})=CW_{A}\hat{\zeta}_{A}+CW_{C}\hat{\zeta}_{C} \tag{12}\] where \(W_{A,C}\) are two factors yet to be found. LP12 branded these two factors in the form of the direct tensor product \(\zeta_{E}=\zeta_{C}+\zeta_{A}\) and \(\zeta_{F}=\zeta_{C}\), and \(T_{E,F}=\hat{\zeta}_{E,F}\otimes\hat{\zeta}_{E,F}\). In their case when Alfven mode is observed in the local frame of reference, the Alfven mode correlation function in k-space is given by: \[\tilde{H}_{i}\tilde{H}_{j}=C^{2}(T_{E,ij}-T_{F,ij}) \tag{111}\] while in the global frame of reference \[\tilde{H}_{i}\tilde{H}_{j}=C^{2}T_{E,ij}-C^{2}(W_{I}T_{E,ij}+W_{L}T_{F,ij}) \tag{112}\] Some algebra will give \[\tilde{H}_{i}\tilde{H}_{j}=C^{2}(1-W_{I}-W_{L})T_{C,ij}+C^{2}(1-W_{I})T_{A,ij} \tag{113}\] ### Conversion between the frame of references of velocity field and magnetic field As derived by Cho & Lazarian (2002, 2003) the decomposed Alfven-Slow-Fast frame was the frame for the displacement vector \(\zeta\), which also applies to the velocity fluctuations. However the magnetic field fluctuations do not necessary follow the ASF frame as defined in CL03. For Alfven wave, the fluctuations of the magnetic field is in the same direction as that of velocities, i.e. \(\dot{k}\times\dot{\lambda}\). For compressible modes, the propagation of the magnetic field fluctuations \(\tilde{b}(\mathbf{k})\) at a specific wavevector \(\mathbf{k}\) is given by the following relation: \[\tilde{b}=\dot{k}\times(\tilde{v}\times\dot{\lambda}) \tag{114}\] where \(\tilde{v}\) is the velocity fluctuation at \(\mathbf{k}\). Notice that the above vector is parallel to the compressible vector \(\tilde{\zeta}_{C}=\hat{\mathbf{k}}\times(\hat{\mathbf{k}}\times\dot{\lambda})\). ## Appendix B The fundamentals of describing the anisotropy in structure functions In this section we will discuss the essence of multipole expansions in analysing the statistics of turbulence under the assumption of two-point closure5 based on the formalism of Kandel et al. (2016). Footnote 5: The concept of two-point closure is simply to say that turbulence variables can be ”adequately” described by the two-point structure functions. This approximation is evidently incorrect in general turbulence case as intermittency is a well-studied topic in the field. However for equilibrium MHD turbulence that we are considering here, the two-point description contains \(\sim 95\%\) of the spectral power. The prominent features that we are measuring (e.g. 
mode fraction, \(\gamma\) etc) are therefore dominated by the two-point statistics. See Yuen (thesis, 2022) It is visually compelling that the two-point structure functions are concentric ellipses. Mathematically the structure functions of anisotropic fundamental modes (e.g. Alfven, slow modes) contains a dependence in the form of \(\exp(-C|\cos\phi|)\) for some constant \(C\) that carries a weak dependence on \(\phi\)(See, e.g. Lazarian & Pogosyan, 2012; Kandel et al., 2016). This exponent term is naturally elliptical like. The expression of this term in the two-point statistics of any observables is the main direction of theoretical study recently in literature (Lazarian & Pogosyan, 2012, 2016; Kandel et al., 2016, 2017; Lazarian et al., 2022, 2022). There are a few choices in describing elliptical features on the sky via complete basis: **Multipole expansions of even order**: The spatial symmetry of the function \(\exp(-C|\cos\phi|)\) allows one to express the structure function of any observables \(X\) into the summation the cosines with even orders: \(D_{X}(\phi)=\sum_{m\in 2\mathbb{Z}^{+}}D_{m}\cos(m\phi)\)6. Visually we are expressing the structure function into linear combination of cosines in polar coordinate. Notice that for all \(m|4=2\) the \(D_{m}\) term carries some anisotropy, however for \(m\geq 6\) the multipole anisotropy has a upper limit. For instance, the \(\cos 6\phi\) term has a maximum anisotropy of \(1.15\). Notice that the non-vanishing \(D_{m\geq 6}\) will _decrease_ the anisotropy of the structure function. A typical treatment of the multipole expansion is to truncate the series into \(m=0,2\), where the visual minor-to-major axis ratio for the elliptical feature appeared in the structure function \(\chi=\sqrt{1-\epsilon_{elll}^{2}}\) (\(\epsilon_{elll}\) is the eccentricity of ellipse) is given by: Footnote 6: In previous literature (e.g. Kandel et al., 2016) they express \(D_{X}\sim\sum_{m\in 2\mathbb{Z}}\bar{D}_{m}e^{im\phi}\), where \(m\) can be both positive and negative. Typically structure functions are always real-valued. Therefore for the sake of simplicity we adopt the cosine formalism. \[\chi=\frac{D_{0}-D_{2}}{D_{0}+D_{2}} \tag{115}\] Notice that the multipole expansion fails when \(M_{A}\ll 1\) or \(M_{A}>1\), as the \(D_{m}\) term is comparable \(\forall m\). The empirical limit where \(D_{4}/D_{2}\) is comparable (\(\sim 0.5\),Lazarian et al., 2022) is roughly at \(M_{A}\approx 0.5\). Therefore the multipole expansion is suitable only for \(M_{A}\sim 0.5-1\) (See Fig.115) **Legendre Polynomial**: The Legendre polynomial \(P_{l}(\cos\phi\) is another popular choice in describing the statistics in 2D. Similar to the multipole expansion, we express the structure function \(D_{X}(\phi)=\sum_{l\in 2\mathbb{Z}^{+}}a_{l}P_{l}\). \(a_{l}\) carries very similar mathematical properties as \(D_{m}\) in multipole expansions and therefore we would not discuss further. 
**Elliptical basis**: As the structure functions look like ellipses, it is natural to consider the function below to capture the anisotropy of the structure function: \[f(\phi,\epsilon_{ell})=\frac{\sqrt{1-\epsilon_{ell}^{2}}}{1-\frac{\epsilon_{ell}^{2}}{2}+\frac{\epsilon_{ell}^{2}}{2}\cos 2\phi} \tag{116}\] The advantages of this basis are that (1) the eccentricity \(\epsilon_{ell}\) is a direct measure of the minor-to-major axis ratio, which allows one to quickly construct this function by simply measuring the minor and major axes, and (2) due to the non-vanishing higher-order multipoles of Eq. 116, this functional form is still applicable when \(M_{A}\ll 1\). Notice that one can convert the eccentricity \(\epsilon_{ell}\) to \(D_{2}/D_{0}\) via the formula: \[\Big{|}\frac{D_{2}}{D_{0}}\Big{|}\approx\frac{1}{2}\frac{2\epsilon_{ell}^{2}}{2-\epsilon_{ell}^{2}} \tag{117}\] in which the approximation is valid when \(M_{A}\in[0.5,1]\) _for the case of linear_ (i.e. centroid, \(C\propto\int dz\,v_{z}\)) _or quadratically projected observables (i.e. Stokes parameters)_. The approximation is valid for caustics (c.f. Yuen et al., 2021) for even smaller values of \(M_{A}\).

**Acknowledgments.** K.H.Y. & A.L. acknowledge the support of the NSF AST 1816234, NASA TCAN 144AAG1967 and NASA ATP AAH7546 grants. KHY thanks Dmitri Pogosyan (U. Alberta) and Ka Wai Ho (UW-Madison) for their inspirational comments. We thank Sunil Malik (DESY) and Parth Pavaskar (DESY) for extensive discussions and cross-checks on the validity of the y-parameter analysis. The main simulations and the first version of the work were done during KHY's tenure at UW Madison. Research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number(s) 20220700PRD1.

**Code Availability** The code can be found in [https://github.com/kpyean2/MHD_codes](https://github.com/kpyean2/MHD_codes)

**Data Availability** The data underlying this article will be shared on reasonable request to the corresponding author.
2309.08620
Variance Reduction of Resampling for Sequential Monte Carlo
A resampling scheme provides a way to switch low-weight particles for sequential Monte Carlo with higher-weight particles representing the objective distribution. The less the variance of the weight distribution is, the more concentrated the effective particles are, and the quicker and more accurate it is to approximate the hidden Markov model, especially for the nonlinear case. We propose a repetitive deterministic domain with median ergodicity for resampling and have achieved the lowest variances compared to the other resampling methods. As the size of the deterministic domain $M\ll N$ (the size of population), given a feasible size of particles, our algorithm is faster than the state of the art, which is verified by theoretical deduction and experiments of a hidden Markov model in both the linear and non-linear cases.
Xiongming Dai, Gerald Baumgartner
2023-09-10T17:25:43Z
http://arxiv.org/abs/2309.08620v1
# Variance Reduction of Resampling for Sequential Monte Carlo

###### Abstract

A resampling scheme provides a way to switch low-weight particles for sequential Monte Carlo with higher-weight particles representing the objective distribution. The less the variance of the weight distribution is, the more concentrated the effective particles are, and the quicker and more accurate it is to approximate the hidden Markov model, especially for the nonlinear case. We propose a repetitive deterministic domain with median ergodicity for resampling and have achieved the lowest variances compared to the other resampling methods. As the size of the deterministic domain \(M\ll N\) (the size of population), given a feasible size of particles, our algorithm is faster than the state of the art, which is verified by theoretical deduction and experiments of a hidden Markov model in both the linear and non-linear cases.

Keywords: Markov chain Monte Carlo · Hidden Markov models · Riesz

## 1 Introduction

Sequential Monte Carlo (SMC) or Particle Filter [1] is a set of Monte Carlo methods for solving nonlinear state-space models given noisy partial observations, and is widely used in signal and image processing [2], stock analysis [3, 4], and robotics [5]. It updates the predictions recursively using samples composed of weighted particles to infer the posterior probability density. Although the particles become impoverished as the sampler moves forward recursively, this can be mitigated by resampling, where negligible-weight particles are replaced by other particles with higher weights [6]. In the literature, several resampling methods and corresponding theoretical analyses [7, 8, 9, 10] can be found. The frequently used algorithms are residual resampling [11], multinomial resampling [1], stratified resampling [12], and systematic resampling [13, 14]. A justified decision regarding which resampling strategy to use can reduce the overall computational effort and improve the accuracy of the estimates of the objective. However, most of these strategies repeatedly traverse the original population during resampling, so the negligible-weight particles are not completely discarded; although this preserves particle diversity, it causes unnecessary computational load and affects the accuracy of the estimates of the posterior distribution. From the perspective of complexity and variance reduction with reliable estimation, we propose a repetitive deterministic-domain ergodicity strategy, in which more concentrated and effective particles are drawn to approximate the objective. Our proposal can be widely used in large-sample approximations. In this paper, we concentrate on the analysis of the importance-sampling resampling steps built into SMC for the hidden Markov model. In Section 2, we present a brief introduction to SMC, the hidden Markov model, and the sequential importance sampling method. Our method is introduced in Section 3, where we describe its origin, explain how to implement each step in detail, and provide the theoretical asymptotic behavior of the resulting approximations. The practical experiments are presented in Section 4, together with performance and complexity analyses. The summary of our contributions is outlined in Section 5. 
## 2 Resampling in SMC for Hidden Markov Model

Consider the state-space model, which is also known as a hidden Markov model, described by \[X_{0}\sim\mu(X_{0}),\quad X_{t}\mid X_{t-1}\sim f(X_{t}\mid X_{t-1}),\quad Y_{t} \mid X_{t}\sim g(y_{t}\mid X_{t}). \tag{1}\] The initial state \(X_{0}\) follows the probability density \(\mu(X_{0})\), \(X_{t}\ (t=1,2,...,n)\) is a latent variable to be inferred, the measurements \(Y_{t}\) are assumed to be conditionally independent given \(X_{t}\), and the main objective is to estimate \(X_{t}\). Recursive Bayesian estimation can be used, and it is described as: (a) Prediction \[\pi(X_{t}\mid y_{1:t-1})=\int f(X_{t}\mid X_{t-1})\pi(X_{t-1}\mid y_{1:t-1})dX _{t-1} \tag{2}\] (b) Update \[\pi(X_{t}\mid y_{1:t})=\frac{g(y_{t}\mid X_{t})\pi(X_{t}\mid y_{1:t-1})}{\int g (y_{t}\mid X_{t})\pi(X_{t}\mid y_{1:t-1})dX_{t}} \tag{3}\] In (2) and (3), the integrals are intractable, especially when high-dimensional factors are involved, so we fail to obtain the closed form of \(\pi(X_{t}\mid y_{1:t})\) [15, 16]. Sequential Monte Carlo is a recursive algorithm in which a cloud of particles is propagated to approximate the posterior distribution \(\pi(X_{0:t}\mid y_{1:t})\). Here, we describe a general algorithm that, at time \(t\), generates \(N\) particles \(\left\{X_{0:t}^{(i)}\right\}_{i=1}^{N}\) with the corresponding empirical measure \(\hat{\pi}(X_{0:t}\mid y_{1:t})=\sum_{i=1}^{N}w_{t}^{i}\delta_{X_{0:t}}^{(i)}( dX_{0:t})\), a discrete weighted approximation of the true posterior \(\pi(X_{0:t}\mid y_{1:t})\), where \(\delta_{X_{0:t}}^{(i)}(dX_{0:t})\) denotes the delta-Dirac mass located at \(X_{t}\) and \(dX_{0:t}\) equals \(X_{0:t}-X_{0:t}^{i}\). The particles are drawn recursively using the observation obtained at time \(t\) and the set of particles \(\left\{X_{0:t-1}^{(i)}\right\}_{i=1}^{N}\) drawn at time \(t-1\), accordingly, where \(\hat{\pi}(X_{0:t-1}\mid y_{1:t-1})\approx\pi(X_{0:t-1}\mid y_{1:t-1})\). The weights are normalized using the principle of importance sampling such that \(\sum_{i=1}^{N}w_{t}^{i}=1\). If the samples \(X_{0:t}^{i}\) are drawn from an importance density \(q(X_{0:t}^{i}\mid y_{1:t})\), we have \[w_{t}^{i}\propto\frac{\pi(X_{0:t}^{i}\mid y_{1:t})}{q(X_{0:t}^{i}\mid y_{1:t })} \tag{4}\] Suppose that at time step \(t-1\) we have existing samples approximating the current posterior distribution \(\pi(X_{0:t-1}\mid y_{1:t-1})\). If we receive a new observation \(y_{t}\) at time \(t\), a recursive approximation to \(\pi(X_{0:t}\mid y_{1:t})\) with a new set of samples can be obtained by importance sampling; the corresponding factorization [14] is described by \[q\left(X_{0:t}\mid y_{1:t}\right):=q(X_{t}\mid X_{0:t-1},y_{1:t})q(X_{0:t-1} \mid y_{1:t-1}) \tag{5}\] Then, we can obtain the new samples \(X_{0:t}^{i}\sim q(X_{0:t}\mid y_{1:t})\) by propagating each of the existing samples \(X_{0:t-1}^{i}\sim q(X_{0:t-1}\mid y_{t-1})\) with the new state \(X_{t}^{i}\sim q(X_{t}\mid X_{0:t-1},y_{t})\). To derive the weight update equation, we follow the ergodic Markov chain properties of the model; the full posterior distribution \(\pi(X_{0:t}\mid y_{1:t})\) can be written recursively in terms of \(\pi(X_{0:t-1}\mid y_{1:t-1})\), \(g(y_{t}\mid X_{t})\) and \(f(X_{t}\mid X_{t-1})\) [14]: \[\pi(X_{0:t}\mid y_{1:t})=\frac{p(y_{t}\mid X_{0:t},Y_{1:t-1})p(X_{0:t}\mid y_ {1:t-1})}{p(y_{t}\mid y_{1:t-1})}\Rightarrow\pi_{1:t}\propto g(y_{t}\mid x_{t} )f(x_{t}\mid x_{t-1})\pi_{1:t-1} \tag{6}\] where \(\pi_{1:t}\) is short for \(\pi(X_{0:t}\mid y_{1:t})\). 
By substituting (5) and (6) into (4), we have \[w_{t}^{i}\propto\frac{g(y_{t}\mid X_{t}^{i})f(X_{t}^{i}\mid X_{t-1}^{i})p(X_{0 :t-1}^{i}\mid y_{1:t-1})}{q(X_{t}^{i}\mid X_{0:t-1}^{i},Y_{1:t})q(X_{0:t-1}^{i} \mid Y_{1:t-1})}=w_{t-1}^{i}\frac{g(y_{t}\mid X_{t}^{i})f(X_{t}^{i}\mid X_{t-1 }^{i})}{q(X_{t}^{i}\mid X_{0:t-1}^{i},y_{1:t})} \tag{7}\] We assume the state \(X_{t}\) is ergodic Markovian; thus, \(q(X_{t}^{i}\mid X_{0:t-1}^{i},y_{1:t})=q(X_{t}^{i}\mid X_{t-1}^{i},y_{t})\). From this point, we only need to store \(X_{t}^{i}\), and we obtain the recursively updated weight formula [17]: \[w_{t}^{i}\propto w_{t-1}^{i}\frac{g(y_{t}\mid x_{t}^{i})f(x_{t}^{i}\mid x_{t-1}^{i })}{q(x_{t}^{i}\mid x_{t-1}^{i},y_{t})} \tag{8}\] The corresponding empirical posterior filtered density \(\pi(X_{t}\mid y_{1:t})\) can be approximated as \[\hat{\pi}(X_{t}\mid y_{1:t})=\sum_{i=1}^{N}w_{t}^{i}\delta_{X_{t}}^{(i)}(dX_{t}) \tag{9}\] It can be shown that as \(N\rightarrow\infty\), \(\hat{\pi}(X_{t}\mid y_{1:t})\) converges to \(\pi_{t}=\pi(X_{t}\mid y_{1:t})\). Ideally, the importance density function should be the posterior distribution itself, \(\pi(X_{0:t}\mid y_{1:t})\). However, the variance of the importance weights increases over time, which decreases the accuracy and leads to degeneracy, where some particles carry negligible normalized weights. The brute-force approach to reducing the effect of degeneracy is to make \(N\) as large as possible. However, as the sample size increases, the computation of the recursive step also becomes exponentially costly. Generally, we can try two ways to improve: (1) a suitable importance density for sampling; (2) resampling the weights. Here we focus on the latter. A suitable measure of the degeneracy of an algorithm is the effective sample size \(N_{eff}\) introduced in [14]: \(N_{eff}=\frac{N}{1+\mathrm{Var}(w_{t}^{*i})}\), \(w_{t}^{*i}=\frac{\pi(X_{t}^{i}\mid y_{1:t})}{q(X_{t}^{i}\mid X_{t-1}^{i},y_{ t})}\). While the closed-form solution is unavailable, it can be approximated [18] by \(\hat{N}_{eff}=\frac{1}{\sum_{i=1}^{N}(w_{t}^{i})^{2}}\). If the weights are uniform, \(w_{t}^{i}=\frac{1}{N}\) for each particle \(i=1,2,...,N\), then \(N_{eff}=N\); if there exists a unique particle whose weight is \(1\) while the remaining weights are zero, then \(N_{eff}=1\). Hence, a small \(N_{eff}\) easily leads to severe degeneracy [17]. We use \(\hat{N}_{eff}\) as an indicator to measure the condition for resampling in our experiments in Section 4. We will introduce our proposal based on the repetitive deterministic domain traverse in the next section.

## 3 Repetitive Deterministic Domain with Median Ergodicity Resampling

### Multinomial Sampling

A Multinomial distribution provides a flexible framework with parameters \(p_{i},i=1,...,k\) and \(N\) to measure the probability that each class \(i\in 1,...,k\) has been sampled \(N_{i}\) times over \(N\) independent categorical tests. It can be used to resample the locations in our proposal in two steps. First, we obtain samples from a uniform generator \(u^{i}\sim U(0,1],i=1,...,N\); second, we evaluate the index \(j\) of the samples with the generalized inverse rule: if the cumulative sum of weights \(\sum_{i=1}^{j}w_{i}\) is larger than or equal to \(u^{i}\), this index \(j\) is labeled and the corresponding sample \(w_{i}\) is resampled. This event can be mathematically written as \(g(w_{i})=\mathbb{I}_{w_{i}=w_{j}}\).
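To make these two ingredients concrete, the following is a minimal NumPy sketch of the effective-sample-size indicator \(\hat{N}_{eff}\) and of multinomial resampling via the generalized inverse (inverse-CDF) rule. The function names and the resampling threshold are our own choices for illustration, not part of the paper.

```python
import numpy as np

def effective_sample_size(w):
    """ESS estimate N_eff_hat = 1 / sum_i (w_i)^2 for normalized weights w."""
    w = np.asarray(w, dtype=float)
    return 1.0 / np.sum(w ** 2)

def multinomial_resample(weights, rng):
    """Draw N indices with replacement, P(index = j) = w_j (inverse-CDF rule)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    u = rng.uniform(0.0, 1.0, size=w.size)     # uniform draws u^i
    cdf = np.cumsum(w)
    return np.searchsorted(cdf, u)             # smallest j with sum_{i<=j} w_i >= u

rng = np.random.default_rng(0)
w = rng.random(20); w /= w.sum()
if effective_sample_size(w) < 0.5 * w.size:    # resampling threshold (our choice)
    idx = multinomial_resample(w, rng)
    w = np.full(w.size, 1.0 / w.size)          # weights reset to 1/N after resampling
```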
### Deterministic Domain Construction

The population of weights is divided into two parts. The first part consists of the weights larger than the average \(\frac{1}{N}\); they are considered first as candidates to be sampled, and we keep \(r_{i}=\left\lfloor N\hat{w}_{t}^{i}\right\rfloor\) replicates of \(\hat{w}_{t}^{i}\) for each \(i\), where \(\hat{w}_{t}^{i}\) is the renormalized unit. The \(r_{i}\) replicates are filtered one by one from the population, and the corresponding tag \(j\) is saved into an array. We find that this part also follows the multinomial distribution \(W^{i}\sim\textit{Multinomial}(M;\hat{w}^{1},...,\hat{w}^{M})\), and we extract the samples from the population with the multinomial sampling rule shown in Section 3.1. This step is the first layer of the traverse of the population and yields the first subset; we then renormalize the weights in the subset and traverse again to separate the larger weights from the other units, until we obtain a feasible-sized set to be considered as the potential deterministic domain. We define the integer-part event \(g(\hat{w}_{i})=\mathbb{I}_{\hat{w}_{i}=\hat{w}_{j}}\), and similarly for the following repetitive part, \(\bar{g}(\bar{w}_{i})=\mathbb{I}_{\bar{w}_{i}=\bar{w}_{j}}\). We count the units involved in the occurrence of the events \(g(\hat{w}_{i})\) and \(\bar{g}(\bar{w}_{i})\), then extract these units based on the tags \(j\), which forms the final deterministic domain.

### Repetitive Ergodicity in Deterministic Domain with Median Schema

Our goal is to extract and retain units with large weights, while the remaining ones with low weights can be effectively replaced in the population. We set the desired number of resampled units to the size of the population, under the premise of ensuring unit diversity as much as possible. We normalize all the units to the same scale for comparison; after that, the units with weights above the average level appear as positive integers in \(Ns=\textit{floor}(N\cdot w)\), while the remaining ones are filtered to zero. This is the prerequisite for the deterministic domain construction. In the \(Ns\) subset, there exist multiple categorical units that follow the multinomial distribution. We sample these so-called large units with two loops: the outer loop bypasses the indices of the zero units, and the inner loop traverses and samples the subset where the different large units are distributed, so that larger weights are sampled multiple times. The last procedure is to repetitively traverse the deterministic domain, where each unit is renormalized and the corresponding cumulative sum is used to find the index of the unit via the inverse cumulative distribution function rule. Each desired unit is drawn by the multinomial sampler to rejuvenate the population recursively. The complexity of our method is \(\mathcal{O}(M)\). As the size of the deterministic domain satisfies \(M\ll N\) (the size of the population), given a feasible number of particles, our algorithm is faster than the state of the art. The complete implementation schema is shown in Algorithm 1.
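Algorithm 1 is not reproduced in this extraction, so the following NumPy sketch is only our reading of the construction described above: above-average weights are replicated \(\lfloor N\hat{w}_{i}\rfloor\) times to form the deterministic domain (together with the median particle used in the theoretical analysis that follows), and the domain is then traversed repetitively with the inverse-CDF rule until \(N\) indices have been drawn. The function names, the use of the weight-median as the median unit, and the tie-breaking choices are assumptions, not the authors' exact implementation.

```python
import numpy as np

def deterministic_domain(weights):
    """Indices of the deterministic domain: floor(N*w_i) copies of each
    above-average-weight particle, plus one median unit (our reading)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    N = w.size
    counts = np.floor(N * w).astype(int)        # zero for below-average weights
    domain = np.repeat(np.arange(N), counts)
    median_idx = np.argsort(w)[N // 2]          # weight-median unit (assumption)
    return np.append(domain, median_idx)

def rdd_median_resample(weights, rng):
    """Repetitive ergodic traversal of the deterministic domain (our reading)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    N = w.size
    domain = deterministic_domain(w)
    w_dom = w[domain] / w[domain].sum()         # renormalize inside the domain
    cdf = np.cumsum(w_dom)
    out = np.empty(N, dtype=int)
    for n in range(N):                          # repetitive traversal until N draws
        u = rng.uniform()
        out[n] = domain[min(np.searchsorted(cdf, u), domain.size - 1)]
    return out

rng = np.random.default_rng(1)
w = rng.random(200); w /= w.sum()
idx = rdd_median_resample(w, rng)               # indices of the rejuvenated population
```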
### Theoretical Asymptotic Behavior of Approximations

#### 3.4.1 Central limit theorem

Suppose that for each \(t\in[1,T]\), \(\tilde{X}_{t}^{(1)},...,\tilde{X}_{t}^{(M)}\) are independent, where \(\tilde{X}_{t}^{(m^{\prime})},m^{\prime}\in[1,M]\) denotes the median of the originator particles. The other \(\tilde{X}_{t}^{(i)},i\neq m^{\prime}\) belong to the deterministic domain. The probability space of the sequence changes recursively with \(t\) for sequential Monte Carlo; such a collection is called a triangular array of particles. Let \(S_{m}:=\tilde{X}_{t}^{(1)}+...+\tilde{X}_{t}^{(M)}\). We expand the characteristic function of each \(\tilde{X}_{t}^{(i)}\) to second-order terms and estimate the remainder to establish the asymptotic normality of \(S_{m}\). Suppose that both the means and the variances are finite; we have \[E(X_{t}^{(i)})=\int\Psi(x)\pi(x)dx,\quad\delta_{t,i}^{2}(\Psi)=E[(X_{t}^{(i)}-E(X_ {t}^{(i)}))^{2}]. \tag{10}\]

**Theorem 1** For each \(t\), the sequence \(\tilde{X}_{t}^{(1)},...,\tilde{X}_{t}^{(M)}\) is sampled from the originator particles \(X_{t}^{(1)},...,X_{t}^{(N)}\); suppose that they are independent, where \(\tilde{X}_{t}^{(m^{\prime})},m^{\prime}\in[1,M]\) denotes the median of the originator particles and the rest \(\tilde{X}_{t}^{(i)},i\neq m^{\prime}\) belong to the deterministic domain. Let \(\Psi\) be a measurable function and assume that there exists \(\mathbf{\tilde{X}_{t}}\subset\mathfrak{K}\) satisfying \[\int_{x\in\mathcal{R}}\pi(dx)\mathbb{E}_{x}\left[\sum_{t=1}^{T}|\Psi(\mathbf{ X_{t}})|^{2+\epsilon}\right]<\infty \tag{11}\] and \[\text{sup}_{x\in\Re}\mathbb{E}_{x}\left[\sum_{t=1}^{T}\left|\Psi(\mathbf{X_{t}}) \right|\right]<\infty,\quad\mathbb{E}_{\pi_{t}}[\Psi]:=\int_{\Re}\pi(dx)\mathbb{E}_{x }\left[\sum_{i=1}^{N}\Psi(\mathbf{X}^{(i)})\right]<\infty. \tag{12}\] If \(\mathbf{\tilde{X}_{t}}\) is aperiodic, irreducible, positive Harris recurrent with invariant distribution \(\pi\) and geometrically ergodic, and if, in addition, \[\delta_{t,i}^{2}(\Psi):=\int\pi(dx)\mathbb{E}_{x}\left[\left(\Psi(\tilde{X}_{t }^{(i)})-\mathbb{E}_{\pi_{t}}[\Psi]\right)^{2}\right]<\infty,\quad s_{m}^{2}=\lim_{M \rightarrow\infty}\sum_{i=1}^{M}\delta_{t,i}^{2}(\Psi), \tag{13}\] then \(\{\Psi(\tilde{X}_{t}^{(i)})\}\) satisfies \[\lim_{M\rightarrow\infty}\sum_{i=1}^{M}\left\{\Psi(\tilde{X}_{t}^{(i)})- \mathbb{E}_{\pi_{t}}[\Psi]\right\}\sim N(0,s_{m}^{2}). \tag{14}\]

**Proof** Let \(Y_{t,i}=\Psi(\tilde{X}_{t}^{(i)})-\mathbb{E}_{\pi_{t}}[\Psi]\). By [19], \(\left|e^{iy}-\sum_{k=0}^{M}\frac{(iy)^{k}}{k!}\right|\leq\min\{\frac{(y)^{M+1} }{(M+1)!},\frac{2(y)^{M}}{M!}\}\); when \(M=2\), we have \[\left|e^{iy}-(1+iy-\frac{1}{2}y^{2})\right|\leq\min\{\frac{1}{6}\left|y\right| ^{3},\left|y\right|^{2}\}. \tag{15}\] We first assume that \(\Psi(\cdot)\) is bounded. From the properties of the characteristic function, the left-hand side can be written as \(\left|\mathbb{E}\left[e^{(i\lambda Y_{t,i})}|\Re\right]-(1-\frac{\lambda^{2} \delta_{t,i}^{2}(\Psi)}{2})\right|\). Therefore, the corresponding characteristic function \(\varphi_{t,i}(\lambda)\) of \(Y_{t,i}\) satisfies \[\left|\varphi_{t,i}(\lambda)-(1-\frac{\lambda^{2}\delta_{t,i}^{2}(\Psi)}{2}) \right|\leq\mathbb{E}\left[min\{\left|\lambda Y_{t,i}\right|^{2},\frac{1}{6} \left|\lambda Y_{t,i}\right|^{3}\}\right]. 
\tag{16}\] Note that the expected value exists and is finite, the right-hand side term can be integrated by \[\int_{\left|Y_{t,i}\right|\geq\epsilon\delta_{t,i}\sqrt{M}}\min\{\left|\lambda Y _{t,i}\right|^{2},\frac{1}{6}\left|\lambda Y_{t,i}\right|^{3}\}dx \tag{17}\] As \(M\rightarrow+\infty\), \(\{Y_{t,i}\}\rightarrow\emptyset\), then, \(E\left[min\{\left|\lambda Y_{t,i}\right|^{2},\frac{1}{6}\left|\lambda Y_{t,i} \right|^{3}\}\right]\to 0\), which satisfies Lindeberg condition: \[\lim_{M\rightarrow\infty}\sum_{i=1}^{M}\frac{1}{s_{n}^{2}}\int_{\left|Y_{t,i} \right|\geq\epsilon\delta_{t,i}\sqrt{M}}Y_{t,i}^{2}dX=0 \tag{18}\] for \(\epsilon>0,s_{m}^{2}=\sum_{i=1}^{M}\delta_{t,i}^{2}(\Psi)\). \[\lim_{M\rightarrow\infty}\left|\varphi_{t,i}(\lambda)-(1-\frac{\lambda^{2} \delta_{t,i}^{2}(\Psi)}{2})\right|=0. \tag{19}\] By [19] \[\varphi_{t,i}(\lambda)=1+i\lambda\mathbb{E}[X]-\frac{1}{2}\lambda^{2}\mathbb{ E}[X^{2}]+o(\lambda^{2}),\lambda\to 0. \tag{20}\] By page358 Lemma 1 [19]. \[\begin{array}{l}\left|\prod_{i=1}^{M}e^{-\lambda^{2}\delta_{t,i}^{2}(\Psi)/ 2}-\prod_{i=1}^{M}(1-\frac{1}{2}\lambda^{2}\delta_{t,i}^{2}(\Psi))\right|\leq \sum_{i=1}^{M}\left|e^{-\lambda^{2}\delta_{t,i}^{2}(\Psi)/2}-1+\frac{1}{2} \lambda^{2}\delta_{t,i}^{2}(\Psi)\right|\leq\\ \sum_{i=1}^{M}\left[\frac{1}{4}\lambda^{4}\delta_{t,i}^{4}(\Psi)\sum_{j=2}^{ \infty}\frac{\frac{1}{2^{j-2}}\lambda^{2j-4}\delta_{t,i}^{2j-4}(\Psi)}{j!} \right]\leq\sum_{i=1}^{M}\frac{1}{4}\lambda^{4}\delta_{t,i}^{4}(\Psi)e^{\left| \frac{1}{2}\lambda^{2}\delta_{t,i}^{2}(\Psi)\right|}\end{array} \tag{21}\] Thus, \[\prod_{i=1}^{M}e^{-\lambda^{2}\delta_{t,i}^{2}(\Psi)/2}=\prod_{i=1}^{M}(1- \frac{1}{2}\lambda^{2}\delta_{t,i}^{2}(\Psi))+o(\lambda^{2})=\prod_{i=1}^{M}e^ {-\lambda^{2}\delta_{t,i}^{2}(\Psi)/2}+o(\lambda^{2})=e^{-\frac{\lambda^{2} \delta_{t,i}^{2}}{2}}+o(\lambda^{2}) \tag{22}\] The characteristic function \(\prod_{i=1}^{M}\varphi_{t,i}(\lambda)\) of \(\sum_{i=1}^{M}Y_{t,i}=\sum_{i=1}^{M}\left\{\Psi(\mathbf{X_{t}^{(i)}})-\mathbb{ E}_{\pi_{t}}[\Psi]\right\}\) is equal to \(e^{-\frac{\lambda^{2}\delta_{t,i}^{2}}{2}}\), thus, (14) holds. #### 3.4.2 Asymptotic Variance 1 The sample median can be defined as \[X_{t}^{(m^{\prime})}=\left\{\begin{array}{c}X_{t}^{(\frac{1}{2}(1+N))},N=2r+ 1,r\in\mathbb{R}^{+}\\ \frac{1}{2}(X_{t}^{(\frac{1}{2}N)}+X_{t}^{(\frac{1}{2}N+1)}),N=2r,r\in\mathbb{ R}^{+}.\end{array}\right. \tag{23}\] Define \[\mathbb{E}_{q_{k,t+1}}(\Psi\mid\mathbf{X_{t}})=\left\{\begin{array}{c}\int \Psi(\mathbf{X_{t}})\prod_{l=t+1}^{k}q_{l}(\mathbf{X_{l}}\mid\mathbf{X_{l-1}}) \mathbf{X_{l}},\text{if }\;t<k,\\ q(\mathbf{X_{k}})\;\;\text{otherwise}.\end{array}\right. \tag{24}\] and \[w_{ts}=\frac{\pi_{t}(X_{t}\mid X_{t-1},Y_{1:t})}{\pi_{s}(X_{s}\mid X_{s-1},Y_{1: s})\prod_{l=s+1}^{t}q_{l}(X_{l}\mid X_{l-1,Y_{1:t}})}. \tag{25}\] **Theorem 2**: Under the integrability conditions of theorem 1, suppose that \(\tilde{X}_{t}^{(i)}\to U[0,1]\), \(\lim_{M\to\infty}\sum_{i=1}^{M}\left\{\Psi(\tilde{\mathbf{X}}_{t}^{(i)})-\mathbb{ E}_{\pi_{t}}[\Psi]\right\}\sim N(0,s_{m}^{2})\), satisfying \[s_{m}^{2}\leq(M-1)\cdot V_{t,t_{0}}(\Psi)+\frac{1}{8r+12}, \tag{26}\] where the originator particle size \(N=2r+1,V_{t,t_{0}}(\Psi)=\frac{1}{M}\sum_{s=t_{0}}^{t}\mathbb{E}_{\pi_{s}} \mathbb{E}_{q_{s+1}}[\mathbb{E}_{q_{s+2}}\cdots\mathbb{E}_{q_{t}}\{(\Psi- \mathbb{E}_{\pi_{t}}(\Psi))w_{ts}\}]^{2}\) and \[V_{t,t_{0}}(\Psi)>\frac{1}{M}\sum_{s=t_{0}}^{t}\mathbb{E}_{q_{t}}\left[(\Psi- \mathbb{E}_{\pi_{t}}(\Psi))w_{ts}\right]^{2}. 
\tag{27}\] **Proof** We decompose the original sequence \(\tilde{X}_{t}^{(1)},...,\tilde{X}_{t}^{(M)}\) in descending order into two parts, \(\tilde{X}_{t}^{(m^{\prime})}\) denotes the median of the originator particles, the rest \(\tilde{X}_{t}^{(i)},i\neq m^{\prime}\) belong to the deterministic domain; We solve for the variance of these estimators separately. We assume that the population has an infinite number of individuals, The values of the variance of the median for \(2r\) even and \(2r+1\) odd approach the same limit, but the value for the even will be less than the value for the odd [20], Karl Pearson extended it with a more accurate estimation of the variance. Consequently, for the upper bound, here we consider the variance at the case of \(N=2r+1\), denoted by \(V(\Psi^{\prime}),\Psi^{\prime}=\Psi(\tilde{X}_{t}^{\frac{1}{2}(1+N)})\). Next, we derive a more detailed expression separately. For \(V(\Psi^{\prime})\), we first need to find the pdf of \(\tilde{X}_{t}^{r+1}\), intuitively, \[\mathbb{P}(\tilde{X}_{t}^{1+r}\in dx)=\sum_{i=1}^{2r+1}\mathbb{P}(\tilde{X}_{ t}^{i}\in dx,B_{i}) \tag{28}\] where \(B_{i}\) is the event that \(r\) of the \(2r\) values \(V_{1},...,V_{i-1},V_{i+1},...,V_{2r+1}\) are less than \(x\). Because \(V_{i}\) does not appear in the event \(B_{i}\), the event \(\{V_{i}\in dx\}\) is the chance that if we toss a coin \(2r\) times, the probability of \(r\) tails obtained, it can be formulated as \[\mathbb{P}(\tilde{X}_{t}^{1+r}\in dx)=(2r+1)\cdot\mathrm{C}_{2r}^{r}x^{r}(1- x)^{r}dx=(2r+1)\binom{2r}{r}x^{r}(1-x)^{r}dx=\frac{(2r+1)!}{2r!}x^{r}(1-x)^{r}dx. \tag{29}\] Thus, \(\Psi^{\prime}\sim Beta(r+1,r+1)\), The variance of \(\Psi^{\prime}\) is \(V(\Psi^{\prime})=\frac{1}{8r+12}\). The particle in the deterministic domain was resampled on the basis of the originator particle, which has been truncated, satisfying \[\tilde{X}_{0:t}^{(i)}=X_{0:t}^{(i)}\cdot\mathbb{1}\left[\tilde{w}_{t}^{i}\geq \frac{1}{N}\right],i\neq m^{\prime}. \tag{30}\] For any generic function \(\Psi(X_{0:t}^{(i)})\), the corresponding sample mean after resampling \[\bar{\Psi}_{t}=\frac{1}{N}\sum_{i=1}^{N}\Psi_{t}(\tilde{X}_{t}^{(i)}) \tag{31}\] is a consistent estimator of \(\mathbb{E}_{\pi_{s}}[\Psi]\), whose variance \(V_{t,t_{0}}(\Psi)\) is a function of the incremental weights and transition kernels encountered up to time \(t\) from \(t_{0}\). Inspired by [21], the variance of this estimator under large sample sizes can be formulated as \[V_{t,t_{0}}(\Psi)=\frac{1}{M}\sum_{s=t_{0}}^{t}\mathbb{E}_{\pi_{s}}\mathbb{E} _{q_{s+1}}[\mathbb{E}_{q_{s+2}}\cdots\mathbb{E}_{q_{t}}\{(\Psi-\mathbb{E}_{ \pi_{t}}(\Psi))w_{ts}\}]^{2} \tag{32}\] where \(\mathbb{E}_{\pi_{s}}\) denotes expectation under the posterior distribution \(\pi(X_{0:s}\mid y_{1:s})\), \(\mathbb{E}_{q_{s+1}}\) denotes expectation under importance sampling distribution \(q_{s+1}(X_{s+1}\mid X_{0:s},y_{1:s+1})\), and \(N_{s}\) is the population size at \(t=s\). Consequently, each resampling stage at \(s\) contributes an additional variance component (32). 
After rejuvenation, many of the previous particles will be discarded, and if we assume the whole population size of particles is stable, then the limit on the proportion of discarded particles satisfies, \[\lim_{t\rightarrow\infty,N\rightarrow\infty}\left(1-\frac{1}{\mathbb{1}\left[ \tilde{w}_{t}^{i}\geq\frac{1}{N}\right]}\right)^{t}\approx\left(1-\frac{ \epsilon}{N}\right)^{t}\approx e^{-\epsilon} \tag{33}\] Although the progressive impoverishment will lead to an increase in variance \(V_{t,t_{0}}\), the rest will maintain a common attribute \(\tilde{w}_{t}^{i}>\frac{1}{N}\) when \(n\rightarrow\infty\). the accumulation of variance components after each rejuvenation will be negligible. As the simulation consistent estimator of \(V_{t}\) is not available from the output samples, we consider the case from importance sampling distribution \(q(X_{s},dX_{s+1})\) of \(X_{s+1}=X_{s}\), the variance can be reduced to \[V_{t,t_{0}}^{\prime}(\Psi)=\frac{1}{M}\sum_{s=t_{0}}^{t}\mathbb{E}_{\pi_{s}} \mathbb{E}_{q_{t+1}}[\mathbb{E}_{q_{s+2}}\cdots\mathbb{E}_{q_{t}}\{(\Psi- \mathbb{E}_{\pi_{t}}(\Psi))w_{ts}\}]^{2}=\frac{1}{M}\sum_{s=t_{0}}^{t}\mathbb{ E}_{q_{t}}\left[(\Psi-\mathbb{E}_{\pi_{t}}(\Psi))w_{ts}\right]^{2} \tag{34}\] \[w_{ts}=\prod_{l=s+1}^{t}w_{l,l-1}\propto\prod_{l=s+1}^{t}W_{l}=\prod_{l=s+1}^ {t}\frac{g(y_{l}\mid X_{l}^{i})f(X_{l}^{i}\mid X_{l-1}^{i})}{q(X_{l}^{i}\mid X _{0:l-1}^{i},y_{1:l})} \tag{35}\] A simulation-consistent estimator of \(V_{t,t_{0}}^{\prime}\) is \[\hat{V}_{t,t_{0}}^{\prime}=\frac{1}{M}\sum_{s=t_{0}}^{t}\frac{\sum_{j=1}^{n_{ t}}\{\Psi(\epsilon_{t}^{(j)}-\tilde{\Psi}_{t})^{2}\prod_{l=s+1}^{t}W_{l}^{r(j)} }}{\sum_{j=1}^{n_{t}}\prod_{l=s+1}^{t}W_{l}^{r(j)}} \tag{36}\] where \(r(j)\) is the index of the sample at stage \(r\) that survives as sample \(j\) at stage \(t\). \(\hat{V}_{t,t_{0}}^{\prime}\) provides an indicator of sample size \(n_{t}\) whether it is adequate to resist particle impoverishment. Thus, (26) and (27) hold. **Theorem 3** Suppose that \(\tilde{X}_{t}^{(1)},...,\tilde{X}_{t}^{(M)}\) each with a strictly positive probability density function and continuity on \(\mathbb{R}\), let \(m_{i}^{\prime}\) be the median of each \(\tilde{X}_{t}^{(i)}\), such that the cumulative function of \(\tilde{X}_{t}^{(i)}\) satisfying \(F(m_{i}^{\prime})=\frac{1}{2}\), then the sample median \(\mathcal{M}\) of \(\left\{\tilde{X}_{t}^{(1)},...,\tilde{X}_{t}^{(M)}\right\}\) approximates the \(\mathbb{N}(m_{i}^{\prime},\frac{1}{\sigma_{0}(m_{i}^{\prime})})\) distribution in the precise sense that, as \(M\rightarrow\infty\), \[\lim_{M\rightarrow\infty}\mathbb{P}\left[\frac{\mathcal{M}-m_{i}^{\prime}}{ \sigma_{0}(m_{i}^{\prime})}\leq x\right]=\Phi(x)\ \ \ (x\in\mathbb{R}), \tag{37}\] where \(\sigma_{0}^{2}(m_{i}^{\prime})>4Mf(m_{i}^{\prime})^{2}\). **Proof** We follow (28) and let \(x=\frac{1}{2}+\frac{1}{2}\frac{y}{\sqrt{2r}},dx=\frac{dy}{2\sqrt{2r}}\), we have \({2r\choose r}\frac{1}{2^{2r}}\sim\frac{1}{\sqrt{\pi r}}\), \[\int_{-\infty}^{+\infty}\lim_{r\rightarrow\infty}(1-\frac{y^{2}}{2r})^{r}dy= \int_{-\infty}^{+\infty}e^{-\frac{y^{2}}{2}}dy=\sqrt{2\pi} \tag{38}\] As \(r\rightarrow\infty\), combining (28) and (38), we have \[\frac{(2r+1)!}{2r!}x^{r}(1-x)^{r}dx=\frac{1}{\sqrt{2\pi}}e^{-\frac{y^{2}}{2}}dy \tag{39}\] Thus, the quantity \(\mathcal{M}\) can be expressed by \(\mathcal{M}=\frac{1}{2}+\frac{Y}{2\sqrt{2r}}\), where \(Y\sim\mathbb{N}(0,1)\). Since \(F(X)\in[0,1]\), \(F\) is continous and strictly increasing on \(\mathbb{R}\). 
\(F(\mathcal{M})\) is the sample median of the \(F(\tilde{X}_{t}^{(i)})\), from the Taylor series, it satisfies \[F(\mathcal{M})=\frac{1}{2}+\frac{Y}{2\sqrt{2r}}>F(m_{i}^{\prime})+(\mathcal{M }-m_{i}^{\prime})f(m_{i}^{\prime}). \tag{40}\] Since \(F(m_{i}^{\prime})=\frac{1}{2}\), it yields to \[\mathcal{M}-m_{i}^{\prime}<\frac{Y}{2f(m_{i}^{\prime})\sqrt{M}}. \tag{41}\] Thus, \(\sigma_{0}^{2}(m_{i}^{\prime})>4Mf(m_{i}^{\prime})^{2}\). We have the same limiting value when \(M=2r\)[20, 22]. #### 3.4.3 Consistency **Theorem 4** Assume that the particle set \(\left\{X_{0:t}^{(i)},w_{t}^{i}\right\},i\in[1,N]\) on the state space \(\Omega\) is consistent, where the convergence of Markovian state transition holds. Then, the uniform weighted sample \(\left\{\tilde{X}_{0:t}^{(i)},\tilde{w}_{t}^{i}\right\},i\in[1,M]\) in the subset of \(\Omega\) drew by the repetitive ergodicity in deterministic domain with median resampler is biased but consistent. **Proof** There is a special case that when the median of particles belong to the deterministic domain, the particles with weight \(\tilde{w}_{t}^{i}\leq\frac{1}{N}\) have been totally discarded, thus, the particle set after resampling is biased. Under the integrability conditions of theorem 1, We invoke Chebyshev's Inequality, \(Y_{t,i}=\Psi(\tilde{X}_{t}^{(i)})-\mathbb{E}_{\pi_{t}}[\Psi],i\neq m^{\prime}\), \[V_{t,t_{0}}(Y_{t,i})=V_{t,t_{0}}(\Psi)=\frac{1}{M}\sum_{s=t_{0}}^{t}\mathbb{E} _{\pi_{s}}\mathbb{E}_{q_{s+1}}[\mathbb{E}_{q_{s+2}}\cdots\mathbb{E}_{q_{t}} \left\{(\Psi-\mathbb{E}_{\pi_{t}}(\Psi))w_{ts}\right\}]^{2} \tag{42}\] As \(M\rightarrow\infty\), \(V_{t,t_{0}}(Y_{t,i})\to 0\), for \(V(\Psi^{\prime})=\frac{1}{8r+12}\), as \(r\rightarrow\infty,V(\Psi^{\prime})\to 0\), consequently, \[\lim_{M\rightarrow\infty,r\rightarrow\infty}s_{m}^{2}\leq\lim_{M\to \infty,N\rightarrow\infty}\left[(M-1)\cdot V_{t,t_{0}}(\Psi)+\frac{1}{8r+12} \right]=0, \tag{43}\] \(\mathbb{E}(Y_{t,i})^{2}=V(Y_{t,i})+[\mathbb{E}(Y_{t,i})^{2}]=0\) \[P(|Y_{t,i}|\leq\epsilon)=P(Y_{t,i}^{2}\geq\epsilon^{2})\leq\frac{\mathbb{E}(Y _{t,i})^{2}}{\epsilon^{2}}=0. \tag{44}\] \(\lim_{n\rightarrow\infty}P(|Y_{t,i}|\leq\epsilon)=0\) for all \(\epsilon\leq 0\). \(Y_{t,i}\) is consistent. As the resampling schema repetitively in a scaled domain, the total variance of our method obtained will be the lowest compared to other resampling methods, which is verified by the experiments. ## 4 Simulation In this part, the results of the comparison of these resampling methods are validated from the experiments with the linear Gaussian state space model and nonlinear state space model, respectively. We ran the experiments on an HP Z200 workstation with an Intel Core i5 and an \(\#82-18.04.1-\) Ubuntu SMP kernel. The code is available at [https://github.com/986876245/Variance-Reduction-for-SMC](https://github.com/986876245/Variance-Reduction-for-SMC). ### Linear Gaussian State Space Model This linear model is expressed by: \[X_{0}\sim\mu(X_{0}),\ \ \ X_{t}\mid X_{t-1}\sim N(X_{t};\phi X_{t-1},\delta_{v }^{2}),\ \ \ Y_{t}\mid X_{t}\sim N(y_{t};X_{t},\delta_{e}^{2}). \tag{45}\] We keep parameters the same as [23] to compare with the different resampling methods. Where \(\theta=\{\phi,\delta_{v},\delta_{e}\}\),\(\phi\in(-1,1)\) describes the persistence of the state, while \(\delta_{v},\delta_{e}\) denote the standard deviations of the state transition noise and the observation noise, respectively. 
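As a concrete illustration of model (45) and of the particle filter used to track it, the following is a minimal NumPy sketch of a bootstrap particle filter (proposal \(q=f\), multinomial resampling) run on a simulated trajectory. The parameter values, the comparison against the true simulated state, and the function names are our own choices for illustration, not the exact configuration of [23].

```python
import numpy as np

def simulate_lgss(T, phi, sv, se, rng):
    """Simulate the linear Gaussian state-space model (45)."""
    x = np.zeros(T); y = np.zeros(T)
    x[0] = rng.normal(0.0, sv)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sv)
        y[t] = x[t] + rng.normal(0.0, se)
    return x, y

def bootstrap_pf(y, N, phi, sv, se, rng):
    """Bootstrap particle filter: propagate with f, weight with g, resample."""
    T = y.size
    xhat = np.zeros(T)
    particles = rng.normal(0.0, sv, size=N)
    for t in range(1, T):
        particles = phi * particles + rng.normal(0.0, sv, size=N)   # X_t | X_{t-1}
        logw = -0.5 * ((y[t] - particles) / se) ** 2                 # log g(y_t | X_t)
        w = np.exp(logw - logw.max()); w /= w.sum()
        xhat[t] = np.sum(w * particles)                              # filtered mean
        particles = particles[rng.choice(N, size=N, p=w)]            # multinomial resampling
    return xhat

rng = np.random.default_rng(2)
x, y = simulate_lgss(100, phi=0.75, sv=1.0, se=0.5, rng=rng)
xhat = bootstrap_pf(y, N=20, phi=0.75, sv=1.0, se=0.5, rng=rng)
print("RMSE:", np.sqrt(np.mean((xhat - x) ** 2)))
```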
The Gaussian density is denoted by \(N(x;\mu,\delta^{2})\) with mean \(\mu\) and standard deviation \(\delta>0\). In Figure 1, we use 20 particles to track the probability distribution of the state over 100 time steps; the ground truth is obtained from the Kalman filter [24], and the error denotes the difference between the SMC estimate and the ground truth. Initially, the expectation of the weights for the 20 particles is equal to \(\frac{1}{20}\), which means that these particles play an equal role in tracking the state. For the resampling procedure, we compare the variance from different classical resampling methods, shown in Figure 2. The variance from the deterministic traverse method is the smallest. Thus, the effective particles are more concentrated after resampling based on our proposal. Figure 3 shows the root mean squared error (RMSE) for the different resampling strategies; within a feasible domain, the error of our method decreases faster than that of the other methods as the number of particles increases. Computational complexity is another factor on which the resampling algorithms are compared. Figure 4 shows the execution times for different numbers of particles; in general, these depend on the machine and the random generator, and during our simulations the time consumption varied even under the same resampling method and number of particles. Furthermore, we find that, under the same resampling method, the time consumed for a small number of particles can be much larger than that for larger ones. The computational stability of particles with resampling methods is very sensitive to the units of a specific population. For robustness, we conduct multiple experiments to obtain the general complexity trend. In Figure 4, all the experiments are conducted under the same conditions; for large particle sizes, the stratified and systematic strategies are favorable. In Table 1, we find that for small particle sizes (fewer than 150), our method performs best.

### Nonlinear State Space Model

We continue with a real application of our proposal: tracking stochastic volatility, a nonlinear state-space model with Gaussian noise, where the log-volatility, considered as the latent variable, is an essential element in the analysis of financial risk management. The stochastic volatility model is given by \[X_{0}\sim N(\mu,\frac{\sigma_{v}^{2}}{1-\rho^{2}}),X_{t}\mid X_{t-1}\sim N(\mu +\rho(X_{t-1}-\mu),\sigma_{v}^{2}),Y_{t}\mid X_{t}\sim N(0,exp(X_{t})\tau). \tag{46}\] where the parameters \(\theta=\{\mu,\rho,\sigma_{v},\tau\}\), \(\mu\in\mathbb{R},\rho\in[-1,1]\), \(\sigma_{v}\) and \(\tau\in\mathbb{R}_{+}\), denote the mean value, the persistence in volatility, the standard deviation of the state process and the instantaneous volatility, respectively. The observations \(y_{t}=\log(p_{t}/p_{t-1})\), also called log-returns, are the logarithm of the daily price ratio \(p_{t}/p_{t-1}\); here, \(\{p_{t}\}_{t=1}^{T}\) is the daily closing price of the NASDAQ OMXS30 index (a weighted average of the 30 most traded stocks on the Stockholm stock exchange). We extract the data from Quandl for the period between January 2, 2015 and January 2, 2016. The resulting log-returns are shown in Figure 6. We use SMC to track the persistence of the volatility time series. Large variations are frequent, a phenomenon well known as volatility clustering in finance; from equation (46), the volatility clustering effect occurs more easily when the persistence is close to \(1\) and the standard deviation is small. 
We keep the same parameters as [23], where \(\mu\sim N(0,1),\phi\sim TN_{[-1,1]}(0.95,0.05^{2})\), \(\delta_{v}\sim\text{Gamma}(2,10)\), \(\tau=1\). We use 25 particles to track the persistence of the volatility; the expectation of the particle weights is \(\frac{1}{25}\), as shown in Figure 6, and it is as stable as in Figure 1, with the variance on the order of \(10^{-3}\) under the random sampling mechanism. In Figure 6, the variance from our proposal attains the minimum value at the different times; nearly all the plots share a common multimodal feature at the same times, which stems from the multinomial distribution that the methods share when they resample a new unit.

## 5 Conclusion

Resampling strategies are effective in Sequential Monte Carlo because the weighted particles tend to degenerate. However, we find that the resampling also leads to a loss of diversity among the particles. This arises because in the resampling stage the samples are drawn from a discrete multinomial distribution, not a continuous one. Therefore, the new samples cannot take values that have never occurred before; they stem from the existing samples through the repetitive schema. We have presented a repetitive deterministic domain traversal for resampling and have achieved the lowest variances compared to other resampling methods. As the size of the deterministic domain satisfies \(M\ll N\) (the size of the population), our algorithm is faster than the state of the art, given a feasible number of particles, which is verified by theoretical deduction and experiments on the hidden Markov model in both the linear and the non-linear case. The broader impact of this work is that it can speed up existing sequential Monte Carlo applications and allow more precise estimates of their objectives. There are no negative societal impacts, other than those arising from the sequential Monte Carlo applications themselves.

## Acknowledgments

This work was supported in part by the BRBytes project.
2308.00155
Federated Learning for Data and Model Heterogeneity in Medical Imaging
Federated Learning (FL) is an evolving machine learning method in which multiple clients participate in collaborative learning without sharing their data with each other and the central server. In real-world applications such as hospitals and industries, FL counters the challenges of data heterogeneity and model heterogeneity as an inevitable part of the collaborative training. More specifically, different organizations, such as hospitals, have their own private data and customized models for local training. To the best of our knowledge, the existing methods do not effectively address both problems of model heterogeneity and data heterogeneity in FL. In this paper, we exploit the data and model heterogeneity simultaneously, and propose a method, MDH-FL (Exploiting Model and Data Heterogeneity in FL) to solve such problems to enhance the efficiency of the global model in FL. We use knowledge distillation and a symmetric loss to minimize the heterogeneity and its impact on the model performance. Knowledge distillation is used to solve the problem of model heterogeneity, and symmetric loss tackles with the data and label heterogeneity. We evaluate our method on the medical datasets to conform the real-world scenario of hospitals, and compare with the existing methods. The experimental results demonstrate the superiority of the proposed approach over the other existing methods.
Hussain Ahmad Madni, Rao Muhammad Umer, Gian Luca Foresti
2023-07-31T21:08:45Z
http://arxiv.org/abs/2308.00155v1
# Federated Learning for Data and Model Heterogeneity in Medical Imaging ###### Abstract Federated Learning (FL) is an evolving machine learning method in which multiple clients participate in collaborative learning without sharing their data with each other and the central server. In real-world applications such as hospitals and industries, FL counters the challenges of data heterogeneity and model heterogeneity as an inevitable part of the collaborative training. More specifically, different organizations, such as hospitals, have their own private data and customized models for local training. To the best of our knowledge, the existing methods do not effectively address both problems of model heterogeneity and data heterogeneity in FL. In this paper, we exploit the data and model heterogeneity simultaneously, and propose a method, MDH-FL (Exploiting Model and Data Heterogeneity in FL) to solve such problems to enhance the efficiency of the global model in FL. We use knowledge distillation and a symmetric loss to minimize the heterogeneity and its impact on the model performance. Knowledge distillation is used to solve the problem of model heterogeneity, and symmetric loss tackles with the data and label heterogeneity. We evaluate our method on the medical datasets to conform the real-world scenario of hospitals, and compare with the existing methods. The experimental results demonstrate the superiority of the proposed approach over the other existing methods. Keywords:Federated Learning Medical Imaging Heterogeneous Data Heterogeneous Model. ## 1 Introduction Federated Learning (FL), initially introduced by [20], has become a popular machine learning technique because of distributed model training without sharing the private data of participating hosts. In FL, participants (i.e., clients) including organizations and devices generally have heterogeneous data and heterogeneous models that are customized according to the tasks and local data. In real-world applications, data from multiple sources are heterogeneous and contain non-independent and identically and distributed data (non-IID). Moreover, data from multiple source may produce diverse labels and classes that is more challenging for the convergence of FL model. Traditional training methods based on centralized data cannot be used in practical applications due to privacy concerns and data silos at multiple locations [10]. FL has the ability to train a global model by allowing multiple participants to train collaboratively with their decentralized private data. In this way, private data of an individual participant are never shared with the central server and other participant in FL environment. Most common FL algorithms are FedProx [14] and FedAvg [20] that aggregate the model parameters obtained from the participating clients. Most of the existing methods [13, 28] using these algorithms consider the homogeneous data and same architecture of the local model used by all participants. In practical applications, each participant has its own data and might need to design its own customized model [24, 8] due to specific and personalized requirements [14]. Such heterogeneity in data and model is natural in healthcare organizations that design custom models for specific tasks as illustrated in Fig. 1. In such environment, hospitals are hesitant to reveal their data and model architecture due to privacy concerns and business matters. 
Thus, numerous methods have been proposed to perform FL with such heterogeneous data [7, 30] and clients [15, 16, 12]. FedMD [12] is a method that implements knowledge distillation based on class scores calculated by the clients' local models trained on a public dataset. FedDF [16] is another method that performs ensemble distillation by leveraging unlabeled data for every model architecture. Such existing methods depend on shared models and mutual consensus. However, mutual consensus is itself a challenge, since each client is unable to set its own learning direction to adjust for deviations among all participants. Moreover, designing additional models increases the processing overhead and eventually affects the performance. Thus, FL with heterogeneous data and models that does not depend on global sharing and consensus is both critical and challenging. The methods discussed above mostly depend on the assumption that every participant has homogeneous, independent and identically distributed (IID) data, which is not realistic in real-world scenarios. More specifically, in collaborative learning each participant has its own data and requires a customized model for the specific nature of its data and task. As FL has many participants with heterogeneous models and data, each model suffers from data diversity, affecting the overall performance of the global model. Existing methods such as [4, 27] have designed robust loss functions to minimize the negative impact of heterogeneous labels in the data. The existing methods tackle either data heterogeneity or model heterogeneity. In FL, a model is required to be robust and to learn sufficiently from the data during the local update. Handling heterogeneous participants and data containing diverse labels are the prominent challenges in FL. Model heterogeneity in FL causes diverse noise patterns and decision boundaries. Moreover, data heterogeneity based on non-IID data and label diversity makes convergence of the global model difficult during the global learning phase in FL. Each client is required to pay attention to the contributions of the other participants and align its learning to produce a robust global model. In this paper, we propose a solution for heterogeneous data and models in FL. 1) For model heterogeneity, the model distribution (i.e., the logits output) is aligned by learning the knowledge distribution and feedback from other clients using public data. In this way, each participant learns with its own strategy without depending on a public model. 2) To tackle data heterogeneity with diverse labels, an additional symmetric loss function, as proposed in [29], is used to minimize the impact of the diversity on model learning. Our main contributions are as follows. * We explore the real-world scenario of data and model heterogeneity in hospitals implementing decentralized collaborative model training. * We use knowledge distillation for the alignment of the model output (i.e., logits) to solve the problem of model heterogeneity and to produce an efficient global model in FL. * We utilize an additional symmetric loss function to optimize the model learning based on heterogeneous data containing diverse labels. * We evaluate the proposed method on hematological cytomorphology clinical datasets with heterogeneous model and data scenarios, and experimental results show the superiority of the proposed method over the existing FL methods. 
Figure 1: Participating hospitals (i.e., clients) contain heterogeneous local models trained on heterogeneous data and diverse labels. Each client has its own data and custom model as per requirements and tasks.

## 2 Related Work

### Federated Learning

Federated Learning (FL), first proposed by [20], is a machine learning method in which multiple clients train a global model without sharing their private local data, in order to preserve privacy. Initially, FedAvg was used to aggregate the parameters of local models trained on local data [20]. A method similar to FedAvg has been proposed in [14] that can customize the local computations with respect to the iterations and devices used in FL. In [28], the weights of the layers in a client model are collected to accomplish one-layer matching, which produces the weights of every layer in the global model. Knowledge distillation has been utilized for the communication of heterogeneous FL models in [12]. In this method, for each client, class scores obtained from the public dataset are collected on the server to calculate the aggregated value to be updated. In [16], ensemble distillation leveraging unlabeled data is used for model fusion. Global parameters are dynamically assigned as a subset to the local clients according to their capabilities in [2]. An algorithm has been introduced in [15] to produce a global model from the learning of local representations. We summarize that existing methods assume that all clients have homogeneous data, without consideration of any type of heterogeneity. No research has been conducted on mitigating the adverse impact of data and model heterogeneity simultaneously during collaborative learning in FL.

### Model and Data Heterogeneity

Numerous methods have been presented to tackle data heterogeneity, but not much research has been conducted on model heterogeneity and label diversity in the FL scenario. Some existing methods use loss functions for the optimization, such as [4, 27]. A convex classification calibration loss has been proposed by [27] that is robust to incorrect classes and labels. Several loss functions are evaluated by [4], which prove the robustness of MAE to perturbed classes in deep learning. Estimating the probability of every class being flipped to some other class has been utilized in existing methods [32, 22, 25]. In [32], corrupted data are transformed into a Dirichlet distribution space and a label regression technique is used to infer the correct classes; finally, the data model and the classifier are trained together. Some existing methods extract clean samples, re-weight each instance, or apply some transformation on the heterogeneous data for model training [9, 31, 5]. A method, JoCoR, has been proposed in [31] that uses Co-Regularization for joint loss estimation. In this method, the samples with minimum loss are selected to update the model parameters. MentorNet [9] proposes a technique in which one network (MentorNet) weights the samples used to train another network (StudentNet). A Co-teaching method has been proposed in [5] that selects data for the cross-training of two deep networks simultaneously. To avoid the model overfitting specific samples, robust regularization is used in [21, 1, 33]. A method, Mixup, has been proposed in [33] to regularize the deep network by training on convex combinations of pairs of instances and their corresponding labels. Regularization is used by [1] to minimize the impact of corrupted data while not affecting the training of the actual samples. 
A regularization method has been introduced in [21] that depends on the virtual adversarial loss and adversarial direction, which does not require any label information. Most of the existing methods that solve the problem of data heterogeneity and corrupted data are based on centralized data and a single model. However, the server is not able to access the local data of a client directly in an FL environment. Moreover, heterogeneous clients have diverse patterns and decision boundaries.

## 3 Federated Learning with Heterogeneous Data and Models

In FL with heterogeneous participants \(P\) and a server, we consider \(C\) as the set of all clients, where \(|C|=P\). Thus, the \(p^{th}\) participant \(c_{p}\in C\) has its local data \(d_{p}=\{(x_{i}^{p},y_{i}^{p})\}_{i=1}^{N_{p}}\) where \(|x^{p}|=N_{p}\). Moreover, \(y_{i}^{p}\in\{0,1\}^{N_{p}}\) is a one-hot vector containing ground truth labels. Furthermore, a local model \(\Theta_{p}\) owned by a client \(c_{p}\) has a different architecture, and \(f(x^{p},\Theta_{p})\) represents the logits produced by the network \(f(\cdot)\) on input \(x^{p}\) with parameters \(\Theta_{p}\). The server has a public dataset \(d_{0}=\{x_{i}^{0}\}_{i=1}^{N_{0}}\) that may belong to the client data for different classification tasks. In FL, the overall process is divided into local training and collaborative learning, in which local training is performed for \(E_{l}\) rounds and collaborative learning for \(E_{c}\) rounds. Our purpose is to perform FL with heterogeneous (i.e., non-IID) data containing diverse labels and heterogeneous clients, so a client has its heterogeneous data \(\tilde{d}=\{(x_{i}^{p},\tilde{y}_{i}^{p})\}_{i=1}^{N_{p}}\) in which \(\tilde{y}_{i}^{p}\) denotes the heterogeneous annotations. Each client has different noise patterns and decision boundaries due to model heterogeneity, which can be expressed as \(f(x,\Theta_{p_{1}})\neq f(x,\Theta_{p_{2}})\). Thus, each client \(c_{p}\) must also consider the heterogeneity of the other clients \(c_{p_{0}},p_{0}\neq p\), in addition to the heterogeneity of its own dataset. The overall objective is to find an optimal solution for the model parameters, \(\Theta_{p}=\arg\min\ \mathcal{L}(f(x^{p},\Theta_{p}),y^{p})\). The architecture of the proposed method is shown in Fig. 2. Each client is trained on its private dataset and subsequently on the public dataset to use knowledge distillation and the alignment of the knowledge distribution as given in Eq. (3). Moreover, each local client is updated and optimized using the symmetric loss given in Eq. (8).

### Model Heterogeneity

The knowledge distribution, represented as \(D_{p}^{e_{c}}=f(d_{0},\Theta_{p}^{e_{c}})\), is produced for the client \(c_{p}\). To estimate the variance in knowledge distribution, the Kullback-Leibler (\(\mathcal{KL}\)) divergence is used by each client, as proposed in [3]. The \(\mathcal{KL}\) divergence represents the deviation between two probability distributions. 
If there are two clients \(c_{p_{1}}\) and \(c_{p_{2}}\) having knowledge distributions \(D_{p_{1}}^{e_{c}}=f(d_{0},\Theta_{p_{1}}^{e_{c}})\) and \(D_{p_{2}}^{e_{c}}=f(d_{0},\Theta_{p_{2}}^{e_{c}})\) respectively, then the difference between their knowledge distributions can be formulated as: \[\mathcal{KL}(D_{p_{1}}^{e_{c}}||D_{p_{2}}^{e_{c}})=\sum D_{p_{1}}^{e_{c}}\log( \frac{D_{p_{1}}^{e_{c}}}{D_{p_{2}}^{e_{c}}}) \tag{1}\] If the difference between the two knowledge distributions \(D_{p_{1}}^{e_{c}}\) and \(D_{p_{2}}^{e_{c}}\) is higher, there is more opportunity for the clients \(c_{p_{1}}\) and \(c_{p_{2}}\) to learn from each other, and vice versa. Minimizing the \(\mathcal{KL}\) divergence between the probability distributions \(D_{p_{1}}^{e_{c}}\) and \(D_{p_{2}}^{e_{c}}\) can therefore be viewed as a mechanism that allows client \(c_{p_{1}}\) to learn from client \(c_{p_{2}}\). Thus, the knowledge distribution difference for a client \(c_{p}\) can be expressed as: \[\mathcal{L}_{pl}^{p,e_{c}}=\sum_{p_{0}=1,p_{0}\neq p}^{P}\mathcal{KL}(D_{p_{0} }^{e_{c}}||D_{p}^{e_{c}}) \tag{2}\] where \(p_{0}\) is a participant other than \(c_{p}\). Moreover, the knowledge distribution difference is calculated for a client \(c_{p}\), so other participants can access the knowledge of \(c_{p}\) without leakage of the model architecture and data privacy. All participants are prompted toward collaborative learning by a significant difference in their knowledge distributions. Thus, each participant aligns its knowledge distribution by learning from the other participants. This process can be mathematically formulated as follows. \[\Theta_{p}^{e_{c}}\leftarrow\Theta_{p}^{e_{c}-1}-\alpha\nabla_{\Theta}(\frac{ 1}{P-1}\cdot\mathcal{L}_{pl}^{p,e_{c}-1}) \tag{3}\] where \(\alpha\) represents the learning rate.

Figure 2: Proposed approach containing local training and global learning in FL. Local models are updated with Kullback-Leibler loss based on knowledge distribution (Eq. 3), and Symmetric loss (Eq. 8). In local training phase, private models are individually trained on private datasets, and in global or collaborative learning, local clients are updated through loss functions (i.e., \(\mathcal{KL}\) and \(\mathcal{L}_{f}\)).

### Data and Labels Heterogeneity

We utilize the Symmetric Cross Entropy proposed in [29] to minimize the effect of local noise in model learning. Cross Entropy (CE) is a very common loss function used in most classification tasks. CE is a deformation of the \(\mathcal{KL}\) divergence, so \(\mathcal{KL}\) can be formulated in terms of CE. For example, if \(p\) and \(g\) are the predicted and label class distributions respectively, the \(\mathcal{KL}\) divergence can be formulated as: \[\mathcal{KL}(g||p)=\underbrace{\sum g(x)\log(g(x))}_{\text{entropy of g}}- \underbrace{\sum g(x)log(p(x))}_{\text{cross entropy}} \tag{4}\] Equation (4) contains an entropy-of-\(g\) term and a cross-entropy term. Thus, the CE loss for the input \(x\) is represented as: \[\mathcal{L}_{c}=-\sum_{i=1}^{N}g(x_{i})\log(p(x_{i})) \tag{5}\] The Cross Entropy loss (\(\mathcal{L}_{c}\)) has limitations under label noise. It does not make the model learn sufficiently from all categories, because the classes have different levels of difficulty. To make the model converge for such difficult classes, extra communication rounds are required for additional learning. In such a scenario, there is a possibility of overfitting to the heterogeneous labels, which eventually reduces the overall efficiency of the model. 
Generally, a model has limited ability to classify some categories correctly. Moreover, a model's prediction is reliable only up to some extent due to label noise. Thus, when \(g\) is not the real class distribution, the prediction \(p\) reflects the true class distribution only to a limited degree. To address this problem, the Reverse Cross Entropy (RCE) loss function proposed in [29], which is built on \(p\), is exploited to align the labels with the class distribution predicted by the model. The RCE loss for an input \(x\) is formulated as: \[\mathcal{L}_{rc}=-\sum_{i=1}^{N}p(x_{i})\log(g(x_{i})) \tag{6}\] If both \(\mathcal{L}_{c}\) and \(\mathcal{L}_{rc}\) are combined, it becomes feasible for the model to learn the difficult classes while avoiding overfitting. This combined loss is termed the Symmetric loss [29] and can be expressed as: \[\mathcal{L}_{s}=\lambda\mathcal{L}_{c}+\mathcal{L}_{rc} \tag{7}\] where \(\lambda\) is used to control overfitting to noise. Thus, \(\mathcal{L}_{c}\) fits the model on each class and \(\mathcal{L}_{rc}\) handles the label noise. A client aligns its local knowledge with the knowledge of the other participants through the collaborative learning process, and a local model is updated with its own local data to prevent forgetting of the local knowledge. In the process of local training, label noise misleads the model and can cause convergence failure. To address this problem, the symmetric loss (\(\mathcal{L}_{s}\)) is used to compute the loss between the given label and the pseudo-label predicted by the model. The local update for a model can be expressed as: \[\Theta_{p}^{e_{l}}\leftarrow\Theta_{p}^{e_{l}-1}-\alpha\nabla_{\Theta}\mathcal{L }_{s}^{p,e_{l}-1}(f(x^{p},\Theta_{p}^{e_{l}-1}),\tilde{y}^{p}) \tag{8}\] where \(e_{l}\in E_{l}\) denotes the \(e_{l}\)-th epoch of local model training. A client leverages \(\mathcal{L}_{s}\) to update its model, which strengthens the local knowledge and avoids overfitting to label noise. Thus, model learning is promoted by the \(\mathcal{L}_{s}\) loss. ## 4 Experimental Results ### Datasets and Models In the experiments, two hematological cytomorphology clinical datasets, the INT_20 dataset [26] and the Matek_19 dataset [19], are used for single-cell classification in Leukemia (i.e., cancer detection). The INT_20 dataset [26] is used as the public dataset on the server, and the Matek_19 dataset [19] is distributed to the clients as their local private datasets. The INT_20 dataset has 26379 samples of 13 classes containing \(288\times 288\) colored blood images. The Matek_19 dataset contains a total of 14681 samples of 13 classes with blood images at a resolution of \(400\times 400\). In each experiment, four clients are set up for collaborative learning, and the Matek_19 dataset is divided equally among these clients using a Dirichlet distribution (i.e., Dir (\(\gamma\))) to make the local datasets non-IID [17]. The size of the public data on the server and of the private data on each client is \(N_{0}=26379\) and \(N_{p}=3670\), respectively. For homogeneous clients, ResNet-12 [6] is used for the training of all clients, and for the heterogeneous scenario, ShuffleNet [34], ResNet10 [6], Mobilenetv2 [23], and ResNet12 [6] are assigned to the clients for local training on the private datasets. To produce label diversity in the data, a label-transition matrix \(\mathcal{M}\) is used, represented as \(M_{ij}=flip(\tilde{y}=j|y=i)\), which indicates that label \(y\) is flipped from class \(i\) to a heterogeneous class \(j\).
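As a concrete reference for the local objective of Eqs. (5)-(8) used in these experiments, here is a minimal PyTorch-style sketch of the symmetric loss; it is an illustration rather than the authors' code. The clipping value used for \(\log(0)\) in the reverse term is an assumption (a common choice in symmetric cross-entropy implementations), and the function name is hypothetical.

```python
import torch
import torch.nn.functional as F

def symmetric_loss(logits, targets, lam=0.1, clip=-4.0):
    """Symmetric loss of Eq. (7): L_s = lam * L_c + L_rc.

    logits  : (batch, num_classes) raw model outputs
    targets : (batch,) integer (possibly noisy) labels
    lam     : weight on the standard cross entropy (lambda in Eq. 7)
    clip    : value substituted for log(0) in the reverse term (an assumption)
    """
    num_classes = logits.size(1)
    probs = F.softmax(logits, dim=1).clamp(min=1e-7, max=1.0)
    one_hot = F.one_hot(targets, num_classes).float()

    # L_c (Eq. 5): standard cross entropy, -sum g log p
    l_c = -(one_hot * probs.log()).sum(dim=1).mean()

    # L_rc (Eq. 6): reverse cross entropy, -sum p log g, with log(0) -> clip
    log_g = torch.where(one_hot > 0, torch.zeros_like(one_hot),
                        torch.full_like(one_hot, clip))
    l_rc = -(probs * log_g).sum(dim=1).mean()

    return lam * l_c + l_rc

# Local update of Eq. (8):
#   loss = symmetric_loss(model(x), noisy_y); loss.backward(); optimizer.step()
```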
As the real-word scenario, a client \(c_{p}\) selects \(N_{p}\) examples randomly from the private data (Matek_19), so each client has different noise proportion in its local data. Pair flip [5] and Symmetric flip [27] are the two common categories of matrix \(\mathcal{M}\). In Pair flip, a label of original class is swapped with a same wrong category, and in Symmetric flip, a class label is swapped with a wrong class label having same probability. Other implementation configuration is given in Table 1. In Table 1, \(E_{c}\) is used as epoch for global or collaborative learning, and \(E_{l}\) is used as local epoch in local training. Adam [11] is used as optimizer with the learning rate \(\alpha\). In each experiment, \(\lambda=0.1\) is fixed to control the overfitting to label diversity. Different diversity rate \(\mu\) is used to check the performance of the model with varying data and label heterogeneity. Moreover, label flip percentage \begin{table} \begin{tabular}{c|c} **Hyperparameter** & **Value(s)** \\ \hline \(E_{c}\) (global epochs for collaborative learning) & 40 \\ \(E_{l}\) (local epochs for local training) & \(\frac{N_{p}}{N_{0}}\) \\ Optimizer & Adam [11] \\ \(\alpha\) (Initial learning rate for optimizer) & 0.001 \\ \(b\) (batch size) & 16 \\ \(\lambda\) & 0.1 \\ \(\mu\) (labels diversity rate) & \(\{0.1,0.2,0.3\}\) \\ flip \%age in data \(\tilde{d}\) & 20 \\ \(\gamma\) (data heterogeneity rate) & 0.5 \\ \hline \end{tabular} \end{table} Table 1: Federated Learning hyperparameters. for the heterogeneous data \(\tilde{d}\) is fixed as 20 in all the experiments, where \(\gamma=0.5\) is the data heterogeneity rate. ### Comparison with state-of-the-art methods We perform experiments to evaluate and compare the proposed method with existing methods on the basis of accuracy. Table 2 shows the results of different methods using non-heterogeneous training models with \(\mu=0\) (i.e., no label diversity) in local datasets. Performance of each individual client is given in terms of accuracy (%age), and in the last column average accuracy is given for each method. It is evident that the proposed method performs better when using non-heterogeneous models and homogeneous data without label diversity for model training. Table 3 shows the comparison of the proposed method with similar existing methods. We use different labels-diversity techniques for the datasets used with heterogeneous models for the training. Performance of each method is decreased with the increasing labels-diversity rate. Moreover, there is a remarkable difference among all methods when the type of labels diversity is changed. This is because heterogeneous data or labels lead to wrong learning and communication of participating clients. Moreover, heterogeneous models produce different noise patterns that eventually decrease the model performance. Results from the Table 2 and Table 3 are computed when using heterogeneous models. However, these results are computed from the experiments without using additional loss functions. ## 5 Conclusion In this paper, a real-world problem of model and data heterogeneity in medical imaging has been explored. To solve the problem of heterogeneous data and labels diversity, an additional symmetric loss has been used to optimize the model trained on local and private data. To tackle with the heterogeneous participants in FL, Kullback-Leibler has been exploited to align the different noise patterns produced by the heterogeneous participants. 
Moreover, each participating client uses the knowledge distribution of other participants to improve the performance of global FL model. Experimental results conclude that the proposed method outperforms the existing similar methods. #### 5.0.1 Acknowledgements This work was supported by the Departmental Strategic Plan (PSD) of the University of Udine Interdepartmental Project on Artificial Intelligence (2020-2025). \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**Symmetric flip**} & \multicolumn{3}{c}{**Pair flip**} \\ \cline{3-8} & \(\mu=0.1\) & \(\mu=0.2\) & \(\mu=0.3\) & \(\mu=0.1\) & \(\mu=0.2\) & \(\mu=0.3\) \\ \hline SL-FedL [17] & 76.21 & 72.16 & 67.02 & 78.24 & 73.87 & 68.44 \\ FedDF [16] & 78.53 & 74.47 & 68.77 & 78.91 & 74.22 & 69.65 \\ Swarm-FHE [18] & 72.44 & 66.88 & 59.94 & 73.68 & 67.20 & 60.72 \\ FedMD [12] & 79.78 & 74.18 & 68.11 & 80.85 & 76.15 & 73.26 \\ Ours & **83.69** & **79.82** & **72.93** & **84.06** & **80.10** & **73.94** \\ \hline \end{tabular} \end{table} Table 4: Training results computed with different methods. Heterogeneous models and data are used for each experiment. Two losses (i.e., \(\mathcal{L}_{s}\) and \(\mathcal{KL}\)) are used to minimize the heterogeneity impact and to improve the overall performance of the global model. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \multirow{2}{*}{**Noise Rate (\(\mu\))**} & \multicolumn{3}{c|}{**Symmetric flip**} & \multicolumn{3}{c}{**Pair flip**} \\ \cline{3-10} & **Method** & \(\Theta_{1}\) & \(\Theta_{2}\) & \(\Theta_{3}\) & **Average** & \(\Theta_{1}\) & \(\Theta_{2}\) & \(\Theta_{3}\) & **Average** \\ \hline \multirow{4}{*}{0.1} & SL-FedL [17] & 69.48 & 71.89 & 74.59 & 71.99 & 70.04 & 71.98 & 75.55 & 72.52 \\ & FedDF [16] & 71.77 & 74.45 & 72.47 & 72.90 & 73.02 & 74.58 & 73.24 & 73.61 \\ & Swarm-FHE [18] & 71.15 & 67.08 & 67.54 & 68.59 & 71.89 & 67.66 & 71.93 & 70.49 \\ & FedMD [12] & 72.88 & 75.76 & 73.37 & 74.00 & 73.17 & 75.91 & 74.42 & 74.50 \\ & Ours & **79.38** & **76.95** & **79.36** & **78.56** & **80.04** & **77.22** & **79.78** & **79.01** \\ \hline \multirow{4}{*}{0.2} & SL-FedL [17] & 66.53 & 68.23 & 71.15 & 68.64 & 66.27 & 68.76 & 71.88 & 68.97 \\ & FedDF [16] & 68.65 & 70.14 & 68.64 & 69.14 & 69.94 & 70.09 & 69.60 & 69.88 \\ & Swarm-FHE [18] & 65.35 & 62.89 & 62.45 & 63.56 & 65.82 & 63.56 & 68.26 & 65.88 \\ & FedMD [12] & 67.22 & 70.32 & 69.10 & 68.88 & 69.18 & 70.65 & 71.84 & 70.56 \\ & Ours & **74.06** & **71.77** & **73.94** & **73.26** & **78.27** & **75.44** & **76.67** & **76.79** \\ \hline \multirow{4}{*}{0.3} & SL-FedL [17] & 62.14 & 65.06 & 66.85 & 64.68 & 62.16 & 64.78 & 66.76 & 64.57 \\ & FedDF [16] & 62.87 & 66.23 & 62.97 & 64.02 & 66.09 & 67.35 & 67.58 & 67.01 \\ \cline{1-1} & Swarm-FHE [18] & 59.44 & 55.91 & 54.34 & 56.56 & 59.96 & 58.75 & 61.16 & 59.96 \\ \cline{1-1} & FedMD [12] & 61.87 & 63.78 & 64.93 & 63.53 & 67.45 & 66.33 & 67.48 & 67.09 \\ \cline{1-1} & Ours & **66.54** & **65.60** & **66.20** & **66.11** & **73.25** & **68.12** & **69.12** & **70.16** \\ \hline \end{tabular} \end{table} Table 3: FL training results computed on heterogeneous models and heterogeneous data for different methods.
2309.08967
The Impact of Recommendation Systems on Opinion Dynamics: Microscopic versus Macroscopic Effects
Recommendation systems are widely used in web services, such as social networks and e-commerce platforms, to serve personalized content to the users and, thus, enhance their experience. While personalization assists users in navigating through the available options, there have been growing concerns regarding its repercussions on the users and their opinions. Examples of negative impacts include the emergence of filter bubbles and the amplification of users' confirmation bias, which can cause opinion polarization and radicalization. In this paper, we study the impact of recommendation systems on users, both from a microscopic (i.e., at the level of individual users) and a macroscopic (i.e., at the level of a homogenous population) perspective. Specifically, we build on recent work on the interactions between opinion dynamics and recommendation systems to propose a model for this closed loop, which we then study both analytically and numerically. Among others, our analysis reveals that shifts in the opinions of individual users do not always align with shifts in the opinion distribution of the population. In particular, even in settings where the opinion distribution appears unaltered (e.g., measured via surveys across the population), the opinion of individual users might be significantly distorted by the recommendation system.
Nicolas Lanzetti, Florian Dörfler, Nicolò Pagan
2023-09-16T11:44:51Z
http://arxiv.org/abs/2309.08967v2
# The Impact of Recommendation Systems on Opinion Dynamics: ###### Abstract Recommendation systems are widely used in web services, such as social networks and e-commerce platforms, to serve personalized content to the users and, thus, enhance their experience. While personalization assists users in navigating through the available options, there have been growing concerns regarding its repercussions on the users and their opinions. Examples of negative impacts include the emergence of filter bubbles and the amplification of users' confirmation bias, which can cause opinion polarization and radicalization. In this paper, we study the impact of recommendation systems on users, both from a microscopic (i.e., at the level of individual users) and a macroscopic (i.e., at the level of a homogenous population) perspective. Specifically, we build on recent work on the interactions between opinion dynamics and recommendation systems to propose a model for this closed loop, which we then study both analytically and numerically. Among others, our analysis reveals that shifts in the opinions of individual users do not always align with shifts in the opinion distribution of the population. In particular, even in settings where the opinion distribution appears unaltered (e.g., measured via surveys across the population), the opinion of individual users might be significantly distorted by the recommendation system. ## I Introduction Over the past few years, recommendation systems have become an essential component of online services, including e-commerce platforms and social networking sites. Their primary objective is to filter through the vast amount of information available and guide users towards the most relevant content. Recommendation systems make use of diverse machine learning methods to assess the relevance of items and provide personalized content based on the recorded online behaviors of users. These techniques enable the systems to not only measure the absolute relevance of items but also tailor the recommendations to the users' expected tastes [1]. While recommendation systems have been a remarkable technological advancement, their impact on users' behavior has raised questions. Personalization is a key feature of these systems that improves the user experience but also poses concerns: Excessive personalization may limit the range of perspectives available to users, leading to "filter bubbles" [2]. These bubbles can induce opinion polarization and radicalization, which can be harmful [3]. Although later research [4] has downplayed concerns about the negative effects of personalization, evidence suggests that it has the potential to strengthen users' prejudices. For instance, numerous studies showed that it aggravates confirmation bias, which is the human propensity to seek and consider information that confirms their beliefs and ideas [5, 6]. This bias can lead to an unconscious one-sided argument-building process, reinforcing users' preconceived notions. Therefore, it is reasonable to conclude that personalization may exacerbate the confirmation bias phenomenon, potentially leading to further polarization and division among users. Since empirical evidence supports the idea that confirmation bias is extensive, strong, and multiform, and its effects may be amplified by curation algorithms [3], a recent stream of literature [7, 8, 9, 10, 11] has started exploring the impact of the closed-loop dynamics between personalized recommendations and user preferences and opinions. 
For example, [9, 10] examined how this loop can reinforce user preferences and lead to polarization and filter bubbles, with [10] focusing mainly on the interaction between the user and the recommendation system, while [9] also included the effect of social influence by considering users being embedded in their social network. Differently, [8, 11] studied how to disentangle feedback loops in order to improve recommendation accuracy. The overall goal of this research field is to overcome the potential negative consequences of personalization by designing recommendation algorithms that can influence users' opinions and preferences in a more beneficial way [12, 13]. While these works generally show that the closed loop between opinion dynamics and recommendation systems bears the potential to steer individuals' opinions, it remains unclear (i) to what extent it may also steer the opinion distribution of a population (thus leading to significant concerns with regard to, e.g., political debates) and (ii) if seemingly similar opinion distributions (at the population level, e.g., surveys) can hide substantial individual shifts. To shed light on these questions, this paper builds on the recent model of [10] and examines the micro- and macroscopic impact of the closed loop between users and a recommendation system. This way, we can both study the impact of recommendation systems on the opinion distribution of a population and determine if shifts in opinions at the individual level can be concealed by their cumulative effect on the population level. ContributionsWith our work, we formalize the interaction between users and recommendation systems both from a microscopic (i.e., one user interacting with a recommendation system) and macroscopic (i.e., a homogeneous population interacting with a recommendation system) perspective. This way, we identify several tractable yet insightful instances, which we can investigate analytically and help us shed light on the impact of recommendation systems on users' opinions. Among others, our analytical analysis and numerical simulations uncover and explain a discrepancy between micro- and macroscopic behaviors, whereby the opinion of individual users is highly impacted by the recommendation system while, macroscopically, the opinion distribution remains unaffected. This insight reveals that, even when population surveys (e.g., exit polls) do not indicate opinion shifts, individuals' beliefs might be highly impacted by the recommendation systems. OrganizationThis paper unfolds as follows. In Section II, we present our model of the closed loop between a user and a recommendation system. We study its properties in Section III and present numerical results in Section IV. Section V draws the conclusions of this paper. Proofs are deferred to the appendix. ### _Notation and background material_ We denote by \(\mathbb{R}_{\geq 0}\) the non-negative real numbers. The space of (Borel) probability distributions over \(\mathbb{R}\) is \(\mathcal{P}(\mathbb{R})\) and the space of probability distributions over \(\mathbb{R}\) with finite second moment is \(\mathcal{P}_{2}(\mathbb{R})\coloneqq\{\mu\in\mathcal{P}(\mathbb{R}):\int_{ \mathbb{R}}\left|x\right|^{2}\mathrm{d}\mu(x)<+\infty\}\). The mean of a probability distribution \(\mu\in\mathcal{P}(\mathbb{R})\) is \(\mathbb{E}^{\mu}[x]\) and its variance is \(\mathrm{Var}(\mu)\). 
The pushforward of a probability distribution \(\mu\in\mathcal{P}(\mathbb{R})\) via a (Borel) map \(f:\mathbb{R}\rightarrow\mathbb{R}\) is defined by \((f_{\#}\mu)(A)=\mu(f^{-1}(A))\) for each Borel set \(A\subset\mathbb{R}\); if \(X\sim\mu\), then \(Y=f(X)\sim f_{\#}\mu\). The convolution of two probability distributions \(\mu,\nu\in\mathcal{P}(\mathbb{R})\) is denoted by \(\mu*\nu\); if \(X\sim\mu\) and \(Y\sim\nu\) are independent, then \(X+Y\sim\mu*\nu\). The (type-\(p\)) Wasserstein distance between two probability distributions \(\mu,\nu\in\mathcal{P}(\mathbb{R})\) is defined by \[W_{p}(\mu,\nu)\coloneqq\left(\min_{\gamma\in\Gamma(\mu,\nu)}\int_{\mathbb{R} \times\mathbb{R}}|x-y|^{p}\mathrm{d}\gamma(x,y)\right)^{\frac{1}{p}},\] where \(\Gamma(\mu,\nu)\subset\mathcal{P}(\mathbb{R}\times\mathbb{R})\) is the set all probability distributions over \(\mathbb{R}\times\mathbb{R}\) with marginals \(\mu\) and \(\nu\) (referred to as transport plans) [14]. The Wasserstein distance is the minimum cost to transport \(\mu\) onto \(\nu\) when transporting a unit of mass from \(x\) to \(y\) costs \(\left|x-y\right|^{p}\). Accordingly, a transport plan \(\gamma\in\Gamma(\mu,\nu)\) encodes the allocation of probability mass: If \((x,y)\) is in the support of \(\gamma\), then some of the probability mass at \(x\) is displaced to \(y\), or, equivalently, \(\gamma(A\times B)\) is the mass transferred from the set \(A\subset\mathbb{R}\) to the set \(B\subset\mathbb{R}\). Finally, a sequence of probability distributions \((\mu_{n})_{n\in\mathbb{N}}\subset\mathcal{P}_{2}(\mathbb{R})\) converges weakly in \(\mathcal{P}_{2}(\mathbb{R})\) if \(\int_{\mathbb{R}}\phi(x)\mathrm{d}\mu_{n}(x)\rightarrow\int_{\mathbb{R}}\phi( x)\mathrm{d}\mu(x)\) for all continuous functions \(\phi:\mathbb{R}\rightarrow\mathbb{R}\) with at most quadratic growth (i.e., \(\phi(x)\leq A(1+\left|x\right|^{2})\) for some \(A>0\)). ## II Model In this section, we present our model. It consists of two interconnected parts: the user model and the recommendation system model; see Fig. 1. We start with a detailed description of each component. Then, we illustrate the behavior of our model in a numerical example. Our model is based on the mathematical framework from [10]. ### _Modeling of the users' opinion dynamics_ We consider a large homogeneous population of users. The opinion of each user evolves according to the Friedkin-Johnson model [15] \[x_{k+1}=\alpha x_{0}+\beta x_{k}+(1-\alpha-\beta)u_{k}, \tag{1}\] where \(x_{k}\in\mathbb{R}\) is the user's opinion at time \(k\), \(x_{0}\) is her/his opinion bias and initial opinion, and \(u_{k}\in\mathbb{R}\) is the recommendation at time \(k\). The parameters \(\alpha\in[0,1]\) and \(\beta\in[0,1)\) (with \(\alpha+\beta\leq 1\)) arbitrate between the impact of the user's bias, the current opinion, and the received recommendation on the future opinion. All users in the population share the same parameters \(\alpha\) and \(\beta\) but have different biases. In particular, the bias/initial opinion distribution of the population is \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R})\). Similarly, we denote by \(\mu_{k}\) the opinion distribution at time \(k\) or, equivalently, the probability distribution associated with the opinion of a generic user. Given the recommendation, the users produce a reward according to monotonically decreasing function \(r:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\), so that the reward at time \(k\) is \(r(\left|x_{k}-u_{k}\right|)\). 
Intuitively, the closer the recommendation is to the user's opinion, the more the user appreciates the content and thus the higher the benefit for the recommendation system (e.g., more clicks, more time on the platform). The reward is the only observable quantity; in particular, users' opinions are private and not revealed to the recommendation system. ### _Modeling of the recommendation system_ The recommendation system aims at maximizing the reward. To do so, it outputs the recommendation that has generated the largest reward throughout the entire past until time \(k\), with the exception of exploration steps, happening at the beginning of the horizon (\(k=0\)) and every \(T\) steps (\(k=nT\) for \(n\in\mathbb{N}\)). Thus, for \(k\notin\{0,T,2T,\ldots\}\) \[u_{k}=\operatorname*{argmax}_{u_{0},\ldots,u_{k-1}}\{r(\left|x_{0}-u_{0}\right| ),\ldots,r(\left|x_{k-1}-u_{k-1}\right|)\}.\] When exploring, the recommendation system samples a recommendation from the recommendation distribution \(\rho\in\mathcal{P}_{2}(\mathbb{R})\); i.e., \(u_{k}\sim\rho\) for \(k\in\{0,T,2T,\ldots\}\). This strategy is in line with the classical \(\varepsilon\)-greedy action selection in multi-armed bandit problems in reinforcement learning, which also outputs the reward-maximizing action but explores with some probability \(\varepsilon\) (instead of at fixed time steps) [16, SS2]. Fig. 1: Closed loop between a user (whose opinion dynamics are detailed in Section II-A) and a recommendation system (whose algorithm is detailed Section II-B). ### _Discussion of the model_ A few comments are in order. First, we consider a homogeneous population, whereby all users have the same \(\alpha\) and \(\beta\). This way, we can study the effect of the recommendation systems on _ensembles_ of users, rather than on a specific user with given \(\alpha\) and \(\beta\), bias/initial opinion \(x_{0}\), and realizations of the random recommendations. When presenting our results in Section IV, we consider various populations to illustrate the roles of \(\alpha\) and \(\beta\). Second, without loss of generality, we assume \(\beta\neq 1\). If \(\beta=1\) (i.e., users are infinitely stubborn), we trivially conclude \(x_{k+1}=x_{k}\) and \(\mu_{k}=\mu_{0}\) for all \(k\). Third, we do not include peer-to-peer interactions within the population, in line with our focus being the interaction between a recommendation system and its users. In any case, our model and analysis can be extended to include influences between the users (e.g., via graphons [17]). Nonetheless, we argue that, if interactions are to be considered, they do not happen _between_ the users, but rather _through_ the recommendation system. An example is collaborative filtering, where the system generates recommendations for a given user based on other users considered similar [18]. Fourth, there are of course many other models of recommendation systems based on more sophisticated algorithms, which include more general exploration strategies, collaborative filtering, priors on the users, machine learning techniques, etc. We leave their study to future research. Fifth, as we shall see below, the reward function \(r\) does not impact our analysis, as long as a mild (and, arguably, realistic) monotonicity assumption is satisfied. Finally, to ease the notation, we assume that \(\alpha\), \(\beta\), and the recommendation distribution \(\rho\) are time-invariant. Provided this time-varying behavior is sufficiently regular, our model and analysis extend to the time-varying setting. 
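A minimal sketch of this closed loop (Sections II-A and II-B) is given below; it is illustrative rather than the authors' implementation. The exponential reward is an assumption (the model only requires a monotonically decreasing \(r\)), the function name is hypothetical, and, for concreteness, the driver code uses the same population parameters as the illustrative example that follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_user(x0, alpha, beta, T_explore, n_steps, rec_sampler,
                  r=lambda d: np.exp(-d)):
    """Closed loop for a single user.

    x0          : bias / initial opinion
    alpha, beta : weights in the opinion update (1)
    T_explore   : exploration period T (explore at k = 0, T, 2T, ...)
    rec_sampler : callable returning one sample from the recommendation distribution rho
    r           : monotonically decreasing reward function (exponential is an assumption)
    """
    x = x0
    best_u, best_reward = None, -np.inf
    for k in range(n_steps):
        if k % T_explore == 0 or best_u is None:   # exploration step
            u = rec_sampler()
        else:                                      # exploit the reward-maximizing past recommendation
            u = best_u
        reward = r(abs(x - u))
        if reward > best_reward:                   # keep the all-time best (u, reward) pair
            best_u, best_reward = u, reward
        x = alpha * x0 + beta * x + (1 - alpha - beta) * u   # opinion update, Eq. (1)
    return x

# Population as in the illustrative example below:
# alpha = 0.1, beta = 0.7, x0 ~ U[0, 2], rho = N(0, 0.5^2), T = 5, 50 steps, 5000 users
final_opinions = [simulate_user(x0, 0.1, 0.7, 5, 50, lambda: rng.normal(0.0, 0.5))
                  for x0 in rng.uniform(0.0, 2.0, 5000)]
```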
### _Illustrative example_ To illustrate our model, we consider a population with \(\alpha=0.1\) and \(\beta=0.7\). For simulation purposes, we consider 5000 users, whose bias/initial opinion \(x_{0}\) is distributed uniformly on \([0,2]\) (blue, \(x\)-axis in Fig. 2). The recommendation distribution \(\rho\) is a zero-mean Gaussian with a standard deviation of \(0.5\) (red in Fig. 2) and exploration happens every \(5\)th time step. We run the system for \(50\) time steps. Our results are in Fig. 2. At a macroscopic level, the opinion distribution shifts towards the recommendation distribution so that the final opinion distribution \(\mu_{N}\) is a slightly asymmetric Gaussian (right plot in Fig. 2). At a microscopic level, the opinion of most (but not all) users is lower than their initial opinion (central plot in Fig. 2), which collectively contributes to the centralization of the opinion distribution observed at a macroscopic level. ## III Analysis In this section, we investigate some of the theoretic properties of our model. We start with a formulation of the dynamics in a tree structure, which immediately unveils the combinatorial nature of the problem. Thereafter, we study the more tractable yet insightful limit cases of \(T=+\infty\) (no exploration) and \(T=1\) (exploration at every time step). ### _The general case: A combinatorial problem_ A general analysis of the closed-loop system is combinatorial. After each exploration step, there are two possibilities: (i) the exploration input led to a higher reward and thus the recommendation system sticks to that recommendation for the subsequent steps, at least until the next exploration phase; (ii) the exploration input led to a lower reward and is therefore discarded, and the recommendation system "goes back" to the last recommendation. Accordingly, the probability of successful exploration coincides with the probability of sampling a recommendation (strictly) closer to the current opinion compared to the current recommendation. Formally: **Lemma 1** (Probability of successful exploration).: _Let \(u_{k}\) be the current recommendation and \(x_{k}\) be the opinion at time \(k\). Then, the probability of successful exploration is_ \[p(x_{k},u_{k}) =F_{\rho}(x_{k}+|x_{k}-u_{k}|)-\rho(\{x_{k}+|x_{k}-u_{k}|\})\] \[\quad-F_{\rho}(x_{k}-|x_{k}-u_{k}|),\] _where \(F_{\rho}:\mathbb{R}\rightarrow[0,1]\) is the cumulative distribution function of the recommendation distribution \(\rho\), and \(\rho(\{a\})\) is the probability of sampling the recommendation \(a\in\mathbb{R}\)._ Lemma 1 predicates that the probability of successful exploration is controlled by the difference between the current opinion and recommendation, together with the properties of the recommendation distribution \(\rho\). For instance, if \(\rho\) is uniform between \(-1\) and \(+1\), then \(\rho(\{x\})=0\) for all \(x\in\mathbb{R}\) and \(F_{\rho}(x)=\frac{1}{2}(1+\max\{-1,\min\{1,x\}\})\), so that \(p(x_{k},u_{k})=\frac{1}{2}(1+\max\{-1,\min\{1,x_{k}+|x_{k}-u_{k}|\})\})-\frac{1} {2}(1+\max\{-1,\min\{1,x_{k}-|x_{k}-u_{k}|\}\})\). If \(x_{k}\pm|x_{k}-u_{k}|\in[-1,+1]\), then \(p(x_{k},u_{k})=|x_{k}-u_{k}|\), showing that, at least in the case of a uniform recommendation distribution, the probability of successful exploration is precisely \(|x_{k}-u_{k}|\). With Lemma 1, the dynamics are as in Fig. 3, which explains why the analysis of the closed-loop system becomes intractable after a few exploration steps. Fig. 
2: Simulation of the closed-loop system over a horizon of 50 time steps. The central plot shows the final opinion of each user, as a function of their initial opinion. The top histogram shows the bias/initial opinion distribution; the one on the right is the final opinion distribution. For reference, we include, in solid red, the recommendation distribution. Thus, in the sequel, we restrict our analysis to two tractable yet insightful limit cases: exploration only at the initial time and exploration at every time step. ### _Special case: No exploration_ Suppose now that the recommendation system does not perform exploration; i.e., \(T\to+\infty\). In this case, the initial recommendation, which is random, will be applied at all time steps, regardless of the reward returned by the user. As a result, each user's opinion converges to a convex combination of the bias and the received recommendation (at the initial time). Macroscopically, the opinion distribution approaches a "convex combination" of the bias distribution and the recommendation distribution: **Proposition 2** (No exploration).: _Let \(T=+\infty\). Let \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R})\) be the bias/initial opinion distribution, \(\mu_{k}\in\mathcal{P}_{2}(\mathbb{R})\) be the opinion distribution at time \(k\), and \(\rho\in\mathcal{P}_{2}(\mathbb{R})\) be the recommendation distribution. Then, the opinion distribution \(\mu_{k}\) converges weakly in \(\mathcal{P}_{2}(\mathbb{R})\) to the opinion distribution_ \[\mu\coloneqq\left(\frac{\alpha}{1-\beta}x\right)_{\#}\mu_{0}*\left(\left(1- \frac{\alpha}{1-\beta}\right)x\right)_{\#}\rho. \tag{2}\] Proposition 2 proves weak convergence of the opinion distribution (i.e., convergence of the integral of each continuous function which grows at most quadratically) and does not prove strong convergence (i.e., \(\mu_{k}(A)\to\mu(A)\) for every Borel set \(A\)). Namely, we prove that all macroscopic quantities converge (e.g., \(\phi(x)=x\) in the definition of weak convergence in \(\mathcal{P}_{2}(\mathbb{R})\) yields convergence of the expected value), and refrain from conducting a microscopic analysis for each and every (infinitesimal) user of the population. In particular, if \(\alpha=0\), then the opinion distribution asymptotically converges to the recommendation distribution. While simple, Proposition 2 unveils a fundamental discrepancy between a _macroscopic_ analysis, aiming to study the opinion distribution of the population, and a _microscopic_ analysis, which considers the change of opinions for individual users. We illustrate the phenomenon in the following analytic example and discuss it more in detail when presenting our numerical results in Section IV. **Example 1** (Microscopic vs. macroscopic behavior).: Suppose that the bias/initial opinion distribution \(\mu_{0}\) coincides with the recommendation distribution \(\rho\) and that all distributions are zero-mean Gaussian with standard deviation \(\sigma>0\). 
By Proposition 2 (together with the expressions for the pushforward and convolution of Gaussians), the opinion distribution converges to a zero-mean Gaussian with standard deviation \(\sigma\sqrt{(\frac{\alpha}{1-\beta})^{2}+(1-\frac{\alpha}{1-\beta})^{2}}\), so that the (type-2) Wasserstein distance between the bias/initial opinion distribution \(\mu_{0}\) and the final distribution \(\mu\) (defined in (2)) reads \[W_{2}(\mu_{0},\mu)^{2}=\sigma^{2}\Bigg{|}1-\sqrt{\left(\frac{\alpha}{1-\beta} \right)^{2}+\left(1-\frac{\alpha}{1-\beta}\right)^{2}}\Bigg{|}^{2},\] where we used the closed-form expression for the Wasserstein distance between Gaussians. Conversely, from a microscopic perspective, the opinion of a user with bias \(x_{0}\) who receives the recommendation \(u_{0}\) converges to \(\frac{\alpha}{1-\beta}x_{0}+(1-\frac{\alpha}{1-\beta})u_{0}\). Thus, the opinion shift is \((1-\frac{\alpha}{1-\beta})(x_{0}-u_{0})\), and the expected squared opinion shift for a user is \[\Delta =\Bigg{|}1-\frac{\alpha}{1-\beta}\Bigg{|}^{2}\int_{\mathbb{R}} \int_{\mathbb{R}}|x_{0}-u_{0}|^{2}\mathrm{d}\mu_{0}(x_{0})\mathrm{d}\rho(u_{0})\] \[=2\bigg{|}1-\frac{\alpha}{1-\beta}\Bigg{|}^{2}\sigma^{2}.\] For \(\alpha\to 1-\beta\), we recover the trivial case \(1-\alpha-\beta=0\), whereby the recommendation has weight 0 in the opinion dynamics (1). Accordingly, both \(W_{2}(\mu_{0},\mu)^{2}\) and \(\Delta\) converge to 0, and the micro- and macroscopic behaviors align. For \(\alpha\to 0\) and fixed \(\beta\), instead, \(W_{2}(\mu_{0},\mu)^{2}\to 0\); i.e., the initial opinion distribution \(\mu_{0}\) (which, by assumption, equals \(\rho\)) coincides with the final opinion distribution \(\mu\) (which, by Proposition 2, also equals \(\rho\)). However, \(\Delta\to 2\sigma^{2}>0\). Thus, microscopically, each user's opinion is highly impacted by the recommendation system, while, macroscopically, the opinion distribution of the population is unaltered. Finally, Proposition 2 considers the limit setting where the recommendation system explores only at the beginning and then sticks to the first, random, recommendation. Nonetheless, the intuition remains valid for sufficiently large values of exploration, as suggested by the numerical results in Fig. 4. ### _Special case: Continuous exploration_ We consider now the opposite limit case, where the recommendation system explores at each time step; i.e., \(T=1\) and the recommendation received by a user is therefore random at each time step. In this setting, the opinion distribution converges to a "convex combination" of the bias/initial opinion distribution and a distribution "similar" to a Gaussian distribution. Perhaps surprisingly, this result is independent of the initial opinion distribution and, importantly, of the recommendation distribution. Formally: Fig. 3: Dynamics in the simple case of two possible recommendations (\(\pm 1\), i.e., \(\rho=p\delta_{-1}+(1-p)\delta_{+1}\) for \(p\in[0,1]\)) and deterministic bias/initial opinion \(x_{0}\in\mathbb{R}\) (i.e., \(\mu_{0}=\delta_{x_{0}}\)). The transitions are \(x_{k+1}=\alpha x_{0}+\beta x_{k}+(1-\alpha-\beta)u_{k}\); the quantity associated with each arrow is the probability of that transition. **Proposition 3** (Continuous exploration).: _Let \(T=1\). Let \(\mu_{0}\in\mathcal{P}_{2}(\mathbb{R})\) be the bias/initial opinion distribution, \(\mu_{k}\in\mathcal{P}_{2}(\mathbb{R})\) be the opinion distribution at time \(k\), and \(\rho\in\mathcal{P}_{2}(\mathbb{R})\) be the recommendation distribution. 
Then, the opinion distribution \(\mu_{k}\) converges weakly in \(\mathcal{P}_{2}(\mathbb{R})\) to some opinion distribution \(\mu\in\mathcal{P}_{2}(\mathbb{R})\) with_ \[\begin{split}\mathbb{E}^{\mu}[x]&=\frac{\alpha}{1- \beta}\mathbb{E}^{\mu_{0}}[x]+\left(1-\frac{\alpha}{1-\beta}\right)\mathbb{E}^ {\rho}[x]\\ \mathrm{Var}(\mu)&=\frac{\alpha^{2}}{(1-\beta)^{2}} \operatorname{Var}(\mu_{0})+\frac{(1-\alpha-\beta)^{2}}{1-\beta^{2}} \operatorname{Var}(\rho).\end{split} \tag{3}\] _Moreover, regardless of the bias/initial opinion distribution \(\mu_{0}\) and the recommendation distribution \(\rho\),_ \[\mu=\left(\frac{\alpha}{1-\beta}x\right)_{\#}\mu_{0}*\left(\left(1-\frac{ \alpha}{1-\beta}\right)x\right)_{\#}\bar{\rho}, \tag{4}\] _where \(\bar{\rho}\) is "almost Gaussian", in the sense that its normalized distribution \(\hat{\rho}:=((x-\mathbb{E}^{\bar{\rho}}[x])/\sqrt{\operatorname{Var}(\bar{ \rho})})_{\#}\bar{\rho}\) satisfies_ \[W_{1}(\hat{\rho},\Phi)\leq\left(\frac{18}{\pi}\right)^{\frac{1}{3}}\left( \frac{1-\beta^{2}}{e\beta^{2}}\right)^{\frac{1}{12}}\sum_{\xi\neq 0}^{1}\frac{ |C_{\rho}(\xi)-C_{\hat{\Phi}}(\xi)|}{\left|\xi\right|^{3}}, \tag{5}\] _where \(\Phi\) is the zero-mean Gaussian probability distribution with unit variance and \(C_{\mu}(\cdot)\) is the characteristic function of \(\mu\). Finally, if \(\mu_{0}\) and \(\rho\) are Gaussian distributions, then \(\mu\) is Gaussian with the mean and variance in (3)._ Proposition 3 predicates that the opinion distribution converges to a "convex combination" of the bias/initial condition distribution and a distribution which is "almost Gaussian", regardless of the recommendation distribution (that is, even for very "non-Gaussian" recommendation distributions). This effect is reminiscent of the central limit theorem in probability theory, which, however, does not apply here (indeed, we consider trajectories of a stochastic dynamic system and not the sum of i.i.d. random variables). In particular, if \(\alpha=0\), the opinion distribution will asymptotically be "almost Gaussian". The upper bound (5), which is valid for all \(\alpha\) and \(\beta\) and is non-trivial (i.e., finite) for all distributions with finite third moment [19], indicates that the vicinity to a Gaussian distribution increases continuously as \(\beta\) approaches 1. When all distributions are Gaussian, the result becomes exact. Among others, Proposition 3 explains why, for short exploration times, the final opinion distribution does not depend on the recommendation distribution and why, for small values of \(\alpha\), it resembles a Gaussian distribution, even if all underlying distributions are not Gaussian; see Fig. 5, where \(\alpha=0\), \(\beta=0.8\), and \(T=3\). ## IV Numerical Results Our numerical analysis1 concerns the discrepancy between the micro- and macroscopic behavior of the opinion distribution, as a function of \(\alpha,\beta\) and the exploration time \(T\). We consider homogeneous populations with \(\alpha\in\{0,0.1,0.2\}\), \(\beta\in\{0,0.1,\ldots,1-\alpha\}\), and bias/initial opinion \(x_{0}\) uniformly distributed between \(-2\) and \(2\), and recommendation systems with exploration time \(T\in\{1,\ldots,21\}\) and recommendations distributed according to the standard zero-mean Gaussian distribution with unit variance. For each setting, we perform 20 exploration cycles. 
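The micro- and macroscopic shift measures used in this comparison (defined in the text below) can be computed directly from the simulated initial and final opinions. A minimal sketch, assuming NumPy arrays `x0` and `xN` produced by a closed-loop simulation such as the one sketched in Section II, with SciPy's empirical type-1 Wasserstein distance standing in for \(W_{1}(\mu_{0},\mu_{N})\):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def opinion_shift_measures(x0, xN):
    """Micro- and macroscopic shift measures for one simulated population.

    x0, xN : arrays of initial and final opinions (one entry per user)
    returns: (mean absolute individual shift,
              type-1 Wasserstein distance between the empirical
              initial and final opinion distributions)
    """
    x0, xN = np.asarray(x0), np.asarray(xN)
    micro = np.mean(np.abs(x0 - xN))
    macro = wasserstein_distance(x0, xN)
    return micro, macro

# Hypothetical driver code for the sweep described above:
# for beta in np.arange(0.0, 1.0 - alpha + 1e-9, 0.1):
#     for T in range(1, 22):
#         ... simulate the population, then:
#         micro, macro = opinion_shift_measures(x0, xN)
```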
To quantify _microscopic_ opinion shifts, we average the difference between users' initial and final opinions (i.e., \(\frac{1}{M}\sum_{i=1}^{M}|x_{0,i}-x_{N,i}|\) with \(x_{0,i}\) and \(x_{N,i}\) being the initial and final opinion of user \(i\in\{1,\ldots,M\}\)). To quantify _macroscopic_ opinion shifts, instead, we use the (type-1) Wasserstein distance between bias/initial opinion distribution \(\mu_{0}\) and final opinion distribution \(\mu_{N}\). Footnote 1: Python implementation: [https://gitlab.ethz.ch/lnicolas/impact-of-recommendation-systems-on-opinion-dynamics](https://gitlab.ethz.ch/lnicolas/impact-of-recommendation-systems-on-opinion-dynamics). Our results are summarized in Fig. 6. First, for small values of \(\alpha\), our simulations suggest a qualitative discrepancy between micro- and macroscopic changes: When the microscopic change is largest (yellow), the macroscopic change is lowest (blue), and vice versa. The trend becomes less marked and eventually disappears as \(\alpha\) increases, as already observed in Example 1 for the limit case \(T=+\infty\). This discrepancy suggests that, even if the opinion distribution can be proven to be relatively stable (e.g., through surveys), the opinions Fig. 4: The (type-1) Wasserstein distance between the final opinion distribution and the distribution (2) is small (dark blue in the colormap) already for relatively small values \(T\), especially for \(\beta\) small (i.e., the distance is small). Thus, at least qualitatively, Proposition 2 remains valid also in non-asymptotic regimes. For this simulation, we used the same setting as in Section II-D. Fig. 5: Simulation of the closed-loop system over a horizon of 100. As suggested by our theoretic results in the case of infinitely frequent exploration (and with \(\alpha=0\)), the opinion distribution approaches (but does not generally converge to) a Gaussian distribution, even though the bias/initial opinion distribution is uniform and the recommendation distribution (plotted in solid red) is bimodal (a mixture of Gaussians with mean \(\pm 1\) and standard deviation 0.4). of individual users may be significantly impacted by the recommendation system. Second, for \(\alpha\ll 1-\beta\), we observe similar behavior along isolines of \(\beta^{T-1}=c\in\mathbb{R}_{\geq 0}\) in the sense that settings with similar \(\beta^{T-1}\) yield similar micro- and macroscopic behaviors (see Fig. (a)a). Intuitively, this is a consequence of the probability of successful exploration being modulated by \(|x_{k}-u_{k}|\) (cf. Lemma 1), since between two exploration steps \(k\) and \(k-T\): \[|x_{k}-u_{k}| \overset{(\ref{eq:1})}{=}\left|\beta^{T-1}(x_{k+1-T}-u_{k})+ \frac{\alpha-\alpha\beta^{T-1}}{1-\beta}(x_{0}-u_{k})\right|\] \[\overset{\alpha\ll 1-\beta}{\approx}\beta^{T-1}|x_{k+1-T}-u_{k}|.\] Thus, settings with similar \(\beta^{T-1}\) result in similar probabilities of successful exploration and thus in similar outcomes. ## V Conclusions We studied the impact of recommendation systems on opinion dynamics from a microscopic (i.e., at the individual level) and macroscopic (i.e., at the population level) perspective. We analyzed theoretically and numerically the interaction between users and a recommendation system. 
Among others, our work explains why and in which circumstances we observe a fundamental discrepancy between micro- and macroscopic effects, whereby the opinions of individual users are drastically affected by the recommendation system whereas the opinion distribution of the population is unchanged. In future research, we would like to (i) further investigate the properties of our model, (ii) consider more sophisticated recommendation systems (e.g., collaborative filtering [18]), and (iii) model the dynamics of opinion distribution directly in the probability space (e.g., see [20]).
2308.16870
Learning Driver Models for Automated Vehicles via Knowledge Sharing and Personalization
This paper describes a framework for learning Automated Vehicles (AVs) driver models via knowledge sharing between vehicles and personalization. The innate variability in the transportation system makes it exceptionally challenging to expose AVs to all possible driving scenarios during empirical experimentation or testing. Consequently, AVs could be blind to certain encounters that are deemed detrimental to their safe and efficient operation. It is then critical to share knowledge across AVs that increase exposure to driving scenarios occurring in the real world. This paper explores a method to collaboratively train a driver model by sharing knowledge and borrowing strength across vehicles while retaining a personalized model tailored to the vehicle's unique conditions and properties. Our model brings a federated learning approach to collaborate between multiple vehicles while circumventing the need to share raw data between them. We showcase our method's performance in experimental simulations. Such an approach to learning finds several applications across transportation engineering including intelligent transportation systems, traffic management, and vehicle-to-vehicle communication. Code and sample dataset are made available at the project page https://github.com/wissamkontar.
Wissam Kontar, Xinzhi Zhong, Soyoung Ahn
2023-08-31T17:18:15Z
http://arxiv.org/abs/2308.16870v1
# Learning Driver Models for Automated Vehicles via Knowledge Sharing and Personalization ###### Abstract This paper describes a framework for learning Automated Vehicles (AVs) driver models via knowledge sharing between vehicles and personalization. The innate variability in the transportation system makes it exceptionally challenging to expose AVs to all possible driving scenarios during empirical experimentation or testing. Consequently, AVs could be blind to certain encounters that are deemed detrimental to their safe and efficient operation. It is then critical to share knowledge across AVs that increase exposure to driving scenarios occurring in the real world. This paper explores a method to collaboratively train a driver model by sharing knowledge and borrowing strength across vehicles while retaining a personalized model tailored to the vehicle's unique conditions and properties. Our model brings a federated learning approach to collaborate between multiple vehicles while circumventing the need to share raw data between them. We showcase our method's performance in experimental simulations. Such an approach to learning finds several applications across transportation engineering including intelligent transportation systems, traffic management, and vehicle-to-vehicle communication. Code and sample dataset are made available at the project page [https://github.com/wissamkontar](https://github.com/wissamkontar). ## 1 Introduction The curse of variability stands as a critical barrier in the development and deployment of Automated Vehicles (AVs) in the open world. Variability in the open world stems from the innate dynamic, stochastic, and unpredictable nature of the transportation system. The driving task can change significantly depending on the traffic state (congestion, free flow, etc.), weather conditions (foggy, snowing, etc.), and roadway design (divided highway, one-way, etc.). It even depends on what other agents (pedestrians, bikers, busses, etc.) are present. Human drivers adapt and respond to these variations instinctively, but replicating such situational knowledge for an AV to maneuver safely in any given scenario, is extremely challenging in light of limited data availability and difficulty of real-world experimentation. Additionally, AVs are designed with some desired performance in mind. This means that AV's data can exhibit significant heterogeneity. This creates another challenge in learning driver models, as one needs to decouple the unique and common features from AV data to create personalized driver models for each vehicle. In a recent release of large-scale AV dataset from Waymo [3] we see limited variability in the testing environment. For instance, the Waymo open dataset shows the majority of trip logs in sunny (99.3%), daytime (80.6%), and urban scenarios [5]. Another large dataset from nuScenes includes more diverse driving environments and scenarios as some driving logs come from different cities (Boston and Singapore) and locations (urban, residential, and industrial), and does include some rainy and cloudy weather conditions. However, most open world experiments and data collection (e.g., Cruise, Lyft, Aurora) are being done in dedicated routes and locations with limited exposure to variability in the transportation system. Dedicated experiments in similar driving scenarios allow for a deeper understanding of the AV performance in certain scenarios. However, a breadth of exposure is critically needed to train on extensive scenarios and encounters. 
The constraint on data availability is compounded by the difficulty of conducting real-world experiments. The development and testing of AVs require extensive resources and come with inherent safety concerns. Consequently, we see many experiments with exposure to limited scenarios, likely leaving AVs blind to a wide array of encounters. It is thus critical to develop methods to share knowledge across AVs to increase exposure to a wide range of scenarios occurring in the open world. Recent literature has also uncovered how AVs can ex hibit different behaviors on the road based on the underlying design and control logic of these vehicles. For instance, we show in [11, 10], how the car-following (CF) behavior of an AV exhibits a range of behavior depending on the underlying control parameters on spacing, desired speed, and acceleration constraints. Thus, data from AVs -specifically multi-class AVs (i.e., those designed with different performances in mind)- is extremely heterogeneous. This heterogeneity is in fact a blessing rather than a curse since it gives us the opportunity to train a tailored model for each vehicle that considers its own unique characteristics. Such a process is referred to as personalization hereafter. This work is concerned with developing a framework through which different AVs can share knowledge from their encounters and borrow strength from each other, yet retain a personalized model tailored to their conditions and unique properties. A driver model here refers to a model capable of safely and effectively maneuvering the AV in various driving scenarios. This driver model controls the vehicle's perception, decision-making (on acceleration, braking, and steering), and navigation. However, the main scope of this work is in presenting how the sharing of information (through parameter transfer) while retaining a personalized model is done, and not on designing an optimal driver model. ### Motivation One can question the need to share knowledge through collaboration between vehicles as opposed to just pooling the data and learning one model. We argue that learning one-size-fits all can lead to misleading results, and dilutes the distinction between (i) different experienced driving scenarios, and (ii) heterogeneous driving behavior from subject vehicles. For instance, consider the basic unit of a driver model - the car following (CF) model - whereby an AV uses its sensor data to regulate its own acceleration/speed such that it follows its leader in a safe manner. The design of such a CF controller for an AV requires specific consideration of desired spacing, speed preference, and comfortable acceleration. We can see such a model in play when we consider the widely adopted linear controller in Adaptive Cruise Control (ACC) and even full self driving systems [17, 13, 11]. In a linear controller, the desired spacing is first modeled based on the constant time gap policy as \[d_{v}^{*}(t)=v_{v}(t)\times\tau_{v}^{*}+\delta_{v}^{*} \tag{1}\] where \(d_{v}^{*}(t)\) is the desired equilibrium spacing of vehicle \(v\) at any time \(t\); \(v_{v}(t)\) is the respective speed of vehicle \(v\); \(\tau_{v}^{*}\) is the constant time gap; and \(\delta_{v}^{*}\) is the standstill distance. 
Accordingly, the deviation from the equilibrium spacing can be written as \(\Delta d_{v}(t)=d_{v}(t)-d_{v}^{*}(t)\), where \(d_{v}(t)\) represents the actual spacing between vehicle \(v\) and its leader (\(v-1\)) at time \(t\), and the speed difference between vehicle \(v\) and its leader (\(v-1\)) is \(\Delta v_{v}(t)=v_{v-1}(t)-v_{v}(t)\). As such, a system state for the AV controller can be described by \(\mathbf{x}_{v}(t)=[\Delta d_{v}(t),\Delta v_{v}(t),a_{v}(t)]^{T}\) and input state as \(u_{v}(t)\) \[\begin{split}& u_{v}(t)=\mathbf{K}_{v}^{T}\mathbf{x}_{v}(t)\\ &\mathbf{K}_{v}^{T}=[k_{sv},k_{vv},k_{av}]\end{split} \tag{2}\] where \(k_{sv}\), \(k_{vv}\), \(k_{av}\) are the feedback gains for the deviation from equilibrium spacing (\(\Delta d_{v}(t)\)), speed difference (\(\Delta v_{v}(t)\)) and acceleration (\(a_{v}(t)\)), respectively. The parameter setting of \(\mathbf{K}_{v}^{T}\) denotes the regulation magnitude for each component in the system state \(\mathbf{x}_{v}(t)\), and thus regulates the AV behavior. We refer readers to [11] for an in-depth analysis on this phenomenon. It follows that one can design an AV with specific consideration of \(d_{v}^{*}\), and \(\mathbf{K}_{v}\) in mind. The dichotomy of these parameters is that they are (i) influenced by designer preference (or rider preference as shown in time headway parameter \(\tau_{v}^{*}\) in commercial ACC), and (ii) influenced by the driving environment. The authors have a prior work that analyzes in depth how an AV can be designed with specific performance in mind, see [11]. However, if data from heterogeneous AVs (i.e., AVs that differ in design and desired performance) is pooled together to train a single driver model, heterogeneity is lost since the training aims to achieve generalization and the underlying assumption is that data is homogeneous. One may work with stochastic models by estimating parameter distributions to regain some heterogeneity in the data. However, personalized models for each AV still cannot be tracked when the data are pooled. Accordingly, personalizing desired speed, headway, and spacing is unenforceable. It is important to note that in some cases data pooling is not even achievable. Commercial AV data can be protected by propriety rights and privacy concerns. Thus accessing raw data for training purposes may not be available. Accordingly, this work is motivated by the need to (i) share driving knowledge between different vehicles to increase the exposure of an AV to different driving scenarios/environments, (ii) retain a personalized model for each vehicle under heterogeneous behavior, and (iii) bypass the need to access raw data for training driving models. We follow the motivation discussed here with a simulation study (presented in Section 3.3) that shows how pooling data is not ideal under heterogeneity and how our approach can tackle this problem. ### Related work With the advent increase of computational power and sheer amount of data collected in today's systems, federated learning became a powerful tool with the intent of _processing the data where it was created on the edge_. The edge here refers to single device, vehicle, or the like. As a consequence, traditional Internet of Things (IoT) applications have shifted to a decentralized approach termed Internet of Federated Things (IoFT) [9]. This brought about multiple analytical and computational tools that define how devices are set to collaborate with each other and share information. 
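Returning to the linear car-following controller of Eqs. (1)-(2) discussed above, the following is a minimal simulation sketch; it is illustrative only. The gains, time gap, standstill distance, and the simplification that the commanded input is applied directly as the follower's acceleration at the next step are all hypothetical choices, not values from the paper.

```python
import numpy as np

def simulate_linear_cf(v_lead, dt=0.1, tau=1.5, delta=2.0, K=(0.1, 0.6, 0.1),
                       d0=30.0, v0=None):
    """Minimal simulation of the linear car-following controller of Eqs. (1)-(2).

    v_lead : array of leader speeds [m/s], one entry per time step
    tau    : constant time gap tau* [s]       (hypothetical value)
    delta  : standstill distance delta* [m]   (hypothetical value)
    K      : feedback gains (k_s, k_v, k_a)   (hypothetical values)
    """
    k_s, k_v, k_a = K
    n = len(v_lead)
    d = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    d[0], v[0] = d0, (v_lead[0] if v0 is None else v0)
    for t in range(n - 1):
        d_star = v[t] * tau + delta                       # desired spacing, Eq. (1)
        delta_d = d[t] - d_star                           # deviation from equilibrium spacing
        delta_v = v_lead[t] - v[t]                        # speed difference to the leader
        u = k_s * delta_d + k_v * delta_v + k_a * a[t]    # linear feedback, Eq. (2)
        a[t + 1] = u                                      # simplification: no actuator lag
        v[t + 1] = v[t] + u * dt
        d[t + 1] = d[t] + (v_lead[t] - v[t]) * dt         # spacing update
    return d, v, a

# Example: leader cruising at 25 m/s with a brief slowdown
v_lead = np.concatenate([np.full(200, 25.0), np.full(100, 20.0), np.full(300, 25.0)])
spacing, speed, accel = simulate_linear_cf(v_lead)
```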
One of the early forefront tools in federated data analytics is the federated averaging (FedAvg), which was tailored for deep learning applications [12]. The idea here is simple; devices in a network structure would collaborate to learn a global deep learning model with the coordination of the central sever. Local devices perform iterations of stochastic gradient descent (SGD) using their data to obtain local parameters of their deep learning model, and send those parameters to a central server. Then the central server takes an average of those parameters to update the global model. Since then, several works have refined federated data analytics and tailored it to different applications. Notably, [19, 20] scales the application of federated learning into Gaussian processes, and general linear models. [16] presents Fed-ensemble, bringing ensemble methods to federated learning, improving generalization and uncertainty quantification. [15] tailors a federated algorithm to learn unique and shared features for principal component analysis. And several other models exist, which are reviewed here [9, 18, 1]. While the application of federated learning in transportation system has seen some momentum, it has yet to expand. Recently, [2] presented a survey review of federated learning for connected and automated vehicles. They note several applications of federated learning for in-vehicle human monitoring, steering wheel prediction, object detection, motion control, and vehicle trajectory prediction. Most relevant to this work are the ones related to vehicle trajectory prediction. For example, [4] uses an encrypted federated network algorithm to learn driver behavior and predict trajectories. [14] designs a federated deep reinforcement learning for trajectory planning. However, only few efforts have been invested with limited scope, and the application domain of vehicle trajectory prediction via federated learning remains largely unexplored. The focus of this paper is substantially different from available work and explores the potential of federated learning in a different direction. Specifically, what we aim to do is to share knowledge across vehicles to learn driving scenarios that might have been missed due to the variability in the transportation system, while also retaining personalization for each vehicle. ### Main contribution We summarize our contributions in the following: * **Modeling**: In our model, we acknowledge that data coming from each vehicle in uniquely heterogeneous given variability of driving scenarios, and personal preferences, and thus learning one-size-fits-all model is not ideal. Instead our model is personalized to encode unique encounter data and allows for transferring knowledge of unseen driving scenarios from one-vehicle to another. * **Algorithm**: We showcase a training algorithm based on federated learning, where vehicles only need to share iterations of personalized model parameters thus preserving privacy and minimizing communication cost. * **Application**: We present an application of such framework on learning traffic oscillations by knowledge transfer between three vehicles, and another application on training personalized models under heterogeneity. The rest of the paper is organized as follows: Section 2 details our methodology. In Section 3 we introduce and analyze two simulation studies for knowledge sharing and personalization under heterogeneity in behavior. Finally, Section 4 concludes. 
## 2 Methodology In this section we discuss the problem setting and the model formulation. ### Problem Setting Consider the problem setting illustrated in Fig. 1. Multiple vehicles are being tested in different driving environments, and each has a different personalized driver model. All of the driving encounters experienced by the vehicles are of interest to us, as the ultimate goal is to design a global driving model capable of maneuvering the vehicle in different driving environments. To build such a model, one can extract the data for each vehicle alone and then train a local (i.e., using the vehicle's own data) driving model. This approach yields a single driving model for each vehicle that is blind to some driving encounters observed by other vehicles (due to different driving environments).
Figure 1: Problem Illustration.
What we seek to accomplish here is to allow vehicles to collaboratively train a global driving model that allows for (i) a personalized model that focuses on their own local data, and (ii) knowledge sharing through the transfer of information from one vehicle to another. The underlying assumption here is that data from each vehicle are uniquely heterogeneous. This heterogeneity is due to the unique encounters the vehicle is exposed to and its personalized design (e.g., specific desired speed, acceleration constraints, etc.). To achieve this, we structure our driving model learning process as shown in Fig. 2. ### Problem Formulation Suppose there exist \(V\geq 2\) vehicles. Each vehicle \(v\in[V]:=\{1,...,V\}\) has a local dataset expressed as \(D_{v}=\{\mathbf{X_{v}},\mathbf{y_{v}}\}\) with cardinality \(N_{v}\). We have an output \(\mathbf{y}_{v}=[y_{1},...,y_{N_{v}}]^{T}\) and an input \(\mathbf{X}_{v}=[x_{1}^{T},...,x_{N_{v}}^{T}]\). Here, \(\mathbf{X}_{v}\) represents the state of the subject vehicle and nearby vehicles, usually a vector of data obtained from on-board sensors. For instance, the state data can be the relative speed between the subject vehicle and the leading vehicle, the spacing, acceleration, jerk, and others. \(\mathbf{y}_{v}\) represents an action by the AV, such as the acceleration magnitude. Formally, a driver model \(f_{v}\) learned from training data \(\mathbf{X}_{v}\) and \(\mathbf{y}_{v}\) is a function defined as: \[y_{v}(x_{v})=f_{v}(x_{v};\theta_{v}) \tag{3}\] where \(f(*)\) can take any functional form. Most notably, \(f(*)\) can be a general deep learning network, a Gaussian process, a reinforcement learning model, a control model, or a linear model. The main interest here is the parameter \(\theta_{v}\), which parameterizes \(f(*)\) based on the training data. The training data are ultimately tied to the driving scenarios that vehicle \(v\) was exposed to. One can notice that predicting an accurate output \(f(x_{v}^{*};\theta_{v})\) ultimately depends on an accurate estimation of \(\theta_{v}\). Training \(f(*)\) entails minimizing a general loss function, \(\mathcal{L}(\theta_{v};x_{v};y_{v})\). There exist several optimizers to solve the minimization of \(\mathcal{L}\) (e.g., Adam [8], stochastic gradient descent (SGD), etc.). However, the most widely adopted is the SGD approach, as it offers generalization power and is extremely efficient [7]. As is typical with SGD, model training is performed in successive iterations.
At each iteration of training \(t\), a subset of the training data (indexed by \(\xi\)) \(\mathbf{X}_{v\xi}\) and \(\mathbf{y}_{v\xi}\) is taken to update the model parameters as follows: \[\theta_{v}^{(t+1)}\leftarrow\theta_{v}^{(t)}-\eta^{(t)}g_{v}(\theta_{v}^{(t)};\xi^{(t)}) \tag{4}\] where \(\eta^{(t)}\) is the learning rate and \(g_{v}\) is the stochastic gradient of the loss function \(\mathcal{L}\). The outcome is a driving model \(f(x_{v};\theta_{v})\) parameterized by the optimal \(\theta_{v}\). ### Knowledge Sharing through Federated Learning The above is a general approach to learning a data-driven driver model based on local data. However, in our approach we do not want to rely solely on vehicle \(v\)'s own data, which is tied to the driving scenarios it was exposed to, but want to integrate knowledge from other AVs that might have encountered different driving scenarios. As such, we present a collaborative learning scheme that aims at learning a global parameter \(\hat{\theta}\) that minimizes a global loss function defined as: \[\mathcal{L}(\hat{\theta}):=\sum_{v=1}^{V}\alpha_{v}\mathcal{L}(\theta_{v};D_{v}) \tag{5}\] where \(\alpha_{v}\) is the weight parameter for vehicle \(v\) in the collaborative training scheme, with \(\alpha_{v}=\frac{N_{v}}{\sum_{v=1}^{V}N_{v}}\) such that \(\sum_{v=1}^{V}\alpha_{v}=1\). Consequently, at each communication round, each vehicle runs steps of SGD to estimate its local parameters: \[\theta_{v}^{(t+1)}\leftarrow\theta_{v}^{(t)}-\eta^{(t)}g_{v}(\theta_{v}^{(t)};\xi_{v}^{(t)}) \tag{6}\] Afterwards, the global coordinator aggregates the model parameters into \(\hat{\theta}\) according to the rule below and sends \(\hat{\theta}\) back to each vehicle. Essentially, the global coordinator averages the local parameters coming from each individual vehicle's driver model. \[\hat{\theta}=\sum_{v=1}^{V}\alpha_{v}\theta_{v} \tag{7}\] In this scheme, all vehicles participate during the training rounds and send their local parameters to the global coordinator. Note that an underlying assumption is that all vehicles have the same functional form \(f(*)\) of the driver model. Thereafter, the global coordinator sends the updated parameters back to each vehicle.
Figure 2: Learning Structure.
Knowledge from each vehicle's driver model is thereby shared with all collaborating vehicles. We summarize the above in Algorithms 1 & 2; a minimal code sketch of this loop is also given below.
```
Data: number of vehicles \(V\), their initial model parameters \(\theta\), number of sharing rounds \(s\in S\), and weight parameters \(\alpha_{v}\)
for \(s=1:S\) do
    Global coordinator broadcasts \(\theta\);
    for \(v\in V\) do
        \(\theta_{v}^{(0)}=\theta\);
        Perform SGD (Algorithm 2) to update local vehicle parameters;
    end for
    Aggregation: \(\hat{\theta_{s}}=\sum_{v=1}^{V}\alpha_{v}\theta_{v}\);
    Set \(\theta=\hat{\theta_{s}}\);
end for
Return \(\hat{\theta_{S}}\)
```
**Algorithm 1** Knowledge Sharing through Federation (Learning a Global Model) ### Personalization Knowledge sharing is thus achieved by learning a global model parameterized by \(\hat{\theta}\) that is shared with all participating vehicles. It follows that each vehicle needs to personalize \(\hat{\theta}\) based on its own local data. Through this process we encode the heterogeneous behavior of each vehicle, as described by its own design parameters. Such personalization can be achieved in numerous ways.
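As a concrete illustration of Algorithms 1 & 2, the following is a minimal Python sketch of the knowledge-sharing loop. The function names and the toy linear least-squares loss are illustrative placeholders rather than the implementation used in the paper; in the paper the local objective is the driver-model loss \(\mathcal{L}(\theta_{v};D_{v})\) and \(\theta_{v}\) are the parameters of whatever driver model \(f(*)\) is chosen.

```python
import numpy as np

def local_sgd(theta, X, y, lr=0.01, epochs=20, batch_size=32, rng=None):
    """Algorithm 2 (sketch): SGD on one vehicle's local data.

    Uses a toy squared-error loss for a linear model y ~ X @ theta; in the paper
    the local objective is the driver-model loss L(theta_v; D_v)."""
    if rng is None:
        rng = np.random.default_rng(0)
    theta = theta.copy()
    n = len(y)
    for _ in range(epochs):
        idx = rng.choice(n, size=min(batch_size, n), replace=False)
        grad = 2.0 / len(idx) * X[idx].T @ (X[idx] @ theta - y[idx])  # stochastic gradient g_v
        theta -= lr * grad                                            # update, cf. Eq. (4)/(6)
    return theta

def knowledge_sharing(datasets, theta0, rounds=50):
    """Algorithm 1 (sketch): federated knowledge sharing across vehicles.

    datasets is a list of (X_v, y_v) pairs, one per vehicle."""
    sizes = np.array([len(y) for _, y in datasets], dtype=float)
    alphas = sizes / sizes.sum()                      # alpha_v = N_v / sum_v N_v
    theta = theta0.copy()
    for _ in range(rounds):                           # sharing rounds s = 1..S
        local_params = [local_sgd(theta, X, y) for X, y in datasets]  # broadcast + local SGD
        theta = sum(a * t for a, t in zip(alphas, local_params))      # aggregation, Eq. (7)
    return theta
```

Note that only the parameter vector crosses the network in this loop; the raw data \((\mathbf{X}_{v},\mathbf{y}_{v})\) never leave the vehicle.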
The most commonly used personalization approach is the regularization concept, where each vehicle uses its own local data \(D_{v}\) to minimize a penalized least-squares loss function defined as: \[\min_{\theta_{personal}}\frac{1}{N_{v}}\sum_{n=1}^{N_{v}}\mathcal{L}(f(x_{v};\theta_{personal}),y_{v})+\omega||\hat{\theta}-\theta_{personal}||_{2}^{2} \tag{8}\] where \(\omega\) is a positive coefficient and \(\hat{\theta}\) is the global parameter learned from Algorithms 1 & 2. This approach personalizes \(\theta_{personal}\) to each vehicle \(v\in V\) while retaining global knowledge by encouraging a solution close to \(\hat{\theta}\). The regularization term discourages \(\theta_{personal}\) from drifting too far away from \(\hat{\theta}\), and it has been shown statistically to reduce overfitting and improve the bias-variance trade-off. Accordingly, each vehicle re-runs Algorithm 2, with \(g_{v}\) now based on the regularized loss function defined in Eq. 8. Note that one does not necessarily need as many iteration steps (\(S\)) as in Algorithms 1 & 2; a fraction of the steps can suffice. ## 3 Simulation Analysis We provide here a simulation experiment to demonstrate the applicability of our approach to (i) collaborative knowledge sharing to learn previously unencountered driving scenarios, and (ii) personalization of driving models under heterogeneity. ### Simulation Setup Consider that we have different AVs indexed by \(v\). Here, we adopt a simple driver model setup, as the goal is not to analyze the prediction performance of the underlying driver model, nor to design a driver model (several works in the literature exist on that), but rather to see how much knowledge sharing and personalization can be achieved through our proposed training structure. Accordingly, we define our driver model as a car-following model that predicts the AV speed (i.e., the output \(y_{v}\)) based on an input of leader speed (\(x_{v}\)). We use Gaussian process regression (\(\mathcal{GP}\)) to design such a driver model. Note that our framework (knowledge sharing and personalization) works with any driver model, so users can replace the \(\mathcal{GP}\) with a model of choice. We formulate the driver model as follows: \[f_{v}\sim\mathcal{GP}(m(\cdot),c(\cdot,\cdot)) \tag{9}\] \[y_{v}=f_{v}(x_{v})+\epsilon,\quad\epsilon\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\sigma_{\epsilon}^{2}) \tag{10}\] where \(x\in X\) is the input, \(m(\cdot):X\rightarrow\mathbb{R}\) is the prior mean function, \(c(\cdot,\cdot):X\times X\rightarrow\mathbb{R}\) is the prior covariance function, and \(\epsilon\) is the observational noise with variance \(\sigma_{\epsilon}^{2}\). We further consider the zero mean function, and we assume the covariance function \(c(\cdot,\cdot)=\sigma_{o}k(\cdot,\cdot)\) for a kernel function \(k(\cdot,\cdot):X\times X\rightarrow\mathbb{R}\), where \(\sigma_{o}\) is the output variance. We adopt the well-known radial basis function (RBF) kernel, which is parameterized by the length scale \(l\). Accordingly, we denote by \(\theta_{v}=(\sigma_{o},l,\sigma_{\epsilon})^{T}\in\mathbb{R}^{3}\) the parameters to be estimated. To estimate \(\theta_{v}\), we use SGD (Algorithm 2) to minimize the scaled negative log marginal likelihood function defined as \[\mathcal{L}(\theta_{v},X_{v},y_{v})=-\frac{1}{N_{v}}\log p(y_{v}|X_{v},\theta_{v}) \tag{11}\] Accordingly, the collaboration here is based on \(\theta_{v}=(\sigma_{o},l,\sigma_{\epsilon})^{T}\in\mathbb{R}^{3}\).
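As an illustration of this setup, the following numpy sketch evaluates the RBF kernel and the scaled negative log marginal likelihood of Eq. (11) that each vehicle minimizes locally. This is illustrative code only: it assumes the standard zero-mean GP regression likelihood with covariance \(\sigma_{o}k(\cdot,\cdot)+\sigma_{\epsilon}^{2}I\), and in practice the gradients with respect to \(\theta_{v}\) used by SGD would come from an automatic-differentiation library, with the parameters constrained to remain positive.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale):
    """RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2)); inputs are (n, d) arrays."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def scaled_neg_log_marginal_likelihood(theta, X, y):
    """Eq. (11) sketch: L(theta_v, X_v, y_v) = -(1/N_v) log p(y_v | X_v, theta_v)."""
    sigma_o, length_scale, sigma_eps = theta
    n = len(y)
    K = sigma_o * rbf_kernel(X, X, length_scale) + sigma_eps ** 2 * np.eye(n)
    L = np.linalg.cholesky(K)                                  # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))        # alpha = K^{-1} y
    log_marginal = (-0.5 * y @ alpha                           # data-fit term
                    - np.log(np.diag(L)).sum()                 # -0.5 log|K|
                    - 0.5 * n * np.log(2.0 * np.pi))
    return -log_marginal / n
```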
At every sharing round \(s\) (see Algorithm 1), each vehicle performs steps of SGD on its \(\mathcal{L}(\theta_{v},X_{v},y_{v})\) to output a local set of parameters \(\theta_{v}\) (see Algorithm 2). At the aggregation step, a global coordinator then computes \(\hat{\theta}\) based on the rule described in Eq. 7 and sets \(\theta_{v}=\hat{\theta}\). This scheme of collaborative learning and sharing of parameters is what encodes knowledge transfer between participating vehicles. It is important to note that at no point during the collaboration are the raw data (\(X_{v}\) and \(y_{v}\)) shared between vehicles, which makes this approach computationally efficient and privacy aware. Additionally, the parameters \(\theta_{v}\) depend on the assumed functional form \(f(*)\) of the driver model. For instance, if one is using a neural network predictor, then \(\theta_{v}\) would be the weights of the neural net. Finally, each vehicle re-runs Algorithm 2 based on the regularized loss function described in Eq. 8 to compute the final \(\theta_{personalized}\) used in the prediction of the AV speed (i.e., predictions from the driver model). ### Experiment 1: Knowledge Sharing to Learn Traffic Oscillations We now consider a specific experimental setup with which we want to demonstrate how knowledge is transferred between vehicles. Consider that we have three automated vehicles (\(V=3\)). We assume each vehicle operates in a completely different driving environment. To signify this, we consider three different scenario-based datasets, shown in Fig. 3: (i) vehicle 1 operates at constant speed, (ii) vehicle 2 experiences a deceleration maneuver, and (iii) vehicle 3 experiences an acceleration maneuver. Note that these experiments are extracted from the Waymo dataset [6]. We use this dataset to give realism to our experimental setup. Each of the three driving scenarios is a \(19.7\)-second-long run at a \(10\,Hz\) resolution. Consequently, each vehicle has a local dataset of \(N_{v}=197\). In Fig. 3 the top row shows the driving scenario for each vehicle. The blue curve shows a human driver (leader) oscillation, and the red curve shows the resulting response of the AV follower (the Waymo vehicle). Recall that our driver model is based on speed input and output. Thus, training our model takes as input, \(X_{v}\), the leading vehicle's speed (blue curve) and predicts as output the speed \(y_{v}\) of the AV (red curve). The three driving encounters in our setup (Fig. 3) represent portions of a full traffic oscillation: constant speed, followed by a deceleration and an acceleration maneuver to reach a constant speed again. Such oscillations are ubiquitous in traffic systems and are certain to occur in the open world. The goal here is to allow the three vehicles to train a driving model in a collaborative fashion, in such a way that each of them is individually able to respond to a traffic oscillation, even though none of them has seen such a driving scenario. After collaborative training, we test each vehicle on the full oscillation scenario, shown in Fig. 4. The full oscillation in Fig. 4 represents an observed empirical oscillation created by a human-driven vehicle (HDV), extracted from the Waymo dataset. #### 3.2.1 Simulation Results After training the driver model with our approach that focuses on knowledge sharing and personalization, we test against the oscillation shown in Fig. 4, whereby the input is the leader speed and the output is the AV speed.
Figure 3: Driving Scenarios Observed by each Vehicle.
The goal here is to see whether each vehicle (1-2-3) is able to produce an oscillation. Results are shown in Fig. 5. Interestingly, we see that "After Knowledge Sharing" (blue curve), each of the vehicles was able to produce a traffic oscillation, even though none of the vehicles has full knowledge of an oscillation (recall Fig. 3). This is not the case when we look at "Without Knowledge Sharing" (red curve). However, it is notable that the oscillations "After Knowledge Sharing" are not perfect. This is rather expected given the limited training data, as the focus is not on building a driver model but rather on knowledge sharing. In practice, a driver model would usually have a complex set of inputs based on the speed of the leader, the position of the leader, and multiple other factors. Here, success is defined as the ability of each vehicle (1-2-3) to produce an oscillatory AV response when subjected to an oscillation. The effect of knowledge sharing is especially visible when looking at the vehicle 1 profile in Fig. 5. Recall that vehicle 1 only had access to its local data, which is a constant-speed profile. Specifically, the figure shows that the driver model of vehicle 1 can extract knowledge of the deceleration and acceleration phases from vehicles 2 & 3, respectively. This gives it the knowledge to respond to an oscillation. On the contrary, when we train the driver models of vehicles 1-2-3 without knowledge sharing, we notice that they can be blind to some maneuvers (deceleration/acceleration), which inhibits their ability to fully produce an oscillation; see the red curves in Fig. 5.
Figure 4: Testing Scenario: Full Oscillation.
Figure 5: Prediction Results of each Vehicle on the Testing Scenario (Full Oscillation): Before and After Knowledge Sharing.
### Experiment 2: Knowledge Sharing and Personalization under Heterogeneity Now we revisit the problem of data pooling and heterogeneity in AV behavior from the _Motivation_ (Section 1.1). We consider here two AVs, each having a different design setup and thus a different driving behavior. Note that such AV designs are not uncommon in the real world; see [11].
* Aggressive AV: an AV that is designed to be very responsive to the leader speed and prioritizes the minimization of the speed difference \(\Delta v_{v}(t)\). Specifically, for the aggressive AV we set \(\mathbf{K}_{v}=[0.01,10,-0.01]\), with \(\tau_{v}^{*}=0.5\), and \(\delta_{v}^{*}=5m\).
* Passive AV: an AV that is designed to be very passive to the leader speed and prioritizes the minimization of the deviation from the target spacing \(d_{v}^{*}(t)\). Specifically, we set \(\mathbf{K}_{v}=[10,0.01,-0.01]\), with \(\tau_{v}^{*}=2.5\), and \(\delta_{v}^{*}=7m\).
Further, we consider a specific leader (HDV) speed oscillation and then, based on the linear controller explained in Section 1.1 and for the settings described above, we simulate an AV speed profile. This is shown in Fig. 6. Note that the leader (HDV) profile is common between the two generated AV profiles. One can directly notice the difference in behavior between the two AVs. The aggressive AV (red) nearly masks its leader (black), while the passive AV (green) is much less reactive. It then follows that the data (i.e., the output \(y_{v}\)) generated from the control designs of these two AVs exhibit heterogeneous features. (The personalization step of Eq. 8, which this experiment exercises, is sketched in code below.)
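The personalization exercised in this experiment is the proximal fine-tuning of Eq. (8): starting from the shared \(\hat{\theta}\), each vehicle runs a few SGD steps on its own data with a penalty on drifting away from \(\hat{\theta}\). The following is a minimal illustrative sketch; `grad_loss` is a hypothetical stand-in for the gradient of the vehicle's local driver-model loss, and the names are not from the paper.

```python
import numpy as np

def personalize(theta_global, X, y, grad_loss, omega=0.1, lr=0.01, steps=200,
                batch_size=32, rng=None):
    """Proximal personalization (Eq. 8 sketch): fine-tune the global parameters on
    the vehicle's own data while penalizing drift away from theta_hat."""
    if rng is None:
        rng = np.random.default_rng(0)
    theta = theta_global.copy()
    n = len(y)
    for _ in range(steps):
        idx = rng.choice(n, size=min(batch_size, n), replace=False)
        # gradient of the local loss plus gradient of omega * ||theta_hat - theta||^2
        g = grad_loss(theta, X[idx], y[idx]) + 2.0 * omega * (theta - theta_global)
        theta -= lr * g
    return theta
```

The design choice here mirrors the text: the penalty keeps \(\theta_{personal}\) near the globally shared \(\hat{\theta}\) while the data term pulls it toward the vehicle's own (aggressive or passive) behavior.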
#### 3.3.1 Simulation Results When output data from these two heterogeneous vehicles are pooled together to learn one global driving model, the pooling masks the underlying difference in behavior. This is further illustrated in Fig. 7. When we train a driver model by pooling the speed data from the aggressive and passive AVs, we get the behavior shown in Fig. 7 (light blue color). The prediction fails to distinguish between aggressive and passive behavior as it tries to achieve generalization. However, in our proposed learning structure, the personalization step produces \(\theta_{personalized}\) values that result in different driver models for each vehicle rather than a one-model-fits-all approach. This allows vehicles to retain their desired behavior while still sharing knowledge. Fig. 8 shows the predictions for both the aggressive and passive AVs based on our personalized knowledge sharing approach. Table 1 further shows the Root Mean Squared Error (RMSE) based on speed for each model. It is evident that under heterogeneity our proposed learning structure can better encode different driving behaviors and thus better fit personalized driver models. \begin{table} \begin{tabular}{l l l} RMSE(speed) & Aggressive & Passive \\ \hline Pooled Model & 0.53 & 2.46 \\ \hline Knowledge Sharing \& Personalization & 0.12 & 0.22 \\ \hline \end{tabular} \end{table} Table 1: Prediction Error.
Figure 6: Speed Profiles for Experiment 2 Setup. Note: Leader HDV (black) and Aggressive AV (red) Curves Nearly Overlap each other.
Figure 7: Prediction from a Driver Model based on Pooled Data.
Figure 8: Prediction from Driver Models based on Knowledge Sharing and Personalization.
### Limitations and Remarks
1. **On the driver model function:** Here we explored a \(GP\) function; however, in practice other models, such as deep learning or reinforcement learning methods, might be more widespread. While our training framework remains malleable enough to apply to different functions, it is important to extend it to other driver models.
2. **On parameter sharing:** In our framework, all vehicles share the same driver model (\(GP\)) with the same feature input (leader speed) and output (AV speed). However, if one wants to collaboratively share knowledge between vehicles with different functional forms of the driver model (e.g., one vehicle has a neural network and another a reinforcement learning model), an alternative parameter sharing scheme is needed. One way to approach this is to decompose \(\theta_{s}:=\theta_{functional}+\theta_{shared}\), where \(\theta_{functional}\) are unique parameters that are relevant to the vehicle's driver model, while \(\theta_{shared}\) comes from a globally learned model. Additionally, one can regularize \(\theta\) (i.e., assign weights to \(\theta_{functional}\) and \(\theta_{shared}\)) so that a vehicle can put more weight on its own driver model and preferences.
3. **On aggregation (\(\hat{\theta}\) in Eq. 7):** Our aggregation strategy (averaging) is based on the widely used FedAvg. However, other strategies exist and can be designed to suit the desired application.
4. **On personalization under complex driver models:** We note that our approach here uses a very simplified driver model (only taking speed data); however, for complex driver models it can become hard to encode personalization for each vehicle, since in that case there can be multiple layers of design parameters that all contribute to the behavioral change of the vehicle. This is a rather complex problem to solve, as one would first need to decompose the unique and common features between vehicles. This problem is currently under study by the team.
5. **On when to pool data:** In some cases pooling data can be beneficial.
For instance, consider the setup in Experiment 1 (Section 3.2). If vehicles 1-2-3 have a similar design (a driver model with the same parameter setting), one can pool all the data from the different scenarios and train one model that fits all. However, pooling data might not even be applicable, given that access to some data is restricted (privacy concerns or proprietary rights). As such, our approach circumvents this by sharing only parameter values and never raw data. ## 4 Conclusions In this work we present a way of learning and training driver models for AVs collaboratively. Different vehicles can share knowledge with each other through a collaborative iterative process that entails sharing and discovering optimal parameters that minimize a desired global loss function. We also show how vehicles can share knowledge while retaining a personalized model tailored to their own data. We showcase two experimental applications of the designed model. In the first experiment, three vehicles collaborate to learn a speed oscillation by decomposing and transferring knowledge between each other. In the second experiment, we show that under heterogeneous AV behavior, learning a driver model by pooling data is not ideal, and that personalization yields better results. Several limitations and extensions of this work are yet to be tackled and are highlighted above. A large-scale experimentation and benchmarking of driver models with and without knowledge sharing and personalization is important, yet goes beyond what this paper can provide, and is left for future work by the authors. We hope that our endeavors in this modeling direction will spur interest and motivate further work on designing complex AV driver models that can safely and efficiently maneuver in the open world. ## 5 Acknowledgements This research was sponsored by the United States National Science Foundation through Award CMMI 1932932 and the University of Wisconsin-Madison.
2309.14396
Guess & Sketch: Language Model Guided Transpilation
Maintaining legacy software requires many software and systems engineering hours. Assembly code programs, which demand low-level control over the computer machine state and have no variable names, are particularly difficult for humans to analyze. Existing conventional program translators guarantee correctness, but are hand-engineered for the source and target programming languages in question. Learned transpilation, i.e. automatic translation of code, offers an alternative to manual re-writing and engineering efforts. Automated symbolic program translation approaches guarantee correctness but struggle to scale to longer programs due to the exponentially large search space. Their rigid rule-based systems also limit their expressivity, so they can only reason about a reduced space of programs. Probabilistic neural language models (LMs) produce plausible outputs for every input, but do so at the cost of guaranteed correctness. In this work, we leverage the strengths of LMs and symbolic solvers in a neurosymbolic approach to learned transpilation for assembly code. Assembly code is an appropriate setting for a neurosymbolic approach, since assembly code can be divided into shorter non-branching basic blocks amenable to the use of symbolic methods. Guess & Sketch extracts alignment and confidence information from features of the LM then passes it to a symbolic solver to resolve semantic equivalence of the transpilation input and output. We test Guess & Sketch on three different test sets of assembly transpilation tasks, varying in difficulty, and show that it successfully transpiles 57.6% more examples than GPT-4 and 39.6% more examples than an engineered transpiler. We also share a training and evaluation dataset for this task.
Celine Lee, Abdulrahman Mahmoud, Michal Kurek, Simone Campanoni, David Brooks, Stephen Chong, Gu-Yeon Wei, Alexander M. Rush
2023-09-25T15:42:18Z
http://arxiv.org/abs/2309.14396v2
# Guess & Sketch: Language Model Guided Transpilation ###### Abstract Maintaining legacy software requires many software and systems engineering hours. Assembly code programs, which demand low-level control over the computer machine state and have no variable names, are particularly difficult for humans to analyze. Existing conventional program translators guarantee correctness, but are hand-engineered for the source and target programming languages in question. Learned transpilation, i.e. automatic translation of code, offers an alternative to manual re-writing and engineering efforts. Automated symbolic program translation approaches guarantee correctness but struggle to scale to longer programs due to the exponentially large search space. Their rigid rule-based systems also limit their expressivity, so they can only reason about a reduced space of programs. Probabilistic neural language models (LMs) produce plausible outputs for every input, but do so at the cost of guaranteed correctness. In this work, we leverage the strengths of LMs and symbolic solvers in a neurosymbolic approach to learned transpilation for assembly code. Assembly code is an appropriate setting for a neurosymbolic approach, since assembly code can be divided into shorter non-branching basic blocks amenable to the use of symbolic methods. Guess & Sketch extracts alignment and confidence information from features of the LM then passes it to a symbolic solver to resolve semantic equivalence of the transpilation input and output. We test Guess & Sketch on three different test sets of assembly transpilation tasks, varying in difficulty, and show that it successfully transpiles 57.6% more examples than GPT-4 and 39.6% more examples than an engineered transpiler. We also share a training and evaluation dataset for this task. ## 1 Introduction The increasingly heterogeneous landscape of hardware architectures and their instruction set architectures (ISAs) marks a large and growing need to develop support for cross-ISA software management. This challenge is especially relevant for legacy software which has been compiled down to hardware-specific programs, and must be re-written to run on any other hardware. Many high-usage source code files also contain in-lined assembly code, which requires porting to alternate hardware architectures. Automated cross-ISA software support has been of interest in the computer architecture community for decades (Armengol-Estape et al., 2023; Wang et al., 2018; Bellard, 2005; Ardestani & Renau, 2013; Sanchez & Kozyrakis, 2013). Emulators, virtual machines, and containerized applications allow users to run software on different host hardware by simulating the architecture of the hardware platform that the software is compiled for. However, this option can be unwieldy and compute-inefficient. Assembly-to-assembly _transpilation_1 (Ami; occ, 1989), the process of automatically porting software from one ISA to another, offers a way to generate software that can be natively executed on the new hardware. However, current transpilation tools are engineered for the specific source and target hardware architecture, so they scale poorly as new ISAs are introduced. Footnote 1: “_Transpiler_” describes the general code translation task that our method targets, but we note that the focus of this paper is assembly-to-assembly transpilation. Neural machine learning techniques are a natural fit for transpilation. 
Assembly program translation pairs can be generated by cross-compiling C or C++ programs using different existing compilers and compiler flags, providing vast amounts of training data. Pairs have the same semantics since they originate from the same high-level program. Assembly code syntax is rigid but simple compared to natural language and most high-level programming languages, settings that existing language models have been shown to perform well in (Devlin et al., 2019; Feng et al., 2020; Radford and Sutskever, 2018; Lewis et al., 2019; Chen et al., 2021). Evaluation in this setting can also be done automatically by comparing execution of the input code and the resulting code. However, a key weakness of language models in this setting is their inability to perform long-tail logical reasoning (Kandpal et al., 2022; Miceli-Barone et al., 2023). Assembly code transpilation requires reasoning about the complex semantics of program flows. It is also challenging to handle different implementations of semantically equivalent operations on different ISAs. Motivated by the symbolic properties of logical reasoning in the problem of transpilation, we propose a neurosymbolic method to transpilation. Purely symbolic methods are built on correctness guarantees, but generally can only handle short programs before encountering computational intractability. Classical synthesis techniques struggle to scale past \(\sim 6\) lines of assembly code (Hu et al., 2023). Purely neural language modeling approaches are powerful general translators but have critical failure points that cause program breakdown. We argue for the value of a mixed-method, i.e. neurosymbolic, approach that uses probabilistic language models to obtain helpful information for transpilation, then passes such information to an ISA semantics-aware solver to complete the transpilation process. Our method, Guess and Sketch, uses core properties from the language model to extract symbolic methods for transpilation. During the neural Guess phase, a trained language model produces candidate translations for a given input, identifies potential errors in the output, and extracts semantically-aligned subsequences from the input and output sequences. Potentially erroneous aligned subsequences are passed to the symbolic Sketch phase, where the input subsequence is used as a specification to correct the output subsequence. We demonstrate the feasibility of our method by porting assembly programs from ARMv8 to RISC-V and vice-versa, but note that our method can generalize to various source and target languages. In order to test our method, we introduce a new benchmark consisting of 3 transpilation problems varying in difficulty and domain. We identify weaknesses in engineered symbolic approaches to the task. We also find that existing neural network approaches, using both fine-tuned and pre-trained off-the-shelf large language models, struggle with transpilation. In contrast, our method combines the strengths of both neural and symbolic approaches and successfully transpiles 57.6% more examples than GPT-4, 39.6% more examples than an engineered transpiler, and 13.2% more examples than the most competitive baseline. ## 2 Related Work Learned code translation.Code transpilers (or transpilers) translate from one programming language to another. The core challenge in this space is preserving operational semantics across the source and target language, while operating within the strict syntax and vocabulary of both. 
One approach to this task is to train neural machine translation systems with paired code sequences for the task, such as language model (Lewis et al., 2019) or tree-to-tree neural networks (Chen et al., 2018). Approaches such as Transcoder (Roziere et al., 2020) have also presented an unsupervised approach to neural source code-to-source code translation, in which they only require monolingual training data and take advantage of three training objectives: cross-lingual masked language modeling, denoising auto-encoding, and back-translation. Follow-up works use the LLVM intermediate representation (Roziere et al., 2022) and automatically-generated unit tests (Szafraniec et al., 2023) to further improve this approach. Older statistical approaches have mined parallel code from repositories and generated grammar-based statistical machine translation models (Nguyen et al., 2013; Karaivanov et al., 2014; Koehn et al., 2007). These outputs of these prior learned approaches are the generation directly extracted from the model. Guess and Sketch instead incorporates knowledge of the semantics of the source and target languages in a symbolic solver that improves semantic correctness the produced output. Additionally, as far as we are aware, we are the first to present a learned approach for learning assembly translation, a lower-level programming language than other higher-level programming languages such as Python, Java, and even C. Emulators and engineered transpliers.Executing code on a platform different than the one for which it was created is a long-desired task. Apple's Rosetta (app) software was designed to ease the transition of applications between hardwares by automatically translating binary executables from the previously supported to the new ISA. Specifically, Rosetta in 2006 supported the transition from PowerPC to Intel processors. Rosetta 2 released in 2020 enabled translation from x86-64 based processors to support by Apple silicon. Emulators and virtualizers allow users to execute code designed for another target hardware by simulating the target hardware ISA atop the host hardware. QEMU (Bellard, 2005) is one popular emulator and virtualizer that can emulate various architectures on certain host architectures. Other assembly transpilers have been written to translate assembly from one language to another, such as from ARM to RISC-V (Schorr et al., 2020). However, these emulators and transpilers take years to develop. Guess & Sketch, on the other hand, leverages the translation abilities of a learned model to perform a bulk of the transpilation. Neurosymbolic program synthesis.Program synthesis is the task of generating computer programs according to some correctness specification (Lee et al., 2021). In the context of program translation, the correctness specification is the semantics of the input program itself. We discuss here some works that take a combined neural and symbolic approach to the program synthesis task, similar to our own approach. Nye et al. (2019) train an LSTM-based model to generate program sketches from some input specification, then use the generated sketch and specification to search for a satisfying program. Guo et al. (2022) devise a top-down grammar-based method to selectively expand nonterminals in a program syntax tree. The incomplete program tree is converted to a sketch that is passed to the symbolic sketch solver to generate a full program. 
Unlike these previous works, our method infers the sketch using attributes of a single autoregressive language model. The benefit of our approach is over directly producing the sketch or generating based on a grammar is that we avoid encoding specific sketch and language technicalities into the training process. ## 3 Background ### Transpilation The task of transpilation is to take an input program \(P_{x}\), represented as sequence of tokens \(x\), and produce the semantically-equivalent program \(P_{y}\) represented as sequence of tokens \(y\). Let \(\mathcal{D}\) be the domain of all program inputs. For simplicity we represent programs as functions that map inputs to a deterministic measurable output, either an integer or program failure: \(P_{*}:\mathcal{D}\rightarrow(\mathcal{I}\cup\bot)\). Semantic equivalence can be measured by checking that for all inputs in \(\mathcal{D}\), both programs produce the same execution outputs: \(x\equiv y:\forall d\in\mathcal{D}:P_{x}(d)=P_{y}(d)\). In practice, we test the full programs on a feasible subset of \(\mathcal{D}\) determined by the objective of the source program. When working with programs, we will also assume we can partition the tokens into \(\mathcal{B}_{x}\) non-overlapping subsequences \(x=x_{b_{1}},\ldots,x_{b_{|\mathcal{B}_{x}|}}\) where each \(b\in\mathcal{B}_{x}\) defines a span over \(x\). Subsequences are defined so that they can individually be converted to programs \(P_{x_{b}}\). Details for identifying such subsequences for assembly and translating them into a program representation conducive for symbolic reasoning in a sketch solver are shared in Appendix A.1. ### Generative Language Models Let \((x,y)\in(\mathcal{V}^{L},\mathcal{V}^{L})\) denote an input and output sequence pair where \(\mathcal{V}\) is the shared vocabulary of tokens and \(L\) is the maximum length. The objective of a (conditional) generative language model is to autoregressively produce the correct output \(y\) from input \(x\): \[\operatorname*{arg\,max}_{y\in\mathcal{V}^{L}}\prod_{t}p(y_{t}|y_{<t},x)\] Modern language models are based on the Transformer architecture (Vaswani et al., 2017). Transformers use attention (Parikh et al., 2016), a routing mechanism that provides a distribution over the input tokens used for predicting the next word. Intuitively, attention learns to indicate which part of the input to weigh more for each output. We can extract the model's attention between the input sequence \(x\) and output sequence \(y\) as a series of stochastic matrices at each layer mapping every output index to a probability distribution over input indices2: \(M\in\Delta^{|y|\times|x|}\). Footnote 2: In encoder-decoder models this comes from cross-attention, for decoder-only models by renormalizing self-attention. ### Sketching Sketching (Solar-Lezama, 2009; Solar-Lezama et al., 2006a) is an approach to program synthesis in which a _partial program_ outlines the high-level implementation, then a synthesizer populates the omitted low-level details by ensuring that the resulting code passes some given correctness specification. Partial programs are expressed in a procedural programming language augmented with a single added construct: a symbolic constant expressed as a hole, denoted \(\bullet\). Programs expressed in this form, with holes as placeholders for concrete values, are _sketches_. In our notation, the partial program sequence is composed of tokens from the vocabulary and an added hole token: \(\mathcal{S}=(\mathcal{V}\cup\{\bullet\})^{*}\). 
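To make the sketching formulation concrete, the toy example below fills a single hole \(\bullet\) in a one-line partial program \(y=x\ll\bullet\) so that it agrees with the specification \(y=8x\) on every 32-bit input. This is an illustration only: it uses the Z3 SMT solver through its Python bindings, whereas the solver used later in the paper is built on Rosette and reasons about lifted assembly subsequences.

```python
from z3 import BitVec, ForAll, Solver, sat

x = BitVec("x", 32)
hole = BitVec("hole", 32)           # symbolic constant standing in for the hole

spec = 8 * x                        # correctness specification: P_x(d) = 8 * d
sketch = x << hole                  # partial program: P_s(d) = d << hole

s = Solver()
s.add(ForAll([x], spec == sketch))  # forall d in D : P_x(d) = P_phi(s)(d)
assert s.check() == sat
print(s.model()[hole])              # phi fills the hole with 3, since 8*x == x << 3
```

Counterexample-guided inductive synthesis answers the same kind of query iteratively: it proposes hole values consistent with a growing set of concrete inputs and asks a verifier for a counterexample, repeating until none exists.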
Program sequences \(x\) are compiled by a semantics-aware translator into representations \(P_{x}\) in the procedural programming language understandable by the solver. The correctness specification is set by source program \(P_{x}\). The goal of the synthesizer is to identify the mapping \(\phi:\mathcal{S}\rightarrow\mathcal{V}^{*}\) that populates the holes of the partial program sequence \(s\) to produce the full program sequence \(\phi(s)\) whose corresponding program is semantically equivalent to the source program: \(\forall d\in\mathcal{D}:P_{\phi(s)}(d)=P_{x}(d)\). The synthesis engine reduces the resulting programmatic sketch representation to a constraint satisfaction problem solved using counterexample guided inductive synthesis (Solar-Lezama et al., 2006b) to find values for the holes. ## 4 Neurosymbolic Transpilation: Guess & Sketch Given an input program \(P_{x}\) represented as sequence \(x\in\mathcal{V}^{L}\), our goal is to learn to generate a semantically-equivalent output sequence \(y\in\mathcal{V}^{L}\) which represents program \(P_{y}\): \(P_{x}\equiv P_{y}\). Programs are comprised of function definitions that are generally independent from one another, so functions are individually translated then stitched back together. See details in Appendix A.1. The challenge of our neurosymbolic approach is that language models operate on prefixes, performing inference by producing one token at a time, while sketch-based methods reason with partially complete sequences. **To meaningfully pass information between the language model and the Figure 1: In the Guess (top) phase, the full input sequence \(x\) (blue) is passed to a trained language model (LM), which produces a candidate translation \(y\) (orange), identifies potential mistakes (red), and extracts subsequence alignment (purple) from attention between the input and output (attn map). In the Sketch (bottom) phase, aligned input and output subsequences are passed to a symbolic solver \(\lambda\) to correct errors identified in the Guess phase. The final output \(y^{\prime}\) is constructed by recombining corrected subsequences. symbolic solver, we must extract relevant sequence-level information from the language model for the solver to reason over with.** Specifically, the solver needs candidate output translations and their semantic alignment in the input. Our method breaks the problem into stages that can be better solved by the complementary strengths of neural and symbolic methods: a probabilistic machine learning language model produces candidate translations, then alignment and confidence information is extracted and passed to a semantics-aware solver to filter the search spaces for a correct solution. The pipeline for the Guess & Sketch approach is illustrated in Figure 1. ### Guess: Structured Candidates from a Generative Model The Guess phase produces guesses as tuples. For an input sequence \(x\), Guess produces tuples composed of: a candidate transpilation \(y\), alignments between subsequences: \(A\in\mathcal{B}_{x}^{[\mathcal{B}_{y}]}\), and potential token-level errors in the prediction: \(E\in\{0,1\}^{[y]}\). Candidates.To produce candidate sequences we follow a standard generative approach. We first train a generative language model on paired source language and target language program sequences. 
Once trained, candidate transpilations are produced by querying the model: \[y\in\operatorname*{\mathrm{top}}_{y\in\mathcal{V}^{L}}p(y|x) \tag{1}\] Alignment.Since the input and target output sequences are intended to be globally semantically equivalent, we assume output sequences locally align to input sequences. While there is not a one-to-one equivalence between tokens, subsequences of the two programs can be matched. We use this subsequence matching and the transformer attention to determine the alignment used by the sketch system. A sample extracted alignment matrix, along with the truth alignment matrix, is shown in Figure 2. Alignment is represented as a vector between subsequences: \(A\). To extract the alignment from the language model, we average the transformer attention matrices connecting \(x\) and \(y\) at single layer to form a stochastic matrix \(M\in\Delta^{|y|\times|x|}\). We then set the alignment \(A_{b_{j}}=b_{i}\) for the input subsequence with the highest aggregate attention score. Aggregate attention score is given by norm of the submatrices i.e. \(\forall b_{j^{\prime}}\in\mathcal{B}_{x}:\|M_{b_{j},b_{i}}\|\geq\|M_{b_{j^{ \prime}},b_{i}}\|\). Guesses and Errors.The generative model is also used to identify tokens where it is most likely guessing. First we check if the output token \(j\) is predicted with probability less than some value \(\gamma\): \[p(y_{j}|y_{<j},x)<\gamma \tag{2}\] These low-confidence prediction points correlate to long-tail code phenomena, i.e. instances that arise rarely in the data distribution, and are where the model may have made a translation mistake. The second case is if the general model is confident, but the program violates a domain specific heuristic, specifically if the token or its aligned input subsequence reference some entity not described in scope. If either of these conditions are satisfied, the tokens in question are marked as potentially erroneous: \(E\in\{0,1\}^{[y]}\). ### Sketch: Reason Over Aligned Candidates The Sketch phase produces a full synthesized transpilation using information from the Guess phase with symbolic program solver methods. Note that we cannot run a symbolic solver over the entire program, so we focus on solving for errors in individual subsequences \(\mathcal{B}_{y}\). Figure 2: True subsequence alignment (l), attention (r), and projected subsequence alignment (r) from the Guess model. Create the sketch.We create a sketch \(s\) for each subsequence \(b\in\mathcal{B}_{y}\) that has an possible error from the first stage. The sketch is created from \(y_{b}\) by replacing each position in \(j\in b\) that also satisfies \(E_{j}\neq 0\) with a hole \(\bullet\). The correctness specification is set by the program represented by the aligned input subsequence \(x_{b_{x}}\) where \(A_{b}=b_{x}\). Correctness specifications must be based on complete semantics, so for input subsequences with out-of-scope references, we extract the definition of the referenced entity from the full program. The retrieved entity definition is used to complete the semantics of the correctness specification. A semantics-aware translator lifts the sketch and correctness specifications into their sketch solver programmatic representations \(P_{s}\) and \(P_{x_{b_{x}}}\), respectively. Details about this translation process for our assembly language experiments are shared in Appendix A.1. 
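Putting the Guess outputs together, creating a sketch is largely bookkeeping: flag low-confidence tokens (Eq. 2), attribute each output subsequence to the input subsequence with the largest aggregate attention mass, and replace the flagged tokens with holes. The following is a minimal Python sketch of that bookkeeping; the variable names, the span representation, and the use of the Frobenius norm as the aggregate attention score are illustrative choices rather than the paper's implementation.

```python
import numpy as np

HOLE = "<hole>"

def make_guess_sketches(y_tokens, token_probs, attn, x_spans, y_spans, gamma=0.9):
    """For each output subsequence: align it to an input span via attention mass
    and build a hole-marked sketch from its low-confidence tokens.

    attn is the averaged cross-attention matrix M of shape (|y|, |x|); spans are
    lists of (start, end) index pairs over y and x respectively."""
    errors = np.asarray(token_probs) < gamma            # E_j = 1 if p(y_j | y_<j, x) < gamma
    out = []
    for (ys, ye) in y_spans:
        # A_b: input span whose attention sub-matrix has the largest norm
        scores = [np.linalg.norm(attn[ys:ye, xs:xe]) for (xs, xe) in x_spans]
        aligned = x_spans[int(np.argmax(scores))]
        if errors[ys:ye].any():                          # only sketch flagged blocks
            sketch = [HOLE if errors[j] else y_tokens[j] for j in range(ys, ye)]
            out.append({"y_span": (ys, ye), "x_span": aligned, "sketch": sketch})
    return out
```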
Solve the sketch.To solve the sketch is to find a mapping \(\phi\) that correctly populates all holes of the partial program sequence \(s\) to satisfy the correctness specification: \(\forall d\in\mathcal{D}:P_{x_{b_{x}}}(d)=P_{\phi(s)}(d)\). If a solution populating all holes of the partial program sequences is found by the sketch solver, it is applied to \(s\) and the updated subsequence \(\phi(s)\) replaces the subsequence in the full program sequence. If the subsequence had an out-of-scope reference, the solver would have also resolved a definition of the referenced entity. The resolved referenced entity definition is also updated in the full program. In cases where a sketching solution cannot be found, Guess & Sketch resorts to the original prediction. With this approach, the correctness of Guess & Sketch is always lower-bounded by the correctness of the initial guess. This full process is summarized in Algorithm 1. ``` procedureGuess & Sketch(x) for\(y,A,E\in\) GUESS\((x)\)do\(\triangleright\) produce candidates, alignments, potential errors for\(b\) in \(\mathcal{B}_{y}\)do if\(P_{y}\equiv P_{x}\)thenreturn\(y\) if\(E_{j}\) for any \(j\in b\)then\(\triangleright\) identify potential error \(b_{x}\gets A_{b}\)\(\triangleright\) get aligned input index \(s\leftarrow\) PLACE_HOLES\((y_{b},E)\)\(\triangleright\) produce sketch sequence \(\phi\leftarrow\operatorname*{arg\,max}_{\phi}\mathbb{1}(P_{x_{b_{x}}}\equiv P_{ \phi(s)})\)\(\triangleright\) solve for solution (synthesizer) if\(\phi\) success then \(y\leftarrow\) UPDATE\((b,\phi(s))\)\(\triangleright\) update subseq. ``` **Algorithm 1**Guess & Sketch Pseudocode ## 5 Experimental Setup DatasetOur experiments focus on transpilation between real programs compiled to different ISAs, specifically the ARMv8 and RISC-V assembly languages. ARMv8 and RISC-V are both reduced instruction set architectures (ISAs), and have some similarities in instructions (Hennessy & Patterson, 2011). We construct training and evaluation datasets for this task. Training data is composed of 307,916 ARMv8 and RISC-V assembly file pairs compiled from C code files from The Stack (Kocetkov et al., 2022). All selected source C files can be independently compiled to assembly using the standard C libraries (e.g. stdlib, stdlib, stdio). The C files are compiled to both ARMv8 and RISC-V target architecture assembly files under the -O0, -O1, -O2, and -O3 optimization flags using cross-compilers aarch64-linux-gnu-gcc and riscv64-linux-gu-gcc. The resulting dataset is shared on HuggingFace3. Figure 3: Test sets for transpilation. Length is measured as number of lines in the assembly file, and is averaged across both ARMv8 and RISC-V architectures under the -O0 optimization flag. Inference of the system is evaluated on 3 different test sets, summarized in Table 3. Code is emulated in Docker images with QEMU Bellard (2005). _Project Euler_ is constructed from 45 C implementations of Project Euler mathematical challenge problems4. _Benchmarks_ is 16 C implementations of programs in The Computer Language 23.03 Benchmarks Game5. _Unix Command_s is 11 C implementations of Basic Unix commands6. 
Footnote 4: [https://github.com/eagletmt/project-euler-c](https://github.com/eagletmt/project-euler-c) Footnote 5: [https://benchmarksgame-team.pages.debian.net/benchmarksgame/index.html](https://benchmarksgame-team.pages.debian.net/benchmarksgame/index.html) Footnote 6: [https://github.com/yadu007/Basic-Unix-Commands-Implementation](https://github.com/yadu007/Basic-Unix-Commands-Implementation) For verification, all test sets are cross-compiled to the ARMv8 and RISC-V architectures under the -O0 flag. System performance is measured by execution output match. We sample the top \(100\) candidate guesses for a given full assembly file. SystemWe experiment with two different types of generative language models: a smaller transformer encoder-decoder model with a bidirectional encoder and autoregressive decoder based on the BART architecture (Lewis et al., 2019), and a larger transformer decoder-only models pre-trained on code (Li et al., 2023; Roziere et al., 2023). The first model class is trained from scratch where the second is pretrained. All language models are trained on one NVIDIA RTX A6000 GPU. The encoder-decoder models are trained for 156 hours total and the pre-trained decoder-only models are fine-tuned for 240 hours total. Pre-trained models are fine-tuned with LoRA (Hu et al., 2022). Details of training are shown in Table 4. All resulting models are shared on Huggingface 78. The \(\gamma\) value we use as the threshold for weak guesses is \(0.9\). Footnote 7: [https://huggingface.co/celinelee/bartlarge_ristcoarm_cloze2048](https://huggingface.co/celinelee/bartlarge_ristcoarm_cloze2048) Footnote 8: [https://huggingface.co/celinelee/bartlarge_armtorisc_cloze2048](https://huggingface.co/celinelee/bartlarge_armtorisc_cloze2048) The symbolic solver is built with Rosette (Torlak & Bodik, 2013), a programming language for synthesis and verification built on top of the Z3 (de Moura & Bjorner, 2008) SMT solver. BaselinesWe consider several alternate approaches to code translation and assembly transpilation. With _Few-shot learning_(Brown et al., 2020), we prompt GPT-4 (OpenAI, 2023) with instructions and a couple examplar input-output assembly pairs to obtain a transpilation for a given input assembly file. The prompt for the Few-shot experiments is composed of an instruction to translate from the specified source to the specified target architecture ISA, and \(4\) pairs of implementations in the respective source and target hardware architectures. See details of the specific prompt in Appendix D.1. _Transpilers_ are manually-engineered transpilers that convert the given source assembly to the given target assembly. These are programmatically written for the specified source-to-target-hardware, so for source-target hardware pairs for which we cannot find a transpiler, we cannot obtain numbers for this baseline. We use the engineered ArmV8-to-RISCV64 transpiler written by members of the IBM Research Haifa team 9. We did not find a transpiler from RISC-V to ARMv8. LM only methods, _FT StarCoder_(Li et al., 2023), _FT CodeLlama_(Roziere et al., 2023), _Encoder-Decoder_(Lewis et al., 2019), are the purely neural approaches to machine translation, in which we train or fine-tune a language model with the paired assembly data. The _Encoder-Decoder_ method is equivalent to just the Guess method of our approach. Footnote 9: [https://github.com/schormr/arm2riscv](https://github.com/schormr/arm2riscv) ## 6 Results and Analysis Performance of our methods on the test sets are shown in Table 1. 
Guess & Sketch outperforms all alternative approaches. The Few-shot approach, even with the largest existing language model today, GPT-4, cannot successfully perform most transpilations. Guess & Sketch even outperforms the engineered Transpiler, which fails to translate programs for which it cannot recognize even one instruction. We run several Guess-only models, comparing from-scratch training to pre-trained models. Interestingly, the fine-tuned pre-trained large language models perform much worse than even just the trained smaller encoder-decoder model. The best-performing baselines is the Encoder-Decoder approach, which we use for the full Guess & Sketch. Further experiments testing the performance gain of Guess & Sketch over the Encoder-Decoder approach on more test programs are shared in Appendix B, and support the same 10% increase in correct transpilations. Error AnalysisTable 2 classifies assembly transpilation errors under one of several categories, determined by bottleneck failure reason: mathematic, copying, ISA, references, logic, memory, and length. See descriptions of each in Appendix C and examples in Appendix C.1. The encoder-decoder model (Guess) makes few ISA mistakes, but runs into a number of errors in semantics and out-of-scope references, some of which are resolved by the solver in Guess & Sketch. However, unless the semantics of all of its erroneous subsequences are resolved, an incorrect transpilation is not corrected. That is, even though mathematically erroneous subsequences are being resolved across the examples in the test sets, if the bottleneck problem is not resolved or not all errors are properly aligned and solved, the transpilation still fails. Interestingly the other approaches fail to transpile or compile before even reaching semantics. For few-shot, the model generates invalid instructions, despite the prompt including a translation instructions as well as multiple exemplar transpilations. Fine-tuning models generate invalid assembly from pretraining despite the fine-tuning phase. On the other hand, the manually engineered transpiler is unable to process many examples at all. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{RISC-V to ARMv8} & \multicolumn{3}{c}{ARMv8 to RISC-V} \\ \cline{2-7} Method & Proj. Euler & Benchmx & Unix Cmds & Proj. Euler & Benchmx & Unix Cmds \\ \hline Few-shot (GPT4) & 11.1\% & 0 & 18.2\% & 4.44\% & 0 & 27.3\% \\ Transplinar & - & - & - & 24.4\% & 12.5\% & 54.5\% \\ FT StarCoder & 8.9\% & 0 & 36.4\% & 8.9\% & 0 & 36.4\% \\ FT CodeLaMa & 11.1\% & 0 & 36.4\% & 2.2\% & 0 & 36.4\% \\ Encoder-Decoder & 68.9\% & 6.3\% & 36.4\% & 66.7\% & 6.25\% & **81.2\%** \\ **Guess \& Sketch** & **80\%** & **18.8\%** & **81.2\%** & **75.6\%** & **25.0\%** & **81.2\%** \\ \hline \hline \end{tabular} \end{table} Table 1: Main Transpilation results on full program accuracy (Project Euler, Benchmarks, and Unix Commands test sets). 
\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & & Few-shot & Starcoder & CodeLlama & Transplier & Enc-Dec & Guess \& Sketch \\ \hline Process & Length & 2 & 7 & 7 & 0 & 6 & 6 \\ & Failure & 0 & 0 & 0 & 34 & 0 & 0 \\ \hline Compile & ISA & 62 & 50 & 57 & 0 & 2 & 2 \\ & References & 3 & 5 & 5 & 0 & 11 & 1 \\ \hline Semantics & Copying & 0 & 0 & 0 & 0 & 1 & 1 \\ & Logic & 1 & 5 & 3 & 0 & 3 & 3 \\ & Memory & 10 & 10 & 9 & 0 & 2 & 2 \\ & Math & 7 & 3 & 0 & 0 & 2 & 3 \\ \hline Correct & & 5 & 10 & 6 & 11 & 61 & 70 \\ \hline \hline \end{tabular} \end{table} Table 2: Analysis of failures by different transpilation methods. Collected on the Project Euler test set. Categories are listed in order of bottleneck precedence. Figure 4: Example outputs. Figure 4 shows two example outputs. The left shows a guess that is resolved. The language model output (bottom, left) predicts tokens for the incorrect global memory reference, highlighted in yellow. According to the model cross-attention, these tokens most align to those of the corresponding fmov instruction in the input assembly (top), highlighted in purple. However, in the predicted full assembly program, no memory location is produced with the double-word IEEE representation for the desired float 5.0e+0. After resolution with Guess & Sketch, a correct memory location is generated and the memory reference is updated (bottom, right), highlighted in green. The example on the right shows a problem that Guess & Sketch does not resolve. The LM output (bottom, left) predicts tokens for the register values with low confidence, highlighted in red. A correct solution is shown (bottom, right). The register use and logic flow is inconsistent. SamplingAside from solving more examples in the test dataset, Guess & Sketch also reduces the number of samples needed from the underlying LM. For a set of test examples, they are correctly transpiled using the encoder-decoder approach only after sufficiently many samples. Using Guess & Sketch, a handful of these are successfully transpiled with fewer samples. Table 3 shows the average number of samples from the LM used by the encoder-decoder approach and the Guess & Sketch approach during evaluation of the Project Euler test set. Examples that achieve a correct transpilation after the \(k^{th}\) sample are logged to use \(k\) samples, and examples that do not achieve a correct transpilation within \(100\) samples use \(100\) samples. ## 7 Limitations While Guess & Sketch is significantly more effective than the baseline approaches, there are still several remaining open challenges. * The Sketch method is dependent on alignment with the source sequence. If Guess fails to provide an accurate alignment than the sketch may be unable to correct the output issue. * Memory management issues are hard for the sketch solver. These include reasoning about values on the stack at any given point in the program, register choice decisions that are incorrectly propagated during autoregressive generation, and loading memory addresses into the register. * The best performing model is a mid-size encoder-decoder, which is strong at pattern matching, but likely cannot perform programmatic reasoning. Potentially larger code models could better solve some of the symbolic transpilation issues, if instruction hallucinations could be reduced. * Guess & Sketch is limited in length by the context length of generative language models. Using convolutional methods such as SLeD (Ivgi et al., 2022) could resolve these mistakes in practice. 
## 8 Conclusion In this work, we present Guess & Sketch, a neurosymbolic approach to assembly-to-assembly transpilation. Guess & Sketch extracts alignment and confidence information from a language model to guide a symbolic solver. We demonstrate the efficacy of this approach on three different test sets of assembly programs in the ARMv8 and RISC-V architectures. Future work to build on this approach is to identify and use patterns in the decoder attention of the language model that may be helpful for the solver, such as live variable analysis (Aho et al., 2006) patterns. Other future work may include transpiling to or from higher levels of code optimization and devising a mechanism to reason about more elements of the machine state, such as values on the stack. \begin{table} \begin{tabular}{l|c c} \hline \hline & \multicolumn{2}{c}{Project Euler} \\ & RISC-V to ARMv8 & ARMv8 to RISC-V \\ \hline Encoder-Decoder & 30.1 & 34.3 \\ Guess \& Sketch & **21.3** & **25.3** \\ \hline \hline \end{tabular} \end{table} Table 3: Average number of samples used by the encoder-decoder and Guess & Sketch approaches for the Project Euler test set. The range for \(k\) is \(k=[1,100]\). (Lower is better.) ## Acknowledgments Justin Chiu, Amrit Baveja, Hao Tang, Yair Schiff, Omer Gul, Kevin Ellis, Ameesh Shah, Sahil Bhatia, Adwait Godbole
2310.00280
Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration
Large Language Models (LLMs) are evolving at an unprecedented pace and have exhibited considerable capability in the realm of natural language processing (NLP) with world knowledge. Benefiting from ultra-large-scale training corpora, a single LLM can manage typical NLP tasks competently. However, its performance in executing reasoning tasks is still confined by the limitations of its internal representations. To push this boundary further, we introduce Corex in this paper, a suite of novel general-purpose strategies that transform LLMs into autonomous agents pioneering multi-model collaborations for complex task-solving. Inspired by human behaviors, Corex is constituted by diverse collaboration paradigms including Debate, Review, and Retrieve modes, which collectively work towards enhancing the factuality, faithfulness, and reliability of the reasoning process. These paradigms foster task-agnostic approaches that enable LLMs to ''think outside the box,'' thereby overcoming hallucinations and providing better solutions. Through extensive experiments across four different types of reasoning tasks, we demonstrate that orchestrating multiple LLMs to work in concert yields substantially better performance compared to existing methods. Further results and in-depth analysis demonstrate the cost-effectiveness of our method, facilitating collaboration among different LLMs and promoting annotation efficiency.
Qiushi Sun, Zhangyue Yin, Xiang Li, Zhiyong Wu, Xipeng Qiu, Lingpeng Kong
2023-09-30T07:11:39Z
http://arxiv.org/abs/2310.00280v3
# Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration ###### Abstract Large Language Models (LLMs) are evolving at an unprecedented pace and have exhibited considerable capability in the realm of natural language processing (NLP) with world knowledge. Benefiting from ultra-large-scale training corpora, a single LLM can manage typical NLP tasks competently. However, its performance in executing complex reasoning tasks is still confined by the limitations of its internal representation. To push this boundary further, we introduce Corex in this paper, a suite of novel general-purpose strategies that transform LLMs into autonomous agents, pioneering multi-model collaborations for complex task-solving. Inspired by human behaviors, Corex is constituted by diverse collaboration paradigms including Debate, Review, and Retrieve modes, which collectively work towards enhancing the factuality, faithfulness, and reliability of the reasoning process. These paradigms foster task-agnostic approaches that enable LLMs to "think outside the box," thereby overcoming hallucinations and providing better solutions. Through extensive experiments across four different types of reasoning tasks, we demonstrate that orchestrating multiple LLMs to work in concert yields substantially better performance compared to existing methods. Further results and in-depth analysis demonstrate the cost-effectiveness of our method, facilitating collaboration among different LLMs and promoting annotation efficiency1. Footnote 1: Code and data will be available at this link. _"A problem shared is a problem halved."_ --English Proverb ## 1 Introduction Large Language Models (LLMs) have succeeded in advancing the state-of-the-arts for a series of Natural Language Processing (NLP) tasks (Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023; Touvron et al., 2023; Zhao et al., 2023a, _inter alia_). Recent research (Wei et al., 2022a) indicates that scaling up models (Kaplan et al., 2020) can yield improvements in both performance and sample efficiency across a broad spectrum of downstream tasks. Notwithstanding their remarkable proficiency in language understanding and instruction following (Ouyang et al., 2022), the reasoning abilities of LLMs, often seen as a hallmark for assessing their potential, still present challenges (Suzgun et al., 2023; Huang and Chang, 2023). Concurrently, there is a prevailing view that merely increasing the size might not adequately address their inherent limitations in solving reasoning tasks (Rae et al., 2022). In response to this challenge, Wei et al. (2022b) put forth chain-of-thought (CoT) prompting that an LLM generates a series of intermediate steps toward a final answer, contrasting the use of "answer-only" prompts. Subsequently, various approaches have been put forward, such as self-consistency decoding (Wang et al., 2023d) which utilizes a majority voting mechanism to determine the final answer, and program-aided language models (PAL; Gao et al., 2022; Chen et al., 2022a) that leverage code generation to reduce errors in computations. Besides, curated prompts necessitate task-specific designs (Zheng et al., 2023a) have also been utilized to elicit more accurate predictions. Nevertheless, these approaches are confined within a static black box (Yao et al., 2023b), wherein the LLM relies exclusively on its internal representation for generating responses and is prone to generating unreliable answers (Ji et al., 2023; Yin et al., 2023). 
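As a rough illustration of the single-model strategies summarized above (not of Corex itself), the sketch below implements self-consistency voting and PAL-style program execution around a generic `llm(prompt, temperature)` completion callable; the prompts and the crude final-answer extraction are assumptions made for illustration.

```python
# Minimal sketches of CoT self-consistency and PAL, assuming an `llm` callable
# that returns a text completion for a prompt at a given sampling temperature.
from collections import Counter

def self_consistency(llm, question: str, k: int = 10) -> str:
    """Sample k reasoning chains and majority-vote their final answers."""
    answers = []
    for _ in range(k):
        chain = llm(f"Q: {question}\nLet's think step by step.", temperature=0.7)
        answers.append(chain.strip().splitlines()[-1])  # crude final-answer extraction
    return Counter(answers).most_common(1)[0][0]

def pal(llm, question: str) -> str:
    """PAL: ask the model for a program, then offload execution to the interpreter."""
    program = llm(
        f"Write a Python function solution() that returns the answer.\nQ: {question}",
        temperature=0.0,
    )
    scope: dict = {}
    exec(program, scope)  # no sandboxing here; a real system should isolate execution
    return str(scope["solution"]())
```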
These shortcomings underscore that relying solely on crafting decoding strategies and specialized prompts may not serve as a silver bullet for addressing complex reasoning tasks (Qiao et al., 2023). Alternatively, enabling models to "think outside the box" emerges as a promising yet underexplored pathway. Within the realm of well-established sociological concepts, when multiple cognitive processes interact and cooperate, they produce a combined effect that is greater than the sum of their individual contributions (Luppi et al., 2022). This principle is echoed within artificial intelligence (Li et al., 2023). Although the study of intelligent agents has been explored for decades (Minsky, 1988; 2007), the advent of LLMs has rejuvenated interest and introduced novel challenges in this domain. An emerging perspective is that encouraging collaboration and communication between models could potentially pave the way for a new stage of enhancing complex reasoning capabilities. In this study, we propose _Corex_, a suite of human-inspired strategies that leverage multi-model _collaboration_ to elicit reasoning for _complex_ task-solving. To facilitate synergies between models, we first assign distinct personas to different models, followed by the design of various collaborative paradigms. This collective intelligence-based method aims to conquer prevalent obstacles in the current landscape of reasoning, as exemplified in Figure 1. It also endeavors to alleviate common issues observed in majority voting-based methods like self-consistency, where accurate responses might be overwhelmed by incorrect ones and which incur exorbitant costs. To be specific, _Corex_ configures LLMs as a group of autonomous agents, adopting the paradigms shown in Figure 2 for multi-model collaboration: (1) Debate, utilizing group-based debates among models to effectively enhance the factuality (Du et al., 2023) of generated content and minimize fallacies and hallucinations; (2) Review, enabling models to scrutinize reasoning chains or generated code from their counterparts to ensure the correctness of generated contents, coupled with potential refinements; (3) Retrieve, enabling a model to identify the most faithful option from a pool of candidate chains, which facilitates a higher degree of alignment with the final response. The comparison between _Corex_ and recent works is listed in Table 1, where our approach is task-agnostic, requiring no prior knowledge or iterative processes during the reasoning phase, which makes it broadly applicable to a wide array of scenarios. We conduct extensive experiments across four types of tasks: mathematical reasoning, symbolic reasoning, commonsense reasoning, and semi-structured reasoning. The results illustrate that our method achieves substantial performance gains over previous strong baselines. Moreover, each mode distinctly excels in different categories of tasks, showcasing its specific strengths. Figure 1: A depiction of three prevalent errors observed across LLMs when employing _CoT_ and _PAL_ to conduct reasoning tasks. Figure 2: An intuitive illustration of _Corex_, which employs LLMs as agents to collaboratively solve a problem. The strategies encompass the Debate, Review, and Retrieve modes, leveraging both the reasoning process and code synthesis. This framework facilitates interactions between models that foster a collaborative environment for the derivation of a well-reasoned answer. Further analysis
reveals that, compared to existing schemes based on majority voting and curated prompts, Corex significantly reduces the reasoning overhead of the models, achieving cost-effectiveness. ## 2 Related works **Chain-of-Thought Prompting Elicits LLM Reasoning.** Chain-of-Thought (CoT; Wei et al., 2022) prompting, as one of the celebrated capabilities of recent LLMs, is a pivotal breakthrough for performing complex multi-step reasoning when provided with limited examples. Further variants show that CoT can be improved by adding certain "magic phrases" (Kojima et al., 2022), automated demonstration construction (Zhang et al., 2023), reasoning in different modalities (Zhang et al., 2023; Yang et al., 2023; Yao et al., 2023), and applying modular approaches (Khot et al., 2023). For robustness, researchers transform problems into interleaved reasoning chains (Zhou et al., 2023; Lyu et al., 2023) or adopt ensembling (Wang et al., 2022). Notably, self-consistency methods (Wang et al., 2023), which select answers from multiple reasoning paths by majority voting, have greatly elevated the performance of LLMs in complex reasoning. This approach has been further optimized by utilizing prompts with higher complexity (Fu et al., 2023). Lately, Yao et al. (2023) employ heuristic-guided search on "trees" constructed from thoughts to assist LLMs in navigating the problem space. **External Knowledge & Tool Utilization for LLM Reasoning.** While LLMs exhibit significant capabilities, they are limited by a lack of real-world grounded experience (Petroni et al., 2020) and an inability to grasp complex arithmetic reasoning, given that their training is exclusively based on written text. Thus, researchers have started utilizing external knowledge to assist models in accomplishing reasoning tasks (Nakano et al., 2022; Schick et al., 2023). For enhanced factuality and faithfulness, He et al. (2022) and Wang et al. (2023) make use of external knowledge bases. Lately, Gao et al. (2023) ensure the factual correctness and verifiability of generated text by providing cited passages. Another line is to delegate reasoning tasks to external tools (Qin et al., 2023), which are commonly used for addressing numerical problems. One of the representatives is the program-aided language model (Gao et al., 2022), known as PAL2. Such an approach utilizes LLMs to interpret NL problems, generating programs as intermediate reasoning steps (Chen et al., 2022) that will be offloaded to a Python interpreter for execution to get final solutions (Ni et al., 2023). This method transforms reasoning into an NL2Code (Zan et al., 2023) task and has been demonstrated to excel when dealing with larger, non-integer numbers and enabling error corrections (Olausson et al., 2023). Beyond synthesizing programs, Liu et al. (2023) integrate a computational physics engine into the language modeling process for simulation. Moreover, _Chameleon_ (Lu et al., 2023) augments LLMs by incorporating both tools and knowledge resources like web engines and image captioners. Footnote 2: The idea of integrating LLMs with an external PL interface was proposed by Gao et al. (2022) and Chen et al. (2022) within the same timeframe. We refer to this approach as "PAL" in this paper. **Multi-Model Synergy for Task Solving.** Utilizing multiple LLMs collectively to solve problems is still in its preliminary stages, with a wealth of opportunities awaiting exploration.
The cornerstone of collaboration is constructing a human-like reasoning architecture (Zhu et al., 2023) for LLMs under different environments (Liu et al., 2023). Fu et al. (2023) investigate whether multiple LLMs can autonomously enhance their performance through mutual interactions. Du et al. (2023) and Liang et al. (2023) explore enhancing the factuality of specific tasks, e.g., translation and arithmetic reasoning, by facilitating "debates" among multiple models. LLMs' collaboration has also been applied to software development (Qian et al., 2023) and text evaluation (Chan et al., 2023) by assigning identities to models to simulate the development process. Furthermore, from the perspective of social intelligence, inducing cognitive synergy and having LLMs take on different characters (Wang et al., 2023) during task execution has been proven to have significant potential (Sclar et al., 2023). Recently, the nascent exploration into artificial societies (Park et al., 2023) also seeks to harness collective intelligence to emulate the efficiency of human social structures (Li et al., 2023; Webb et al., 2023). \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Feature** & \begin{tabular}{c} **Corex** \\ (our work) \\ \end{tabular} & \begin{tabular}{c} **MAD** \\ (Liang et al., 2023) \\ \end{tabular} & \begin{tabular}{c} **PHP** \\ (Zheng et al., 2023) \\ \end{tabular} & \begin{tabular}{c} **CoK** \\ (Wang et al., 2023) \\ \end{tabular} & \begin{tabular}{c} **ToT** \\ (Yao et al., 2023) \\ \end{tabular} \\ \hline Task Agnostic? & ✓ & ✗ & ✗ & ✓ & ✓ \\ Multiple Chains? & ✓ & ✗ & ✗ & ✓ \\ Multiple LLMs? & ✓ & ✓ & ✗ & ✗ & ✗ \\ Task Delegation? & ✓ & ✗ & ✗ & ✗ \\ Reference Free? & ✓ & ✓ & ✓ & ✗ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: A comparison of Corex to other recent prompting strategies. ## 3 Corex We introduce the three main components of Corex in this section, namely the Debate, Review, and Retrieve modes. Let us assume a set of LLM-based agents \(\{A_{1},A_{2},\ldots,A_{n}\}\) participating in multi-model collaboration. Each agent \(A_{i}\) generates the corresponding reasoning chain \(c_{i}\) and its prediction \(p_{i}\) when facing a query \(q\). ### Debate In Debate mode, our agents are divided randomly into two groups, the Red Team and the Blue Team, with one reserved as a judge denoted as \(A_{j}\). The debate process within one team involves several rounds, limited to a maximum of \(T\) rounds of communication. In each round \(t\) (\(t=1,2,\ldots,T\)), the agents engage in iterative discussions3 to refine their reasoning chains and predictions. This dynamic interaction \(g\) allows for the continual modification of viewpoints, as expressed by \(c_{i}^{t}=g(q,c_{i-1}^{t},\ldots,c_{i-k}^{t})\) and predictions \(p_{i}^{t}\). Footnote 3: Due to the context length limit of GPT-3.5-Turbo, only information from the previous round is stored during the debate process. Each team then presents their refined predictions \(p_{\text{red}}^{t}\) and \(p_{\text{blue}}^{t}\) at the end of each round. If both teams consistently agree throughout the debate process, i.e., \(p_{\text{red}}^{t}=p_{\text{blue}}^{t}\), the debate concludes smoothly. However, in the instance of a discrepancy between the teams' predictions, every output from each round is presented to the judge \(A_{j}\). The judge employs a decision-making process \(h\), evaluating the quality and reliability of the reasoning chains and predictions from each round of the debate.
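As a rough illustration of this group-debate loop, the sketch below wires the two teams and the judge together; the `red_team`/`blue_team` agent callables, the `judge` call, and the answer-extraction helper stand in for persona-conditioned LLM calls and are assumptions made for illustration rather than the released implementation.

```python
# Hedged sketch of the Debate mode: teams refine their chains for at most T rounds;
# if their final predictions disagree, a judge selects among the per-round outputs.
def debate(query, red_team, blue_team, judge, T: int = 5):
    history = []
    red_chains = [None] * len(red_team)    # teammates' previous-round chains
    blue_chains = [None] * len(blue_team)
    for t in range(T):
        # Each team discusses internally, conditioning on the previous round only.
        red_chains = [agent(query, context=red_chains) for agent in red_team]
        blue_chains = [agent(query, context=blue_chains) for agent in blue_team]
        p_red = extract_answer(red_chains[-1])
        p_blue = extract_answer(blue_chains[-1])
        history.append((red_chains[-1], p_red, blue_chains[-1], p_blue))
        if p_red == p_blue:                # agreement ends the debate early
            return p_red
    # Discrepancy: every round's outputs are handed to the judge h(.)
    return judge(query, history)

def extract_answer(chain: str) -> str:
    """Placeholder parser for the final answer line of a reasoning chain."""
    return chain.strip().splitlines()[-1]
```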
The final conclusion is determined by \(h(c_{\text{red}}^{t},p_{\text{red}}^{t},c_{\text{blue}}^{t},p_{\text{blue}}^{t})\) across all rounds, ensuring a comprehensive assessment and a more informed final decision. Diverging from previous works (Liang et al., 2023; Du et al., 2023; Xiong et al., 2023), the Debate mode of Corex adopts the concept of group discussions to enhance the factuality of reasoning chains. We opt not to facilitate models in jointly debating their reasoning processes to converge on a single common answer for several reasons: (1) the context length limitations inhibit the ability to fully hold the entire debate process; (2) despite the tendency of debates to converge to single final answers, these outcomes are not always correct due to incorrect consensus or prevalent biases (Wang et al., 2023c); (3) given the performance gaps among various LLMs, there is a risk of strong models "monopolizing" the debate, thereby overshadowing the insights from others. Therefore, we aim to preserve both the factuality and the diversity of thoughts among agents and ensure stability throughout the debate process. Figure 3: Illustration of 2 rounds of debate, reasoning chains between agents omitted. ### Review Within the scope of reasoning, both CoT and PAL are effective methods with distinct strengths. Grounded in natural language, CoT-based methods stand out for the generality and the clarity of explanations. In contrast, facilitated by programs, PAL guarantees computational accuracy (Zhao et al., 2023b). However, they both exhibit drawbacks due to the reliance on LLMs' internal representations. For CoT and its variants, issues are twofold: (1) cumulative errors, where mistakes tend to amplify and propagate throughout the reasoning chain; and (2) a plateau in text quality that cannot be substantially improved through prompting (Xu et al., 2022; Li et al., 2023b). Alternatively, PAL faces its own challenges: (1) LLMs might misinterpret questions, which inadvertently results in technically correct yet misguided programs; and (2) generated code is not always error-free: LLMs may write buggy code, such as referencing undefined variables or engaging in "Division by Zero" operations. Inspired by recent efforts in LLM peer rating (Zheng et al., 2023b) and collaborative coding practices prevalent in software engineering, we introduce the Review mode to address the aforementioned issues through collaboration. To be specific, a single agent \(A_{p}\) is randomly selected to act as the primary agent. Initially, \(A_{p}\) takes the responsibility of formulating corresponding reasoning chains for \(q\) along with the prediction, and crafting code if required. This initial collection of solutions is represented as \(S_{p}^{(0)}=\{a_{p},c_{p},m_{p}\}\), where \(a_{p}\), \(c_{p}\), and \(m_{p}\) signify the answer, reasoning chain, and code respectively. \(S_{p}^{(0)}\) is then subjected to iterative reviews by the other agents, which function as reviewers in a sequential manner, rigorously scrutinizing both the reasoning chain and the code formulated by \(A_{p}\) or modified by preceding reviewers. It is crucial to highlight that each reviewer receives input from its predecessors, signifying that each subsequent review is grounded on the outcomes and feedback of the preceding ones, fostering a progressively refined solution.
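A minimal sketch of this sequential review loop is given below (the formalization follows); the `primary(...)` and `reviewer(...)` callables stand in for persona-conditioned LLM calls assumed for illustration, and any code execution should be sandboxed in practice.

```python
# Hedged sketch of the Review mode: a primary agent drafts an answer, reasoning
# chain, and (optionally) code; the remaining agents review sequentially, each
# receiving the previous reviewer's feedback.
def review_mode(query, primary, reviewers, needs_code: bool = False):
    solution = primary(query, needs_code=needs_code)   # S_p^(0) = {answer, chain, code}
    feedback = None
    for reviewer in reviewers:                          # sequential review iterations
        solution, feedback = reviewer(query, solution, feedback)
    if needs_code and solution.get("code"):
        scope: dict = {}
        exec(solution["code"], scope)                   # offload the final code to the interpreter
        return scope.get("answer", solution["answer"])  # assumes the program sets `answer`
    return solution["answer"]
```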
The reviewing process is formalized as \(S_{p}^{(i+1)}=R_{i}(S_{p}^{(i)},F_{i})\), where \(R_{i}\) encapsulates the review outcome at the \(i^{th}\) iteration and \(F_{i}\) represents the feedback received. In essence, the solution set \(S_{p}^{(i+1)}\) results from an enhancement of its preceding version \(S_{p}^{(i)}\), informed by the feedback \(F_{i}\). Following the completion of all review iterations, the outcome is determined by the final iteration of the solution set \(S_{p}^{(n-1)}\). Specifically, the final prediction \(a_{p}^{(n-1)}\) is chosen as the answer for \(q\), and in instances where code is involved, the last revised version \(m_{p}^{(n-1)}\) is executed by a Python interpreter to produce the outcome. Figure 4: Illustration of reviewing erroneous code generated by other agents (first round). ### Retrieve In the final thread of work, we delve into the Retrieve mode to identify the most faithful answer through collaborations. While previous strategies based on the majority voting mechanism (Wang et al., 2023d; Fu et al., 2023c) can mitigate the low-diversity issue of techniques such as beam-search (Li & Jurafsky, 2016), they still present the following two significant challenges: (1) correct answers risk being swayed by incorrect ones; (2) despite facilitating a notable enhancement in performance, they exponentially escalate the computational burden and tend to reach a performance "saturation point" as the sampled chains increase. We attribute these drawbacks to the limited scope of majority voting techniques that singularly prioritize the prediction while overlooking the faithfulness of reasoning chains (Li et al., 2023c). In response, we propose the Retrieve mode, a paradigm specifically engineered to evaluate whether the answer can be expressed by the content (explanation) generated during reasoning (Jacovi & Goldberg, 2020; Lanham et al., 2023). Concretely, given a query \(q\), we randomly select an agent \(A_{r}\) from the pool of \(n\) agents to act as the retriever. The remaining agents \(\{A_{1},A_{2},\ldots,A_{n-1}\}\) independently perform CoT reasoning about \(q\). Each of these agents derives its own reasoning chain \(c_{i}\) and corresponding prediction \(p_{i}\). Together, they form a candidate pool, denoted by \(\mathcal{P}=\{(c_{i},p_{i})\}_{i=1}^{n-1}\). The retriever \(A_{r}\) then scrutinizes the candidates in \(\mathcal{P}\). For each \((c_{i},p_{i})\), \(A_{r}\) evaluates the faithfulness between \(c_{i}\) and \(p_{i}\). Based on this assessment, the retriever assigns a confidence score \(s_{i}\) in the range \([0,1]\), denoted as \(s_{i}=f_{r}(c_{i},p_{i})\), where \(f_{r}\) indicates the retriever's evaluation process. After that, the most faithful response to the question \(q\) is determined by the highest confidence: \[({c^{\star}},{p^{\star}})=\operatorname*{argmax}_{(c_{i},p_{i})\in\mathcal{P}}s_{i}\] Here, \(({c^{\star}},{p^{\star}})\) denotes the chain-prediction pair that the retriever considers most faithful, which will serve as the final answer for the query \(q\). Retrieve mode enables the selection of the most aligned combination of reasoning chains and answers from a diversified candidate pool. Figure 5: Illustration of retrieving faithful chains with answers.
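A minimal sketch of the Retrieve mode just described is shown below; the `retriever.score_faithfulness` call stands in for the retriever's LLM-based scoring prompt and, like the agent callables, is an assumption made for illustration.

```python
# Hedged sketch of the Retrieve mode: n-1 agents produce (chain, prediction)
# candidates, and a retriever scores how faithfully each prediction is supported
# by its own chain, returning the argmax pair.
def retrieve_mode(query, agents, retriever):
    # Each remaining agent returns its reasoning chain and final prediction.
    candidates = [agent(query) for agent in agents]            # [(chain, pred), ...]
    scored = [
        (retriever.score_faithfulness(query, chain, pred), chain, pred)
        for chain, pred in candidates
    ]
    confidence, best_chain, best_pred = max(scored, key=lambda item: item[0])
    return best_pred, best_chain, confidence
```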
Distinct from previous text quality assessment methods, which rely on the log probability of sequences (Adiwardana et al., 2020), a quantity that is computationally inefficient to obtain and often unavailable for commercial LLMs, our approach is entirely predicated on model-to-model interactions (Chen et al., 2023) and is reference-free. ## 4 Experiment ### Experimental Setup **Tasks and Datasets.** We evaluate the effectiveness of Corex across four types of reasoning tasks: (1) Arithmetic reasoning over eight mathematical datasets, which include GSM8K (Cobbe et al., 2021), MultiArith (Roy and Roth, 2015), SingleOP/SingleEQ (Koncel-Kedziorski et al., 2016), AddSub (Hosseini et al., 2014), AQuA (Ling et al., 2017), SVAMP (Patel et al., 2021) and GSM-Hard (Gao et al., 2022). (2) Commonsense reasoning covering four datasets, including StrategyQA (Geva et al., 2021), CommonsenseQA (CSQA; Talmor et al., 2019), BoolQ (Clark et al., 2019) and the AI2 Reasoning Challenge (ARC-c) (Clark et al., 2018). (3) Symbolic reasoning incorporating four tasks derived from BigBench (bench authors, 2023; Suzgun et al., 2023), including Date Understanding, Penguins in a Table, Colored Objects, and Repeat Copy. (4) Semi-structured understanding, with a focus on FinQA (Chen et al., 2021), ConvFinQA (Chen et al., 2022) and TAT-QA (Zhu et al., 2021). The detailed description and statistics of tasks are listed in Appendix D. **Baselines.** We compare our method with several widely used strong baselines. (1) Chain-of-Thought prompting (CoT; Wei et al., 2022). (2) Self-Consistency (CoT-SC; Wang et al., 2023), which employs a majority voting mechanism to select the most consistent answer from several reasoning chains as the final answer. (3) Complexity-based consistency (ComplexCoT; Fu et al., 2023), which selects the majority answer from the candidates with higher reasoning complexity. (4) Program-aided language model (PAL; Gao et al., 2022; Chen et al., 2022), which uses LLMs to generate programs as intermediate reasoning steps, while offloading the computation to a Python interpreter. For simplicity and ease of understanding, we denote CoT-SC(x) and ComplexCoT(x) in our experiments and analysis to represent cases utilizing different reasoning paths, where "x" indicates the number of output chains. For all baseline methods, we adhere to the few-shot exemplars to ensure fair comparisons. Details can be found in Appendix B. **Implementation Details.** We access OpenAI and Anthropic models through their respective APIs. Specifically, we employ GPT-3.5-Turbo-0613 for evaluating both Corex and baseline methods in the main experiments. Moreover, in further experiments and analysis involving different LLMs for collaboration, we also incorporate the use of GPT-4-0613 and Claude-Instant-1.2. The details of prompts and hyperparameter settings for both baselines and Corex are in Appendix F. ### Main Results We report the results of Corex over four categories of tasks. For each kind of task, the best results are highlighted in **bold** and the second best results are marked with underline. For Review mode, we use Corex-Review\({}_{\text{NL}}\) and Corex-Review\({}_{\text{Code}}\) to describe the scenarios that use CoT or PAL respectively. All modes within Corex are configured to operate with 5 LLM-based agents, ensuring favorable cost-effectiveness. For Corex-Debate, the upper bound of debate rounds is set to 5. **Mathematical Reasoning.** Table 2 shows the results across arithmetic tasks with varying difficulties.
Our method achieves notable performance improvements on most benchmarks. Broadly, we surpass the performance of CoT-SC(10) when only 5 agents are involved. Moreover, given the task-agnostic nature of Corex, it can tackle highly complex computational challenges like GSM-Hard through code synthesis. For problems of relatively lower complexity, the Retrieve mode can identify answers superior to those from majority voting. **Commonsense Reasoning.** Table 3 showcases the performance of Corex in commonsense and factual reasoning tasks4. We can observe that various modes contribute to performance enhancements. Footnote 4: Due to the nature of commonsense reasoning tasks, the Review mode only utilizes NL reasoning chains. Notably, our approach surpasses ComplexCoT (over 6% on StrategyQA), achieving a significant improvement without resorting to intricate prompt design and example selection. Symbolic Reasoning.We report the results for symbolic reasoning in Table 4. Empirical evidence substantiates that adopting multi-model collaboration can notably outperform most previous baselines on Big-Bench tasks. It is noteworthy that (1) CoT-SC struggles to ensure consistent outputs on the Repeat Copy. Conversely, through the integration of PAL-based collaboration, we manage to attain a remarkably high level of accuracy. (2) Compared to majority voting, both the Review and Retrieve modes enable more judicious answer selection in counting tasks. Semi-structured Reasoning.We demonstrate the results on FinQA and ConvFinQA in Table 5. It can be observed that for these two challenging tasks which require understanding heterogeneous information and performing calculations simultaneously (Lu et al., 2023b), methods such as CoT-SC offer limited gains. However, through various cooperative paradigms, \begin{table} \begin{tabular}{l c c c c c} \hline \hline & Date & Penguin & Colored Objects & Repeat Copy & Avg. \\ \hline CoT & 82.0 & 81.5 & 88.0 & 43.8 & 73.8 \\ CoT-SC(10) & **87.9** & 86.2 & 94.8 & 53.1 & 80.5 \\ PAL & 81.2 & 91.3 & 86.8 & 93.8 & 88.3 \\ \hline Corex-Debate & 83.2 & 85.9 & 91.2 & 62.5 & 80.7 \\ Corex-ReviewNL & 84.0 & 92.0 & 92.4 & 59.4 & 82.0 \\ Corex-ReviewCode & 82.7 & **93.3** & 91.6 & **96.9** & **91.1** \\ Corex-Retrieve & 84.6 & 92.6 & **95.6** & 68.8 & 85.6 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of accuracy on five symbolic reasoning datasets from Big-Bench (bench authors, 2023; Suzgun et al., 2023) using various Corex modes and other strong baselines. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & GSM8K & SVAMP & MultiArith & SingleOP & SingleEQ & AddSub & GSM-Hard & Avg. \\ \hline CoT & 74.5 & 78.9 & 98.5 & 94.1 & 93.3 & 87.8 & 39.0 & 80.9 \\ ComplexCoT & 79.7 & 80.7 & 97.3 & 94.3 & 92.3 & 86.8 & 39.7 & 81.5 \\ CoT-SC(10) & **82.8** & 84.5 & **99.8** & 95.4 & 95.1 & 89.6 & 45.2 & 84.6 \\ PAL & 76.0 & 83.4 & 96.7 & 90.7 & 95.8 & 87.6 & 62.1 & 84.6 \\ \hline Corex-Debate & 76.2 & 82.6 & 98.7 & 94.8 & 93.7 & 89.7 & 45.9 & 83.1 \\ Corex-ReviewNL & 80.3 & 83.2 & 99.5 & 95.0 & 94.3 & 89.4 & 50.8 & 84.6 \\ Corex-ReviewCode & 79.2 & **85.8** & 98.3 & 93.6 & **96.9** & 89.6 & **63.6** & **86.7** \\ Corex-Retrieve & 82.5 & 85.6 & **99.8** & **96.1** & 96.6 & **90.9** & 53.0 & 86.3 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of accuracy on seven mathematical reasoning datasets using various Corex modes and strong baselines. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & StrategyQA & CSQA & OpenBookQA & BoolQ & ARC-c & Avg. 
\\ \hline CoT & 65.3 & 76.7 & 82.6 & 65.1 & 84.2 & 74.8 \\ ComplexCoT & 63.1 & 77.5 & - & - & - & - \\ CoT-SC(10) & 67.1 & 78.1 & 85.2 & 66.6 & 85.7 & 76.5 \\ \hline Corex-Debate & 68.4 & **78.9** & 83.4 & 66.9 & **86.3** & 76.8 \\ Corex-ReviewNL & 66.9 & 77.4 & 84.8 & 66.9 & 86.0 & 76.4 \\ Corex-Retrieve & **69.3** & 77.7 & **87.6** & **68.0** & 85.5 & **77.6** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of performance on commonsense & factual reasoning between various Corex modes and strong baselines. significant performance improvements can be achieved. Due to the context length restriction of GPT-3.5-Turbo, our experiments on TAT-QA utilized GPT-3.5-Turbo-16k, with the respective results being detailed in Appendix C.1, alongside the evaluations on the other tasks. Following our extensive experiments across 18 tasks, it emerges that the Debate mode is competent for tasks utilizing factual knowledge. For mathematical and counting tasks, the Review mode serves to effectively mitigate errors within the reasoning chains and repair flawed code. Across various tasks, the Retrieve mode consistently facilitates performance improvements to varying degrees. ## 5 Analysis In this section, we first aim to make the collaboration process transparent by delving into models' internal behaviors. Then, the influence of different backbones is examined to observe how model capability affects performance. Further, we assess the efficiency of Corex. ### In-Depth Analysis of Corex Strategies Analysis of Interaction Rounds in Debate Mode.We study the number of rounds of communication in the Debate mode of Corex on five tasks, as depicted in Figure 6. Consensus can be reached swiftly for the majority of problems by each team. However, Corex enables LLMs to engage in more exhaustive discussions for problems that are challenging to reach a consensus on (e.g., over 10% of ConvFinQA problems requiring more than 3 rounds), a small proportion of problems require more interactions. Through observation, we also notice that the Debate mode exhibits favorable convergence properties, wherein the interactive process serves as a basis for the judge's decision-making. Performance Enhancement per Review.We explore the incremental performance gains achieved in specific tasks with each review cycle in the Review mode. As is demonstrated in Figure 7, we conduct analyses for Repeat Copy and GSM8K with ReviewCode, as long as BoolQ and Penguin with ReviewNL. The findings indicate that each review contributes to performance enhancement in general, yet occasional deviations leading to performance oscillations are also observed. ### Synergies between Different LLMs Performance Variability with Diverse LLMs as Judges.The backbone LLMs of our agents can be diverse. In this part, we discuss the performance variations when employing different LLMs during the debate process. As shown in Figure 8, we deploy GPT-3.5-Turbo as debaters and examine the dynamics when different LLMs take the role of judges. The observations indicate that the capability of the judge positively correlates with task performance, with this relationship being evident as the complexity of tasks escalates. Empirically, This can be attributed to the judge's role in the debate process, which requires understanding both the question and the reasoning process of both parties. Utilizing Different LLMs as Retrievers.In Retrieve Mode, the role of the retriever can be played by various LLMs. 
Based on the candidate answers from GPT-3.5-Turbo agents, we explore here the impact of model selection on the performance, as depicted in Figure 9. Unlike the Debate mode, our analysis reveals that the model capabilities exert a modest effect on the performance. Given that the performance upper bound is determined by the candidates' capabilities, the outcomes using different LLMs as retrievers show minimal variance on tasks like ARC-c. Notably, our findings indicate that without the need for especially potent models as retrievers, we can still achieve favorable results. ### Cost-Effectiveness of Multi-Model Collaborations By encouraging collaboration between LLMs, we manage to reduce the costs associated with reasoning tasks while achieving comparable or even superior performance. Our analysis conducted on AddSub, illustrated in Figure 10, reveals that all three modes of Corex consistently match or surpass the performance of other strong baselines. Significantly, the computational costs of our approach are substantially diminished in comparison to methods using majority voting. In achieving equivalent performance, the resource consumption of Corex is confined to a mere 5-10% of that expended by other strategies. To substantiate the generality, we've provided additional experiments in Appendix C.2, which further demonstrate a similar trend. Beyond the efficiency of computational costs, another advantage of Corex is its annotation efficiency, which reduces the reliance on curated demonstrations. Further experiments with varying numbers of demonstrations on this aspect can be found in Appendix C.3. Figure 10: Cost-effectiveness analysis. The x-axis represents the computational costs, calculated in terms of input/output tokens, while the size of each dot is proportional to the avg. number of inferences by each method. ## 6 Conclusion We introduce Corex in this paper, a suite of strategies that transform LLMs into autonomous agents, thereby leveraging multi-model collaboration for complex reasoning. This offers a preliminary exploration into LLM-based multi-model ecosystems. Through unlocking the synergies among LLMs, Corex empowers reasoning with enhanced factuality, faithfulness, and reliability through various collaboration paradigms. We conduct extensive evaluations across 18 tasks within 4 categories, and the results demonstrate superior performance compared to previous solutions. Moreover, our methods also exhibit multiple notable advantages, including being task-agnostic, cost-effective, and annotation-efficient. We hope that this work may serve as a foundation for further research, offering novel perspectives in complex reasoning, collective intelligence, and autonomous agents.
2310.20150
Unlearn What You Want to Forget: Efficient Unlearning for LLMs
Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data, however, this process might suffer from privacy issues and violations of data protection regulations. As a result, the ability to easily remove data related to individual users from such models while not deteriorating their predictive quality after the removal becomes increasingly important. To address these issues, in this work, we propose an efficient unlearning framework that could efficiently update LLMs without having to retrain the whole model after data removals, by introducing lightweight unlearning layers learned with a selective teacher-student objective into the transformers. In addition, we introduce a fusion mechanism to effectively combine different unlearning layers that learns to forget different sets of data to handle a sequence of forgetting operations. Experiments on classification and generation tasks demonstrate the effectiveness of our proposed methods compared to the state-of-the-art baselines.
Jiaao Chen, Diyi Yang
2023-10-31T03:35:59Z
http://arxiv.org/abs/2310.20150v1
# Unlearn What You Want to Forget: Efficient Unlearning for LLMs ###### Abstract Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data, however, this process might suffer from privacy issues and violations of data protection regulations. As a result, the ability to easily remove data related to individual users from such models while not deteriorating their predictive quality after the removal becomes increasingly important. To address these issues, in this work, we propose an efficient unlearning framework that could efficiently update LLMs without having to retrain the whole model after data removals, by introducing lightweight unlearning layers learned with a selective teacher-student objective into the transformers. In addition, we introduce a fusion mechanism to effectively combine different unlearning layers that learns to forget different sets of data to handle a sequence of forgetting operations. Experiments on classification and generation tasks demonstrate the effectiveness of our proposed methods compared to the state-of-the-art baselines1. Footnote 1: The codes are avaiable here: [https://github.com/SALT-NLP/Efficient_Unlearning/](https://github.com/SALT-NLP/Efficient_Unlearning/) ## 1 Introduction Utilizing Large Language Models (LLMs) has become the dominant paradigm for various NLP applications Brown et al. (2020); Chowdhery et al. (2022); Kojima et al. (2022); Ouyang et al. (2022); Brown et al. (2020); Radford et al. (2019); Lewkowycz et al. (2022); Qin et al. (2023); Touvron et al. (2023) as LLMs memorize a vast amount of knowledge during pre-training or fine-tuning on a wide range of textual data Brown et al. (2020); Radford et al. (2019); Hoffmann et al. (2022); Webson and Pavlick (2022); Min et al. (2022); Liang et al. (2022); Carlini et al. (2022). However, these data could contain sensitive information such as names, phone numbers, email addresses, and private clinical notes Jang et al. (2022); Kurmanji et al. (2023); Kumar et al. (2022).Extensive studies showed that LLMs could generate private information such as the Editor-in-Chief of MIT Technology Review including his family members, work address, and phone number Carlini et al. (2022). Recently, the EU's General Data Protection Regulation (GDPR) and US's California Consumer Privacy Act (CCPA) have also required the _right to be forgotten_, introducing new regulations that require applications to support the deletion of user-generated content when requested by users Sekhari et al. (2021); Kumar et al. (2022). In light of this, it is essential to provide LLMs with an efficient and effective way to unlearn the information requested by users. Recent attention has been paid to the handling of such unlearning requests for LLMs through retraining and data pre-processing like SISA Bourtoule et al. (2021); Kumar et al. (2022) where training data is stored in different isolated slices and each checkpoint is saved after training on each slice. When a deletion request is received, the respective data point will be removed from the slice, and the model checkpoint up to the data point will be used to further retrain the model. The effect of unlearning is often reflected by the model errors on the deleted data (models cannot predict the deleted data) Kurmanji et al. (2023); Jang et al. (2022). Other works have also explored the design of algorithms that ensure differential privacy (DP) Yu et al. (2021); Li et al. (2021); Anil et al. (2021). 
However, machine unlearning approaches like SISA Bourtoule et al. (2021) usually require a significantly large amount of storage space Bourtoule et al. (2021), and DP methods could result in a slow convergence and significant deterioration in model performance Nguyen et al. (2022). In addition, both of them require retraining the whole model, which is extremely expensive and time-consuming considering the model scales of the current LLMs. These limitations also make them unable to dynamically deal with a sequence of unlearning requests which is often the need in real-world scenarios Jang et al. (2022); Nguyen et al. (2022). To fill in these gaps, in this work, we propose an **E**fficient **U**nlearning method for LLMs (EUL) to efficiently unlearn what needs to be forgotten without completely retraining the whole model while retaining the performances of the models. Specifically, we propose a lightweight approach to learning the unlearning layer that is plugged into transformers through a selective teacher-student formulation Kurmanji et al. (2023) within several updates, without tuning the large language models. Additionally, we introduce a fusion mechanism to effectively combine the weights of different unlearning layers that learn to forget different sets of data to a single unified unlearning layer by minimizing a regression objective. This allows EUL to efficiently address a sequence of deletion operations. To demonstrate the effectiveness of our proposed EUL, we perform experiments on IMDB Maas et al. (2011) and SAMSum Gliwa et al. (2019) in different settings compared to the state-of-the-art unlearning or model editing baselines. To summarize, our main contributions are threefold: * We introduce an efficient unlearning method to remove the effect of required data in a lightweight way via a selective teacher-student formulation. * We design a fusion mechanism to merge unlearning layers that are learned to forget different sets of data into a single unlearning layer to deal with a sequence of removal operations. * We conduct experiments on classification and generation tasks with backbone models of different scales in different settings, to illustrate the effectiveness of EUL. ## 2 Related Work ### Large Language Models Large language models have witnessed extensive progress recently Brown et al. (2020); Radford et al. (2019); Smith et al. (2022); Rae et al. (2021); Chowdhery et al. (2022); Touvron et al. (2023), especially in terms of scaling up LLMs such as LLAMA Touvron et al. (2023), Megatron-turing NLG Smith et al. (2022), Gopher Rae et al. (2021), and PaLM Chowdhery et al. (2022). Other works have also achieved better performance with smaller models through longer training Hoffmann et al. (2022), instruction tuning Wang et al. (2022); Zhou et al. (2023) and human feedback Ouyang et al. (2022). However, recent studies have shown that training data, such as personally identifiable information like names, phone numbers, email addresses, and even bank account numbers Carlini et al. (2021); Lee et al. (2021); Carlini et al. (2022); Jagielski et al. (2022), can be easily extracted from LLMs because LLMs memorize the training data in billions of parameters Carlini et al. (2022). Our work is proposed to alleviate such issues by allowing efficient unlearning of the requested or private data from the learned parameters in LLMs. Figure 1: Overall process of our EUL framework. The unlearning layers are plugged into transformer layers after the feed-forward networks. 
During training, only the unlearning layers are learned to forget requested data while the original LLMs remain unchanged. For every deletion request, an unlearning layer is learned first and then merged with other unlearning layers via our designed fusion mechanism to form the fused unlearning transformer which satisfies a series of deletion requests. ### Machine Unlearning for Privacy To mitigate the privacy risks for LLMs, machine unlearning methods have been introduced to remove the contributions of training examples that users request to be erased Bourtoule et al. (2021); Chien et al. (2023), including exact unlearning, which retrains deep learning models on new datasets after removal Bourtoule et al. (2021), and approximate unlearning Izzo et al. (2021); Golatkar et al. (2020); Kurmanji et al. (2023); Jang et al. (2022), which aims to modify the weights of trained models to produce a new set of weights that approximate the weights from retraining. The effect of unlearning is often reflected by the model errors on the deleted data (models cannot predict the deleted data) Kurmanji et al. (2023); Jang et al. (2022). Another line of work has focused on Differential Privacy (DP), which ensures that user information in training data cannot be inferred Dwork (2008); Yu et al. (2021); Li et al. (2021); Anil et al. (2021); Abadi et al. (2016). However, both types of methods require retraining the whole model, which is extremely expensive and time-consuming, especially for large language models, and can even impact the task performances Anil et al. (2021). Thus they cannot dynamically tackle sequences of deletion Jang et al. (2022); Nguyen et al. (2022). To overcome these limitations, we introduce an efficient unlearning method as well as a fusion mechanism to **efficiently** and **dynamically** unlearn sequences of user data. Our work is also related to model editing Mitchell et al. (2021); Belinkov et al. (2017); Dai et al. (2021); Wang et al. (2020), while those methods usually focus on editing the model output based on several given linguistic structures or facts about the world instead of forgetting the required data. ## 3 Efficient Unlearning for LLMs This section presents our designed **E**fficient **U**nlearning method for **LL**Ms (EUL) which could efficiently and dynamically handle a sequence of deletion requests. The overall process is shown in Figure 1. Formally, for a large language model \(F(.)\) that is trained on a dataset \(D=\{(x,y)\}\) where \(x\) is textual data and \(y\) is the corresponding label, and a deletion request to forget \(D^{f}=\{(x^{f},y^{f})\}\), our goal is to learn an updated model \(F^{\prime}(.)\) that satisfies the following Kurmanji et al. (2023): \[\begin{split} I(F(D^{f});F^{\prime}(D^{f}))&=0\\ I(F(D^{r});F^{\prime}(D^{r}))&=1\end{split} \tag{1}\] where \(D^{r}=D-D^{f}=\{(x^{r},y^{r})\}\) refers to the data we would like to retain, and \(I(.)\) is the mutual information. Intuitively, we will update \(F(.)\) to \(F^{\prime}(.)\) so that it generates similar output for the data we want to retain while losing all information about making predictions on the data we want to forget. ### Learning to Forget via Unlearning Layers As the scales of current LLMs and the size of training data are usually large, updating all the parameters in the model \(F(.)\) (e.g., re-training \(F(.)\) on \(D^{r}\)) becomes extremely expensive. Inspired by recent advances in parameter-efficient fine-tuning Houlsby et al. (2019); Chien et al.
(2023), we model \(F^{\prime}(.)\) by \(F(f(.))\), where \(f(.;W)\) is an adapter with a significantly smaller number of parameters \(W\) compared to \(F(.)\), and we only update \(f(.)\) to fulfill the unlearning requests. To effectively achieve the unlearning goals in equation 1, we minimize a selective teacher-student objective where the student model \(F^{\prime}(.)=F(f(.))\) is learned to follow the teacher model \(F(.)\) on \(D^{r}\) while disobeying \(F(.)\) on \(D^{f}\): \[\begin{split} L_{KL}=&\alpha\sum_{x^{r}}KL(F(x^{r} )||F(f(x^{r})))\\ &-\sum_{x^{f}}KL(F(x^{f})||F(f(x^{f})))\end{split} \tag{2}\] where \(\alpha\) is a hyper-parameter to balance the trade-off between forgetting \(x^{f}\) and retaining \(x^{r}\). Intuitively, during training, \(f(.)\) is learned to minimize the KL-divergence between the output of the updated model and the original model on the data to retain, while maximizing the KL-divergence between their outputs on the data to forget. To maintain the task performance, we optimize \(f(.)\) for the task loss on the retained data: \[L_{TASK}=\sum_{x^{r}}l(F(f(x^{r})),y^{r}) \tag{3}\] where \(l(.)\) is the task-related loss, for example, the cross-entropy loss, \(-\log P(F(f(x^{r})))\), for classification tasks. Furthermore, we also negate the original training objectives used in LLMs (e.g., the masked language modeling objective Raffel et al. (2020)) to forget the knowledge related to the data that is encoded in the pre-trained parameters, and to ensure that the information in the forgotten data cannot be easily extracted from \(F(.)\): \[L_{LM}=-\sum_{x^{f}}l(F(f(x^{f}))) \tag{4}\] where \(l(.)\) is the language model loss used when pre-training \(F(.)\), for example, the masked language model loss, \(-\log P(\hat{x}|x-\hat{x})\) (\(\hat{x}\) are the randomly masked tokens). In our experiments, we utilize T5 models (Raffel et al., 2020). Thus we add an extra "_Predict the masked word_" at the beginning of the input for this loss term. Our final training objective is then the following: \[L_{EUL}=L_{KL}+\lambda L_{TASK}+\gamma L_{LM} \tag{5}\] where \(\lambda\) and \(\gamma\) are hyper-parameters. In practice, following Kurmanji et al. (2023), we alternate the updates for the data to be forgotten and the data to be retained to optimize the _min-max_ terms in \(L_{EUL}\) more stably. Specifically, we iteratively perform an epoch of updates on the data to be retained and then an epoch of updates on the data to be forgotten. ### Fusing Unlearning Layers To dynamically handle a sequence of unlearning requests and derive a unified model that could forget all of the requested data, we then introduce a fusion mechanism that could merge different unlearning layers \(f_{i}(.;W_{i})\), which are learned to forget \(D_{i}^{f}=(X_{i}^{f},Y_{i}^{f})\) as in the previous section, into a single \(f^{m}(.;W_{m})\). Namely, we would like the output of \(f^{m}(.)\) on \(D_{i}^{f}\) to be close to that of \(f_{i}(.)\): \[\min_{W_{m}}\sum_{i}||W_{m}^{T}X_{i}^{f}-W_{i}^{T}X_{i}^{f}||^{2} \tag{6}\] which is a linear regression problem and has a closed-form solution: \[W_{m}=\Big(\sum_{i}{X_{i}^{f}}^{\top}X_{i}^{f}\Big)^{-1}\sum_{i}\Big({X_{i}^{f}}^{\top}X_{i}^{f}W_{i}\Big) \tag{7}\] Specifically, to derive the weights \(W_{m}\) for the merged unlearning layer \(f^{m}\), we use the pre-computed inner product matrix \({X_{i}^{f}}^{\top}X_{i}^{f}\) of the hidden representations (taken before the unlearning layers in the LLM) of the forgotten data, and then compute \(W_{m}\) following Equation 7.
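A minimal PyTorch sketch of how Equations 2-5 and the fusion closed form of Equation 7 could be realized is given below. It assumes classification-style logits and omits the T5-specific masking details; the `student`/`teacher` callables (the adapter-augmented and frozen models), the batch keys, and the hyper-parameter defaults are illustrative assumptions rather than the released implementation.

```python
# Hedged sketch of the EUL objective and the closed-form fusion of unlearning layers.
import torch
import torch.nn.functional as F

def kl(p_logits, q_logits):
    """KL(P || Q) with P as the teacher distribution, averaged over the batch."""
    return F.kl_div(F.log_softmax(q_logits, dim=-1),
                    F.softmax(p_logits, dim=-1),
                    reduction="batchmean")

def eul_loss(student, teacher, batch, forget: bool,
             alpha=0.8, lam=1.0, gamma=0.2):
    """One step of L_EUL; retained and forgotten batches are alternated, per the paper."""
    logits = student(batch["input_ids"])
    with torch.no_grad():
        ref = teacher(batch["input_ids"])           # frozen original model F(.)
    if not forget:
        # Follow the teacher and keep task performance on the retained data.
        return alpha * kl(ref, logits) + lam * F.cross_entropy(logits, batch["labels"])
    # Diverge from the teacher and negate the (masked) LM loss on the forgotten data.
    mlm_logits = student(batch["mlm_input_ids"])
    l_lm = F.cross_entropy(mlm_logits, batch["mlm_labels"])
    return -kl(ref, logits) - gamma * l_lm

def fuse_unlearning_layers(gram_matrices, weights):
    """Merge per-request unlearning layers W_i via the closed form of Eq. 7.

    gram_matrices[i] = X_i^T X_i computed on the i-th forgotten set (d x d);
    weights[i]       = W_i of the i-th unlearning layer (d x d_out).
    """
    lhs = sum(gram_matrices)                               # sum_i X_i^T X_i
    rhs = sum(G @ W for G, W in zip(gram_matrices, weights))
    return torch.linalg.solve(lhs, rhs)                    # W_m
```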
The fusion mechanism ensures efficiency and privacy, as it can be performed without any extra training and only requires storing the inner product matrix of the representations of the data to be forgotten instead of the data itself. ## 4 Experiments ### Datasets We conduct experiments on both classification and generation tasks. For the classification task, we utilize the IMDB dataset (Maas et al., 2011), which is a sentiment classification dataset consisting of users' reviews of movies, directors, actors, etc. For the generation task, we use SAMSum (Gliwa et al., 2019), which is a recent popular conversation summarization dataset consisting of conversations between different speakers. The dataset statistics are shown in Table 1. We choose these two datasets because they are widely used (Wang et al., 2021; Yang et al., 2019; Qin et al., 2023; Ji et al., 2023; Wei et al., 2021; Sanh et al., 2021; Chen et al., 2022) to evaluate large language models and both datasets are related to cases where the user might require to remove their data, for example, removing all the reviews of a specific movie or removing all the conversations from one specific speaker. In experiments, we use the pre-trained NER models from AllenNLP2 to extract all the entities (names) in IMDB and directly use the speakers' names in SAMSum and simulate the unlearning requests to remove all the data from or related to certain names. Moreover, we substitute all the names in the dev and test set with special tokens. Footnote 2: [https://demo.allennlp.org/](https://demo.allennlp.org/) \begin{table} \begin{tabular}{c|c|c c c} \hline **Dataset** & **Task** & **Train** & **Dev** & **Test** \\ \hline \hline IMDB & Classification & 20000 & 2000 & 25000 \\ SAMSum & Summarization & 14732 & 818 & 819 \\ \hline \end{tabular} \end{table} Table 1: Dataset statistics for IMDB and SAMSum. ### Evaluation Metrics To evaluate the performances, following Kurmanji et al. (2023), we measure several metrics: (1) **Performance on the test set**: The task-related performance on the test set, namely, accuracy for IMDB and ROUGE for SAMSum. This measures whether the unlearning algorithms affect the model performance or not. (2) **Performance on the retained set**: The task-related performance on the data to be retained. This measures whether the unlearning algorithms forget the data that need to be retained. Higher performance means that the model remembers the data that is not to be forgotten. (3) **Performance on the forgot set**: The task-related performance on the data to be forgotten. This measures whether the unlearning algorithms effectively forget the data requested to be forgotten. Lower performance means that the model is better at forgetting the data. (4) **MLM Loss**: The masked language model loss on the data to be forgotten, where related entities or actions are masked. This is achieved by adding "_Predict the masked word_" in the beginning. This measures whether the information in the data that needs to be forgotten can be extracted from the LLMs. Higher MLM loss means that it is harder to extract such information from the models. (5) **Updating time**: The time to update the original model in the forgetting process. ### Baselines We compare our EUL with several baseline methods: **Re-train** (Kumar et al., 2022): Re-training the model from scratch on the data to be retained without any forgotten data. **Fine-tune** (Kurmanji et al., 2023): Fine-tuning the original model on the data to be retained without any forgotten data.
**SISA**(Kumar et al., 2022): Sharded, Isolated, Sliced, and Aggregated training where multiple models are trained independently on disjoined shards, and its slices and model checkpoints are saved for each \begin{table} \begin{tabular}{c|c|c c c c|c} \hline \hline **Methods** & **\# Forgot Data** & **Test Set \(\uparrow\)** & **Retained Set \(\uparrow\)** & **Forgot Set \(\downarrow\)** & **MLM Loss \(\uparrow\)** & **Time (s) \(\downarrow\)** \\ \hline \hline \multicolumn{7}{c}{_T5-base_} \\ \hline \hline Original & - & 93.2 & 100 & 100 & 1.46 & - \\ \hline Re-train & & 92.8 & 100 & 92.5 & 1.52 & 6685 \\ Fine-tune & & **93.0** & 100 & 96.5 & 1.47 & 4200 \\ SISA & & 92.4 & 98.2 & 91.5 & 1.54 & 1580 \\ Reverse-Gradient & & 92.0 & 97.3 & 68.6 & 1.56 & 4400 \\ MEND & & 92.2 & 98.5 & 73.5 & 1.60 & **34** \\ EUL\(\uparrow\) & & **93.0** & **100** & **65.7** & **1.78** & 1200 \\ \hline \hline Re-train & & 92.7 & 100 & 91.6 & 1.55 & 6610 \\ Fine-tune & & 92.8 & 100 & 96.2 & 1.48 & 3950 \\ SISA & 1\% & 92.2 & 98.1 & 90.4 & 1.55 & 2930 \\ Reverse-Gradient & & 91.5 & 96.4 & 67.4 & 1.59 & 4166 \\ MEND & & 91.3 & 95.5 & 74.6 & 1.62 & **62** \\ EUL\(\uparrow\) & & **93.0** & **100** & **64.4** & **1.84** & 1526 \\ \hline \hline Re-train & & 92.1 & **100** & 90.2 & 1.56 & 6026 \\ Fine-tune & & 92.0 & 100 & 95.8 & 1.52 & 3133 \\ SISA & & 91.6 & 98.2 & 88.4 & 1.55 & 2010 \\ Reverse-Gradient & & 91.0 & 96.5 & 65.4 & 1.62 & 3228 \\ MEND & & 90.8 & 94.8 & 76.2 & 1.66 & **328** \\ EUL\(\uparrow\) & & **92.2** & 99.0 & **57.2** & **2.01** & 1828 \\ \hline \hline \multicolumn{7}{c}{_T5-3b_} \\ \hline \hline Original & - & 97.0 & 100 & 100 & 1.28 & - \\ \hline Re-train & & 96.6 & 100 & 94.8 & 1.30 & 26855 \\ Fine-tune & & **96.7** & 100 & 96.2 & 1.28 & 20465 \\ SISA & & 95.0 & 97.2 & 94.1 & 1.33 & 16503 \\ Reverse-Gradient & & 93.3 & 96.5 & 78.9 & 1.42 & 21826 \\ MEND & & 93.0 & 95.8 & 89.5 & 1.30 & **4980** \\ EUL\(\uparrow\) & & 96.5 & **100** & **70.2** & **1.66** & 9240 \\ \hline \hline Re-train & & 96.3 & 100 & 94.2 & 1.30 & 25280 \\ Fine-tune & & **96.5** & 100 & 96.0 & 1.28 & 18466 \\ SISA & & 93.8 & 96.8 & 92.7 & 1.35 & 15680 \\ Reverse-Gradient & & 92.5 & 96.0 & 80.1 & 1.46 & 18800 \\ MEND & & 92.8 & 95.0 & 84.4 & 1.48 & **6600** \\ EUL\(\uparrow\) & & **96.5** & **100** & **67.5** & **1.72** & 9840 \\ \hline \hline Re-train & & 96.0 & 100 & 93.5 & 1.31 & 22140 \\ Fine-tune & & **96.2** & 100 & 94.0 & 1.30 & 16752 \\ SISA & & 93.0 & 95.5 & 92.2 & 1.35 & 14180 \\ Reverse-Gradient & 10\% & 91.9 & 95.2 & 68.4 & 1.46 & 17850 \\ MEND & & 92.0 & 94.2 & 78.5 & 1.50 & 12072 \\ EUL\(\uparrow\) & & 96.0 & **100** & **60.8** & **1.92** & **10460** \\ \hline \hline \end{tabular} \end{table} Table 2: Performances on IMDB for T5-base and T5-3B after unlearning different number of privacy-related data. \(\dagger\) refers to our model. All the results are averaged over 5 random runs. slice. When forgetting certain data, the corresponding data point is deleted from its slice, and the model checkpoint up to the data point is used to further retrain the model. **Reverse-Gradient**(Liu et al., 2022): Fine-tuning the original model on both retained data and forgot data while negating the gradient for the forgot data. **MEND**(Mitchell et al., 2021): Editing the model to generate output following the given examples. To adapt the model in the unlearning setting, we reverse the labels for data in classification tasks as input to MEND. 
However, it is infeasible to apply MEND to summarization tasks as it is hard to design the new output to perform the editing. ### Model Settings For all the experiments, we use T5 models (T5-base and T5-3b) (Raffel et al., 2020) as the backbone models. For SISA, we follow Kumar et al. (2022) to split the dataset. For our unlearning layers, we only tune 0.5% (Chen et al., 2023) of the parameters. The \(\alpha=0.8\), \(\lambda=1.0\) and \(\gamma=0.2\) are selected from grid searching \(\{0.1,0.2,0.5,0.8,1.0\}\). We set the linear decay scheduler with a warmup ratio of 0.06 for training. The maximum sequence length is 128 for IMDB and 800 for SAMSum. The batch size was 256 for base models and 128 for 3b models on IMDB and 8 for base models and 2 for 3b models on SAMSum. The maximum learning \begin{table} \begin{tabular}{c|c|c c c c|c} \hline \hline **Methods** & **\# Forgot Data** & **Test Set \(\uparrow\)** & **Retained Set \(\uparrow\)** & **Forgot Set \(\downarrow\)** & **MLM Loss \(\uparrow\)** & **Time (s) \(\downarrow\)** \\ \hline \hline \multicolumn{7}{c}{_T5-base_} \\ \hline \hline Original & - & 47.2/23.5/39.6 & 71.4/42.6/62.7 & 70.2/42.2/62.7 & 1.37 & - \\ \hline Re-train & & 46.8/23.0/38.1 & 71.7/42.8/62.4 & 42.4/23.2/42.0 & 1.40 & 28000 \\ Fine-tune & & 46.6/23.2/38.1 & **72.5/44.7/65.2** & 58.8/34.1/54.1 & 1.38 & 27120 \\ SISA & 0.5\% & 44.2/22.0/37.4 & 70.5/41.6/60.5 & 41.4/23.0/40.8 & 1.48 & 22582 \\ Reverse-Gradient & & 43.2/20.9/35.8 & 68.8/40.2/58.5 & 42.3/21.4/38.1 & 1.64 & 28800 \\ EUL\(\uparrow\) & & **46.8/23.0/38.5** & 71.5/42.4/63.3 & **38.4/20.2/37.2** & **1.88** & **17060** \\ \hline \hline Re-train & & 45.4/22.8/37.5 & 72.4/43.0/62.8 & 42.2/22.8/41.6 & 1.44 & 26855 \\ Fine-tune & & **46.4/23.2/38.1** & **72.9/43.6/64.0** & 56.4/31.8/52.7 & 1.40 & 27210 \\ SISA & 1\% & 43.1/1.2/13.6 & 69.8/40.2/60.0 & 41.4/23.0/40.8 & 1.50 & 22420 \\ Reverse-Gradient & & 42.0/02.0/34.6 & 68.4/20.5/8.5 & 42.3/21.4/38.1 & 1.64 & 27700 \\ EUL\(\uparrow\) & & 46.5/22.8/38.0 & 71.5/42.4/63.3 & **35.8/19.0/36.2** & **1.95** & **16820** \\ \hline \hline Re-train & & 44.2/21.2/35.8 & 70.4/41.2/60.5 & 41.4/21.4/40.0 & 1.48 & 26155 \\ Fine-tune & & 45.2/22.1/36.6 & **71.1/42.6/62.9** & 51.5/28.6/50.0 & 1.43 & 27510 \\ SISA & 10\% & 41.8/19.6/33.3 & 68.3/38.8/58.8 & 40.2/20.1/38.9 & 1.55 & 20790 \\ Reverse-Gradient & & 40.8/18.4/33.0 & 66.6/38.3/55.5 & 38.0/19.4/36.6 & 1.71 & 27240 \\ EUL\(\uparrow\) & & **45.8/22.4/37.8** & 70.9/42.0/62.3 & **33.0/18.3/33.0** & **2.23** & **15000** \\ \hline \hline \multicolumn{7}{c}{_T5-3b_} \\ \hline \hline Original & - & 53.6/29.6/45.1 & 78.5/47.6/66.1 & 74.2/43.5/64.9 & 1.30 & - \\ \hline Re-train & & 52.8/28.8/44.0 & 77.4/46.1/65.4 & 50.4/27.2/43.0 & 1.34 & 84480 \\ Fine-tune & & 53.3/29.0/44.4 & **78.0/7/1.6/55.8** & 60.2/36.1/55.7 & 1.30 & 83600 \\ SISA & 0.5\% & 51.7/27.2/40.8 & 74.8/48.3/63.5 & 49.4/26.8/42.2 & 1.33 & 75000 \\ Reverse-Gradient & & 50.6/25.9/39.9 & 72.8/42.0/62.8 & 44.3/23.1/39.0 & 1.44 & 83200 \\ EUL\(\uparrow\) & & **53.6/29.4/44.8** & 77.5/46.3/66.6 & **41.0/21.8/38.2** & **1.67** & **60430** \\ \hline \hline Re-train & & 52.0/28.2/42.8 & **76.7/45.8/64.8** & 49.6/26.6/42.1 & 1.35 & 82440 \\ Fine-tune & & 52.5/28.5/43.6 & 76.2/45.5/64.2 & 56.8/32.2/52.4 & 1.32 & 81135 \\ SISA & 1\% & 50.0/26.1/38.9 & 72.3/43.1/61.1 & 49.0/25.8/41.1 & 1.38 & 73550 \\ Reverse-Gradient & & 48.6/24.3/37.2 & 70.6/41.5/60.9 & 42.2/22.0/37.7 & 1.45 & 82485 \\ EUL\(\uparrow\) & & **53.3/29.0/44.4** & 76.4/45.3/64.3 & **38.4/19.9/36.0** & **1.74** & 
**60880** \\ \hline \hline Re-train & & 50.8/26.4/40.5 & 74.2/45.0/63.2 & 48.2/25.5/41.4 & 1.38 & 81010 \\ Fine-tune & & 51.4/27.2/41.9 & **75.2/45.3/64.0** & 52.1/29.8/49.9 & 1.35 & 81800 \\ SISA & 10\% & 48.2/24.5/36.0 & 70.4/40.5/59.6 & 41.2/23.5/40.0 & 1.40 & 70400 \\ Reverse-Gradient & & 44.7/22.0/34.2 & 68.5/40.9/58.8 & 40.9/21.0/36.5 & 1.49 & 82070 \\ EUL\(\uparrow\) & & **52.0/28.4/42.6** & 74.9/45.0/63.6 & **36.2/18.6/34.7** & **1.78** & **59900** \\ \hline \hline \end{tabular} \end{table} Table 3: Performances on SAMSum for T5-base and T5-3B after unlearning different number of privacy-related data. \(\dagger\) refers to our model. All the results are averaged over 3 random runs. The performance on Test, Retained and Forgot Set are ROUGE-1/2/L scores. rate was \(5e-5\) and the maximum number of training epochs was set to be \(3\) or \(5\). All the experiments were performed using 8 A100 GPUs. ### Results Unlearning Privacy-related Data on IMDBWe request the T5-base and T5-3b models that are fine-tuned on the IMDB dataset to unlearn 0.5%, 1% and 10% of the training data. The data to be forgotten is randomly selected based on the names of movies, actors, actresses, directors, etc. For example, the model might need to forget all the data points related to "_Lena Numan_". This simulates the cases where people/companies request to remove all the data related to them. The performances are displayed in Table 2. After unlearning the requested data from T5-base models, the re-training method hurts the accuracy (e.g., a 1.1 accuracy drop when forgetting 10% data) on the test set because there is fewer data for training, and the accuracy on the retained set keeps unchanged (100%) probably because the model memorizes the retained data. The accuracy on the forgot set drops after re-training (e.g., 92.5 com \begin{table} \begin{tabular}{c|c c c} \hline \hline **Models** & **Set 2** & **Set 2, 1** & **Set 2, 1,3** \\ \hline \hline Re-train & 92.7/91.4 & 92.5/90.8 & 91.3/90.0 \\ Fine-tune & 92.8/96.0 & 92.1/94.0 & 91.0/93.3 \\ SISA & 92.2/90.4 & 92.0/87.8 & 91.2/85.8 \\ Reverse-Gradient & 91.5/67.9 & 90.5/67.2 & 89.8/66.0 \\ \hline EUL & **93.0/64.6** & 92.1/64.8 & 91.0/64.2 \\ EUL-fuse & **93.0/64.6** & **92.8/62.2** & **92.4/60.8** \\ \hline \hline \end{tabular} \end{table} Table 6: Accuracy on the test/retained set of after unlearning sets of data following a sequence (set 2 -> set 1 -> set 3). \begin{table} \begin{tabular}{c|c c c|c} \hline \hline **Methods** & **Test Set \(\uparrow\)** & **Retained Set \(\uparrow\)** & **Forgot Set \(\downarrow\)** & **Updating Time (s) \(\downarrow\)** \\ \hline \hline Original & 91.8 & 100 & 91.2 & - \\ \hline Re-train & 92.5 & **100** & 12.6 & 6026 \\ Fine-tune & 92.3 & 100 & 26.8 & 3133 \\ SISA & 92.2 & 98.2 & 12.6 & 1510 \\ Reverse-Gradient & 92.8 & 98.6 & 9.0 & 3228 \\ MEND & 92.2 & 97.8 & 16.8 & **328** \\ EUL\(\dagger\) & **93.0** & 99.0 & **5.0** & 1828 \\ \hline \hline \end{tabular} \end{table} Table 4: Performances on IMDB for T5-base after unlearning 10% wrong-labeled data. \(\dagger\) refers to our model. All the results are averaged over 5 random runs. 
\begin{table} \begin{tabular}{c|c|c c c} \hline \hline **Metric** & **EUL** & **-KL** & **-TASK** & **-LM** \\ \hline \hline Test Set \(\uparrow\) & **93.0** & 91.4 & 91.0 & 92.4 \\ Retained Set \(\uparrow\) & **100** & 100 & 97.4 & 99.0 \\ Forgot Set \(\downarrow\) & **65.7** & 90.8 & 67.4 & 69.0 \\ MLM Loss \(\uparrow\) & **1.78** & 1.75 & 1.78 & 1.50 \\ \hline \hline \end{tabular} \end{table} Table 5: Performances on IMDB for T5-base after re-moving 0.5% privacy-related data. We remove one objective at a time from our EUL methods. Figure 2: Sequentially unlearning 1,2,3,4,5 different sets of data for T5-base on IMDB. The results are accuracy on the test set and the accuracy on the forgot set averaging across different orderings. Every single set contains 1% of the training data. pared to 100 when unlearning 0.5% of the data), showing that the model is forgetting the requested data, and the masked language model loss increases (e.g., increasing 0.06 when unlearning 0.5% of the data), indicating that it is harder to extract the information of the forgot data after re-training. The fine-tuning method shows better test accuracy with less updating time, however, it is worse in terms of forgetting the data. Even though SISA takes significantly less time (only costing around 1/3 of the time compared to re-training) to derive the updated model that forgets the requested data, it receives lower accuracy on the test and retained set, which means that the model prediction abilities get worse because of failing to remember the retained data. When reversing the gradients for the data to be forgotten, the updated model gets better at forgetting with lower test accuracy. The model editing method, MEND, shows better overall performance on nearly all the metrics but it requires extra data to train a model editing module to edit the original model, making the method hard to be generalized to new models and settings. Our EUL approach boosts all the metrics with faster speed to update the model compared to previous unlearning baselines after removing different numbers of privacy-related data (e.g., achieving the lowest accuracy (65.6%) on forgot set while keeping the best test accuracy (93.0%) and 100% retained accuracy with 1/6 of the updating time compared to re-training when forgetting 0.5% of the data), suggesting that our designed unlearning layers that are learned with tailored objectives could efficiently update the LLMs to forget the required data and remain the abilities to perform the tasks. When the size of the backbone model scales up to 3b, the improvements of our EUL are consistent, indicating that our methods could still forget what the user requests even for larger models that are better at memorizing data. #### 4.2.2 Unlearning Privacy-related Data on SAMSum We unlearn 0.5%, 1% and 10% training data from T5-base and T5-3B models that are fine-tuned on the SAMSum dataset. The data to be forgotten is randomly selected based on the speaker names. For example, the model might need to forget all the conversations from "_Jack_". This simulates the cases where people request to remove all the data generated by them. The performances are shown in Table 3. Similarly, our EUL method consistently achieves the best overall performances by effectively forgetting the requested data while remembering the retained data and keeping the test ROUGE scores with significantly less amount of training time. This indicates that our objectives could also be generalized to generation tasks. 
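For concreteness, the sketch below shows one way the summarization metrics reported above (ROUGE on the test, retained, and forgot splits, and the MLM loss with entities masked) could be computed. It is our own illustration rather than the evaluation code used in this paper; the model name, the prompt string, and the helper-function names are placeholders.

```python
# Minimal sketch (not the authors' code) of the evaluation metrics for SAMSum:
# ROUGE on a given split and an MLM-style loss on forgot examples with a
# masked entity, using a stock T5 checkpoint as a stand-in model.
import evaluate
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base").eval()
rouge = evaluate.load("rouge")

def rouge_on_split(examples):
    """examples: list of (source_text, reference_summary) pairs."""
    preds, refs = [], []
    for src, ref in examples:
        ids = tok(src, return_tensors="pt", truncation=True).input_ids
        out = model.generate(ids, max_new_tokens=64)
        preds.append(tok.decode(out[0], skip_special_tokens=True))
        refs.append(ref)
    return rouge.compute(predictions=preds, references=refs)

def mlm_loss_on_forgot(examples, mask_token="<extra_id_0>"):
    """examples: list of (text_with_entity_replaced_by_mask_token, entity) pairs."""
    losses = []
    for masked_text, entity in examples:
        inp = tok("Predict the masked word: " + masked_text,
                  return_tensors="pt", truncation=True)
        labels = tok(mask_token + " " + entity, return_tensors="pt").input_ids
        with torch.no_grad():
            losses.append(model(**inp, labels=labels).loss.item())
    return sum(losses) / len(losses)
```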
#### 4.2.3 Unlearning Mislabeled Data on IMDB We also test a setting where the data to be forgotten is those with wrong labels. In experiments, we randomly change the labels for 10% of the training data and then request the model to unlearn their impact. This simulates the cases where we improve the models that are trained on noisy data by unlearning the mislabeled data Kumar et al. (2022). We report the performances with T5-base models in Table 4. We observe that the accuracy of the test set of the original model is affected by the mislabeled data. And our EUL is the most effective approach to unlearn and remove the negative impact of those mislabeled data to achieve the best test accuracy. #### 4.2.4 Sequence of Removals We test baseline and our methods in a setting where a sequence of unlearn requests are received, i.e., the models need to forget different sets of data sequentially. In experiments, we sequentially unlearn 1,2,3,4,5 sets of data from T5-base model on IMDB dataset. For every unlearn length, we test with all the possible sequences and average the accuracy on the test set and the forgot set. For example, when the length of the forgetting requests are 2 (set 1, set 2), we test on the sequence (set 1 -> set 2) and sequence (set 2-> set 1) and average the final performances. We show the results (accuracy on the test/retained set) of one possible sequence whose length is 3 (set 2 -> set 1 -> set 3) in Table 6 as an example. Averaged performances over different sequence lengths are visualized in Figure 2. EUL means that we keep one unlearning layer to sequentially unlearn different sets of data and EUL-fuse means that for every set of forgot data we learn separate unlearning layers and then merge them into a single unlearning layer via our proposed fusion mechanism. The results demonstrate that our proposed fusion method that combines different unlearning layers could effectively handle the sequence of deletion (achieving higher accuracy on the test set and lower accuracy on the forgot set.) especially when the sequence length gets longer compared to baseline models. ### Ablation Studies #### 4.6.1 Removal of Objectives We perform ablation studies to show the effectiveness of each designed objective in EUL by removing each of them when learning the unlearning layers in Table 5. Compared to EUL which utilizes all of the learning objectives, removing each of them would result in a performance drop, which demonstrates every component contributes to the final performance. Specifically, removing \(L_{KL}\) would increase the accuracy of the forgot set, indicating that \(L_{KL}\) is the main factor to forget the requested data. Removing \(L_{TASK}\) from EUL would drop the accuracy on the test set, suggesting that \(L_{TASK}\) is essential to maintain task performance. Removing \(L_{LM}\) decreases the MLM Loss, showing that \(L_{LM}\) is the main objective to avoid the extraction of the requested information. Member Inference AttackWe further perform Member Inference Attack (MIA) [13] on IMDB and SAMSum when unlearn 1% privacy-related data for T5-base models. Specifically, we test the accuracy of a binary classifier which is trained to predict whether the input data belong to the forgotten set or the retained set based on their representations after the final layer of the T5 model. An accuracy closer to 0.5 means that it is hard for the classifier to predict the groups of the input data. The accuracies are shown in Table 7. 
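A minimal sketch of such a membership probe is shown below. It is our own illustration, not the classifier behind Table 7; the stock T5 encoder, mean pooling over the final-layer representations, the logistic-regression probe, and the 70/30 split are all assumptions.

```python
# Minimal sketch (not the authors' code) of the MIA probe described above:
# a binary classifier on final-layer representations that tries to separate
# forgot-set from retained-set examples.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from transformers import AutoTokenizer, T5EncoderModel

tok = AutoTokenizer.from_pretrained("t5-base")
enc = T5EncoderModel.from_pretrained("t5-base").eval()

def embed(texts):
    feats = []
    for t in texts:
        ids = tok(t, return_tensors="pt", truncation=True, max_length=128)
        with torch.no_grad():
            h = enc(**ids).last_hidden_state      # (1, seq_len, d_model)
        feats.append(h.mean(dim=1).squeeze(0).numpy())
    return np.stack(feats)

def mia_accuracy(forgot_texts, retained_texts):
    X = np.concatenate([embed(forgot_texts), embed(retained_texts)])
    y = np.array([1] * len(forgot_texts) + [0] * len(retained_texts))
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y)
    probe = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return probe.score(Xte, yte)   # close to 0.5 means membership is hard to infer
```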
We found that the classifiers did not converge well on the training set and always had low accuracy on the test set, both before and after unlearning (e.g., 0.542 before unlearning and 0.566 after our EUL unlearning on IMDB). These results show that membership of the randomly deleted data could not be easily inferred, either before or after our EUL unlearning.

## 5 Conclusion

In this work, we propose EUL, an efficient unlearning method for LLMs that can efficiently and effectively unlearn user-requested data by learning unlearning layers through a selective teacher-student objective. We further introduce a fusion mechanism that merges different unlearning layers into one unified layer to dynamically unlearn a sequence of data. Experiments across different settings (datasets, model sizes, and forget-set sizes) demonstrate the effectiveness of our proposed EUL method compared to state-of-the-art baselines.

## 6 Limitations

In this work, we mainly perform experiments on T5-base/3b models with fine-tuned tasks. We encourage future work to explore how to update backbone models of larger sizes, such as LLaMA models, or even closed-source models like ChatGPT, to forget requested data such as privacy-related data, toxic data, or misinformation in the pre-training corpus. Also, we mainly follow previous work and measure unlearning through performance on the test set, retained set, and forgot set, together with the MLM loss. Future work might explore how to evaluate unlearning methods more comprehensively, such as whether the model could recall forgotten content or whether methods would make forgotten data identifiable. In addition, we perform all the experiments in simulated settings. Future work might apply our methods to real-world applications to deal with actual use cases or introduce new benchmarks for evaluating unlearning methods.

## Acknowledgment

We would like to thank all reviewers and the SALT Lab for their valuable feedback. This work was partially sponsored by NSF grants IIS-2247357 and IIS-2308994.
2309.03832
Soft Theorem to Three Loops in QCD and ${\cal N} = 4$ Super Yang-Mills Theory
The soft theorem states that scattering amplitude in gauge theory with a soft gauge-boson emission can be factorized into a hard scattering amplitude and a soft factor. In this paper, we present calculations of the soft factor for processes involving two hard colored partons, up to three loops in QCD. To accomplish this, we developed a systematic method for recursively calculating relevant Feynman integrals using the Feynman-Parameter representation. Our results constitute an important ingredient for the subtraction of infrared singularities at N$^4$LO in perturbative QCD. Using the principle of leading transcendentality between QCD and ${\cal N}=4$ super Yang-Mills theory, we determine the soft factor in the latter case to three loops with full-color dependence. As a by-product, we also obtain the finite constant $f_2^{(3)}$ in the Bern-Dixon-Smirnov ansatz analytically, which was previously known numerically only.
Wen Chen, Ming-xing Luo, Tong-Zhi Yang, Hua Xing Zhu
2023-09-07T16:47:29Z
http://arxiv.org/abs/2309.03832v2
# Soft Theorem to Three Loops in QCD and \(\mathcal{N}=4\) Super Yang-Mills Theory ###### Abstract The soft theorem states that scattering amplitude in gauge theory with a soft gauge-boson emission can be factorized into a hard scattering amplitude and a soft factor. In this paper, we present calculations of the soft factor for processes involving two hard colored partons, up to three loops in QCD. To accomplish this, we developed a systematic method for recursively calculating relevant Feynman integrals using the Feynman-Parameter representation. Our results constitute an important ingredient for the subtraction of infrared singularities at N\({}^{4}\)LO in perturbative QCD. Using the principle of leading transcendentality between QCD and \(\mathcal{N}=4\) super Yang-Mills theory, we determine the soft factor in the latter case to three loops with full-color dependence. As a by-product, we also obtain the finite constant \(f_{2}^{(3)}\) in the Bern-Dixon-Smirnov ansatz analytically, which was previously known numerically only. ## 1 Introduction A remarkable property of gauge theories is that scattering amplitude containing a soft gauge boson can be factorized into a universal soft factor \(\mathcal{S}^{\mu}(k)\) and a hard scattering amplitude with the soft gauge boson removed, \[\lim_{k\to 0}\mathcal{M}_{n+1}(p_{1},p_{2},\cdots,p_{n},k)=\mathcal{S}_{\mu}(k) \mathcal{M}_{n}^{\mu}(p_{1},p_{2},\cdots,p_{n})\,. \tag{1}\] This is known as the soft theorem [1; 2]. Note that for non-abelian gauge theory such as QCD, the soft factor is an operator acting on the color space of \(n\)-point amplitude. The soft theorem has found many applications in high energy physics both phenomenologically and theoretically. In this paper we present a calculation of the soft theorem for soft gluon radiation from two hard partons to three loops in the perturbative QCD. Our main motivation is from the intimate relation between soft theorem and infrared behavior of QCD in higher order perturbation theory [3; 4; 5; 6; 7]. In particular, the precision knowledge of the soft theorem in QCD is essential for constructing infrared subtraction terms in fixed order calculation [8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. It also contributes to the calculation of various soft function in Soft-Collinear Effective Theory (SCET) [18; 19; 20; 21; 22]. In this paper, we focus on the calculation of the soft factor with two hard-scattering partons and a single soft gluon emission. While it is not the most general soft factor beyond one-loop, it suffices for applications to some of the most important processes, such as the Drell-Yan process, \(e^{+}e^{-}\) to dijet, and 1+1 jet production in deep-inelastic scattering. The one-loop contribution of it was calculated more than two decades ago [23; 24; 25; 26; 27; 28; 29]. The two-loop soft factor was initially extracted from the soft limit of splitting amplitude up to \(\mathcal{O}(\epsilon^{0})\) in dimensional regularization parameter [30], and was obtained through to \(\mathcal{O}(\epsilon^{2})\) and beyond, either by direct calculation in SCET [31], or by extracting from amplitude [32]. The two-loop soft factor constitutes an essential contribution to the total cross section of Higgs boson production at N\({}^{3}\)LO in the threshold limit [33; 34; 35; 36]. The two-loop soft factor is also an important ingredient for constructing infrared subtraction for generic perturbative QCD calculation at N\({}^{3}\)LO. 
Besides the two-loop soft factor for single gluon emission, also relevant are the one-loop double-parton soft emission [37; 38; 39], and the tree-level triple-parton soft emission [40; 41; 42]. In addition, starting from two loops, a nontrivial color structure that correlates more than two partons first arises and has been computed in [43]. To further push the theoretical accuracy towards the N\({}^{4}\)LO frontier for scattering cross section, in this paper we perform the calculation of the single soft-emission soft factor with two hard partons at three loops through \(\mathcal{O}(\epsilon^{2})\). To facilitate the calculation, we have developed a systematic approach to calculate single-scale soft integrals using Feynman parameter representation. Our main idea is to introduce an auxiliary scale in the parametric representation and directly construct differential equations in the parametric representation with respect to this scale. A parametric integral is nothing but a multi-fold integral. Thus, an auxiliary scale can trivially be introduced by leaving one fold of integration untouched. The obtained integrals can be calculated by using the standard differential-equation method [44; 45]. The boundary conditions of the differential-equation system can again be expressed in terms of parametric integrals, which can be calculated by further applying this method. Thus, this method allows us to calculate Feynman integrals recursively until the boundary conditions can be trivially determined. Besides phenomenological interests, the soft factor is also useful in determining quantities of theoretical interests. For instance, since the soft factor can be understood as the soft limit of the corresponding full amplitude, it shares the same iterative structure of the full amplitude in the maximally supersymmetric \(\mathcal{N}=4\) Yang-Mills theory (MSYM) [46; 47]. It was conjectured by Bern, Dixon, and Smirnov in ref. [47] that the planar maximally helicity violating (MHV) amplitudes in MSYM can be obtained iteratively. Specifically, the \(l\)-loop planar MHV \(n\)-point amplitude in MSYM is determined by the one-loop amplitude up to some kinematic-independent constants, which are known to three loops numerically [48]. Assuming the principle of transcendentality [49], we obtain the soft factor in MYSM from reading off the leading transcendental part of the QCD results. We obtain the analytic expression for the three-loop constant \(f_{2}^{(3)}\) in the BDS ansatz, which agrees well with the previously numerically determined one [48]. In addition, we also predict the full-color dependence for the soft function in MYSM at three loops, which provides a test to the three-loop non-planar form factor of \(1\to 3\) decay in MYSM [50], once the relevant master integrals there are computed. The rest of this paper is organized as follows: in sec. 2, we describe the method to calculate the soft factor based on an effective theory. The result is expressed in terms of single-scale soft master integrals. In sec. 3, we develop a systematic method to calculate these master integrals recursively based on the differential-equation method, with all the boundary integrals evaluated to gamma functions. The final results in both QCD and SYM are presented in sec. 4. ## 2 Calculation of QCD soft theorem to three loops In this section we introduce the method for constructing the integrand for loop-level soft factor in QCD. 
Our approach is based on SCET, where the soft factor can be expressed as a transition matrix element of soft Wilson lines from vacuum to single gluon state. We use this definition to construct the integrand through three loops. ### Soft theorem from Soft-Collinear Effective Theory The soft theorem with an outgoing soft gluon in Soft-Collinear Effective Theory (SCET) [18; 19; 20; 21; 22] is defined as follows, \[\varepsilon^{\mu}(q)J_{\mu}(q)=\langle q|\int d^{4}xe^{ix\cdot q}\,\mathrm{T} \bigg{[}\prod_{k=1}^{m}Y_{k}(x)\bigg{]}|0\rangle\,, \tag{1}\] where \(Y_{k}(x)\) is a semi-infinity Wilson line standing for the color source of an external hard parton. In this paper, we restrict ourselves to the case of two Wilson lines with \(m=2\). For an outgoing Wilson line, it starts from the origin and extends to null infinity, \[Y_{k}(x)=\mathrm{P}\exp\left(ig_{s}\int_{0}^{\infty}ds\,n_{k} \cdot A_{s}^{a}(x+sn_{k})\mathbf{T}_{k}^{a}\right), \tag{2}\] where the subscript '\(s\)' in \(A_{s}^{a}\) refers to the soft gluon field. Similarly, an incoming Wilson line is defined as \[Y_{k}(x)=\mathrm{P}\exp\left(ig_{s}\int_{-\infty}^{0}ds\,n_{k} \cdot A_{s}^{a}(x+sn_{k})\mathbf{T}_{k}^{a}\right). \tag{3}\] In the above equation, the \(\mathrm{P}\) refers to path ordering \[\mathrm{P}\big{[}\mathbf{A}(x+sn_{k}) \mathbf{A}(x+tn_{k})\big{]}=\theta(s-t)\mathbf{A}(x+sn_{k})\mathbf{ A}(x+tn_{k})\] \[+\theta(t-s)\mathbf{A}(x+tn_{k})\mathbf{A}(x+sn_{k})\,, \tag{4}\] where we define \(\mathbf{A}(x+sn_{k})=A^{a}(x+sn_{k})\mathbf{T}_{k}^{a}\), and \(\mathbf{T}_{k}^{a}\) is the color-charge operator defined in the color space formalism [26]. For an outgoing quark (incoming anti-quark), \((t^{a})_{ij}\), for an outgoing anti-quark (incoming quark), \(\left(\mathbf{T}_{k}^{a}\right)_{ij}=-\left(t^{a}\right)_{ji}\), for a gluon, \(\left(\mathbf{T}_{k}^{a}\right)_{bc}=-if^{abc}\), where \(f^{abc}\) are structure constants, and \(t^{a}\) are the Gell-Mann matrices with the normalization \(\mathrm{Tr}[t^{a}t^{b}]=\frac{1}{2}\delta^{ab}\). According to Lorentz and color structures, the soft factor \(J_{\mu}\) with two Wilson lines can be decomposed into the following form up to three loops: \[J_{\mu}^{a}(q)= -\frac{g_{s}}{2}\left(\frac{n_{1}^{\mu}}{n_{1}\cdot q}-\frac{n_{2 }^{\mu}}{n_{2}\cdot q}\right)\left[\,(\mathbf{T}_{1}^{a}-\mathbf{T}_{2}^{a})+2 if^{abc}\left(\mathbf{T}_{1}^{b}\mathbf{T}_{2}^{c}-\mathbf{T}_{2}^{b}\mathbf{T}_{1}^{ c}\right)\,B_{12}\right.\] \[\left.-\left(\mathbf{T}_{1}^{b}\mathbf{T}_{1}^{c}\mathbf{T}_{2}^ {d}-\mathbf{T}_{2}^{b}\mathbf{T}_{2}^{c}\mathbf{T}_{1}^{d}\right)\left(C_{12 }\,d_{A}^{abcd}+D_{12}\,d_{F}^{abcd}N_{f}\right)\,\right]+\mathcal{O}(\alpha_ {s}^{4})\,, \tag{5}\] where \(n_{1}^{2}=n_{2}^{2}=0\) are two light-like vectors. The form factor \(B_{12}\) starts to contribute at one loop. The quadrupole invariant tensor \(d_{A}^{abcd}\) and \(d_{F}^{abcd}\) are defined by \[d_{R}^{abcd}=\frac{1}{24}\mathrm{Tr}\big{[}\mathbf{T}^{a}\mathbf{T}^{b} \mathbf{T}^{c}\mathbf{T}^{d}\big{]}_{R}+\text{symmetric permutations}\,, \tag{6}\] and their coefficients \(C_{12},\,D_{12}\) only receive contributions starting from three-loop order. We stress that all these scalar factors don't depend on the particular representation of the Wilson lines, a form of Casimir scaling. The form of eq. (5) is constructed from scaling symmetry and dimensional analysis. It has been checked by an explicit computation which will be described in detail below. 
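As a quick numerical sanity check of the Lorentz structure multiplying the tree-level term in eq. (5) (our own illustration, not part of the calculation described below), the short sketch contracts \(n_{1}^{\mu}/n_{1}\cdot q-n_{2}^{\mu}/n_{2}\cdot q\) with itself and recovers \(2\,n_{1}\cdot n_{2}/(n_{1}\cdot q\,n_{2}\cdot q)\) for light-like \(n_{1},n_{2}\); the specific numerical vectors are arbitrary choices.

```python
# Check that, for light-like n1 and n2, the tree-level Lorentz factor
# j^mu = n1^mu/(n1.q) - n2^mu/(n2.q) satisfies
# -g_{mu nu} j^mu j^nu = 2 n1.n2 / ((n1.q)(n2.q)).
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])        # mostly-minus metric
dot = lambda a, b: a @ g @ b

n1 = np.array([1.0, 0.0, 0.0,  1.0])        # light-like reference vectors
n2 = np.array([1.0, 0.0, 0.0, -1.0])
q  = np.array([1.0, 0.6, 0.8,  0.0])        # a light-like soft-gluon momentum

j = n1 / dot(n1, q) - n2 / dot(n2, q)
lhs = -dot(j, j)
rhs = 2.0 * dot(n1, n2) / (dot(n1, q) * dot(n2, q))
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```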
A related quantity, the \(l\)-loop Eikonal function can be derived from the soft factor, \[S_{12}^{(l)}(q)=\frac{1}{4N_{R}C_{R}}\mathrm{Tr}\bigg{\{}\left[\varepsilon^{ \mu}J_{\mu}^{a(l)}\right]\left[\varepsilon^{\nu}J_{\nu}^{a(0)}\right]^{*}(q) \bigg{\}}\,, \tag{7}\] where in SU(\(N_{c}\)) group \(N_{F}=N_{c},\,C_{F}=(N_{c}^{2}-1)/(2N_{c})\) and \(N_{A}=N_{c}^{2}-1,\,C_{A}=N_{c}\) for quarks and gluons respectively. The eikonal function directly contributed to the soft-virtual cross section of Drell-Yan or Higgs production, see e.g. [34; 35]. Here and in the following, we always expand a quantity in \[a_{s}=\frac{\alpha_{s}}{4\pi} \tag{8}\] with \(\alpha_{s}=g_{s}^{2}/(4\pi)\), for example, \[S_{12}(q)=(4\pi)^{2}a_{s}\sum_{l=0}^{\infty}a_{s}^{l}\,S_{12}^{(l)}(q)\,. \tag{9}\] The \(S_{12}^{(0)}\) is the well-known tree-level Eikonal function, \[S_{12}^{(0)}=\frac{n_{1}\cdot n_{2}}{2n_{1}\cdot q\,n_{2}\cdot q}\,. \tag{10}\] We are mainly interested in the higher-order corrections for scalar form factors in eq. (5) and the tree-level Eikonal function in eq. (10). The soft factor in general depends on the direction of the Wilson lines (incoming or outgoing). However, due to a rescaling symmetry \(n_{1}^{\mu}\to\lambda_{1}n_{1}^{\mu}\) and \(n_{2}^{\mu}\to\lambda_{2}n_{2}^{\mu}\) for arbitrary \(\lambda_{1}\) and \(\lambda_{2}\), this dependence can be fully encoded in terms of a factor \(S_{\epsilon}\): \[S_{\epsilon}=\left(4\pi S_{12}^{(0)}\mu^{2}e^{-\gamma_{E}}\frac{e^{-i\lambda_{ 12}\pi}}{e^{-i\lambda_{1q}\pi}e^{-i\lambda_{2q}\pi}}\right)^{\epsilon}\,, \tag{11}\] where \(\epsilon=(4-d)/2\) is the dimensional regulator, \(\lambda_{AB}\) in the phase factor \(e^{-i\lambda_{AB}\pi}\) is 1 if both A and B are both incoming or outgoing, and \(\lambda_{AB}=0\) for other cases (see for example [29]). By factoring out the dependence on \(S_{\epsilon}\) for eq. (5) and eq. (7) at each order, the remaining contributions are not sensitive to the direction of Wilson lines, \[S_{12}^{(l)}(q) =S_{12}^{(0)}(q)S_{\epsilon}^{l}\,r_{12}^{(l)}\,,\] \[B_{12}^{(l)}=S_{\epsilon}^{l}\,b_{12}^{(l)}\,,C_{12}^{(l)} =S_{\epsilon}^{l}\,c_{12}^{(l)}\,,D_{12}^{(l)}=S_{\epsilon}^{l}\, d_{12}^{(l)}\,. \tag{12}\] In this paper, we determine \(r_{12}\) and \(b_{12}\), \(c_{12}\), \(d_{12}\) in above equation to three-loop order. ### Construction of loop integrand for Soft theorem To construct the loop integrand for the soft theorem, we first derive the effective Feynman rules for soft Wilson lines as shown in eq. (2) and eq. (3). It can be conveniently done by expanding the Wilson lines order by order in \(g_{s}\). 
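To illustrate how the eikonal denominators of these rules emerge from the ordered parameter integrals in eqs. (2) and (3), here is a small sympy sketch (our own illustration; the symbols \(a\) and \(b\) stand for \(n_{k}\cdot k_{1}\) and \(n_{k}\cdot k_{2}\) of two emitted gluons, and the factor \(e^{-\epsilon s}\) models the \(i0^{+}\) prescription):

```python
import sympy as sp

s1, s2, eps = sp.symbols('s1 s2 epsilon', positive=True)
a, b = sp.symbols('a b', real=True)   # stand-ins for n_k.k1 and n_k.k2

# single emission: one ordered parameter integral with convergence factor exp(-eps*s)
one = sp.integrate(sp.exp(sp.I*(a + sp.I*eps)*s1), (s1, 0, sp.oo), conds='none')
print(sp.simplify(sp.limit(one, eps, 0, '+')))
# expect I/a, i.e. a single eikonal denominator 1/(n_k.k1)

# two ordered emissions: nested (path-ordered) parameter integrals
inner = sp.integrate(sp.exp(sp.I*(a + sp.I*eps)*s1), (s1, 0, s2))
outer = sp.integrate(sp.expand(inner*sp.exp(sp.I*(b + sp.I*eps)*s2)),
                     (s2, 0, sp.oo), conds='none')
res = sp.simplify(sp.limit(outer, eps, 0, '+'))
print(sp.factor(res))
# expect -1/(b*(a + b)): nested denominators in cumulative momenta, matching the
# pattern of eq. (13) up to the labeling of the gluons and overall factors
```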
We get the following eikonal Feynman rules up to three gluon emissions (we checked explicitly that the Feynman rules with more gluon emissions are not needed for the computation of the three-loop soft theorem). [Diagrams for the one-, two-, and three-gluon eikonal vertices, with emitted-gluon labels \((\mu_{i},a_{i})\), are omitted.] The three-gluon vertex rule reads
\[\frac{-g_{s}^{3}n_{k}^{\mu_{1}}n_{k}^{\mu_{2}}n_{k}^{\mu_{3}}}{-n_{k}\cdot(k_{1}+k_{2}+k_{3})+i\delta_{k}\,0^{+}}\bigg{[}\frac{\mathbf{T}_{k}^{a_{1}}\mathbf{T}_{k}^{a_{2}}\mathbf{T}_{k}^{a_{3}}}{-n_{k}\cdot(k_{1}+k_{2})+i\delta_{k}\,0^{+}}\frac{1}{-n_{k}\cdot k_{1}+i\delta_{k}\,0^{+}}\] \[+\text{permutations}\bigg{]}\,, \tag{13}\] where the sign of Feynman's prescription \(i\delta_{k}\,0^{+}\) stems from the path along the light cone of outgoing or incoming Wilson lines. The \(\delta_{k}\) is 1 for an outgoing Wilson line and \(\delta_{k}=-1\) for an incoming Wilson line. The color-charge operator \(\mathbf{T}_{k}^{a}\) is the same operator as defined in eq. (2) and eq. (3). In the above equations, plus _permutations_ indicates a summation over all external gluon indices (simultaneous permutation of \(\mu_{i},a_{i},k_{i}\)). We generated all relevant Feynman diagrams in QGRAF[51], which implement particle interactions from standard QCD and interactions due to the above effective vertices with up to three-gluon emissions. In figure 1, we show some sample Feynman diagrams. The amplitude is invariant under the rescaling of \(n_{1}\), \(n_{2}\), such that the only scale is \(\mu^{2}S_{12}^{(0)}\). Therefore, the soft factor only receives contributions from one-particle-irreducible (1PI) diagrams, of which there are 550 at three loops. Subsequently, an in-house Mathematica code was used to substitute the Feynman rules into the Feynman diagrams, and FORM[52, 53, 54] and Color.h[55] were used to evaluate the Dirac and color algebra. To verify the (generalized) Casimir scaling principle, we use the effective Feynman rules as shown in eq. (13) for Wilson lines in both the fundamental and adjoint representations. Regarding the topology classification, the package Apart[56] was first used to eliminate the linear dependence of propagators, which arises largely from the multiple linear propagators of the effective vertices. After the partial fraction, we found 780 topologies, which were then reduced to 160 topologies by applying a self-written code. The code implements a simple algorithm that carefully searches over all possible loop-momentum transformations to find one relating two topologies to each other. We noted that a similar algorithm was also implemented in the public package Reduze 2[57]1. By appending some proper propagators stemming from irreducible numerators, the 160 topologies can be further mapped into 25 integral families. The definition of these integral families can be found in Appendix A. Footnote 1: We thank Andreas von Manteuffel for pointing this out to us. The integration-by-parts (IBP) [58] reductions were done by Kira[59] equipped with FireFly[60], which implements the Laporta algorithm [61] as well as finite fields and function reconstruction techniques [62, 63]. After IBP reduction, we found 52 master integrals; only 49 of them appear in the amplitude, and these master integrals appear in only six integral families. To check the gauge invariance of the amplitude, we use the Feynman gauge as well as the light cone gauge for the polarization summation of the external gluon as shown in eq. (7), \[\varepsilon_{\mu}(q)\varepsilon_{\nu}^{*}(q)=-g_{\mu\nu}+\sigma\frac{n_{\mu}q_{\nu}+n_{\nu}q_{\mu}}{n\cdot q},\quad\sigma=0\text{ or }1\,.
\tag{14}\] For internal gluons, we use the \(R_{\xi}\) gauge but with the amplitude truncating at \((1-\xi)^{1}\), \[D_{\mu\nu}(l)=\frac{i}{l^{2}}\left[-g_{\mu\nu}+(1-\xi)\frac{l_{\mu}l_{\nu}}{l^{ 2}}\right]\,. \tag{15}\] We found the amplitude is indeed gauge invariant provided that six extra relations exist within the 49 master integrals. We verified explicitly these six extra relations by computing all 49 master integrals as shown in section 3. ## 3 Calculation of master integrals In this section, we present the details of our approach to compute the single-scale soft master integrals. Our approach is based on Feynman parameter representation. Using the differential equation with respect to an auxiliary scale appearing in the intermediate step and reduction of integrals in Feynman parameter representation, we manage to compute all master integrals iteratively, with the final boundary conditions coming from simple Gamma functions. ### Differential equations We calculate the master integrals by using the differential-equation method [44; 45]. For soft integrals, the scale dependence is trivial. To get a nontrivial differential-equation system, we need to introduce an auxiliary scale. While there are several widely used methods to achieve this, such as the Drinfeld-associator method [64] and the auxiliary-mass-flow Figure 1: Sample Feynman diagrams for three-loop soft theorem. From left to right, when contracting with tree diagrams with single gluon emission, the first diagram is zero, the second diagram contributes to \(d_{R}^{abcd}d_{F}^{abcd}\), and the third diagram contributes to sub-leading color only. method [65; 66; 67; 68], these methods are less proper for the calculation in this work. Because an extra mass scale on either an external line or an internal line may highly increase the complexity of the IBP reduction. A better choice is to introduce a scale that is essential to the integral to be calculated. It is evident that an extra scale can be introduced by leaving one fold of integration untouched, or equivalently, by inserting a delta function. For phase-space integrals, this can be done in the momentum space (see e.g. refs. [69; 70]). While for normal loop integrals, an extra delta function in the momentum space may even complicate the calculation. A better choice is to insert a delta function in the parametric representation. The obtained integrals can further be reduced by using the method developed in refs. [71; 72; 73]. We consider the calculation of the parametric integrals of the following form \[\begin{split} I(\lambda_{0},\lambda_{1},\dots,\lambda_{n})=& \frac{\Gamma(-\lambda_{0})}{\prod_{i=m+1}^{n+1}\Gamma(\lambda_{i}+1)} \int\mathrm{d}\Pi^{(n+1)}\mathcal{F}^{\lambda_{0}}\prod_{i=1}^{m}x_{i}^{- \lambda_{i}-1}\prod_{i=m+1}^{n+1}x_{i}^{\lambda_{i}}\\ \equiv&\int\mathrm{d}\Pi^{(n+1)}\mathcal{I}^{(-n-1) }\,.\end{split} \tag{10}\] Here the integration measure is \(\mathrm{d}\Pi^{(n)}\equiv\prod_{i=1}^{n+1}\mathrm{d}x_{i}\delta(1-E^{(1)}(x))\), with \(E^{(n)}(x)\) a positive definite homogeneous function of \(x\) of degree \(n\). The region of integration for \(x_{i}\) is \((0,\ \infty)\) when \(i>m\) and \((-\infty,\ \infty)\) when \(i\leqslant m\). \(\mathcal{F}\) is a homogeneous polynomial of \(x\) of degree \(L+1\). For integrals with momentum-space correspondences, \(L\) is the number of loops. For loop integrals, the polynomial \(\mathcal{F}\) is related to the well-known Symanzik polynomials \(U\) and \(F\) through \(\mathcal{F}=F+Ux_{n+1}\). 
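As a concrete illustration of these polynomials (our own example, unrelated to the integral families used in this work), the following sympy sketch extracts \(U\) and \(F\) for the equal-mass one-loop bubble by completing the square in the loop momentum and then forms \(\mathcal{F}=F+Ux_{n+1}\):

```python
import sympy as sp

x1, x2, x3, m2, p2 = sp.symbols('x1 x2 x3 m2 p2')   # x3 plays the role of x_{n+1}

# one-loop bubble with equal internal masses:
#   D1 = l^2 - m^2,  D2 = (l+p)^2 - m^2
# write  x1*D1 + x2*D2 = A l^2 + 2 B.l + C  and read off the Symanzik polynomials
A = x1 + x2
B2 = x2**2 * p2                  # B.B with B = x2*p
C = x2 * p2 - (x1 + x2) * m2

U = A                            # first Symanzik polynomial
F = sp.expand(B2 - A * C)        # second Symanzik polynomial (Euclidean-region sign convention)
calF = sp.expand(F + U * x3)

print(U)      # x1 + x2
print(F)      # expanded form of m2*(x1 + x2)**2 - p2*x1*x2
print(calF)   # F + (x1 + x2)*x3, the polynomial entering eq. (3.1)
```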
But here we consider the more general parametric integrals which may not have momentum-space correspondences. This generalization is necessary, because some asymptotically expanded integrals may not have any momentum-space correspondence [74; 75]. By virtue of the homogeneity of the integrands, it can be shown that the parametric integrals satisfy the equations \[0 =\int\mathrm{d}\Pi^{(n+1)}\frac{\partial}{\partial x_{i}} \mathcal{I}^{(-n)}, i=1,2,\dots,m, \tag{11a}\] \[0 =\int\mathrm{d}\Pi^{(n+1)}\frac{\partial}{\partial x_{i}} \mathcal{I}^{(-n)}+\delta_{\lambda_{i}0}\int\mathrm{d}\Pi^{(n)}\left.\mathcal{ I}^{(-n)}\right|_{x_{i}=0}, i=m+1,m+2,\dots,n+1. \tag{11b}\] A parametric integral can be understood as a function of the indices \(\lambda_{i}\). Then we can define the following operators. \[\mathcal{R}_{i}I(\lambda_{0},\dots,\lambda_{i},\dots,\lambda_{n})= (\lambda_{i}+1)I(\lambda_{0},\dots,\lambda_{i}+1,\dots,\lambda_{n}),\] \[\mathcal{D}_{i}I(\lambda_{0},\dots,\lambda_{i},\dots,\lambda_{n})= I(\lambda_{0},\dots,\lambda_{i}-1,\dots,\lambda_{n}),\] \[\mathcal{A}_{i}I(\lambda_{0},\dots,\lambda_{i},\dots,\lambda_{n})= \lambda_{i}I(\lambda_{0},\dots,\lambda_{i},\dots,\lambda_{n}).\] It is understood that \[I(\lambda_{0},\dots,\lambda_{i-1},-1,\dots,\lambda_{n})\equiv\int\mathrm{d} \Pi^{(n)}\left.\mathcal{I}^{(-n)}\right|_{x_{i}=0},\quad i=m+1,\ m+2,\ \cdots,\ n.\] We further define \[\hat{x}_{i}= \begin{cases}\mathcal{D}_{i}\,\ \ \ \ \ \ \ \ \ \ \ i=1,\ 2,\dots,\ m,\\ \mathcal{R}_{i}\,\ i=m+1,\ m+2,\dots,\ n+1,\end{cases}\] \[\hat{z}_{i}= \begin{cases}-{\cal R}_{i}\,\ \ \ \ \ \ \ \ \ \ \ i=1,\ 2,\ldots,\ m,\\ {\cal D}_{i}\ \,\ i=m+1,\ m+2,\ldots,\ n+1,\end{cases}\] \[\hat{a}_{i}= \begin{cases}-{\cal A}_{i}-1\,\ \ \ \ \ \ \ \ \ \ \ \ \ \ i=1,\ 2,\ldots,\ m,\\ {\cal A}_{i}\ \ \ \ \,\ i=m+1,\ m+2,\ldots,\ n+1.\end{cases}\] And we formally define operators \(\hat{z}_{n+1}\) and \(\hat{x}_{n+1}\), such that \(\hat{z}_{n+1}I=I\), and \(\hat{x}_{n+1}^{i}I=\prod_{j=1}^{i}(\hat{a}_{n+1}+j)I\), with \(\hat{a}_{n+1}=-(L+1)\hat{a}_{0}-\sum_{i=1}^{n}(\hat{a}_{i}+1)-1\). We assume that \(\hat{x}_{n+1}\) is always to the right of \(U(\hat{x})\) in \({\cal F}(\hat{x})\). By using these operators, we can write eq. (3.2) in the following form2: Footnote 2: Notice that here the definition of \(\hat{a}_{n+1}\) is slightly different from that in refs. [72; 73], because otherwise eq. (3.4) is invalid when the \(x_{n+1}\)-dependent terms in \({\cal F}\) also depend on \(y\). Consequently, there is an extra \(\hat{x}_{n+1}\) in eq. (3.3). With the new convention, eq. (2.11) in ref. [72] becomes \(D_{0}{\cal F}+{\cal A}_{0}\approx 0\). The definition of \(\hat{x}_{n+1}\) is only of formal sense, and \(\hat{x}_{n+1}^{i}\hat{x}_{n+1}^{j}\hat{I}\) should be understood as \(\hat{x}_{n+1}^{i+j}I\) rather than \(\hat{x}_{n+1}^{i}\left(\hat{x}_{n+1}^{j}I\right)\). In practical calculations, we always express \(\hat{x}_{n+1}^{i}\) in terms of \(\hat{a}_{n+1}\) from the very beginning. \[\left[{\cal D}_{0}\frac{\partial{\cal F}(\hat{x})}{\partial\hat{x}_{i}}-\hat{ z}_{i}\right]\hat{x}_{n+1}I=0. \tag{3.3}\] Let \(y\) be a kinematical variable, then it is easy to see that the parametric integrals satisfy the equation \[\frac{\partial}{\partial y}I=-{\cal D}_{0}\frac{\partial{\cal F}}{\partial y }I. \tag{3.4}\] To get a nontrivial scale dependence, we insert a delta function into the parametric integral in eq. 
(3.1), and get \[I(\lambda_{0},\lambda_{1},\ldots,\lambda_{n})= \int{\rm d}\Pi^{(n+1)}{\rm d}y\ \delta(y-E^{(0)}(x)){\cal I}^{(-n-1)}. \tag{3.5}\] Here the function \(E^{(n)}\) is the one defined below eq. (3.1). Equation (3.2) also holds for eq. (3.5), since it is a consequence of the homogeneity of the integrand. In practical calculations, by a proper choice of the \(E^{(0)}(x)\), we can eliminate one fold of integration by using the delta function. The resulting \(y\)-dependent integral is still of the form in eq. (3.1). Thus it can be reduced in the parametric representation and then calculated by using the differential-equation method. Compared with the original integral, the \(y\)-dependent integral is one less fold of integration. By successive applications of this method, we can calculate Feynman integrals recursively. The method described in this subsection applies to both normal loop integrals and phase-space integrals. But we do not consider phase-space integrals hereafter. That is, we take \(m=0\) in eq. (3.1). ### Rules for choosing \(E^{(0)}\) In principle, the function \(E^{(0)}\) in eq. (3.5) can be chosen arbitrarily. Nevertheless, for a general choice of \(E^{(0)}\), it is not easy to express the right-hand sides of eqs. (3.2) in terms of regular parametric integrals. In practical calculations, we choose \(E^{(0)}\) to be of the form \[E^{(0)}=\frac{x_{i}}{x_{j}}. \tag{3.6}\] Then we can eliminate the integration with respect to \(x_{i}\) by using the delta function. That is, \[I(\lambda_{0},\lambda_{1},\ldots,\lambda_{n})= \int\mathrm{d}\Pi^{(n+1)}\mathrm{d}y\ \delta(y-\frac{x_{i}}{x_{j}})\mathcal{I}^{(-n-1)}\] \[= \int\mathrm{d}y\int\mathrm{d}\Pi^{(n)}\ x_{j}\,\mathcal{I}^{(-n-1 )}\Big{|}_{x_{i}=yx_{j}} \tag{10}\] \[\equiv \frac{\Gamma(\lambda_{i}+\lambda_{j}+2)}{\Gamma(\lambda_{i}+1) \Gamma(\lambda_{j}+1)}\int\mathrm{d}y\ y^{\lambda_{i}}I_{y}.\] For integrals that have momentum-space correspondences, this is equivalent to the method of combining two propagators with a Feynman parameter [76]. The pair \(\{x_{i},x_{j}\}\) can still be arbitrarily chosen. A good choice may greatly simplify the calculation. In this section, we provide a method to choose \(E^{(0)}\) wisely such that the number of regions of the asymptotic expansion for the obtained \(y\)-dependent integral is minimized. Consequently, the boundary conditions of the differential equations are simplified. A general \(\mathcal{F}\) polynomial is of the structure \[\mathcal{F}=\sum_{a=1}^{A}\left(C_{\mathcal{F},a}\prod_{i}^{n+1}x_{i}^{\Lambda _{ai}}\right), \tag{11}\] where \(C_{\mathcal{F},a}\) are some \(x\)-independent constants. This polynomial may not depend on any physical scale. Thus it does not make sense to talk about asymptotic expansion for the corresponding parametric integrals. Nevertheless, we can still formally introduce the notion of "region" for this polynomial by using the idea of the convex hull described in ref. [75]. Specifically, a region \(r\) is associated with a subset \(S_{r}\) of \(\{1,\ 2,\cdots,A\}\) and a \(n+2\) dimensional vector \(\mathbf{k}_{r}\), such that the number of elements of \(S_{r}\) is not less than \(n+1\), and \[\sum_{k=1}^{n+1}\Lambda_{ak}k_{r,k}= k_{r,0},\quad a\in S_{r}, \tag{12a}\] \[\sum_{k=1}^{n+1}\Lambda_{ak}k_{r,k}> k_{r,0},\quad a\notin S_{r}. \tag{12b}\] It is easy to see that \(\Lambda_{ai}\) with \(a\notin S_{r}\) is linearly independent of \(\Lambda_{ai}\) with \(a\in S_{r}\). 
Since \(\mathcal{F}\) is a homogeneous polynomial of degree \(L+1\), we have \(\sum_{i=1}^{n+1}\Lambda_{ai}=L+1\). Thus, if \(\Lambda_{bi}=\sum_{a\in S_{r}}c_{ba}\Lambda_{ai}\), we have \(\sum_{a}c_{ba}=1\), and \(\sum_{i}\Lambda_{bi}k_{r,i}=k_{r,0}\sum_{a}c_{ba}=k_{r,0}\). Hence \(b\in S_{r}\). And it is easy to see that the cardinal number of \(S_{r}\) should be smaller than \(A\), because otherwise the corresponding parametric integral is scaleless. To see this, without loss of generality, we assume that \(k_{r,1}\neq 0\). Then we rescale \(x_{i}\) with \(i>1\) by \(x_{i}\to x_{i}x_{1}^{k_{r,i}/k_{r,1}}\). If \(S_{r}=\{1,\ 2,\ldots,A\}\), the \(x_{1}\) dependence of \(\mathcal{F}\) can be factored out. Thus the integration with respect to \(x_{1}\) is scaleless. Immediately we will show that those regions defined by eqs. (12) are intimately related to the regions of the \(y\)-dependent integrals. We consider the integrals obtained by replacing \(x_{i}\) with \(yx_{j}\). The corresponding \(\mathcal{F}\) polynomial reads \[\mathcal{F}^{\prime}=\left.\mathcal{F}\right|_{x_{i}=yx_{j}}. \tag{13}\] For simplicity, we formally denote \(y\) by \(x_{i}\) for \(\mathcal{F}^{\prime}\). Obviously, we have \[\mathcal{F}^{\prime}=\sum_{a=1}^{A}\left(C_{\mathcal{F},a}\prod_{i}^{n+1}x_{i}^{ \Lambda_{ai}^{\prime}}\right), \tag{3.11}\] with \[\Lambda_{aj}^{\prime}= \Lambda_{aj}+\Lambda_{ai}, \tag{3.12a}\] \[\Lambda_{ak}^{\prime}= \Lambda_{ak},\qquad k\neq j. \tag{3.12b}\] It is easy to see that \[\sum_{k=1}^{n+1}\Lambda_{ak}^{\prime}k_{r,k}^{\prime}= k_{r,0},\quad a\in S_{r}, \tag{3.13a}\] \[\sum_{k=1}^{n+1}\Lambda_{ak}^{\prime}k_{r,k}^{\prime}> k_{r,0},\quad a\notin S_{r}, \tag{3.13b}\] with \[k_{r,i}^{\prime}\equiv k_{r,i}-k_{r,j}, \tag{3.14a}\] \[k_{r,k}^{\prime}\equiv k_{r,k},\qquad k\neq i. \tag{3.14b}\] According to the convex-hull algorithm described in ref. [75], a vector \(\mathbf{k}_{r}^{\prime}\) with \(k_{r,i}^{\prime}>0\) gives exactly a region of asymptotic expansion in the limit of \(y\to 0\), because terms \(\prod_{i}^{n+1}x_{i}^{\Lambda_{ai}^{\prime}}\) with \(a\in S_{r}\) dominate \(\mathcal{F}^{\prime}\) when \(x_{k}\) scales as \(x_{k}\sim y^{k_{r,k}^{\prime}}\). We denote \[R_{ij}=\left\{r|k_{r,i}>k_{r,j}\right\}. \tag{3.15}\] Since \(k_{r,i}^{\prime}=k_{r,i}-k_{r,j}\), by choosing the pair \(\{i,\ j\}\) such that the cardinal number of the set \(R_{ij}\) is minimized, the number of regions of asymptotic expansion is minimized. Obviously, after expanding the \(\mathcal{F}\) polynomial asymptotically in a region \(r\), only terms in \(S_{r}\) survive. Thus, by choosing the pair \(\{i\,j\}\) such that the cardinal number of \(S_{r}\) (denoted by \(N_{r}\)) is minimized, the boundary integrals are simplified. As a summary, we choose the pair \(\{i,\ j\}\) according to the following rules: 1. We choose the pair \(\{i,\ j\}\) such that the cardinal number of \(R_{ij}\) is minimized, where \(R_{ij}\) is defined in eq. (3.15). 2. Among all the pairs satisfying the first rule, we choose the one such that \(\max\{N_{r}|r\in R_{ij}\}\) is minimized, where \(N_{r}\) is the cardinal number of \(S_{r}\). ### Boundary integrals By using the method described in the previous sections, we can construct differential equations for the parametric integrals. The boundaries of the solutions of the differential equations can further be expressed in terms of parametric integrals. Thus, this algorithm can be carried out recursively. 
The algorithm terminates when the \(\mathcal{F}\) polynomial has exactly \(n+1\) monomials. In this case, the parametric integral can be expressed in terms of gamma functions. We have \[I(\lambda_{0},\lambda_{1},\ldots,\lambda_{n})= \frac{\Gamma(-\lambda_{0})}{\prod_{i=1}^{n+1}\Gamma(\lambda_{i}+1)} \int\mathrm{d}\Pi^{(n+1)}\mathcal{F}^{\lambda_{0}}\prod_{i=1}^{n+1}x_{i}^{ \lambda_{i}} \tag{3.16}\] \[= \frac{(L+1)\prod_{a=1}^{n+1}\left[\Gamma(\bar{\lambda}_{a})C_{ \mathcal{F},a}^{-\bar{\lambda}_{a}}\right]}{\parallel\Lambda\parallel\prod_{i =1}^{n+1}\Gamma(\lambda_{i}+1)}.\] with \[\bar{\lambda}_{a}=\sum_{i=1}^{n+1}(\Lambda^{-1})_{ia}(\lambda_{i}+1). \tag{3.17}\] Here \(\Lambda_{ai}\) and \(C_{\mathcal{F},a}\) are defined in eq. (3.8), and \(L\) is defined in the paragraph below eq. (3.1). The derivation of eq. (3.16) is as follows. We introduce a new set of variables \[u_{a}\equiv\prod_{i=1}^{n+1}x^{\Lambda_{ai}}. \tag{3.18}\] The Jacobian is \[\left|\left|\frac{\partial u_{a}}{\partial x_{i}}\right|\right|=\left|\left| \frac{u_{a}}{x_{i}}\Lambda_{ai}\right|\right|=\frac{\prod_{a=1}^{n+1}u_{a}}{ \prod_{i=1}^{n+1}x_{i}}\parallel\Lambda\parallel \tag{3.19}\] For the integration measure \(\mathrm{d}\Pi^{(n+1)}\), we choose \(E^{(1)}=u_{n+1}^{\frac{1}{1+1}}\). Then we have \[I(\lambda_{0},\lambda_{1},\ldots,\lambda_{n})= \frac{(L+1)\Gamma(-\lambda_{0})}{\parallel\Lambda\parallel\prod_{ i=1}^{n+1}\Gamma(\lambda_{i}+1)}\int\prod_{a=1}^{n}\mathrm{d}u_{a} \tag{3.20}\] \[\left(C_{\mathcal{F},n+1}+\sum_{a=1}^{n}C_{\mathcal{F},a}u_{a} \right)^{\lambda_{0}}\prod_{a=1}^{n}u_{a}^{\sum_{i=1}^{n+1}\left(\Lambda^{-1} \right)_{ia}(\lambda_{i}+1)-1}\] \[= \frac{(L+1)\Gamma(-\lambda_{0}-\bar{\lambda}_{1})\Gamma(\bar{ \lambda}_{1})C_{\mathcal{F},1}^{-\bar{\lambda}_{1}}}{\parallel\Lambda \parallel\prod_{i=1}^{n+1}\Gamma(\lambda_{i}+1)}\int\prod_{a=2}^{n}\mathrm{d}u _{a}\] \[\left(C_{\mathcal{F},n+1}+\sum_{a=2}^{n}C_{\mathcal{F},a}u_{a} \right)^{\lambda_{0}+\bar{\lambda}_{1}}\prod_{a=2}^{n}u_{a}^{\bar{\lambda}_{a }-1}\] \[= \ldots\] \[= \frac{(L+1)\prod_{a=1}^{n+1}\left[\Gamma(\bar{\lambda}_{a})C_{ \mathcal{F},a}^{-\bar{\lambda}_{a}}\right]}{\parallel\Lambda\parallel\prod_{ i=1}^{n+1}\Gamma(\lambda_{i}+1)}.\] ### Analytic continuation While the analytic continuation is not a problem for the calculations in this paper, it needs to be considered in order to develop a general-purpose algorithm. For a \(\mathcal{F}\) polynomial with both positive terms and negative terms, a Feynman parameter may cross a branch point in the region of integration. Generally speaking, it is not easy to determine the branch while a Feynman parameter crosses a branch point. A possible solution to this problem is as follows. We replace each negative coefficient of \(\mathcal{F}\), denoted by \(-C_{\mathcal{F},a}\), by \(-yC_{\mathcal{F},a}\), and construct differential equations with respect to \(y\). The imaginary part of \(y\) should be \(i0^{+}\) due to the \(i0^{+}\) prescription of Feynman propagators. We determine the boundary conditions at \(y=0^{-}\). All the boundary integrals are with positive definite \(\mathcal{F}\) polynomials and thus can further be evaluated by using the method described in previous subsections. 
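Before turning to the examples, here is a small numerical sanity check of the closed-form boundary formula in eq. (3.16) (our own illustration, not code used in this work): for the simplest case of two parameters with \(\mathcal{F}=C_{1}x_{1}+C_{2}x_{2}\), so that \(L=0\) and \(\Lambda\) is the identity matrix, the gamma-function expression is compared against direct numerical integration of eq. (3.1) with \(E^{(1)}=x_{2}\); the index \(\lambda_{0}\) is fixed by homogeneity, and the specific numbers are arbitrary.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

def boundary_formula(Lam, C, lams, L):
    """Eq. (3.16): Lam is the square exponent matrix Lambda_{ai}, C the monomial
    coefficients, lams the indices lambda_1..lambda_{n+1}, L the loop number."""
    Lam = np.asarray(Lam, dtype=float)
    lam_bar = np.linalg.inv(Lam).T @ (np.asarray(lams) + 1.0)
    num = (L + 1) * np.prod([gamma(lb) * c ** (-lb) for lb, c in zip(lam_bar, C)])
    den = abs(np.linalg.det(Lam)) * np.prod([gamma(l + 1) for l in lams])
    return num / den

C1, C2 = 2.0, 3.0
lam1, lam2 = -0.3, -0.6
L = 0
lam0 = -(lam1 + 1 + lam2 + 1) / (L + 1)     # homogeneity constraint

formula = boundary_formula(np.eye(2), [C1, C2], [lam1, lam2], L)

# direct integration of eq. (3.1): the delta function sets x2 = 1
integrand = lambda x1: (C1 * x1 + C2) ** lam0 * x1 ** lam1
direct = gamma(-lam0) / (gamma(lam1 + 1) * gamma(lam2 + 1)) * quad(integrand, 0, np.inf)[0]

print(formula, direct)          # both should equal C1**(-0.7) * C2**(-0.4)
assert np.isclose(formula, direct)
```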
### Examples As an example of the application of the method described in this section, we consider the calculation of the following integral: \[I_{1}(-\frac{d}{2},0,0,0,0,0,0)\] \[= -\frac{i}{\pi^{3d/2}}\int\mathrm{d}^{d}l_{1}\mathrm{d}^{d}l_{2} \mathrm{d}^{d}l_{3}\frac{1}{l_{1}^{+}l_{3}^{2}\left(l_{1}-q\right)^{2}\left(q ^{-}-l_{1}^{-}\right)\left(l_{2}-q\right)^{2}\left(l_{1}-l_{3}\right)^{2}\left( l_{2}-l_{3}\right)^{2}}.\] The \(\mathcal{F}\) polynomial for this topology reads \[\mathcal{F}_{1}= x_{8}\left(x_{2,3,5}+x_{2,3,7}+x_{2,5,6}+x_{2,6,7}+x_{3,5,6}+x_{ 3,5,7}+x_{3,6,7}+x_{5,6,7}\right)\] \[-\left(x_{1,2,3,5}+x_{1,2,3,7}+x_{1,2,4,5}+x_{1,2,4,7}+x_{1,3,5,6} +x_{1,3,5,7}+x_{1,3,6,7}\right.\] \[\left.+x_{1,4,5,6}+x_{1,4,5,7}+x_{1,4,6,7}+x_{1,5,6,7}+x_{2,4,5,6} +x_{2,4,6,7}\right).\] Here we use \(x_{i,j,\dots}\) to denote \(x_{i}x_{j}\cdots\). The \(\mathbf{k}_{r}\) vectors for this polynomial can be found by using Qhull[77, 78]. Due to the homogeneity of the integrand of a parametric integral, two vectors \(\mathbf{k}_{r_{1}}\) and \(\mathbf{k}_{r_{2}}\) describe the same region if \(k_{r_{2},0}=k_{r_{1},0}+(L+1)c\) and \(k_{r_{2},i}=k_{r_{1},i}+c,\ i\neq 0\), for an arbitrary constant \(c\). We fix this ambiguity with the constraint \(k_{r,n+1}=0\). We get \[\begin{pmatrix}0&0&0&1&0&0&0&0&0\\ 3&1&1&1&0&1&1&1&0\\ -1&0&0&0&0&-1&0&0&0\\ -1&0&0&0&0&0&-1&0&0\\ -3&0&0&-1&-1&-1&-1&-1&0\\ -2&0&-1&0&0&-1&0&-1&0\\ -1&0&0&0&0&0&-1&0\\ 3&0&1&1&1&1&1&1&0\\ -3&0&-1&-1&0&-1&-1&-1&0\\ -2&-1&-1&0&0&0&-1&0&0\\ -4&-1&-1&-1&-1&-1&-1&0\\ 1&0&0&0&0&1&0&1&0\\ -1&0&0&-1&-1&0&0&0&0\end{pmatrix}.\] Here each row represents \((k_{r,0},k_{r,1},\dots,k_{r,n+1})\). There are 7 pairs \(\{i,\ j\}\) with only one \(\mathbf{k}_{r}\) such that \(k_{r,i}>k_{r,j}\), which are \(\{2,\ 1\}\), \(\{4,\ 3\}\), \(\{5,\ 7\}\), \(\{6,\ 1\}\), \(\{6,\ 2\}\), \(\{6,\ 3\}\), and \(\{7,\ 5\}\). Among these pairs, the pairs \(\{5,\ 7\}\) and \(\{7,\ 5\}\) has the minimal \(N_{r}\). We choose the pair \(\{7,\ 5\}\). The \(\mathbf{k}_{r}\) vector with \(k_{r,7}>k_{r,5}\) is \((-1,0,0,0,0,-1,0,0,0)\). After inserting \(\delta(y-\frac{x_{5}}{x_{7}})\) and eliminating the \(x_{5}\) integration, we get a \(y\)-dependent integral \[I_{2,0}=I_{2}(-\frac{d}{2},0,0,0,0,0,1),\] with the \(\mathcal{F}\) polynomial \[\mathcal{F}_{2}= x_{7}\left[y\left(x_{2,3,6}+x_{2,5,6}+x_{3,5,6}+x_{3}x_{6}^{2}+ x_{5}x_{6}^{2}\right)+x_{2,3,6}+x_{2,5,6}+x_{3,5,6}\right]\] \[-y\left(x_{6}^{2}x_{1,3}+x_{6}^{2}x_{1,4}+x_{6}^{2}x_{1,5}+x_{1,2,3,6}+x_{1,2,4,6}+x_{1,3,5,6}+x_{1,4,5,6}+x_{2,4,5,6}\right)\] \[-\left(x_{1,2,3,6}+x_{1,2,4,6}+x_{1,3,5,6}+x_{1,4,5,6}+x_{2,4,5,6} \right).\] By construction, we can easily get the momentum-space correspondence \[I_{2,0}=-\frac{i}{\pi^{3d/2}}\int\mathrm{d}^{d}l_{1}\mathrm{d}^{d}l_{2} \mathrm{d}^{d}l_{3}\frac{1}{l_{1}^{+}l_{3}^{2}\left(l_{1}-q\right)^{2}\left(q ^{-}-l_{1}^{-}\right)\left(l_{1}-l_{3}\right)^{2}\left[y\left(l_{2}-q\right)^ {2}+\left(l_{2}-l_{3}\right)^{2}\right]}.\] This integral can be further reduced. We have \[I_{2,0}=-\frac{(3d-8)(5d-16)(5d-14)(y+1)}{4(d-3)^{2}y}I_{2,1}+\frac{(d-2)(2d-7) (3d-8)(y+1)}{4(d-3)^{2}y}I_{2,2},\] where the master integrals are \[I_{2,1}= I_{2}(-\frac{d}{2},0,0,0,0,0,0),\] \[I_{2,2}= I_{2}(-\frac{d}{2},0,0,-1,0,0,0).\] The differential equations for these integrals are quite simple: \[\frac{\partial}{\partial y}I_{2,i}=\frac{y(\epsilon-2)-\epsilon+1}{y(y+1)}I_{ 2,i},\quad i=1,\ 2,\] where \(\epsilon\equiv\frac{1}{2}(4-d)\). This differential equation can be trivially solved. 
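For completeness, a short sympy sketch (our own illustration) that solves this differential equation and exhibits the \(y\to 0\) behavior used for the boundary conditions below:

```python
import sympy as sp

y, eps = sp.symbols('y epsilon')
MI = sp.Function('I')    # stands for either master integral I_{2,1} or I_{2,2}

ode = sp.Eq(MI(y).diff(y), (y*(eps - 2) - eps + 1)/(y*(y + 1)) * MI(y))
sol = sp.dsolve(ode, MI(y))
print(sp.simplify(sol.rhs))
# expect C1 * y**(1 - epsilon) * (y + 1)**(2*epsilon - 3), so the solution
# scales as y**(1 - epsilon) for y -> 0, the single-region boundary behavior
# discussed next.
```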
The boundary conditions are determined by expanding the master integrals asymptotically in the limit of \(y\to 0\). We consider the integral \(I_{2,1}\) for example. As is already known, there is only one region, for which the Feynman parameters scale as \[x_{6}\sim y^{-1},\] \[x_{i}\sim 1,\qquad i\neq 6.\] Rescaling the \(\mathcal{F}\) polynomial according to the above scaling, and expanding it to the leading order in \(y\), we get \[\lim_{y\to 0}I_{2,1}\to y^{1-\epsilon}I_{3}(-\frac{d}{2},0,0,0,0,0,0).\] The \(\mathcal{F}\) polynomial for the integral family \(I_{3}\) is \[\mathcal{F}_{3}= x_{7}\left(x_{2,3,6}+x_{2,5,6}+x_{3,5,6}+x_{3}x_{6}^{2}+x_{5}x_{6}^{2}\right)\] \[-\left(x_{6}^{2}x_{1,3}+x_{6}^{2}x_{1,4}+x_{6}^{2}x_{1,5}+x_{1,2,3,6}+x_{1,2,4,6}+x_{1,3,5,6}+x_{1,4,5,6}+x_{2,4,5,6}\right).\] The integral family \(I_{3}\) does not have an evident momentum-space correspondence. It can further be calculated by using the method described in this section. The \(\textbf{k}_{r}\) vectors for \(\mathcal{F}_{3}\) are \[\begin{pmatrix}0&0&0&1&0&0&0&0\\ -3&0&0&-1&-1&-1&-1&0\\ -4&-1&-1&-1&-1&-1&0\\ -1&0&0&0&0&-1&0&0\\ -2&-1&-1&0&0&-1&0&0\\ -3&0&-1&-1&0&-1&-1&0\\ -2&0&-1&0&0&0&-1&0\\ 3&0&1&1&1&1&1&0\\ 3&1&1&1&0&1&1&0\\ 1&0&0&0&0&0&1&0\\ -1&0&0&-1&-1&0&0&0\end{pmatrix}.\] There are \(7\) pairs \(\{x_{i},\ x_{j}\}\) with only one \(\textbf{k}_{r}\) such that \(k_{r,i}>k_{r,j}\), among which the pair \(\{2,\ 6\}\) has the minimal \(N_{r}\). Insertion of \(\delta(y-\frac{x_{2}}{x_{6}})\) leads to the integral \[I_{4,0}=I_{4}(-\frac{d}{2},0,0,0,0,1),\] with the \(\mathcal{F}\) polynomial \[\mathcal{F}_{4}= x_{6}\left[y\left(x_{2}x_{5}^{2}+x_{4}x_{5}^{2}\right)+x_{2,4,5}+x_{2}x_{5}^{2}+x_{4}x_{5}^{2}\right]-y\left(x_{5}^{2}x_{1,2}+x_{5}^{2}x_{1,3}+x_{5}^{2}x_{3,4}\right)\] \[-\left(x_{5}^{2}x_{1,2}+x_{5}^{2}x_{1,3}+x_{5}^{2}x_{1,4}+x_{1,2,4,5}+x_{1,3,4,5}\right).\] The integral \(I_{4,0}\) can further be reduced to \[I_{4,0}=-\frac{9(d-2)(y+1)^{2}}{4y^{2}}I_{4,1}-\frac{3(d-3)}{y}I_{4,2},\] where the master integrals are \[I_{4,1}= I_{4}\left(-\frac{d}{2},0,-1,0,0,0\right),\] \[I_{4,2}= I_{4}\left(-\frac{d}{2},0,0,0,0,0\right).\] The differential equations for these integrals are \[\frac{\partial}{\partial y}\begin{pmatrix}I_{4,1}\\ I_{4,2}\end{pmatrix}=\begin{pmatrix}\frac{(2y-3)(\epsilon-1)}{y(y+1)}&0\\ \frac{3(y+1)(\epsilon-1)}{y^{2}}&-\frac{(y+2)(\epsilon-1)}{y(y+1)}\end{pmatrix}\cdot\begin{pmatrix}I_{4,1}\\ I_{4,2}\end{pmatrix}.\] This differential-equation system can be converted into the canonical form [79] by using the package epsilon [80], which implements Lee's algorithm [81]. The obtained differential-equation system is solved by using the standard differential-equation method. The boundary conditions can be determined by applying the method developed in this section recursively. Here we do not go into more detail. ## 4 Soft theorem at three loops in QCD and \(\mathcal{N}=4\) sYM In this section, we present the results for the three-loop soft factor in QCD up to \(\mathcal{O}(\epsilon^{2})\). These results are necessary ingredients for QCD corrections and soft-function calculations at N\({}^{4}\)LO. We also derive the corresponding soft factor in \(\mathcal{N}=4\) sYM with full-color dependence, using the principle of leading transcendentality [82]. Note that the principle of leading transcendentality has not been proved, but is known to work in many cases, including e.g. twist operator dimensions [49; 82], soft functions or Wilson loops [83; 35], and form factors [84; 85; 86].
We verify that the leading-color contributions agree with a previous calculation [31] based on the BDS ansatz [47], and we determine the three-loop constant \(f_{2}^{(3)}\) analytically. ### IR singularities of the soft factor Before presenting our results for the soft factor at three loops, we first discuss its infrared singularities. The IR singularities of scattering amplitudes are understood to factorize, as a result of soft-collinear factorization [87; 88; 89]. Since the soft factor is simply the soft limit of a scattering amplitude, we can extract its IR singularities by taking the soft limit of the IR singularities of the amplitude. To all orders in perturbation theory, the IR singularities of massless scattering amplitudes are governed by a multiplicative renormalization factor \(\mathbf{Z}\), which in general is a matrix in color space \[|\mathcal{M}_{n}(\{p\},\mu)\rangle=\lim_{\epsilon\to 0}\mathbf{Z}^{-1}(\epsilon,\{p\},\mu)|\mathcal{M}_{n}(\epsilon,\{p\})\rangle, \tag{4.1}\] where \(|\mathcal{M}_{n}(\epsilon,\{p\})\rangle\) is the UV-renormalized amplitude. The IR renormalization factor is \[\mathbf{Z}(\epsilon,\{p\},\mu)=\mathbf{P}\exp\left[\int_{\mu}^{\infty}\frac{d\mu^{\prime}}{\mu^{\prime}}\mathbf{\Gamma}(\{p\},\mu^{\prime})\right] \tag{4.2}\] and \[\mathbf{\Gamma}(\{p\},\mu)=\sum_{(i,j)}\frac{\mathbf{T}_{i}\cdot\mathbf{T}_{j}}{2}\gamma^{\rm cusp}(\alpha_{s})\ln\frac{\mu^{2}}{-s_{ij}}+\sum_{i}\gamma^{i}(\alpha_{s})+\mathbf{\Delta}_{\bf 3}+\mathcal{O}(\mathbf{\Delta}_{4})\,, \tag{4.3}\] where explicit expressions for the anomalous dimensions are collected in the appendix. The Mandelstam variables are defined as \(s_{ij}=2\sigma_{ij}\,p_{i}\cdot p_{j}+i0\) with the sign factor \[\sigma_{ij}=\begin{cases}+1\,,&p_{i},p_{j}\text{ both incoming or outgoing}\\ -1\,,&\text{otherwise}\end{cases} \tag{4.4}\] The notation \((i,j)\) refers to unordered tuples of distinct parton indices. In eq. (4.3), \(\mathbf{\Delta}_{\bf 3}\) refers to the tripole contribution, which is kinematics-independent and starts to contribute at the three-loop order [90]: \[\mathbf{\Delta}_{\bf 3}^{(3)}=-16f_{abe}f_{cde}\left(\zeta_{5}+2\zeta_{2}\zeta_{3}\right)\sum_{i=1}^{3}\sum_{\begin{subarray}{c}1\leq j<k\leq 3\\ j,k\neq i\end{subarray}}\left\{\mathbf{T}_{i}^{a},\mathbf{T}_{i}^{d}\right\}\mathbf{T}_{j}^{b}\mathbf{T}_{k}^{c}\,. \tag{4.5}\] Another contribution, the quadrupole term \(\mathbf{\Delta}_{4}\) involving four partons, is also present at the three-loop order, for example in the three-loop four-parton scattering amplitudes in \(\mathcal{N}=4\) and QCD [91; 92; 93; 94]. However, because we only deal with the soft factor with two hard partons (only amplitudes involving three colored partons are required), it is not needed in this work. The soft factor computed in this work refers to the soft gluon limit of a three-parton amplitude in QCD. We write the IR renormalization formula as \[|\mathcal{M}_{3}(p_{1},p_{2},p_{3},\mu)\rangle=\lim_{\epsilon\to 0}\mathbf{Z}^{-1}(\epsilon,p_{1},p_{2},p_{3},\mu)|\mathcal{M}_{3}(\epsilon,p_{1},p_{2},p_{3})\rangle, \tag{4.6}\] where \(|\mathcal{M}_{3}(\epsilon,p_{1},p_{2},p_{3})\rangle\) is a UV-renormalized amplitude with three massless QCD partons, for example \(\gamma^{*}\to q(p_{1})\bar{q}(p_{2})g(p_{3})\).
In the soft gluon limit, the soft gluon factorization demands that the IR renormalized amplitude factorizes as \[\lim_{p_{3}^{0}\to 0}|\mathcal{M}_{3}(p_{1},p_{2},p_{3},\mu)\rangle=J(p_{3},\mu)| \mathcal{M}_{2}(p_{1},p_{2},\mu)\rangle\,, \tag{4.7}\] where \(J(p_{3},\mu)\) is the IR renormalized soft factor, and \(|\mathcal{M}_{2}(p_{1},p_{2},\mu)\rangle\) is the IR renormalized 2-parton amplitude, \[J(p_{3},\mu) = \lim_{\epsilon\to 0}Z_{s}^{-1}(\epsilon,p_{3},\mu)J( \epsilon,p_{3})\,, \tag{4.8}\] \[|\mathcal{M}_{2}(p_{1},p_{2},\mu)\rangle = \lim_{\epsilon\to 0}Z_{2}^{-1}(\epsilon,p_{1},p_{2},\mu)| \mathcal{M}_{2}(\epsilon,p_{1},p_{2})\rangle\,. \tag{4.9}\] This leads to the relation \[Z_{s}^{-1}(\epsilon,p_{3},\mu)=\lim_{p_{3}\to 0}Z_{3}^{-1}(\epsilon,p_{1},p_{2},p_{3}, \mu)Z_{2}(\epsilon,p_{1},p_{2},\mu) \tag{4.10}\] The infrared singularities of the three-loop soft factor can then be read-off from \(Z_{s}(\epsilon,p_{3},\mu)\). ### Soft theorem to three loops in QCD We are now ready to present our results for the soft factor to three loops. The results were verified to satisfy the (generalized) Casimir scaling principle, such that we are able to write them down in a unified form for both fundamental and adjoint representations. The corrections of \(B_{12}\) in eq. (2.5) were calculated to two loops in [31; 32], we list them here using the convention of eq. (2.12) for completeness. At one-loop order, the result can be expressed in terms of the following gamma functions, \[b_{12}^{(1)}=-\frac{\exp\left(\gamma_{\rm E}\epsilon\right)\Gamma^{3}(1- \epsilon)\Gamma^{2}(\epsilon+1)}{\epsilon^{2}\Gamma(1-2\epsilon)}\,. \tag{4.11}\] For the two-loop corrections, we give the result to \(\epsilon^{4}\) and found full agreement with the \(\epsilon\) expansion of all-order result in [32], \[b_{12}^{(2)}= C_{A}\bigg{\{}\frac{1}{2\epsilon^{4}}-\frac{11}{12\epsilon^{3}}+ \frac{\zeta_{2}-\frac{67}{36}}{\epsilon^{2}}+\frac{-\frac{11\zeta_{2}}{12}- \frac{11\zeta_{3}}{6}-\frac{193}{54}}{\epsilon}-\frac{67\zeta_{2}}{36}+\frac {341\zeta_{3}}{18}+\frac{7\zeta_{4}}{8}-\frac{571}{81}\] \[+\epsilon\Big{[}-\frac{7}{6}\zeta_{3}\zeta_{2}-\frac{139\zeta_{2 }}{54}+\frac{2077\zeta_{3}}{54}+\frac{2035\zeta_{4}}{48}-\frac{247\zeta_{5}} {10}-\frac{3410}{243}\Big{]}+\epsilon^{2}\Big{[}-\frac{205\zeta_{3}^{2}}{18}\] \[+\frac{341\zeta_{2}\zeta_{3}}{18}+\frac{6388\zeta_{3}}{81}-\frac{436 \zeta_{2}}{81}+\frac{12395\zeta_{4}}{144}+\frac{5621\zeta_{5}}{30}-\frac{3070 \zeta_{6}}{48}-\frac{20428}{729}\Big{]}\] \[+\epsilon^{3}\Big{[}-\frac{10571\zeta_{3}^{2}}{54}+\frac{2077 \zeta_{2}\zeta_{3}}{54}-\frac{509\zeta_{4}\zeta_{3}}{24}+\frac{37427\zeta_{3}} {243}-\frac{2411\zeta_{2}}{243}+\frac{41105\zeta_{4}}{216}\] \[-\frac{219\zeta_{2}\zeta_{5}}{10}+\frac{34237\zeta_{5}}{90}+\frac {42361\zeta_{6}}{64}-\frac{4573\zeta_{7}}{14}-\frac{122504}{2187}\Big{]}+ \epsilon^{4}\Big{[}-40\zeta_{5,3}-\frac{845}{18}\zeta_{2}\zeta_{3}^{2}\] \[-\frac{64387\zeta_{3}^{2}}{162}+\frac{5524\zeta_{2}\zeta_{3}}{81 }-\frac{63085\zeta_{4}\zeta_{3}}{72}-\frac{29\zeta_{5}\zeta_{3}}{15}+\frac{226 405\zeta_{3}}{729}-\frac{14785\zeta_{2}}{729}+\frac{119135\zeta_{4}}{324}\] \[+\frac{5621\zeta_{2}\zeta_{5}}{30}+\frac{108748\zeta_{5}}{135}+ \frac{258017\zeta_{6}}{192}+\frac{90101\zeta_{7}}{42}-\frac{1264777\zeta_{8}} {1152}-\frac{734896}{6561}\Big{]}\bigg{\}}\] \[+N_{f}\bigg{\{}\frac{1}{6\epsilon^{3}}+\frac{5}{18\epsilon^{2}}+ \frac{\frac{\zeta_{2}}{6}+\frac{19}{54}}{\epsilon}+\frac{5\zeta_{2}}{18}- 
\frac{31\zeta_{3}}{9}+\frac{65}{162}+\epsilon\Big{[}-\frac{35\zeta_{2}}{54}- \frac{155\zeta_{3}}{27}\] \[-\frac{185\zeta_{4}}{24}+\frac{211}{486}\Big{]}+\epsilon^{2} \Big{[}-\frac{31}{9}\zeta_{3}\zeta_{2}-\frac{367\zeta_{2}}{162}-\frac{994 \zeta_{3}}{81}-\frac{925\zeta_{4}}{72}-\frac{511\zeta_{5}}{15}+\frac{665}{1458 }\Big{]}\] \[+\epsilon^{3}\Big{[}\frac{961\zeta_{3}^{2}}{27}-\frac{155\zeta_{ 2}\zeta_{3}}{27}-\frac{5255\zeta_{3}}{243}-\frac{3083\zeta_{2}}{486}-\frac{891 5\zeta_{4}}{216}-\frac{511\zeta_{5}}{9}-\frac{3851\zeta_{6}}{32}+\frac{2059}{ 4374}\Big{]}\] \[+\epsilon^{4}\Big{[}\frac{4805\zeta_{3}^{2}}{81}-\frac{130\zeta_ {2}\zeta_{3}}{81}+\frac{5735\zeta_{4}\zeta_{3}}{36}-\frac{31246\zeta_{3}}{729 }-\frac{20503\zeta_{2}}{1458}-\frac{55225\zeta_{4}}{648}-\frac{511\zeta_{2} \zeta_{5}}{15}\] \[-\frac{19834\zeta_{5}}{135}-\frac{19255\zeta_{6}}{96}-\frac{8191 \zeta_{7}}{21}+\frac{6305}{13122}\Big{]}\bigg{\}}\,. \tag{4.12}\] The three-loop corrections to eq. (2.5) are our main results, we expand the results to \(\epsilon^{2}\) below: \[b_{12}^{(3)}= C_{A}^{2}\bigg{\{}-\frac{1}{6\epsilon^{6}}+\frac{11}{12\epsilon^{5}}+ \frac{\frac{119}{324}-\frac{3\zeta_{2}}{4}}{\epsilon^{4}}+\frac{\frac{649 \zeta_{2}}{216}+\frac{2\zeta_{3}}{3}-\frac{1517}{486}}{\epsilon^{3}}\] \[+\frac{\frac{2501\zeta_{2}}{648}-\frac{2101\zeta_{3}}{108}-\frac {1487\zeta_{4}}{288}-\frac{7271}{486}}+\frac{\frac{11\zeta_{3}\zeta_{2}}{18}+ \frac{437\zeta_{2}}{972}+\frac{2575\zeta_{3}}{36}-\frac{22583\zeta_{4}}{576}+ \frac{98\zeta_{5}}{5}-\frac{446705}{8748}}{\epsilon}\] \[+\frac{293\zeta_{3}^{2}}{36}-\frac{2453\zeta_{2}\zeta_{3}}{72}+ \frac{203705\zeta_{3}}{486}-\frac{12911\zeta_{2}}{2916}+\frac{493381\zeta_{4}} {1728}-\frac{26543\zeta_{5}}{60}+\frac{445679\zeta_{6}}{6912}\] \[-\frac{8206861}{52488}+\epsilon\Big{[}-\frac{17149\zeta_{3}^{2}}{ 216}+\frac{21031\zeta_{2}\zeta_{3}}{216}+\frac{86\zeta_{4}\zeta_{3}}{9}+\frac{23 30483\zeta_{3}}{1458}-\frac{403379\zeta_{2}}{17496}\] \[+\frac{1228523\zeta_{4}}{864}+\frac{9773\zeta_{2}\zeta_{5}}{90}+ \frac{262597\zeta_{5}}{180}-\frac{25965643\zeta_{6}}{13824}+\frac{151631\zeta_{7 }}{252}-\frac{48027739}{104976}\Big{]}\] \[+\epsilon^{2}\Big{[}-\frac{15008\zeta_{5,3}}{45}+\frac{10045}{72} \zeta_{2}\zeta_{3}^{2}-\frac{920995\zeta_{3}^{2}}{216}+\frac{71831\zeta_{2} \zeta_{3}}{108}+\frac{388289\zeta_{4}\zeta_{3}}{576}-\frac{9907\zeta_{5}\zeta_{3 }}{30}\] \[+\frac{15854467\zeta_{3}}{2916}-\frac{5363867\zeta_{2}}{104976}+ \frac{42678481\zeta_{4}}{7776}-\frac{71533\zeta_{2}\zeta_{5}}{120}+\frac{82837 \zeta_{5}}{10}+\frac{112195243\zeta_{6}}{13824}\] \[-\frac{1343045\zeta_{7}}{126}+\frac{3738034847\zeta_{8}}{829440}- \frac{2482106477}{1889568}\Big{]}\bigg{\}}+C_{A}N_{f}\bigg{\{}-\frac{1}{6 \epsilon^{5}}+\frac{43}{162\epsilon^{4}}\] \[+\frac{\frac{895}{486}-\frac{59\zeta_{2}}{108}}{\epsilon^{3}}+ \frac{-\frac{31\zeta_{2}}{324}+\frac{239\zeta_{3}}{54}+\frac{2603}{486}}{ \epsilon^{2}}+\frac{3265\zeta_{2}}{972}-\frac{4945\zeta_{3}}{162}+\frac{2437 \zeta_{4}}{288}+\frac{24169}{2187}+\frac{271\zeta_{3}\zeta_{2}}{36}\] \[-\frac{3925\zeta_{2}}{2916}-\frac{2513\zeta_{3}}{18}-\frac{33109 \zeta_{4}}{288}+\frac{7799\zeta_{5}}{90}+\frac{397699}{26244}+\epsilon\Big{[}- \frac{4969\zeta_{3}^{2}}{108}-\frac{1595\zeta_{2}\zeta_{3}}{36}\] \[-\frac{720299\zeta_{3}}{1458}-\frac{2288895\zeta_{2}}{4374}-\frac{ 1168171\zeta_{4}}{2592}-\frac{187753\zeta_{5}}{270}+\frac{2476865\zeta_{6}}{6912} -\frac{22273}{5832}\Big{]}\] \[+\epsilon^{2}\Big{[}\frac{404075\zeta_{3}^{2}}{324}-\frac{78295 
\zeta_{2}\zeta_{3}}{324}-\frac{121555\zeta_{4}\zeta_{3}}{288}-\frac{3316207\zeta_ {3}}{2187}-\frac{17477627\zeta_{2}}{52488}\] \[-\frac{15232813\zeta_{4}}{7776}+\frac{7063\zeta_{2}\zeta_{5}}{60} -\frac{52115\zeta_{5}}{18}-\frac{76597939\zeta_{6}}{20736}+\frac{13871\zeta_{7 }}{7}-\frac{125652667}{944784}\Big{]}\bigg{\}}\] \[+C_{F}N_{f}\bigg{\{}\frac{1}{9\epsilon^{3}}+\frac{55}{54}-\frac{8 \zeta_{3}}{9}+\frac{\zeta_{2}}{6}-\frac{76\zeta_{3}}{27}-\frac{4\zeta_{4}}{3} +\frac{1819}{324}-\frac{4}{3}\zeta_{3}\zeta_{2}+\frac{67\zeta_{2}}{36}-\frac {1385\zeta_{3}}{81}\] \[-\frac{38\zeta_{4}}{9}-\frac{56\zeta_{5}}{9}+\frac{45967}{1944} +\epsilon\Big{[}\frac{544\zeta_{3}^{2}}{9}-\frac{38\zeta_{2}\zeta_{3}}{9}- \frac{50495\zeta_{3}}{486}+\frac{3547\zeta_{2}}{216}-\frac{16237\zeta_{4}}{432}\] \[-\frac{532\zeta_{5}}{27}-\frac{101\zeta_{6}}{6}+\frac{1007179}{11 664}\Big{]}+\epsilon^{2}\Big{[}\frac{5168\zeta_{3}^{2}}{27}-\frac{809\zeta_{2 }\zeta_{3}}{54}+\frac{599\zeta_{4}\zeta_{3}}{2}-\frac{1661303\zeta_{3}}{2916}\] \[+\frac{9931\zeta_{2}}{1296}-\frac{635899\zeta_{4}}{2592}-\frac{2 8\zeta_{2}\zeta_{5}}{3}-\frac{70417\zeta_{5}}{405}-\frac{1919\zeta_{6}}{36}- \frac{392\zeta_{7}}{9}+\frac{20357263}{69984}\Big{]}\bigg{\}}\] \[+N_{f}^{2}\bigg{\{}-\frac{4}{81\epsilon^{4}}+-\frac{40}{243 \epsilon^{3}}+\frac{-\frac{2\zeta_{2}}{2}-\frac{8}{27}}{\epsilon^{2}}+\frac{- \frac{20\zeta_{2}}{81}+\frac{260\zeta_{3}}{81}-\frac{704}{2187}}+\frac{44 \zeta_{2}}{27}+\frac{2600\zeta_{3}}{243}\] \[+\frac{1229\zeta_{4}}{108}+\frac{640}{6561}+\epsilon\Big{[}\frac {130\zeta_{3}\zeta_{2}}{27}+\frac{5984\zeta_{2}}{729}+\frac{296\zeta_{3}}{9}+ \frac{6145\zeta_{4}}{162}+\frac{10084\zeta_{5}}{135}+\frac{12160}{6561}\Big{]}\] \[+\epsilon^{2}\Big{[}-\frac{8450\zeta_{3}^{2}}{81}+\frac{1300 \zeta_{2}\zeta_{3}}{81}+\frac{168448\zeta_{3}}{2187}+\frac{67712\zeta_{2}}{21 87}+\frac{9355\zeta_{4}}{54}+\frac{20168\zeta_{5}}{81}\] \[+\frac{999593\zeta_{6}}{2592}+\frac{423296}{59049}\Big{]}\bigg{\}}\,, \tag{4.13}\] \[c_{12}^{(3)}= \frac{-32\zeta_{2}\zeta_{3}-16\zeta_{5}}{\epsilon}-192\zeta_{3} ^{2}+\frac{64\zeta_{3}}{3}-64\zeta_{2}\] \[+\frac{1760\zeta_{5}}{3}-940\zeta_{6}+\epsilon\Big{[}\frac{4928 \zeta_{3}^{2}}{3}-1112\zeta_{4}\zeta_{3}-\frac{1696\zeta_{3}}{9}-416\zeta_{2} +208\zeta_{4}-1496\zeta_{2}\zeta_{5}\] \[+\frac{10720\zeta_{5}}{9}+1760\zeta_{6}-4032\zeta_{7}\Big{]}+ \epsilon^{2}\Big{[}\frac{29376\zeta_{5,3}}{5}-480\zeta_{2}\zeta_{3}^{2}+ \frac{30016\zeta_{3}^{2}}{9}+608\zeta_{2}\zeta_{3}\] \[+4928\zeta_{4}\zeta_{3}+5488\zeta_{5}\zeta_{3}-\frac{42560\zeta_{3 }}{27}-\frac{6208\zeta_{2}}{3}-\frac{10048\zeta_{4}}{3}+880\zeta_{2}\zeta_{5}+ \frac{101216\zeta_{5}}{27}\] \[+\frac{10720\zeta_{6}}{3}+27280\zeta_{7}-\frac{2613298\zeta_{8}}{ 45}\Big{]}\,, \tag{4.14}\] \[d_{12}^{(3)}= 128\zeta_{2}-\frac{128\zeta_{3}}{3}-\frac{640\zeta_{5}}{3}\] \[+\epsilon\Big{[}-\frac{1792\zeta_{3}^{2}}{3}+\frac{3008\zeta_{3}} {9}+960\zeta_{2}-416\zeta_{4}-\frac{3200\zeta_{5}}{9}-640\zeta_{6}\Big{]}+ \epsilon^{2}\Big{[}-\frac{8960\zeta_{3}^{2}}{9}\] \[-1216\zeta_{2}\zeta_{3}-1792\zeta_{4}\zeta_{3}+\frac{94144\zeta_{ 3}}{27}+\frac{15296\zeta_{2}}{3}+\frac{18848\zeta_{4}}{3}-320\zeta_{2}\zeta_{5}\] \[-\frac{91072\zeta_{5}}{27}-\frac{3200\zeta_{6}}{3}-9920\zeta_{7} \Big{]}\,, \tag{4.15}\] where several regular zeta values up to transcendental-weight 8 and one multiple zeta value are involved, \[\zeta_{5,3}=\sum_{m=1}^{\infty}\sum_{n=1}^{m-1}\frac{1}{m^{5}n^{3}} \simeq 0.0377076729848475\,. \tag{4.16}\] The higher-order corrections of the Eikonal functions in eq. 
(7) can be readily expressed in terms of the above results: in the cases of one and two loops, \[r_{12}^{(l)}=C_{A}b_{12}^{(l)}\,\text{ for }l=1,\,2\,, \tag{119}\] and in the three-loop case, \[r_{12}^{(3)}=C_{A}\,b_{12}^{(3)}+\frac{d_{R}^{abcd}d_{A}^{abcd}}{N_{R}C_{R}}\,c_{12}^{(3)}+\frac{d_{R}^{abcd}d_{F}^{abcd}N_{f}}{N_{R}C_{R}}\,d_{12}^{(3)}\,, \tag{120}\] where, in the gauge group SU(\(N_{c}\)), the quartic color structures evaluate to the following explicit expressions, \[\frac{d_{F}^{abcd}d_{F}^{abcd}}{N_{F}C_{F}} =\frac{N_{c}^{4}-6N_{c}^{2}+18}{48N_{c}^{2}},\quad\frac{d_{F}^{abcd}d_{A}^{abcd}}{N_{F}C_{F}}=\frac{N_{c}^{3}+6N_{c}}{24},\] \[\frac{d_{A}^{abcd}d_{F}^{abcd}}{N_{A}C_{A}} =\frac{N_{c}^{2}+6}{48},\quad\frac{d_{A}^{abcd}d_{A}^{abcd}}{N_{A}C_{A}}=\frac{N_{c}^{3}+36N_{c}}{24}\,. \tag{121}\] We emphasize that our results are for unrenormalized quantities. To perform the ultraviolet (UV) renormalization, we just need to renormalize the strong coupling constant, i.e., \[a_{s}\to Z_{a_{s}}a_{s}\,, \tag{122}\] with \(Z_{a_{s}}=1-\frac{\beta_{0}}{\epsilon}a_{s}+\left(\frac{\beta_{0}^{2}}{\epsilon^{2}}-\frac{\beta_{1}}{2\epsilon}\right)a_{s}^{2}+\left(-\frac{\beta_{0}^{3}}{\epsilon^{3}}+\frac{7\beta_{1}\beta_{0}}{6\epsilon^{2}}-\frac{\beta_{2}}{3\epsilon}\right)a_{s}^{3}+\mathcal{O}(a_{s}^{4})\), where the \(\beta_{i}\) are the QCD beta-function coefficients, which are collected in the appendix. After UV renormalization, the remaining poles stem from IR singularities. We checked that the IR poles in our explicit results agree with those predicted in section 4.1. ### Soft theorem in \(\mathcal{N}=4\) sYM and BDS ansatz at three loops The soft theorem in \(\mathcal{N}=4\) sYM can be easily extracted from the QCD results assuming the principle of maximal transcendentality [49; 82]. At the one-loop order, the Eikonal functions for the soft theorem in \(\mathcal{N}=4\) sYM and in QCD are identical, i.e., \[S_{12,\,\mathcal{N}=4}^{(1)}(q)=S_{12}^{(1)}(q)=S_{12}^{0}(q)S_{\epsilon}C_{A}\,b_{12}^{(1)}\,. \tag{123}\] At two-loop order, by keeping only the maximal-transcendentality part of eq. (4.12), the \(\mathcal{N}=4\) sYM result up to \(\epsilon^{4}\) reads \[S_{12,\,\mathcal{N}=4}^{(2)}(q)= S_{12}^{0}(q)S_{\epsilon}^{2}C_{A}^{2}\bigg{\{}\frac{1}{2\epsilon^{4}}+\frac{\zeta_{2}}{\epsilon^{2}}-\frac{11\zeta_{3}}{6\epsilon}+\frac{7\zeta_{4}}{8}+\epsilon\left(-\frac{7}{6}\zeta_{2}\zeta_{3}-\frac{247\zeta_{5}}{10}\right)\] \[+\epsilon^{2}\left(-\frac{205\zeta_{3}^{2}}{18}-\frac{3307\zeta_{6}}{48}\right)+\epsilon^{3}\left(-\frac{509}{24}\zeta_{3}\zeta_{4}-\frac{219\zeta_{2}\zeta_{5}}{10}-\frac{4573\zeta_{7}}{14}\right)\] \[+\epsilon^{4}\left(-40\zeta_{5,3}-\frac{845}{18}\zeta_{2}\zeta_{3}^{2}-\frac{29\zeta_{5}\zeta_{3}}{15}-\frac{1264777\zeta_{8}}{1152}\right)\bigg{\}}\,, \tag{124}\] where only the leading color contributes.
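As an elementary cross-check of the expansions collected in this section, the closed one-loop expression (4.11), which also fixes the common one-loop Eikonal function above, can be expanded with a few lines of sympy; the snippet below is only an illustrative sketch, not part of our calculational setup:

```python
import sympy as sp

eps = sp.symbols('epsilon')
g = sp.gamma

# One-loop coefficient of eq. (4.11):
# b_12^(1) = -exp(gamma_E*eps) * Gamma(1-eps)^3 * Gamma(1+eps)^2 / (eps^2 * Gamma(1-2*eps))
b1 = -sp.exp(sp.EulerGamma * eps) * g(1 - eps)**3 * g(1 + eps)**2 \
     / (eps**2 * g(1 - 2 * eps))

print(sp.series(b1, eps, 0, 2))
# expected leading terms: -1/eps**2 - pi**2/12 + 7*zeta(3)/3 * eps + O(eps**2)
```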
Similarly, at the three-loop order, we have \[S_{12,\,\mathcal{N}=4}^{(3)}(q)= S_{12}^{0}(q)S_{\epsilon}^{3}\bigg{[}C_{A}^{3}\bigg{\{}-\frac{1}{6\epsilon^{6}}-\frac{3\zeta_{2}}{4\epsilon^{4}}+\frac{2\zeta_{3}}{3\epsilon^{3}}-\frac{1487\zeta_{4}}{288\epsilon^{2}}+\frac{\frac{284\zeta_{5}}{15}-\frac{13\zeta_{2}\zeta_{3}}{18}}{\epsilon}+\frac{5\zeta_{3}^{2}}{36}\] \[+\frac{174959\zeta_{6}}{6912}+\epsilon\left(-\frac{331}{9}\zeta_{3}\zeta_{4}+\frac{4163\zeta_{2}\zeta_{5}}{90}+\frac{109295\zeta_{7}}{252}\right)\] \[+\epsilon^{2}\left(-\frac{3992\zeta_{5,3}}{45}+\frac{8605}{72}\zeta_{2}\zeta_{3}^{2}-\frac{3047\zeta_{5}\zeta_{3}}{30}+\frac{1731021983\zeta_{8}}{829440}\right)\bigg{\}}\] \[+\frac{3}{2}C_{A}\bigg{\{}\frac{-32\zeta_{2}\zeta_{3}-16\zeta_{5}}{\epsilon}-192\zeta_{3}^{2}-940\zeta_{6}+\epsilon\big{(}-1112\zeta_{3}\zeta_{4}-1496\zeta_{2}\zeta_{5}\] \[-\,4032\zeta_{7}\big{)}+\epsilon^{2}\left(\frac{29376\zeta_{5,3}}{5}-480\zeta_{2}\zeta_{3}^{2}+5488\zeta_{5}\zeta_{3}-\frac{2613298\zeta_{8}}{45}\right)\bigg{\}}\bigg{]}, \tag{4.23}\] where the three-loop \(\mathcal{N}=4\) result receives contributions from two color structures, \(C_{A}^{3}\) and \(d_{R}^{abcd}d_{A}^{abcd}\) with \(R=A\) in (4.18). The subleading-color part of the above equation comes solely from the quartic invariant tensor \(d_{A}^{abcd}d_{A}^{abcd}\). Interestingly, the soft limit of the three-loop splitting amplitude in planar \(\mathcal{N}=4\) sYM can be reorganized into the following form [31], which is the soft limit of the well-known BDS ansatz [47], \[r_{S}^{(3)}(\epsilon)=-\frac{1}{3}\left(r_{S}^{(1)}(\epsilon)\right)^{3}+r_{S}^{(1)}(\epsilon)r_{S}^{(2)}(\epsilon)+f^{(3)}(\epsilon)r_{S}^{(1)}(3\epsilon)+\mathcal{O}(\epsilon)\,, \tag{4.24}\] where the \(r_{S}^{(i)}\) are related to the Eikonal functions in the following way, \[S_{12,\mathcal{N}=4}^{(1)}(q) =2S_{12}^{0}(q)S_{\epsilon}C_{A}\,r_{S}^{(1)}(\epsilon)\,,\quad S_{12,\mathcal{N}=4}^{(2)}(q)=4S_{12}^{0}(q)S_{\epsilon}^{2}C_{A}^{2}\,r_{S}^{(2)}(\epsilon)\,,\] \[S_{12,\mathcal{N}=4}^{(3)}(q) =8S_{12}^{0}(q)S_{\epsilon}^{3}C_{A}^{3}r_{S}^{(3)}(\epsilon)+\text{sub-leading color contribution}\,, \tag{4.25}\] and \(f^{(3)}(\epsilon)\) has been calculated to order \(\epsilon^{2}\) [48], \[f^{(3)}(\epsilon)=\frac{11\zeta_{4}}{2}+(5\zeta_{2}\zeta_{3}+6\zeta_{5})\epsilon+f_{2}^{(3)}\epsilon^{2}+\mathcal{O}(\epsilon^{3})\,, \tag{4.26}\] where \(f_{2}^{(3)}\) was previously known only numerically, \(f_{2}^{(3)}=85.263\pm 0.004\). By comparing the prediction of the BDS ansatz with our explicit result in (4.23), we determine \(f_{2}^{(3)}\) analytically to be \[f_{2}^{(3)}=31\zeta_{3}^{2}+\frac{1909\zeta_{6}}{48}\simeq 85.25374611\,, \tag{4.27}\] which agrees well with the numerical calculation of \(f_{2}^{(3)}\) in [48]. ## 5 Conclusion A high-energy scattering amplitude in QCD with an additional radiated gluon admits a universal factorization formula in the soft-gluon limit, in terms of a soft factor and the amplitude with the soft gluon removed; this is commonly known as the soft theorem. In this paper we present a calculation of the soft factor through three loops in the expansion of the strong coupling constant. Our calculation is restricted to processes with only two hard partons, which is relevant for several important processes in collider physics, such as Drell-Yan production, \(e^{+}e^{-}\) annihilation to dijets, and 1+1 jet production in DIS.
We present analytic results for the soft factor up to \(\mathcal{O}(\epsilon^{2})\) in the dimensional regularization parameter, which are needed for infrared subtraction and soft-function calculations at N\({}^{4}\)LO in perturbation theory. The calculation was done by expressing the soft factor as a single-gluon matrix element of the soft Wilson line operator in SCET, and constructing the corresponding integrand using modern Feynman integral techniques. The three-loop soft factor can be reduced to the calculation of 49 single-scale soft master integrals. We developed a systematic iterative approach based on the Feynman parameter representation, differential equations in a Feynman parameter, and IBP reduction in the Feynman parameter representation to tackle these master integrals. We expect that our approach can also be applied to other single-scale master integrals, for both loop and phase-space integrals. We verify that the infrared poles of the three-loop soft factor agree with the general infrared factorization formula of QCD, providing a strong check of our calculation. As an application of our results, we obtain the soft factor in \(\mathcal{N}=4\) sYM by assuming the principle of maximal transcendentality, which states that for certain quantities in QCD and \(\mathcal{N}=4\) sYM, the leading transcendental part is the same in perturbation theory. The three-loop soft factor in the leading-color approximation in \(\mathcal{N}=4\) sYM agrees with previously known results obtained from the BDS ansatz. In addition, we analytically determine the three-loop constant \(f_{2}^{(3)}=31\zeta_{3}^{2}+1909\zeta_{6}/48\) in the BDS ansatz, which was previously known only numerically. Our new results give the full color dependence of the three-loop soft factor, which can be used to check the three-loop form factor for \(1\to 3\) decay once the relevant master integrals are known. Towards the full-color three-loop \(1\to 3\) form factor in QCD, we note that the corresponding leading-color result became available quite recently [95]. ## Acknowledgments W.C. and H.X.Z. were supported by the National Natural Science Foundation of China under contract No. 11975200. M.X.L. was supported by the National Natural Science Foundation of China under contract No. U2230402. T.Z.Y. would like to acknowledge the European Research Council (ERC) for funding this work under the European Union's Horizon 2020 research and innovation programme, grant agreement 101019620 (ERC Advanced Grant TOPUP). H.X.Z. would also like to express gratitude to the Erwin-Schrodinger Institute for Mathematical Physics for their hospitality during the program "Quantum Field Theory at the Frontiers of the Strong Interaction", where part of this work was completed.
## Appendix A Integral families We define the following integrals, \[J(;\nu_{1},\nu_{2},\cdots,\nu_{15})=(\mu^{2}e^{\gamma_{E}})^{3\epsilon}\int \frac{d^{d}l_{1}d^{d}l_{2}d^{d}l_{3}}{i^{3}\pi^{3d/2}}\frac{1}{D_{1}^{\nu_{1} }D_{2}^{\nu_{2}}\cdots D_{15}^{\nu_{15}}}, \tag{10}\] with the denominator sets taken from the following six integral families, \begin{tabular}{|c|c|c|c|c|c|c|} \hline & 1 & 2 & 4 & 7 & 15 & 16 \\ \hline \(D_{1}\) & \(n_{1}\cdot l_{1}\) & \(n_{1}\cdot l_{3}\) & \(n_{1}\cdot l_{3}\) & \(n_{1}\cdot l_{1}\) & \(n_{1}\cdot l_{2}\) & \(n_{1}\cdot l_{2}\) \\ \(D_{2}\) & \(n_{1}\cdot l_{3}\) & \(n_{1}\cdot(l_{3}-l_{2})\) & \(n_{1}\cdot(l_{3}-l_{2})\) & \(n_{1}\cdot(l_{1}-l_{3})\) & \(n_{1}\cdot(q-l_{1})\) & \(n_{1}\cdot(l_{3}-q)\) \\ \(D_{3}\) & \(n_{1}\cdot l_{2}\) & \(n_{1}\cdot(l_{3}-l_{1})\) & \(n_{1}\cdot(l_{3}-l_{1})\) & \(n_{1}\cdot(l_{2}-l_{3})\) & \(n_{1}\cdot(l_{3}-l_{1})\) & \(n_{1}\cdot(l_{3}-l_{1})\) \\ \(D_{4}\) & \(n_{2}\cdot(q-l_{1})\) & \(n_{2}\cdot(q-l_{3})\) & \(n_{2}\cdot(l_{2}-l_{3})\) & \(n_{2}\cdot(q-l_{1})\) & \(n_{2}\cdot(q-l_{2})\) & \(n_{2}\cdot(q-l_{2})\) \\ \(D_{5}\) & \(n_{2}\cdot(q-l_{3})\) & \(n_{2}\cdot(q-l_{2})\) & \(n_{2}\cdot(l_{1}-l_{3})\) & \(n_{2}\cdot(q-l_{3})\) & \(n_{2}\cdot l_{1}\) & \(n_{2}\cdot l_{1}\) \\ \(D_{6}\) & \(n_{2}\cdot(q-l_{2})\) & \(n_{2}\cdot(q-l_{1})\) & \(n_{2}\cdot(q-l_{3})\) & \(n_{2}\cdot(q-l_{2})\) & \(n_{2}\cdot(l_{3}-l_{2})\) & \(n_{2}\cdot(l_{3}-l_{2})\) \\ \(D_{7}\) & \(l_{1}^{2}\) & \(l_{1}^{2}\) & \(l_{1}^{2}\) & \(l_{1}^{2}\) & \(l_{1}^{2}\) & \(l_{1}^{2}\) \\ \(D_{8}\) & \((l_{1}-q)\,^{2}\) & \((l_{1}-q)\,^{2}\) & \((l_{1}-q)\,^{2}\) & \((l_{1}-q)\,^{2}\) & \((l_{1}-q)\,^{2}\) & \((l_{1}-q)\,^{2}\) & \((l_{1}-q)\,^{2}\) \\ \(D_{9}\) & \(l_{2}^{2}\) & \(l_{2}^{2}\) & \(l_{2}^{2}\) & \(l_{2}^{2}\) & \(l_{2}^{2}\) & \(l_{2}^{2}\) \\ \(D_{10}\) & \((l_{2}-q)\,^{2}\) & \((l_{2}-q)\,^{2}\) & \((l_{2}-q)\,^{2}\) & \((l_{2}-q)\,^{2}\) & \((l_{2}-q)\,^{2}\) & \((l_{2}-q)\,^{2}\) & \((l_{2}-q)\,^{2}\) \\ \(D_{11}\) & \(l_{3}^{2}\) & \(l_{3}^{2}\) & \(l_{3}^{2}\) & \(l_{3}^{2}\) & \(l_{3}^{2}\) & \(l_{3}^{2}\) \\ \(D_{12}\) & \((l_{3}-q)\,^{2}\) & \((l_{3}-q)\,^{2}\) & \((l_{3}-q)\,^{2}\) & \((l_{3}-q)\,^{2}\) & \((l_{3}-q)\,^{2}\) & \((l_{3}-q)\,^{2}\) \\ \(D_{13}\) & \((l_{1}-l_{2})\,^{2}\) & \((l_{1}-l_{2})\,^{2}\) & \((l_{1}-l_{2})\,^{2}\) & \((l_{1}-l_{2})\,^{2}\) & \((l_{1}-l_{3})\,^{2}\) & \((l_{1}-l_{3})\,^{2}\) \\ \(D_{14}\) & \((l_{1}-l_{3})\,^{2}\) & \((l_{1}-l_{3})\,^{2}\) & \((l_{1}-l_{3})\,^{2}\) & \((l_{1}-l_{3})\,^{2}\) & \((l_{2}-l_{3})\,^{2}\) & \((l_{2}-l_{3})\,^{2}\) \\ \(D_{15}\) & \((l_{2}-l_{3})\,^{2}\) & \((l_{2}-l_{3})\,^{2}\) & \((l_{2}-l_{3})\,^{2}\) & \((l_{2}-l_{3})\,^{2}\) & \((l_{2}-l_{3})\,^{2}\) & \((l_{1}+l_{2}-l_{3})\,^{2}\) \\ \hline \end{tabular} where all propagators have Feynman prescription \(+i0^{+}\), for example \(n_{1}\cdot l_{1}+i0^{+}\). ## Appendix B QCD beta function and anomalous dimensions To predict the infrared singularities as shown in sec. 4.1, we need the relevant anomalous dimensions and QCD beta function, which will be listed below for readers' convenience. 
The QCD beta function is defined as \[\frac{d\alpha_{s}}{d\ln\mu}=\beta(\alpha_{s})=-2\alpha_{s}\sum_{n=0}^{\infty} \left(\frac{\alpha_{s}}{4\pi}\right)^{n+1}\beta_{n}\,, \tag{12}\] and here we need to three-loop order [96] \[\beta_{0} =\frac{11}{3}C_{A}-\frac{4}{3}T_{F}N_{f}\,,\] \[\beta_{1} =\frac{34}{3}C_{A}^{2}-\frac{20}{3}C_{A}T_{F}N_{f}-4C_{F}T_{F}N_{ f}\,,\] \[\beta_{2} =\left(\frac{158C_{A}}{27}+\frac{44C_{F}}{9}\right)N_{f}^{2}T_{F}^ {2}+\left(-\frac{205C_{A}C_{F}}{9}-\frac{1415C_{A}^{2}}{27}+2C_{F}^{2}\right)N_{ f}T_{F}+\frac{2857C_{A}^{3}}{54}\,. \tag{13}\] We perform a perturbative expansion for the anomalous dimension \(\gamma\) as follows, \[\gamma=\sum_{i=1}^{\infty}a_{s}^{i}\gamma_{i-1}\,, \tag{14}\] where \(a_{s}\) is defined in eq. (8). The cusp anomalous dimension to three-loop order was first extracted from the three-loop non-singlet splitting functions [97], and is given as \[\gamma_{0}^{\rm cusp}= 4\,,\] \[\gamma_{1}^{\rm cusp}= \left(\frac{268}{9}-8\zeta_{2}\right)C_{A}-\frac{80T_{F}N_{f}}{9}\,,\] \[\gamma_{2}^{\rm cusp}= \bigg{[}\left(\frac{320\zeta_{2}}{9}-\frac{224\zeta_{3}}{3}-\frac {1672}{27}\right)C_{A}+\left(64\zeta_{3}-\frac{220}{3}\right)C_{F}\bigg{]}N_{f }T_{F}\] \[+ \left(-\frac{1072\zeta_{2}}{9}+\frac{88\zeta_{3}}{3}+88\zeta_{4}+ \frac{490}{3}\right)C_{A}^{2}-\frac{64}{27}N_{f}^{2}T_{F}^{2}\,. \tag{100}\] Finally, the quark and gluon anomalous dimensions of the three-loop order can be extracted from the three-loop quark and gluon form factors [98; 99], \[\gamma_{0}^{q}= -3C_{F}\,,\] \[\gamma_{1}^{q}= C_{F}\left[C_{F}\left(-\frac{3}{2}+12\zeta_{2}-24\zeta_{3} \right)+C_{A}\left(-\frac{961}{54}-11\zeta_{2}+26\zeta_{3}\right)+T_{F}N_{f} \left(\frac{130}{27}+4\zeta_{2}\right)\right]\,,\] \[\gamma_{2}^{q}= N_{f}T_{F}\bigg{[}\left(\frac{5188\zeta_{2}}{81}-\frac{1928 \zeta_{3}}{27}+44\zeta_{4}-\frac{17318}{729}\right)C_{A}C_{F}\] \[+\left(-\frac{52\zeta_{2}}{3}+\frac{512\zeta_{3}}{9}-\frac{280 \zeta_{4}}{3}+\frac{2953}{27}\right)C_{F}^{2}\bigg{]}+\left(-\frac{80\zeta_{2 }}{9}-\frac{32\zeta_{3}}{27}+\frac{9668}{729}\right)C_{F}N_{f}^{2}T_{F}^{2}\] \[+\left(-16\zeta_{3}\zeta_{2}+\frac{410\zeta_{2}}{3}-\frac{844 \zeta_{3}}{3}+\frac{494\zeta_{4}}{3}-120\zeta_{5}-\frac{151}{4}\right)C_{A}C_ {F}^{2}\] \[+\left(-\frac{88}{3}\zeta_{3}\zeta_{2}-\frac{7163\zeta_{2}}{81}+ \frac{3526\zeta_{3}}{9}-83\zeta_{4}-136\zeta_{5}-\frac{139345}{2916}\right)C_ {A}^{2}C_{F}\] \[+\left(32\zeta_{3}\zeta_{2}-18\zeta_{2}-68\zeta_{3}-144\zeta_{4}+ 240\zeta_{5}-\frac{29}{2}\right)C_{F}^{3}\,,\] \[\gamma_{0}^{g}= -\beta_{0}=-\frac{11}{3}\,C_{A}+\frac{4}{3}\,T_{F}n_{f}\,,\] \[\gamma_{1}^{g}= C_{A}^{2}\left(-\frac{692}{27}+\frac{11\zeta_{2}}{3}+2\zeta_{3} \right)+C_{A}T_{F}n_{f}\left(\frac{256}{27}-\frac{4\zeta_{2}}{3}\right)+4C_{F} T_{F}n_{f}\,,\] \[\gamma_{2}^{g}= C_{A}^{3}\left(-\frac{97186}{729}+\frac{6109\zeta_{2}}{81}- \frac{319\zeta_{4}}{3}+\frac{122}{3}\,\zeta_{3}-\frac{40}{3}\zeta_{2}\,\zeta _{3}-16\zeta_{5}\right)\] \[+C_{A}^{2}T_{F}n_{f}\left(\frac{30715}{729}-\frac{2396\zeta_{2}}{ 81}+\frac{164\zeta_{4}}{3}+\frac{712}{27}\,\zeta_{3}\right)\] \[+C_{A}C_{F}T_{F}n_{f}\left(\frac{2434}{27}-4\zeta_{2}-\frac{144 \zeta_{4}}{5}-\frac{304}{9}\,\zeta_{3}\right)-2C_{F}^{2}T_{F}n_{f}\] \[+C_{A}T_{F}^{2}n_{f}^{2}\left(-\frac{538}{729}+\frac{80\zeta_{2}}{ 27}-\frac{224}{27}\,\zeta_{3}\right)-\frac{44}{9}\,C_{F}T_{F}^{2}n_{f}^{2}\,, \tag{101}\] where the explicit results can also be found in [100]. 
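For reference, evaluating the beta-function coefficients above for SU(3) with \(T_{F}=1/2\) and \(N_{f}=5\) active flavours reproduces the familiar numbers; a short sketch:

```python
from fractions import Fraction as F

CA, CF, TF, Nf = F(3), F(4, 3), F(1, 2), F(5)

beta0 = F(11, 3)*CA - F(4, 3)*TF*Nf
beta1 = F(34, 3)*CA**2 - F(20, 3)*CA*TF*Nf - 4*CF*TF*Nf
beta2 = (F(158, 27)*CA + F(44, 9)*CF)*Nf**2*TF**2 \
      + (-F(205, 9)*CA*CF - F(1415, 27)*CA**2 + 2*CF**2)*Nf*TF \
      + F(2857, 54)*CA**3

print(beta0, beta1, beta2)   # 23/3  116/3  9769/54
```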
## Appendix C Instructions for the ancillary files We present our results for the master integrals up to transcendental weight 8, and for the soft theorem to \(\epsilon^{2}\), in the ancillary files. MIsolutions.m contains the solutions of all 49 three-loop master integrals from the six integral families defined in appendix A. softTheorem.m contains the results shown in eq. (4.11) to eq. (4.15).
2306.00200
Zero-shot Pose Transfer for Unrigged Stylized 3D Characters
Transferring the pose of a reference avatar to stylized 3D characters of various shapes is a fundamental task in computer graphics. Existing methods either require the stylized characters to be rigged, or they use the stylized character in the desired pose as ground truth at training. We present a zero-shot approach that requires only the widely available deformed non-stylized avatars in training, and deforms stylized characters of significantly different shapes at inference. Classical methods achieve strong generalization by deforming the mesh at the triangle level, but this requires labelled correspondences. We leverage the power of local deformation, but without requiring explicit correspondence labels. We introduce a semi-supervised shape-understanding module to bypass the need for explicit correspondences at test time, and an implicit pose deformation module that deforms individual surface points to match the target pose. Furthermore, to encourage realistic and accurate deformation of stylized characters, we introduce an efficient volume-based test-time training procedure. Because it does not need rigging, nor the deformed stylized character at training time, our model generalizes to categories with scarce annotation, such as stylized quadrupeds. Extensive experiments demonstrate the effectiveness of the proposed method compared to the state-of-the-art approaches trained with comparable or more supervision. Our project page is available at https://jiashunwang.github.io/ZPT
Jiashun Wang, Xueting Li, Sifei Liu, Shalini De Mello, Orazio Gallo, Xiaolong Wang, Jan Kautz
2023-05-31T21:39:02Z
http://arxiv.org/abs/2306.00200v1
# Zero-shot Pose Transfer for Unrigged Stylized 3D Characters ###### Abstract Transferring the pose of a reference avatar to stylized 3D characters of various shapes is a fundamental task in computer graphics. Existing methods either require the stylized characters to be rigged, or they use the stylized character in the desired pose as ground truth at training. We present a zero-shot approach that requires only the widely available deformed non-stylized avatars in training, and deforms stylized characters of significantly different shapes at inference. Classical methods achieve strong generalization by deforming the mesh at the triangle level, but this requires labelled correspondences. We leverage the power of local deformation, but without requiring explicit correspondence labels. We introduce a semi-supervised shape-understanding module to bypass the need for explicit correspondences at test time, and an implicit pose deformation module that deforms individual surface points to match the target pose. Furthermore, to encourage realistic and accurate deformation of stylized characters, we introduce an efficient volume-based test-time training procedure. Because it does not need rigging, nor the deformed stylized character at training time, our model generalizes to categories with scarce annotation, such as stylized quadrupeds. Extensive experiments demonstrate the effectiveness of the proposed method compared to the state-of-the-art approaches trained with comparable or more supervision. Our project page is available at [https://jiashunwang.github.io/2PT/](https://jiashunwang.github.io/2PT/) ## 1 Introduction Stylized 3D characters, such as those in Fig. 1, are commonly used in animation, movies, and video games. Deforming these characters to mimic natural human or animal poses has been a long-standing task in computer graphics. Different from the 3D models of natural humans and animals, stylized 3D characters are created by professional artists through imagination and exaggeration. As a result, each stylized character has a distinct skeleton, shape, mesh topology, and usually include various accessories, such as a cloak or wings (see Fig. 1). These variations hinder the process of matching the pose of a stylized 3D character to that of a reference avatar, generally making manual rigging a requirement. Unfortunately, rigging is a tedious process that requires manual effort to create the skeleton and skinning weights for each character. Even when provided with manually annotated rigs, transferring poses from a source avatar onto stylized characters is not trivial when the source and target skeletons differ. Automating this procedure is still an open research problem and is the focus of many recent works [2, 4, 53, 2]. Meanwhile, non-stylized 3D humans and animals have been well-studied by numerous prior works [53, 56, 63, 70, 41]. A few methods generously provide readily available annotated datasets [11, 12, 42, 70], or carefully designed parametric models [70, 41, 52]. By taking advantage of these datasets [12, 42], several learning-based methods [69, 14, 7, 63] disentangle and transfer poses between human meshes using neural networks. However, these methods (referred to as "part-level" in the following) carry out pose transfer by either globally deforming the whole body mesh [14, 48, 49, 22] or by transforming body parts [35, 49], both of which lead to overfitting on the training human meshes and fail to generalize to stylized characters with significantly different body part shapes. 
Interestingly, classical mesh deformation methods [56, 57] (referred to as "local" in the following) can transfer poses between a pair of meshes with significant shape differences by computing and transferring per-triangle transformations through correspondence. Though these methods require manual correspondence annotation between the source and target meshes, they provide a key insight that by transforming individual triangles instead of body parts, the mesh deformation methods are more agnostic to a part's shape and can generalize to meshes with different shapes. We marry the benefits of learning-based methods [7, 14, 63, 69, 35] with the classic local deformation approach [56] and present a model for unrigged, stylized character deformation guided by a non-stylized biped or quadruped avatar. Notably, our model only requires easily accessible posed human or animal meshes for training and can be directly applied to deform 3D stylized characters with a significantly different shape at inference. To this end, we implicitly operationalize the key insight from the local deformation method [56] by modeling the shape and pose of a 3D character with a correspondence-aware shape understanding module and an implicit pose deformation module. The shape understanding module learns to predict the part segmentation label (_i.e_., the coarse-level correspondence) for each surface point, besides representing the shape of a 3D character as a latent shape code. The pose deformation module is conditioned on the shape code and deforms individual surface point guided by a target pose code sampled from a prior pose latent space [51]. Furthermore, to encourage realistic deformation and generalize to rare poses, we propose a novel volume-based test-time training procedure that can be efficiently applied to unseen stylized characters. During inference, by mapping biped or quadruped poses from videos, in addition to meshes to the prior pose latent space using existing works [54, 52, 32], we can transfer poses from different modalities onto unrigged 3D stylized characters. Our main contributions are: * learning a model for stylized 3D character deformation with only posed human or animal meshes. * We develop a correspondence-aware shape understanding module, an implicit pose deformation module, and a volume-based test-time training procedure to generalize the proposed model to unseen stylized characters and arbitrary poses in a zero-shot manner. * We carry out extensive experiments on both humans and quadrupeds to show that our method produces more visually pleasing and accurate deformations compared to baselines trained with comparable or more supervision. ## 2 Related Work **Deformation Transfer.** Deformation transfer is a long-standing problem in the computer graphics community [3, 6, 8, 9, 56, 66]. Sumner _et al_. [56] apply an affine transformation to each triangle of the mesh to solve an optimization problem that matches the deformation of the source mesh while maintaining the shape of the target mesh. Ben-Chen _et al_. [9] enclose the source and target shapes with two cages and transfer the Jacobians of the source deformation to the target shape. However, these methods need tedious human efforts to annotate the correspondence between the source and target shapes. More recently, several deep learning methods are developed to solve the deformation transfer task. 
However, they either require manually providing the correspondence [67] or cannot generalize [14, 69, 22] to stylized characters with different shapes. Gao _et al_. [22] propose a VAE-GAN based method to leverage the cycle consistency between the source and target shapes. Nonetheless, it can only work on shapes used in training. Wang _et al_. [63] introduce conditional normalization used in style transfer for 3D deformation transfer. But the method is limited to clothed-humans and cannot handle the large shape variations of stylized characters. We argue that these learning-based methods cannot generalize to stylized characters because they rely on encoding their global information (_e.g_., body or parts), which is different from traditional works that focus on local deformation, _e.g_., the affine transformation applied to each triangle in [56]. Using a neural network to encode the global information easily leads to overfitting. For example, models trained on human meshes cannot generalize to a stylized humanoid character. At the same time, early works only focus on local information and cannot model global information such as correspondence between the source and target shapes, which is why they all need human effort to annotate the correspondence. Our method tries to learn the correspondence and deform locally at the same time. **Skeleton-based Pose Transfer.** Besides mesh deformation transfer, an alternative way to transfer pose is to utilize skeletons. Motion retargeting is also a common name used for transferring poses from one motion sequence to another. Gleicher [24] propose a space-time constrained solver aiming to satisfy the kinematics-level constraints and to preserve the characters' original identity. Following works [5, 33, 19] try to solve inverse-kinematics or inverse rate control to achieve pose transfer. There are also dynamics-based methods [60, 4] that consider physics during the retargeting process. Recently, learning-based methods [20, 27, 61, 38, 62] train deep neural networks to predict the transformation of the skeleton. Aberman [2] propose a pooling-based method to transfer poses between meshes with different skeletons. All these works highly rely on the skeleton for pose transfer. Other works try to estimate the rigging of the template shape [7, 40, 53, 64, 65] when a skeleton is not available. But if the prediction of the skinning weights fails, the retargeting fails as well. Liao [37] propose a model that learns to predict the skinning weights and pose transfer jointly using ground truth skinning weights and paired motion data as supervision, which limits the generalization of this method to categories where annotations are more scarce compared to humans (quadrupeds). Instead, our method uses posed human or animal meshes for training and deforms stylized characters of different shapes at inference. **Implicit 3D shape representation.** Implicit 3D shape representations have shown great success in reconstructing static shapes [13, 16, 23, 29, 43, 44, 50, 18, 28, 45] and deformable ones [45, 46, 47, 48, 49, 50, 28, 59]. DeepSDF [50] proposes to use an MLP to predict the signed distance field (SDF) value of a query point in 3D space, where a shape code is jointly optimized in an auto-decoding manner. Occupancy flow [46] generalizes the Occupancy Networks [43] to learn a temporally and spatially continuous vector field with a NeuralODE [15]. 
Inspired by parametric models, NPMs [48] disentangles and represents the shape and pose of dynamic humans by learning an implicit shape and pose function, respectively. Different from these implicit shape representation works that focus on reconstructing static or deformable meshes, we further exploit the inherent continuity and locality of implicit functions to deform stylized characters to match a target pose in a zero-shot manner. ## 3 Method We aim to transfer the pose of a biped or quadruped avatar to an unrigged, stylized 3D character. We tackle this problem by modeling the shape and pose of a 3D character using a correspondence-aware shape understanding module and an implicit pose deformation module, inspired by classical mesh deformation methods [56, 57]. The shape understanding module (Sec. 3.1, Fig. 2) predicts a latent shape code and a part segmentation label of a 3D character in rest pose, while the pose deformation module (Sec. 3.2, Fig. 3) deforms the character in the rest pose given the predicted shape code and a target pose code. Moreover, to produce natural deformations and generalize to rare poses unseen at training, we introduce an efficient volume-based test-time training procedure (Sec. 3.3) for unseen stylized characters. All three modules, trained only with posed, unclothed human meshes and unrigged stylized characters in rest pose, are directly applied to unseen stylized characters at inference. We explain our method for humans, and describe how we extend it to quadrupeds in Sec. 4.6. ### Correspondence-Aware Shape Understanding Given a 3D character in rest pose, we propose a shape understanding module to represent its shape information as a latent code, and to predict a body part segmentation label for each surface point. To learn a representative shape code, we employ an implicit auto-decoder [48, 50] that reconstructs the 3D character taking the shape code as input. During training, we jointly optimize the shape code of each training sample and the decoder. Given an unseen character (_e.g_., a stylized 3D character) during inference, we obtain its shape code by freezing the decoder and optimizing the shape code to reconstruct the given character. Specifically, as shown in Fig. 2, given the concatenation of a query point \(x\in\mathbb{R}^{3}\) and the shape code \(s\in\mathbb{R}^{d}\), we first obtain an embedding \(e\in\mathbb{R}^{d}\) via an MLP denoted as \(\mathcal{F}\). Conditioned on the embedding \(e\), the occupancy \(\hat{o}_{x}\in\mathbb{R}\) of \(x\) is then predicted by another MLP denoted as \(\mathcal{O}\). The occupancy indicates whether the query point \(x\) is inside or outside the body surface and can be supervised by the ground truth occupancy as: \[\mathcal{L}_{\mathcal{O}}=-\sum_{x}(o_{x}\cdot\log(\hat{o}_{x})+(1-o_{x})\cdot\log(1-\hat{o}_{x})), \tag{1}\] where \(o_{x}\) is the ground truth occupancy at point \(x\).
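A minimal PyTorch sketch of the occupancy branch just described is given below; the layer sizes and module names are illustrative assumptions rather than the released implementation:

```python
import torch
import torch.nn as nn

class ShapeDecoder(nn.Module):
    """Sketch of Sec. 3.1: MLP F maps (x, s) to an embedding e,
    and MLP O maps e to an occupancy logit."""
    def __init__(self, d=256, hidden=512):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(3 + d, hidden), nn.ReLU(),
                               nn.Linear(hidden, d))
        self.O = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, x, s):
        # x: (N, 3) query points, s: (d,) per-character shape code
        e = self.F(torch.cat([x, s.expand(x.shape[0], -1)], dim=-1))
        return self.O(e).squeeze(-1), e   # occupancy logit and embedding

def occupancy_loss(logit, o_gt):
    # Eq. (1): binary cross-entropy against the ground-truth occupancy
    return nn.functional.binary_cross_entropy_with_logits(logit, o_gt)
```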
Since our shape code eventually serves as a condition for the pose deformation module, we argue that it should also capture part correspondence knowledge across different instances, in addition to the shape information (_e.g_., height, weight, and shape of each body part). This insight has been utilized by the early local mesh deformation method [56], which explicitly utilizes correspondence to transfer local transformations between the source and target meshes. Our pose deformation process could also benefit from learning part correspondence. Take the various headgear, hats, and horns on the stylized characters' heads in Fig. 1 as an example. If these components can be "understood" as extensions of the characters' heads by their shape codes, they will move smoothly with the characters' heads during pose deformation. Thus, besides mesh reconstruction, we effectively task our shape understanding module with an additional objective: predicting part-level correspondence, instantiated as the part segmentation label. Specifically, we propose to utilize an MLP \(\mathcal{P}\) to additionally predict a part label \(p_{x}=(p_{x}^{1},...,p_{x}^{K})^{T}\in\mathbb{R}^{K}\) for each surface point \(x\). Thanks to the densely annotated human mesh dataset, we can also supervise part segmentation learning with ground truth labels via: \[\mathcal{L}_{\mathcal{P}}=\sum_{x}(-\sum_{k=1}^{K}\mathbbm{1}_{x}^{k}\log(p_{x}^{k})), \tag{2}\] where \(K\) is the total number of body parts, and \(\mathbbm{1}_{x}^{k}=1\) if \(x\) belongs to the \(k^{th}\) part and \(\mathbbm{1}_{x}^{k}=0\) otherwise. To prepare the shape understanding module for stylized characters during inference, besides unclothed human meshes, we also include _unrigged_ 3D stylized characters in rest pose during training. These characters in rest pose are easily accessible and do not require any annotation. For shape reconstruction, Eq. 1 can be similarly applied to the stylized characters. However, as there is no part segmentation annotation for stylized characters, we propose a self-supervised inverse constraint inspired by correspondence learning methods [17, 39] to facilitate part segmentation prediction on these characters. Specifically, we reconstruct the query point's coordinates from the concatenation of the shape code \(s\) and the embedding \(e\) through an MLP \(\mathcal{Q}\) and add an auxiliary objective as: \[\mathcal{L}_{\mathcal{Q}}=||\mathcal{Q}(s,e)-x||^{2}. \tag{3}\] Intuitively, for stylized characters without part annotation, the model learned without this objective may converge to a trivial solution where similar embeddings are predicted for points with the same occupancy value, even when they are far away from each other and belong to different body parts. Tab. 4 further quantitatively verifies the effectiveness of this constraint. Beyond facilitating shape understanding, the predicted part segmentation label is further utilized in the volume-based test-time training module, which will be introduced in Sec. 3.3.
Figure 2: **The shape understanding module (Sec. 3.1).** Given a query point and a learnable shape code, we use MLPs to predict the occupancy and the part segmentation label, and further use an inverse MLP to regress the query point.
### Implicit Pose Deformation Module Given the learned shape code and a target pose, the pose deformation module deforms each surface point of the character to match the target pose. In the following, we first describe how we represent a human pose and then introduce the implicit function used for pose deformation. Instead of learning a latent pose space from scratch as in [48, 37], we propose to represent a human pose by the corresponding pose code in the latent space of VPoser [52]. Our intuition is that VPoser is trained with an abundance of posed humans from the large-scale AMASS dataset [42]. This facilitates faster training and provides robustness to overfitting. Furthermore, human poses can be successfully estimated from different modalities (_e.g_., videos or meshes), and mapped to the latent space of VPoser by existing methods [52, 32, 54]. By taking advantage of these works, our model can be applied to transfer poses from various modalities to an unrigged stylized character without any additional effort. A few examples can be found in the supplementary.
Figure 3: **The pose deformation module (Sec. 3.2).** Given a query point on the surface, the learned shape code and a target pose code, we use an MLP to predict the offset of the query point.
To deform a character to match the given pose, we learn a neural implicit function \(\mathcal{M}\) that takes the sampled pose code \(m\in\mathbb{R}^{32}\), the learned shape code, and a query point \(x\) around the character's surface as inputs and outputs the offset (denoted as \(\Delta\hat{x}\in\mathbb{R}^{3}\)) of \(x\) in 3D space. Given the densely annotated human mesh dataset, we directly use the ground truth offset \(\Delta x\) as supervision. The training objective for our pose deformation module is defined as: \[\mathcal{L}_{\mathcal{D}}=\sum_{x}||\Delta\hat{x}-\Delta x||^{2}. \tag{4}\] Essentially, our implicit pose deformation module is similar in spirit to early local mesh deformation methods [56] and has two key advantages compared to the part-level pose transfer methods [22, 37, 63]. First, our implicit pose deformation network is agnostic to mesh topology and resolution. Thus, our model can be directly applied at inference to unseen 3D stylized characters with significantly different resolutions and mesh topologies than the training human meshes. Second, stylized characters often include distinct body part shapes compared to humans. For example, the characters shown in Fig. 1 include big heads or various accessories. Previous part-level methods [37] that learn to predict a bone transformation and skinning weight for each body part usually fail on these unique body parts, since they are different from the corresponding human body parts used for training. In contrast, by learning to deform individual surface points, implicit functions are more agnostic to the overall shape of a body part and thus can generalize better to stylized characters with significantly different body part shapes. Fig. 4 and Fig. 6 show these advantages. ### Volume-based Test-time Training The shape understanding and pose deformation modules discussed above are trained with only posed human meshes and unrigged 3D stylized characters in rest pose. When applied to unseen characters with significantly different shapes, we observe surface distortion introduced by the pose deformation module. Moreover, it is challenging for the module to fully capture the long tail of the pose distribution. To resolve these issues, we propose to apply test-time training [58] and fine-tune the pose deformation module on unseen stylized characters. To encourage natural pose deformation, we further propose a volume-preserving constraint during test-time training. Our key insight is that preserving the volume of each part in the rest pose mesh during pose deformation results in less distortion [35, 63]. However, it is non-trivial to compute the precise volume of each body part, which can have complex geometry. Instead, we propose to preserve the Euclidean distance between pairs of vertices sampled from the surface of the mesh, as a proxy for constraining the volume. Specifically, given a mesh in rest pose, we randomly sample two points \(x_{i}^{c}\) and \(x_{j}^{c}\) on the surface within the same part \(c\) using the part segmentation prediction from the shape understanding module.
We calculate the offset of these two points \(\Delta\hat{x}_{i}^{c}\) and \(\Delta\hat{x}_{j}^{c}\) using our pose deformation module and minimize the change in the distance between them by: \[\mathcal{L}_{v}=\sum_{c}\sum_{i}\sum_{j}(||x_{i}^{c}-x_{j}^{c}||-||(x_{i}^{c} +\Delta\hat{x}_{i}^{c})-(x_{j}^{c}+\Delta\hat{x}_{j}^{c})||)^{2}. \tag{5}\] By sampling a large number of point pairs within a part and minimizing Eq. 5, we can approximately maintain the volume of each body part during pose deformation. Furthermore, in order to generalize the pose deformation module to long-tail poses that are rarely seen during training, we propose to utilize the source character in rest pose and its deformed shape as paired training data during test-time training. Specifically, we take the source character in rest pose, its target pose code, and its optimized shape code as inputs and we output the movement \(\Delta\hat{x}^{dr}\), where \(x^{dr}\) is a query point from the source character. We minimize the L2 distance between the predicted movement \(\Delta\hat{x}^{dr}\) and the ground truth movement \(\Delta x^{dr}\), \[\mathcal{L}_{dr}=\sum_{x^{dr}}||\Delta\hat{x}^{dr}-\Delta x^{dr}||^{2}. \tag{6}\] Besides the volume-preserving constraint and the reconstruction of the source character, we also employ the edge loss \(\mathcal{L}_{e}\) used in [25, 37, 63]. Overall, the objectives for the test-time training procedure are \(\mathcal{L}_{\mathcal{T}}=\lambda_{v}\mathcal{L}_{v}+\lambda_{e}\mathcal{L}_{ e}+\lambda_{dr}\mathcal{L}_{dr}\), where \(\lambda_{v}\), \(\lambda_{e}\), and \(\lambda_{dr}\) are hyper-parameters balancing the loss weights. ## 4 Experiments ### Datasets To train the shape understanding module, we use 40 human meshes sampled from the SMPL [41] parametric model. We use both the occupancy and part segmentation label of these meshes as supervision (see Sec. 3.1). To generalize the shape understanding module to stylized characters, we further include 600 stylized characters from RigNet [64]. Note that we _only_ use the rest pose mesh (_i.e._, occupancy label) of the characters in [64] for training. To train our pose deformation module, we construct paired training data by deforming each of the 40 SMPL characters discussed above with 5000 pose codes sampled from the VPoser's [51] latent space. In total, we collect 200,000 training pairs, with each pair including an unclothed human mesh in rest pose and the same human mesh in target pose. After training the shape understanding and pose deformation modules, we test them on the Mixamo [1] dataset, which includes challenging stylized characters, and the MGN [11] dataset, which includes clothed humans. The characters in both datasets have different shapes compared to the unclothed SMPL meshes we used for training, demonstrating the generalization ability of the proposed method. Following [37], we test on 19 stylized characters, with each deformed by 28 motion sequences from the Mixamo dataset. For the MGN dataset, we test on 16 clothed characters, with each deformed by 200 target poses. Both the testing characters and poses are unseen during training. For quadrupeds, since there is no dataset including large-scale paired stylized quadrupeds for quantitative evaluation, we split all characters from the SMAL [70] dataset and use the first 34 shapes (, cats, dogs, and horses) for training. We further collect 81 stylized quadrupeds in rest pose from the RigNet [64] to improve generalization of the shape understanding module. 
Similarly to the human category, we use occupancy and part segmentation supervision for the SMAL shapes and only the occupancy supervision for RigNet meshes. To train the pose deformation module, we deform each of the 34 characters in SMAL by 2000 poses sampled from the latent space of BARC [55], a 3D reconstruction model trained for the dog category. We quantitatively evaluate our model on the hippo meshes from the SMAL dataset, which have larger shape variance compared to the cats, dogs, and horses used for training. We produce the testing data by deforming each hippo mesh with 500 unseen target poses from SMAL [70]. We show qualitative pose transfer on stylized quadrupeds in Fig. 1. ### Implementation Details We use the ADAM [30] optimizer to train both the shape understanding and pose deformation modules. For the shape understanding module, we use a learning rate of \(1e-4\) for both the decoder and shape code optimization, with a batch size of 64. Given a new character at inference time, we fix the decoder and only optimize the shape code for the new character with the same optimizer and learning rate. For the pose deformation module, we use a learning rate of \(3e-4\) with a batch size of 128. For test-time training, we use a batch size of 1 and a learning rate of \(5e-3\) with the ADAM optimizer. We set \(\lambda_{v}\), \(\lambda_{e}\), and \(\lambda_{dr}\) (See Sec. 3.3) as 0.05, 0.01, and 1 respectively. ### Metrics and Baselines for Comparison **Metrics.** We use Point-wise Mesh Euclidean Distance (PMD) [37, 63] to evaluate pose transfer error. The PMD metric reveals pose similarity of the predicted deformation compared to its ground truth. However, as shown in Fig. 4, PMD can not fully show the smoothness and realism of the generated results. Thus, we adopt an edge length score (ELS) metric to evaluate the character's smoothness after the deformation. Specifically, we compare each edge's length in the deformed mesh with the corresponding edge's length in the ground truth mesh. We define the score as \[\frac{1}{|\mathcal{E}|}\sum_{\{i,j\}\sim\mathcal{E}}1-|\frac{||\hat{V}_{i}- \hat{V}_{j}||_{2}}{||V_{i}-V_{j}||_{2}}-1|, \tag{7}\] where \(\mathcal{E}\) indicates all edges of the mesh, \(|\mathcal{E}|\) is the number of the edges in the mesh. \(\hat{V}_{i}\) and \(\hat{V}_{j}\) are the vertices in the deformed mesh. \(V_{i}\) and \(V_{j}\) are the vertices in the ground truth mesh. For all the evaluation metrics, we scale the template character to be 1 meter tall, following [37]. **Baselines.** We compare our method with Neural Blend Shapes (NBS) [35] and Skeleton-free Pose Transfer (SPT) [37]. NBS is a rigging prediction method trained on the SMPL and MGN datasets, which include naked and clothed human meshes with ground truth rigging information. For SPT, we show the results of two versions, one is trained only on the AMASS dataset, named SPT, which has a comparable level of supervision to our method. We also test the SPT*(full) version, which is trained on the AMASS, RigNet and Mixamo datasets, using both stylized characters' skinning weights as supervision and paired stylized characters in rest pose and target pose. ### Human-like Character Pose Transfer We report the PMD metric on the MGN and Mixamo datasets in Tab. 1. We also include the performance of SPT*(full) for reference. 
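For concreteness, the edge length score of Eq. (7) amounts to a few lines of NumPy. The snippet below is a minimal sketch (array names are ours); it assumes the predicted and ground-truth meshes share the same vertex indexing and edge list, with the character pre-scaled to 1 meter tall as described above.

```python
import numpy as np

def edge_length_score(v_pred, v_gt, edges):
    """Edge Length Score (ELS) of Eq. (7).

    v_pred : (V, 3) vertices of the predicted (deformed) mesh
    v_gt   : (V, 3) vertices of the ground-truth deformed mesh
    edges  : (E, 2) integer vertex indices shared by both meshes
    """
    i, j = edges[:, 0], edges[:, 1]
    len_pred = np.linalg.norm(v_pred[i] - v_pred[j], axis=1)
    len_gt = np.linalg.norm(v_gt[i] - v_gt[j], axis=1)
    # per-edge score 1 - |ratio - 1|, averaged over all |E| edges
    return float(np.mean(1.0 - np.abs(len_pred / len_gt - 1.0)))
```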
On the MGN dataset which includes clothed humans, our method which is trained with only unclothed humans achieve the best PMD score than all baseline methods, including baselines trained with more supervision (, the NBS [35] learned with clothed humans and the SPT*(full) [37] learned with skinning weight and paired motion data). For the stylized characters, our method outperforms the SPT baseline learned with a comparable amount of supervision and gets competitive results with the NBS [35] and SPT*(full) baseline trained with more supervision. Furthermore, when testing on the more challenging, less human-like characters (, a mouse with a big head in Fig. 1), the baselines produce noticeable artifacts and rough surfaces, which can be observed in the qualitative comparisons in Fig. 4. We provide the PMD value for each character in the supplementary. \begin{table} \begin{tabular}{l l c c c c} \hline \hline Dataset & Metric & SPT*(full) [37] & NBS [35] & SPT [37] & Ours \\ \hline \multirow{2}{*}{MGN [11]} & PMD \(\downarrow\) & 1.62 & 1.33 & 1.82 & 0.99 \\ & ELS \(\uparrow\) & 0.86 & 0.70 & 0.85 & 0.89 \\ \hline \multirow{2}{*}{Mixamo [1]} & PMD \(\downarrow\) & 3.05 & 7.04 & 5.29 & 5.06 \\ & ELS \(\uparrow\) & 0.61 & 0.66 & 0.59 & 0.88 \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative comparison on MGN and Mixamo.** Our method achieves the lowest PMD with the highest ELS. We provide the performance of the SPT*(full) method, which uses more supervision than the other methods as a reference. Our method is even better or comparable to it. We show the ELS score comparison of different methods on the MGN and Mixamo datasets in Tab. 1. For both clothed humans and stylized characters, our method can generate more realistic results which are consistent with the target mesh and achieves the best ELS score. We visually compare our method and the baseline methods in Fig. 4 on the Mixamo dataset. Although NBS is trained with a clothed-human dataset, when testing on the human-like characters, it still fails on parts that are separate from the body such as the hair and the pants. When using only naked human meshes as supervision, SPT cannot generalize to challenging human-like characters, producing rough mesh surface with spikes. ### Part Understanding Comparison As discussed in Sec. 3.1, part segmentation plays an important role in both shape understanding and pose deformation. Though NBS [35] and SPT [37] do not explicitly predict part segmentation label, they are both skinning weight-based methods and we can derive the part segmentation label from the predicted skinning weights. Specifically, by selecting the maximum weight of each vertex, we can convert the skinning weight prediction to part segmentation labels for the vertices. We compare our part prediction results with those derived from SPT and NBS. We report the part segmentation accuracy on the Mixamo datasets in Tab. 2 \begin{table} \begin{tabular}{l c c c} \hline \hline Metric & NBS [35] & SPT [37] & Ours \\ \hline Accuracy \(\uparrow\) & 67.8\% & 75.6\% & 86.9\% \\ \hline \hline \end{tabular} \end{table} Table 2: **Part prediction accuracy on Mixamo [1]**. Our method achieves the best part segmentation accuracy. Figure 4: **Qualitative comparison on Mixamo.** The average PMD of these three results for NBS, SPT, and Ours are 8.16, 6.13, and 5.16 respectively and the average ELS for NBS, SPT, and Ours are 0.65, 0.78, and 0.93 respectively. 
Our method can successfully transfer the pose to challenging stylized characters (e.g., the mouse with a big head in the second row). Figure 5: **Part segmentation visualization.** NBS makes wrong predictions for hair while SPT may mix the upper legs. Figure 6: **Quadrupedal pose transfer visualization.** Our method can achieve smooth and accurate pose transfer while SPT fails on the mouth and leg regions. and visualize the part segmentation results in Fig. 5. Even trained with only part segmentation supervision of human meshes, our method can successfully segment each part for the stylized characters. On the contrary, SPT uses graph convolution network [31] to predict the skinning weights. When training only with human meshes, it often fails to distinguish different parts. As shown in Fig. 5, it mixes up the right and left upper legs, and incorrectly classifies the shoulder as the head. Though NBS is trained with clothed humans, it always classifies human hair as the human body for characters from Mixamo. This is because that NBS uses the MeshCNN [26] as the shape encoder. As a result, it is sensitive to mesh topology and cannot generalize to meshes with disconnected parts (_e.g_., disconnected hair and head). Tab. 2 further quantitatively demonstrates that our method achieves the best part segmentation accuracy, demonstrating its ability to correctly interpret the shape and part information in stylized characters. ### Quadrupedal Pose Transfer Comparison To further show the generalization ability of our method, we conduct experiments on quadrupeds. We report the PMD and ELS score of our method and the SPT [37] in Tab. 3. When testing on hippos with large shape gap from the training meshes, SPT has a hard time generalizing both in terms of pose transfer accuracy and natural deformation. While our method achieves both better qualitative and quantitative results. We visualize the qualitative comparisons in Fig. 6. SPT produces obvious artifacts on the hippo's mouth and legs, while our method achieves accurate pose transfer and maintains the shape characteristics of the original character at the same time. We provide more results in the supplementary. We also show the part segmentation results on stylized characters by our method in Fig. 8. Even for unique parts such as the hats and antlers, our method correctly assigns them to the head part. ### Ablation Study To evaluate the key components of our method, we conduct ablation studies on the MGN dataset by removing the inverse constraint (Eq. 3) in the shape understanding module and the volume-preserving loss (Eq. 5) used during the test-time training produce, we name them as "ours w/o inv" and "ours w/o \(v\)" respectively. We report the PMD and ELS metrics in Tab. 4. The model learned without the inverse constraint or volume-preserving loss has worse PMD and ELS score than our full model, indicating the contribution of these two objectives. We also provide qualitative results in Fig. 7. We use red boxes to point out the artifacts. As shown in Fig. 7, our model trained without the inverse constraint produces less accurate pose transfer results. Moreover, adding the volume-preserving loss helps to maintain the character's local details such as the thickness of the arms. ## 5 Conclusion In this paper, we present a model that deforms unrigged, stylized characters guided by a biped or quadruped avatar. 
Our model is trained with only easily accessible posed human or animal meshes, yet can be applied to unseen stylized characters in a zero-shot manner during inference. To this end, we draw key insights from classic mesh deformation method and develop a correspondence-aware shape understanding module, an implicit pose deformation module and a volume-based test-time training procedure. We carry out extensive experiments on both the biped and quadruped category and show that our method produces more realistic and accurate deformation compared to baselines learned with comparable or more supervision. \begin{table} \begin{tabular}{l c c c} \hline \hline Metric & Ours w/o inv & Ours w/o volume & Ours \\ \hline PMD \(\downarrow\) & 1.26 & 1.02 & 0.99 \\ \hline ELS \(\uparrow\) & 0.88 & 0.88 & 0.89 \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation study on inverse MLP and volume preserving loss.** The inverse MLP and volume preserving loss helps to improve pose transfer accuracy and produce smooth deformation. Figure 8: **Part prediction on stylized quadrupeds.** Our method successfully predicts the parts of unseen stylized quadrupeds. \begin{table} \begin{tabular}{l c|c c c} \hline \hline Metric & SPT [37] & Ours & Metric & SPT [37] & Ours \\ \hline PMD \(\downarrow\) & 10.28 & 8.28 & ELS \(\uparrow\) & 0.28 & 0.86 \\ \hline \hline \end{tabular} \end{table} Table 3: **Comparison on Hippos from SMAL [70]**. Our method achieves better pose transfer accuracy with more smooth results. Figure 7: **Qualitative comparison for ablation study.** Removing the constraint (eq. 1) in shape understanding leads to wrong pose deformation results. The volume preserving loss (eq. 5) helps to maintain the identity, _e.g_., the thickness of the arms in first row. ## Appendix In this appendix, we introduce more details about the evaluation data curation procedure, the implementation of our method and the baseline methods, more qualitative results and the limitations of our method. ## Appendix A Evaluation Data Curation **Mixamo.** Because the preprocessed Mixamo [1] testing sequences used in [37] are not publicly available, we follow the instructions in [37] and download the testing data from the Mixamo website [1]. In [37], 20 stylized characters and 28 motion sequences are used for evaluation. Among the 20 characters, the "liam" character is not publicly available on the Mixamo website, thus we evaluate our method and the baselines on the other 19 stylized characters. Moreover, some evaluation motions (e.g., "Teeter") include more than one motion sequence on the Mixamo website with the same name. However, it is not public information as to what exact sequences were used for evaluation in the prior work [37]. Thus, we download all motion sequences with the same name and randomly pick one for evaluation. Given a character in rest pose and the desired pose, we use the linear blend skinning algorithm to obtain the ground truth deformed mesh. We then compare the prediction from each method with the ground truth mesh by computing the PMD and ELS scores as discussed in Sec.4.3 in the main paper. For a fair comparison, all poses in the evaluation motion sequences are not used during training. All methods are evaluated using these collected testing pairs. **MGN.** We follow NBS [35] and download the MGN dataset1, which includes 96 clothed human characters. We use the same evaluation set (i.e., the last 16 human characters) as in NBS. 
To obtain the ground truth deformed characters, we sample 200 poses (unseen during training) and deform each of the 16 clothed characters using the Multi-Garment Net [11]. Footnote 1: [https://github.com/bharat-b7/MultiGarmentNetwork](https://github.com/bharat-b7/MultiGarmentNetwork) **Pose code extraction from Mixamo characters.** To obtain target poses from the Mixamo motion sequences, we apply a similar fitting procedure introduced in [36]. We optimize the SMPL parameters to minimize the L2 distance between the SMPL joints and the Mixamo joints. Different from [36], we also add a constraint to minimize the Chamfer distance between the SMPL shape vertices and the Mixamo shape vertices. Similarly as [54], we directly optimize the pose code in the VPoser's [51] latent space, instead of the parameters in SMPL. We fit the SMPL shape to the "marker man" character in Mixamo to get all the testing poses. ## Appendix B Implementation Details **Shape code computation.** We use an off-the-shelf method2 that computes occupancy with "virtual laser scans" and does not require a watertight mesh. We sample 10,000 points in a unit space, which takes **2.35s** on average. Then, we use the occupancy of each query point as supervision to optimize the shape code. We run 2,000 iterations with a batch size of 2,000 to get the shape code, which takes **3.41s** on average. For each character, we only compute its shape code **once** and use it to transfer poses from different motion sequences. All the time cost reported in this supplementary was measured on a laptop with I7-11700h and a RTX 3060. Footnote 2: [https://github.com/marian42/mesh_to_sdf](https://github.com/marian42/mesh_to_sdf) **Detailed test-time training (TTT) procedure.** Following the inference procedure in [37], TTT takes a stylized character in T-pose, and a source human character in T-pose and target pose as inputs. TTT finetunes the pose module to perform two tasks: a) the T-pose stylized character is deformed to the target pose, while being constrained by the self-supervised volume-preserving loss \(L_{v}\). b) the source human character in T-pose is deformed to the target pose, while being supervised by the ground truth human character in the target pose (\(L_{dr}\)). TTT further refines the results' smoothness and resemblance to driving poses. \(L_{dr}\) helps the pose module understand and generalize to the target pose, rather than enforcing that the human and stylized character have similar offsets. TTT is carried out for each pair of stylized character and target pose. It is highly efficient and only requires fine-tuning the pose module for 20 iterations, which takes **18ms** without batching. We can speed it up to **12ms** for each pair with a batch size of 8. ## Appendix C Baseline Methods Implementation **NBS [35].** We evaluate NBS using its publicly available code and pre-trained model3. NBS [35] takes the SMPL pose parameters as input, thus we feed the optimized SMPL parameters discussed above to NBS. Footnote 3: [https://github.com/Peizhuolc/neural-blend-shapes](https://github.com/Peizhuolc/neural-blend-shapes) **SPT [37].** To evaluate both SPT(full) and SPT on human-like stylized characters, we use the publicly available code4 and pre-trained models generously provided by the authors. For the quadruped category, we train and evaluate the SPT model using its public code on the dataset discussed in Sec.4.1 in the main paper. 
Specifically, we utilize the SMAL model [70] to produce motion pairs, including an animal mesh in rest pose and the desired pose. We also supervise SPT with the ground truth skinning weights from SMAL. Note that our model is trained and evaluated using the same quadruped dataset as SPT. Footnote 4: [https://github.com/zycliao/skeleton-free-pose-transfer](https://github.com/zycliao/skeleton-free-pose-transfer) ## Appendix D Visualization We provide more visualizations, including qualitative comparisons (Fig. 9), deformation results by using source poses from in-the-wild videos for both human-like (Fig. 10 and Fig. 11) and quadrupeds (Fig. 12). To obtain the pose code from a video frame, we apply PyMAF [68] for human and BARC [55] for quadrupeds. We provide more visualizations in the supplementary video. ## Appendix E Limitation Although our approach exhibits good generalization performance for bipedal and quadrupedal characters, modeling other categories whose poses are not being studied well remains difficult. Additionally, our method is unable to solve the articulation of hands and just treats them as rigid parts. Figure 9: **Qualitative comparisons on Mixamo [1].** Figure 10: **Transferring poses from in-the-wild videos to stylized characters.** Figure 11: **Transferring poses from in-the-wild videos to stylized characters.** Figure 12: **Transferring animal poses from in-the-wild videos to stylized quadrupedal characters.**
2309.10696
Nuclear descent from the fission barrier in the presence of long-range memory effects
We have investigated the peculiarities of nuclear descent from a parabolic fission barrier within a generalized Langevin equation with power-law $f(t-t')=(|t-t'|/\tau)^{-\alpha}$ memory function. We have observed much stronger slowing down of the nuclear descent in the presence of long-range memory effects, caused by the power-law memory function at $0<\alpha<1$, than in the presence of short-range memory effects, generated by exponential $f(t-t')={\rm exp}(-|t-t'|/\tau)$ memory function. At a specific value of the exponent $\alpha=1/2$ of the power-law memory function, it turned out possible to find analytically the trajectory of the descent and demonstrate that the long-range memory effects give rise to complex time oscillations of nuclear shape, becoming more frequent and damped with the correlation time $\tau$. We have found fairly long ($>10^{-20}~{\rm s}$) times of the descent of $^{\rm 236}{\rm U}$ at the values of the correlation time $\tau \sim [10^{-24}\div 10^{-23}]~{\rm s}$.
S. V. Radionov
2023-09-19T15:34:45Z
http://arxiv.org/abs/2309.10696v1
# Nuclear descent from the fission barrier in the presence of long-range memory effects ###### Abstract We have investigated the peculiarities of nuclear descent from a parabolic fission barrier within a generalized Langevin equation with power-law \(f(t-t^{\prime})=(|t-t^{\prime}|/\tau)^{-\alpha}\) memory function. We have observed much stronger slowing down of the nuclear descent in the presence of long-range memory effects, caused by the power-law memory function at \(0<\alpha<1\), than in the presence of short-range memory effects, generated by exponential \(f(t-t^{\prime})=\exp(-|t-t^{\prime}|/\tau)\) memory function. At a specific value of the exponent \(\alpha=1/2\) of the power-law memory function, it turned out possible to find analytically the trajectory of the descent and demonstrate that the long-range memory effects give rise to complex time oscillations of nuclear shape, becoming more frequent and damped with the correlation time \(\tau\). We have found fairly long (\(>10^{-20}\) s) times of the descent of \({}^{236}\)U at the values of the correlation time \(\tau\sim[10^{-24}\div 10^{-23}]\) s. Introduction. The formalism of Langevin equation [1; 2; 3] is a powerful tool under transport description of different dynamical processes in systems of many interacting particles. In the case of nuclear many-body systems, the Langevin approaches have been used to describe fission [4], fusion [5] and deep-inelastic processes [6]. All these Langevin approaches are based on separation of nuclear degrees of freedom onto a few macroscopic (collective) \(q(t)\) and a mass of microscopic (nucleonic) modes of motion [7; 8]. The latter constitutes a heat bath with temperature \(T\), exerting friction \(\kappa_{0}\int_{0}^{t}f(t-t^{\prime})[dq/dt](t^{\prime})dt^{\prime}\) and random \(\xi(t)\) forces on collective motion, related to each other through the fluctuation-dissipation theorem [8]. In general, the friction and random forces have time non-local character, determined by time-spreaded memory function \(f(t-t^{\prime})\), which represents a complex energy flow between the collective and nucleonic degrees of freedom [9; 10]. Although there is a controversial opinion on the importance of non-Markovian (memory) effects in the nuclear large-amplitude collective dynamics [8; 11; 12; 13; 14; 15; 16; 17], we would like to stress that all these non-Markovian studies are only using the different versions of exponential \(f(t-t^{\prime})=\exp(-|t-t^{\prime}|/\tau)\) memory function, where the relaxation (correlation) time \(\tau\) measures the time spread of the retarded friction force. It has been demonstrated earlier [13] that the non-Markovian descent from the fission barrier, governed by the exponential memory function, may be significantly delayed and accompanied by characteristic oscillations of nuclear shape. The memory effects there show non-monotonic dependence on the correlation time \(\tau\), i. e., in two extremes of quite small and fairly large values of \(\tau\) the nuclear collective dynamics becomes Markovian [13; 18]. In fact, such memory effects are of short-range type as far as they are only prominent within a narrow interval of values of the correlation time \(\tau\), which are comparable to reciprocal of the characteristic frequency of the nuclear collective motion [12; 13]. 
In the present study we investigate nuclear fission dynamics at the descent from the top of fission barrier to scission point by the help of the generalized Langevin equation with a power-law, \(f(t-t^{\prime})=(|t-t^{\prime}|/\tau)^{-\alpha}\), memory function. Such a memory function has been successfully applied under the generalized Langevin description of many dynamical systems, exhibiting anomalous diffusion behaviour [19; 20; 21]. The anomalous character of the diffusion there reflects in a fractional time dependence of the mean square displacement of the system and in a power-law decay of its velocity autocorrelation function [22; 23]. All these remarkable features of the anomalous diffusion process are caused by the presence of long-range memory effects in the system's dynamics [24], i. e., the memory effects, existing over a broad range of time scales of the system's dynamics [25]. The plan of the paper is as follows. In Sect. II, we set in the generalized Langevin equation of motion for a nuclear shape variable and give its solution in terms of the Laplace transform of the memory function \(f(t-t^{\prime})\). Sect. III is devoted to discussion of the time evolution of trajectory of nuclear descent \(q(t)\) in the presence of memory effects, caused by the exponential memory function \(f(t-t^{\prime})=\exp(-|t-t^{\prime}|/\tau)\). In Sect. IV, we give an analytical solution of the generalized Langevin equation of motion with the power-law memory function \(f(t-t^{\prime})=(|t-t^{\prime}|/\tau)^{-\alpha}\) at specific value of the exponent \(\alpha=1/2\). In Sect. V, we present results of calculations of times of the nuclear descent. Finally, summary and main conclusions are given in Summary. ## II Generalized Langevin equation of motion We start by postulating a generalized Langevin equation of motion for a nuclear shape variable \(q(t)\), \[M(q)\frac{d^{2}q(t)}{dt^{2}}=-\frac{1}{2}\frac{\partial M(q)}{\partial q} \left(\frac{dq(t)}{dt}\right)^{2}-\frac{\partial\mathrm{E}_{\mathrm{pot}}(q)} {\partial q}-\kappa_{0}\int_{0}^{t}f(t-t^{\prime})\frac{dq(t^{\prime})}{dt^{ \prime}}dt^{\prime}+\xi(t), \tag{1}\] Here, \(q(t)\) is measured in units of the radius \(R_{0}\) of equal-volume spherical nucleus, \(M(q)\) is a collective mass parameter, \(\mathrm{E}_{\mathrm{pot}}(q)\) is a collective potential energy and \(\kappa_{0}\) is the strength of the retarded friction force related to the random force \(\xi(t)\) through the fluctuation-dissipation theorem, \[\langle\xi(t)\xi(t^{\prime})\rangle=T\kappa_{0}f(t-t^{\prime}). \tag{2}\] In the latter equation, \(T\) is a constant nuclear temperature and the ensemble averaging \(\langle...\rangle\) is performed over all random realizations of the Gaussian stationary process \(\xi(t)\). The memory function \(f(t-t^{\prime})\) in Eqs. (1) and (2) is assumed to be a decaying function of its argument \(x\equiv|t-t^{\prime}|/\tau\), \[f(x)\to 0,\ \ x\rightarrow\infty, \tag{3}\] where a correlation time, \(\tau\), measures the time spread of the retarded friction force and defines an effective interval of time within which values \(\xi(t)\) and \(\xi(t^{\prime})\) of the random force at different moments of time \(t\) and \(t^{\prime}\) correlate with each other. Considering a nuclear descent from a top, \(q=q_{b}\), of fission barrier to some scission \(q=q_{sc}\), we approximate the potential energy \(\rm E_{pot}(q)\) by the inverted parabolic dependence on \(q\), \[\rm E_{pot}(q)=V_{b}-(M_{B}\omega_{b}^{2}/2)(q-q_{b})^{2}. 
\tag{4}\] Here, \(V_{b}\) is the height of fission barrier and \(\omega_{b}\) is a frequency parameter, defining the nuclear stiffness at \(q=q_{b}\), \[\omega_{b}=\sqrt{\frac{1}{M_{b}}\left|\frac{d^{2}E_{\rm pot}(q)}{dq^{2}} \right|_{q=q_{b}}},\ \ \ M_{b}=M(q=q_{b}). \tag{5}\] For the mass parameter \(M(q)\) it is adopted a hydrodynamical value, \[M(q)=\frac{1}{5}Am_{0}R_{0}^{2}\left(1+\frac{1}{2q^{3}}\right), \tag{6}\] where \(A\) is the nuclear mass number, \(m_{0}\) is nucleon mass and \(R_{0}=r_{0}A^{1/3}\) is the radius of equal volume sphere. In Fig. 1, we showed the potential energy \(\rm E_{pot}(q)\) (4) for the following set of parameters [26]: \[A=236,\ \ V_{b}=8\ {\rm MeV},\ \ q_{b}=1.6,\ \ \hbar\omega_{b}=1.16\ {\rm MeV}. \tag{7}\] Figure 1: Schematic representation of the parabolic fission barrier (4), (7) of \({}^{236}\)U with saddle point \(B\) and scission point \(C\). ### General solution of the non-Markovian dynamics To get analytical solution of the Langevin equation of motion (1), we linearize it with respect to a displacement, \(\Delta q(t)\equiv q(t)-q_{b}\), \[\frac{d^{2}\Delta q(t)}{dt^{2}}=\omega_{b}^{2}\Delta q(t)-(\kappa_{0}/M_{b})\int _{0}^{t}f(t-t^{\prime})\frac{d\Delta q(t^{\prime})}{dt^{\prime}}dt^{\prime}+(1/ M_{b})\xi(t), \tag{8}\] with \(M_{b}\) given by Eq. (5). The general solution of Eq. (8), subject to the initial conditions \[\Delta q(t=0)=0,\quad[d\Delta q/dt](t=0)=v_{0}>0, \tag{9}\] can be written as \[\Delta q(t)=B(t)v_{0}+(1/M_{b})\int_{0}^{t}B(t-t^{\prime})\xi(t^{\prime})dt^{ \prime}, \tag{10}\] where \(v_{0}\) is the initial velocity of a nucleus, and \(B(t)\) is a solution of the homogeneous equation, \[\frac{d^{2}B(t)}{dt^{2}}=\omega_{b}^{2}B(t)-(\kappa_{0}/M_{b}) \int_{0}^{t}f(t-t^{\prime})\frac{dB(t^{\prime})}{dt^{\prime}}dt^{\prime},\] \[B(t=0)=0,\quad[dB/dt](t=0)=1. \tag{11}\] The linear integro-differential equation (11) can be solved via the Laplace transformation method as \[\tilde{B}(s)=\frac{1}{s^{2}+(\kappa_{0}/M_{b})s\tilde{f}(s)-\omega_{b}^{2}}, \tag{12}\] where \(\tilde{B}(s)\) and \(\tilde{f}(s)\) are the Laplace transforms of the solution function \(B(t)\) and the memory function \(f(t)\), respectively. If the denominator of the rational expression (12) has in total \(N\) zeros such that \[\text{Re}(s_{1})\geq\text{Re}(s_{2})\geq...\geq\text{Re}(s_{N}), \tag{13}\] then the first zero \(s_{1}\) will define the long-time behaviour of the solution function \(B(t)\), \[B(t)\sim\text{e}^{s_{1}t},\quad t\rightarrow\infty. \tag{14}\] ## III Exponential memory function We first consider an exponential, \[f(t-t^{\prime})=\exp\left(-\frac{|t-t^{\prime}|}{\tau}\right), \tag{15}\] memory function, which recovers two Markovian limits of the nuclear generalized Langevin dynamics (1). As it follows from Eq. (11), the characteristic time scale of the solution function \(B(t)\) variations is \(1/\omega_{b}\) and in the limit \(\tau<<1/\omega_{b}\), the time integral in (11) can be evaluated by parts, \[-\kappa_{0}\int_{0}^{t}\exp\left(-\frac{|t-t^{\prime}|}{\tau}\right)\frac{dB(t ^{\prime})}{dt^{\prime}}dt^{\prime}\approx-\kappa_{0}\tau\frac{dB(t)}{dt}, \tag{16}\] giving rise to the appearance of a Markovian friction term with a friction coefficient, \(\kappa_{0}\tau\), proportional to the correlation time \(\tau\). 
The corresponding solution function \(B(t)\) is given by \[B(t)=\frac{1}{s_{1}-s_{2}}\left(\mathrm{e}^{s_{1}t}-\mathrm{e}^{s_{2}t}\right),\quad s_{1,2}=-\frac{\kappa_{0}\tau}{2M_{b}}\pm\sqrt{\left(\frac{\kappa_{0} \tau}{2M_{b}}\right)^{2}+\omega_{b}^{2}}. \tag{17}\] In the opposite limit of \(\tau>>1/\omega_{b}\), the time integral in Eq. (11) gives rise to a Markovian restorative term, \[-\kappa_{0}\int_{0}^{t}\exp\left(-\frac{|t-t^{\prime}|}{\tau}\right)\frac{dB (t^{\prime})}{dt^{\prime}}dt^{\prime}\approx-\kappa_{0}B(t), \tag{18}\] and \(B(t)\) may be either an exponentially growing in time, \[B(t)=\frac{1}{s_{1}-s_{2}}\left(\mathrm{e}^{s_{1}t}-\mathrm{e}^{s_{2}t} \right),\ \ s_{1,2}=\pm\sqrt{\omega_{b}^{2}-\kappa_{0}/M_{b}},\ \ \frac{\kappa_{0}}{M_{b}\omega_{b}^{2}}<1, \tag{19}\] or, oscillatory in time, \[B(t)=\frac{1}{\Omega}\mathrm{sin}(\Omega t),\ \ \Omega\equiv|\mathrm{Im}(s_{1},s_{2})|= \sqrt{\kappa_{0}/M_{b}-\omega_{b}^{2}},\ \ \frac{\kappa_{0}}{M_{b}\omega_{b}^{2}}\geq 1. \tag{20}\] In the latter case, the nuclear system \(q(t)\) (10), (11) remains in the close vicinity of the top \(q_{b}\) of fission barrier (4) for infinitely long time. In general, the time integral in Eq. (11) contains both a time-irreversible (friction) and time-reversible (restorative) components, \[-\kappa_{0}\int_{0}^{t}\exp\left(-\frac{|t-t^{\prime}|}{\tau}\right)\frac{dB (t^{\prime})}{dt^{\prime}}dt^{\prime}=-\gamma(t,\tau)dB(t)/dt-\kappa(t,\tau)B (t), \tag{21}\] described by a friction, \(\gamma(t,\tau)\), and a spring, \(\kappa(t,\tau)\), coefficients, respectively. The corresponding solution \(B(t)\) is found by substituting the Laplace transform \(\tilde{f}(s)=1/(s-1/\tau)\) of the memory function (15) into Eq. (12), \[\tilde{B}(s)=\frac{s-1/\tau}{s^{3}+(1/\tau)s^{2}+(\kappa_{0}/M_{b}-\omega_{b}^{ 2})s-1/\tau}=\frac{C_{1}}{s-s_{1}}+\frac{C_{2}}{s-s_{2}}+\frac{C_{3}}{s-s_{3}}, \tag{22}\] leading us to the solution function \[B(t)=C_{1}\mathrm{e}^{s_{1}t}+C_{2}\mathrm{e}^{s_{2}t}+C_{3}\mathrm{e}^{s_{3}t}. \tag{23}\] Here, \(C_{1},C_{2},C_{3}\) are constants, defined by the initial conditions (11), \[C_{i}=(s_{i}+1/\tau)\prod_{j=1(j\neq i)}^{3}\frac{1}{(s_{i}-s_{j})},\quad i= \overline{1,3} \tag{24}\] and \(s_{1},s_{2},s_{3}\) are three roots of the cubic secular equation: \[(s/\omega_{b})^{3}+\frac{1}{\omega_{b}\tau}(s/\omega_{b})^{2}+\left(\frac{ \kappa_{0}}{M_{b}\omega_{b}^{2}}-1\right)(s/\omega_{b})-\frac{1}{\omega_{b} \tau}=0. \tag{25}\] This equation always has one real positive root, \(s_{1}>0\), while the other two roots \(s_{2}\) and \(s_{3}\) may be either both real and negative or complex conjugated. In the latter case, the memory effects in the dynamics (11) are quite prominent and the solution function (23) can be rewritten as \[B(t)=C_{1}\mathrm{e}^{s_{1}t}+\left(C_{+}\mathrm{cos}(\Omega t)+C_{-}\mathrm{ sin}(\Omega t)\right)\mathrm{e}^{-\Gamma t}, \tag{26}\] where \[\Omega=|\mathrm{Im}(s_{2},s_{3})|,\ \ \Gamma=|\mathrm{Re}(s_{2},s_{3})|,\ \ C_{\pm}=C_{2}\pm C_{3}. \tag{27}\] ## IV Power-law memory function Now, we are going to investigate the peculiarities of the the non-Markovian dynamics (11), governed by a power-law, \[f(t-t^{\prime})=(|t-t^{\prime}|/\tau)^{-\alpha}, \tag{28}\] memory function. In this case, the dynamics (11) remains _essentially_ non-Markovian at all positive values of an exponent, \(\alpha\), and of a parameter, \[\rho_{\alpha}=\frac{\kappa_{0}}{M_{b}\omega_{b}^{2}}(\omega_{b}\tau)^{\alpha}. 
\tag{29}\] At each value of the exponent \(\alpha\), \(\rho_{\alpha}\) is a dimensionless combination of the strength \(\kappa_{0}\) and time-spread \(\tau\) of the retarded friction force in Eq. (11). In the sequel, we fix the value of the strength \(\kappa_{0}\), by taking it from the Fermi-liquid model calculations of nuclear fission dynamics [17], \(\kappa_{0}/(M_{b}\omega_{b}^{2})=42\). For the power-law memory function \(f(t-t^{\prime})=(|t-t^{\prime}|/\tau)^{-\alpha}\) (\(\tilde{f}(s)=\tau^{\alpha}\Gamma(1-\alpha)/s^{1-\alpha}\)), the long-time rate \(s_{1}\) (14) of the solution function \(B(t)\) is determined as the largest positive root of the secular equation \[(s/\omega_{b})^{2}+\rho_{\alpha}\Gamma(1-\alpha)(s/\omega_{b})^{\alpha}-1=0, \tag{30}\] where \(\Gamma(x)\) is the gamma function and where Eq. (12) was used. In Fig. 2, we showed by solid lines the largest positive root \(s_{1}/\omega_{b}\) of the secular equation (30) as a function of the correlation time \(\omega_{b}\tau\) and at several values of the exponent \(\alpha\) (28), \(\alpha=1/4,1/2\) and \(3/4\). For comparison, we also showed by dashed line in Fig. 2 the largest positive root \(s_{1}/\omega_{b}\) of the cubic secular equation (25), corresponding to the exponential memory function (15). It is seen from the Figure that the non-Markovian nuclear descent (10)-(12), measured in terms of the long-time rate \(s_{1}/\omega_{b}\) (14), undergoes much slower in the presence of the long Figure 2: The largest positive root \(s_{1}/\omega_{b}\) of the secular equation (30) are shown by solid lines as a function of the correlation time \(\omega_{b}\tau\) and at several values of the exponent \(\alpha\) of the power–law memory function \(f(x)=x^{-\alpha}\). Dashed line is the largest positive root \(s_{1}/\omega_{b}\) of the cubic secular equation (25), corresponding to the exponential memory function \(f(x)=\exp(-x)\). range \(f(x)=x^{-\alpha}\) than the short-range \(f(x)=\exp(-x)\) memory effects. Such a significant (by few orders of magnitude) slowing down of the nuclear drift is observed even at extremely small correlation times \(\omega_{b}\tau\sim 10^{-2}\), at which the non-Markovian dynamics (11), subject to the exponential \(f(x)=\exp(-x)\) memory function, reduces to the Markovian friction limit (16). The difference in the values of the long-time rate \(s_{1}/\omega_{b}\), calculated with the power-law \(f(x)=x^{-\alpha}\) and exponential \(f(x)=\exp(-x)\) memory functions, becomes enormous at \(\alpha\to 0\) as far as we approach the Markovian restorative limit (18) of the non-Markovian dynamics (11). The stopping of the nuclear descent here is mostly caused by the time-reversible restorative part (21) of the retarded friction force in Eq. (11). As we seen from Fig. 2, an exponentially unstable mode of motion, \(\exp^{s_{1}t}\), of the solution function \(B(t)\) is significantly suppressed (\(s_{1}/\omega_{b}\sim[10^{-6}\div 10^{-2}]\)) and one has to define an entire time evolution of \(B(t)\). With that purpose, we restrict ourselves by considering a particular case of \(\alpha=1/2\), for which we have found a clear analytical solution. 
At \(\alpha=1/2\), the Laplace transform (12) of \(B(t)\) is given by \[\tilde{B}(s)=\frac{1}{s^{2}+(\kappa_{0}\sqrt{\pi\tau}/M_{b})\sqrt{s}-\omega_{ b}^{2}} \tag{31}\] and we can formally factorize the denominator of this expression in the following way \[s^{2}+(\kappa_{0}\sqrt{\pi\tau}/M_{b})\sqrt{s}-\omega_{b}^{2}=(\sqrt{s}-\mu_{ 1})(\sqrt{s}-\mu_{2})(\sqrt{s}-\mu_{3})(\sqrt{s}-\mu_{4}). \tag{32}\] The factorization (32) enables us to decompose the Laplace transform (31) as \[\tilde{B}(s)=\frac{C_{1}}{\sqrt{s}-\mu_{1}}+\frac{C_{2}}{\sqrt{s}-\mu_{2}}+ \frac{C_{3}}{\sqrt{s}-\mu_{3}}+\frac{C_{4}}{\sqrt{s}-\mu_{4}}, \tag{33}\] which in turn gives rise to the time-dependent solution function \(B(t)\), \[B(t)=\sum_{i=1}^{4}C_{i}\mu_{i}\left(1+\mathrm{erf}(\mu_{i}\sqrt{t})\right) \mathrm{e}^{\mu_{i}^{2}t}, \tag{34}\] with \(\mathrm{erf}(x)\) being the error function and where \[C_{i}=\prod_{j\neq i}^{4}\frac{1}{\mu_{i}-\mu_{j}},\quad i=\overline{1,4}. \tag{35}\] In Eqs. (34)-(35), \(\mu_{1},\mu_{2},\mu_{3},\mu_{4}\) are four roots of the quartic secular equation: \[(\mu/\sqrt{\omega_{b}})^{4}+\rho_{1/2}\sqrt{\pi}(\mu/\sqrt{\omega_{b}})-1=0, \tag{36}\] where \(\rho_{1/2}\) is given by Eq. (29). According to the Viet theorem, the last equation always has two real roots (one is positive, another is negative) and two complex conjugated roots such that \[\mu_{1}>0,\ \ \mu_{2}<0,\ \ \mu_{3}=\mu_{4}^{*},\ \ 0<|{\rm Re}(\mu_{3},\mu_{4})| <|{\rm Im}(\mu_{3},\mu_{4})|. \tag{37}\] As a result of that, \[C_{1}\mu_{1}\left(1+{\rm erf}(\mu_{1}\sqrt{t})\right){\rm e}^{\mu_{1}^{2}t} \to 2C_{1}\mu_{1}{\rm e}^{\mu_{1}^{2}t},\ \ \ \mu_{1}^{2}t\to\infty, \tag{38}\] in Eq. (34) and \(\mu_{1}^{2}\) may be associated with the long-time rate \(s_{1}\) (14) of the solution function \(B(t)\). As far as \(\mu_{2}<0\), \[C_{2}\mu_{2}\left(1+{\rm erf}(\mu_{2}\sqrt{t})\right){\rm e}^{\mu_{2}^{2}t} \to\frac{C_{2}}{|\mu_{2}|\sqrt{t}},\ \ \ \mu_{2}^{2}t\to\infty, \tag{39}\] and this term shows a power-law decay with time. The last two terms in the solution function \(B(t)\) (34) can be rewritten as \[\left(C_{+}(t){\rm cos}(\Omega t)+C_{-}(t){\rm sin}(\Omega t)\right){\rm e}^{- \Gamma t}, \tag{40}\] where \[\Omega=|{\rm Im}(\mu_{3}^{2},\mu_{4}^{2})|,\ \ \ \Gamma=|{\rm Re}(\mu_{3}^{2},\mu_{4}^{2})|,\] \[C_{\pm}(t)=C_{3}\left(1+{\rm erf}(\mu_{3}\sqrt{t})\right)\pm C_ {4}\left(1+{\rm erf}(\mu_{4}\sqrt{t})\right). \tag{41}\] As in the case (26) of the exponential \(f(x)={\rm exp}(-x)\) memory function, the using of the power-law \(f(x)=x^{-1/2}\) memory function gives also rise to the appearance of characteristic shape oscillations of the nuclear shape \(q(t)\), see Eqs. (10), (34) and (40). In Figs. 3 and 4, we compared the values of the frequency \(\Omega\) and damping rate \(\Gamma\) of the characteristic shape oscillations, produced by the long-range (28) and short-range (15) memory effects. Solid lines in the Figures represent the corresponding results of the calculations of \(\Omega\) and \(\Gamma\) (41) for the power-law \(f(x)=x^{-1/2}\) memory function and dashed lines give the values of the frequency and damping rate (27) for the exponential \(f(x)={\rm exp}(-x)\) memory function. Dotted line in Fig. 3 is the frequency of the oscillations (20) of the trajectory of the descent \(B(t)\) in the Markovian restorative limit (18). From these two Figures we conclude that the typical values of the frequency \(\Omega\) and damping rate \(\Gamma\) of the memory-induced time oscillations are of comparable size. 
The only thing is an opposite tendency of the damping rate \(\Gamma\) of the oscillations with the correlation time \(\tau\). With the growth of \(\tau\), the oscillations, produced by the exponential \(f(t-t^{\prime})=\exp(-|t-t^{\prime}|/\tau)\) memory function, become undamped, while the power-law \(f(t-t^{\prime})=(|t-t^{\prime}|/\tau)^{-1/2}\) memory function gives rise to more damped characteristic oscillations of the nuclear shape variable \(q(t)\). Finally, in Fig. 5 we plotted the entire solution function \(B(t)\) (34) at two values of the correlation time \(\omega_{b}\tau=10^{-3}\) (left panel) and \(\omega_{b}\tau=10^{-2}\) (right panel). At both these values of the correlation time the nuclear descent dynamics (11), governed by the exponential \(\exp(-x)\) memory function, is Markovian and the corresponding trajectory Figure 4: The same as in Fig. 3 but for the damping rate \(\Gamma\) of the memory–induced time oscillations. Figure 3: Frequency \(\Omega\) of the memory–induced time oscillations of the trajectory of the descent \(B(t)\), produced either by the power–law \(f(x)=x^{-1/2}\) (41) (solid line) or by the exponential \(f(x)=\exp(-x)\) (27) (dashed line) memory functions. The frequency \(\Omega\) of the Markovian restorative limit (18), (20) is shown by dotted line. of the descent \(B(t)\) (17) is exponentially growing in time. The trajectory of the non-Markovian descent \(B(t)\) (34), subject to the power-law \(f(x)=x^{-1/2}\) memory function, changes from almost exponentially growing at \(\omega_{b}\tau=10^{-3}\) to oscillatory one at \(\omega_{b}\tau=10^{-2}\). The former can be associated with a regime of quite weak long-range memory effects, while the latter situation can be associated with sufficiently pronounced long-range memory effects in the nuclear descent dynamics (11). The transition between the regimes occurs roughly at \(\omega_{b}\tau\approx 2\times 10^{-3}\), implying much smaller values of the correlation time \(\tau\) than in the case of using the short-range \(f(x)=\exp(-x)\) memory effects, which become prominent at \(\omega_{b}\tau\sim 1\), see Eqs. (16) and (18). ## V Time of descent Having founded the entire solution function \(B(t)\) (34), we are able to estimate a duration of the nuclear descent, \(\rm t_{des}\). We define it as a time of passing of the mean nuclear trajectory \(\langle q(t)\rangle=q_{b}+B(t)v0\) (10) from the top \(q=q_{b}\) (point \(B\) in Fig. 1) of fission barrier to the scission point \(q=q_{sc}\) (point \(C\) in Fig. 1), defined by the following condition [26]: \[\rm E_{pot}(q_{b})-\rm E_{pot}(q_{sc})=-20~{}MeV, \tag{42}\] see also Eq. (4). The initial velocity \(v_{0}\) is determined from the initial kinetic energy of the nuclear system [26], \[\frac{1}{2}M_{b}v_{0}^{2}=\frac{\pi T}{4}, \tag{43}\] at the temperature \(T=2\) MeV. In Fig. 6, we calculated the time \(\rm t_{des}\) of the non-Markovian nuclear descent (10), (11) in the presence of either the power-law \(f(x)=x^{-1/2}\) (solid line) or exponential \(f(x)=\exp(-x)\) (dashed line) memory function. It is seen from the Figure 6 that the non-Markovian dynamics (11) of the nuclear descent \(q(t)\) (10) takes much longer times \(\rm t_{des}\) in the presence of the power-law \(f(x)=x^{-1/2}\) than the exponential \(f(x)=\exp(-x)\) memory function. As it follows from Fig. 6, the most part of the descent time the nuclear system \(q(t)\) (10) remains blocked in a close vicinity of the top \(q_{b}\) of fission barrier, undergoing characteristic shape oscillations (40). 
This quite long oscillatory part of the descent is then followed by a relatively short period of time during which the nuclear system \(q(t)\) exponentially fast reaches the scission point \(q_{sc}\) (42). It is worth to point out that varying the correlation time \(\tau\) from \(10^{-24}\) s up to \(10^{-22}\) s, one can regulate the values of the descent time within a fairly wide interval \(\rm t_{des}\sim[10^{-20}\div 10^{-17}]\) s]. Figure 6: Time of the descent \(\rm t_{des}\) of \({}^{236}U\) from the top of fission barrier (4) to the scission point \(q_{sc}\) (42). Solid and dashed line correspond to the calculations with the power–law \(f(x)=x^{-1/2}\) and with the exponential \(f(x)=\exp(-x)\) memory functions, respectively. Thus, at \(\tau\sim 10^{-23}\) s the descent time becomes comparable to typical fission time scale of actinide nuclei [\(17\times 10^{-20}\div 40\times 10^{-20}\)] s] [27]. ## VI Summary In the present study we have investigated nuclear descent from fission barrier within the generalized Langevin approach (1)-(2) with the power-law memory function \(f(t-t^{\prime})=(|t-t^{\prime}|/\tau)^{-\alpha}\). We have linearized the Langevin equation of motion (1) for a nuclear shape variable \(q(t)\) in vicinity of the top of the parabolic fission barrier (4) and represented its solution (10)-(12) in terms of the Laplace transform \(\tilde{f}(s)\) of the memory function \(f(t-t^{\prime})\). In the long-time run trajectory of the descent is an exponentially unstable, \(q(t)\sim{\rm e}^{s_{1}t},\;\;t\to\infty\), and determined by the largest positive zero \(s_{1}\) of the denominator of the rational expression (12). We have found extremely strong suppression of the values of \(s_{1}\) (30) at exponents \(0<\alpha<1\) of the power-law \(f(x)=x^{-\alpha}\) memory function (solid lines in Fig. 2). On the same time, the exponential \(f(x)=exp(-x)\) memory function gives rise to much larger (by several orders) values of \(s_{1}\) (25) (dashed line in Fig. 2). Thus, the explicit form of the memory function \(f(t-t^{\prime})\), measuring time non-local properties of the friction and random forces in the generalized Langevin equation of motion (1)-(2), plays a crucial role and one needs microscopic models of \(f(t-t^{\prime})\) To define entire time evolution of the nuclear descent trajectory \(q(t)\), we have calculated \(q(t)\) analytically at a particular choice of the exponent \(\alpha=1/2\) of the power-law memory function (28). The obtained solution (34), except the exponentially unstable \(\sim{\rm e}^{s_{1}t}\) (38), also contains the algebraically decaying \(\sim 1/\sqrt{t}\) (39) and the oscillating \(\sim{\rm e}^{\pm i\Omega t-\Gamma t}\) (40) modes of motion. The frequency \(\Omega\) and damping rate \(\Gamma\) (41) of the oscillating modes of motion are growing functions of the correlation time \(\tau\) (Figs. 3 and 4), implying more oscillating and damped character of the nuclear descent (1), (28) with the increase of \(\tau\). By that, the nuclear system \(q(t)\) remains blocked in the close vicinity of the top of fission barrier (Fig. 5) for sufficiently long time. We would like to comment the following fact. On one hand, there have not been found any principal difference in qualitative manifestation of the long-range (28) and short-range (15) memory effects in the nuclear descent dynamics (1) - both of them lead to the appearance of characteristic oscillations of the nuclear shape, see Eqs. (40) and (26). 
The found difference is rather a quantitative one - the delay in the descent, caused by the memory effects, takes much longer times in the case of using the power-law memory function (28). One of the reasons for that is the presence of very slowly-decaying term \(\sim 1/\sqrt{t}\) (39) in the solution (34). We have also estimated a time of the non-Markovian descent (1) from the top \(B\) of fission barrier to scission \(C\) (Fig. 1). With that purpose, the time of the descent has been calculated as a time of the first hit of the mean nuclear trajectory \(\langle q(t)\rangle\) with the scission point \(q_{sc}\) (42). The initial velocity of the system was taken as the mean value of Maxwell distribution (43) with fixed temperature of \(T=2\ {\rm MeV}\). For the nucleus \({}^{236}\)U, we have found that the nuclear descent (1) in the presence of the long-range memory effects (28) takes over times larger than \(10^{-20}\) s at the range of the correlation time values \(\tau\sim[10^{-24}\div 10^{-23}]\) s (solid line in Fig. 6). That implies much smaller estimation for the correlation time \(\tau\) than the one \(\tau\approx 8\times 10^{-23}\) s, obtained earlier in Ref. [17] with the exponential memory function (15) (dashed line in Fig. 6). We also point out that at \(\tau>10^{-23}\) s the time of the descent becomes comparable to typical fission time scale of actinide nuclei [27].
2309.10682
Encoding Robust and Fast Memories in Bulk and Nanoscale Amorphous Solids
We investigate memory effects under oscillatory shear deformation of amorphous solids through computer simulations. Applying shear deformation in all orthogonal directions shows that memories encoded via this protocol are more robust during reading. Our extensive system-size analysis of memory effects shows that memory encoding is faster in small systems than in larger ones and is probably impossible in thermodynamically large systems. In addition to demonstrating how to encode robust memories in 3D bulk amorphous materials, we devise protocols for encoding and reading memories in pseudo-1D materials in the form of amorphous nano-rods. With this, we show that memory encoding and retrieval can also be done in systems with open surfaces, which all real materials necessarily have; this is essential to capitalise on the effectiveness of smaller system sizes for faster memory encoding. All in all, we provide protocols for encoding robust and fast memories in amorphous solids at both the bulk and nanoscale.
Monoj Adhikari, Rishabh Sharma, Smarajit Karmakar
2023-09-19T15:06:46Z
http://arxiv.org/abs/2309.10682v1
# Encoding Robust and Fast Memories in Bulk and Nanoscale Amorphous Solids ###### Abstract We investigate the memory effects under oscillatory shear deformation of amorphous solids through computer simulations. Applications of shear deformations in all orthogonal directions show that encoded memories via this protocol are more robust while performing reading. Our extensive system size analysis of memory effects shows that memory encoding in small systems is faster than in larger systems and is probably impossible in thermodynamically large system sizes. In addition to demonstrating how to encode robust memories in 3D bulk amorphous materials, we devise protocols for encoding and reading memories in pseudo-1D materials in the form of amorphous nano-rods. With this, we show that memory encoding and retrieving can also be done in systems with open surfaces, which all materials would necessarily have in practice, and is thus essential to capitalise on the effectiveness of smaller system sizes to encode memories faster. All in all, we provide protocols for encoding robust and faster memories in amorphous solids both at bulk and nanoscale. **Introduction:** Memory is a ubiquitous concept that can be seen both in biological and mechanical systems and is often thought of as a defining feature of non-equilibrium systems [1]. Although the existence of history dependence/memory is well-established, the challenge usually is to bring out this aspect by quantifying it using some well-stated protocols. Classic instances of memory in materials might range from shape memory in alloys [2] or hysteretic memory in magnetic materials [3]. A simple model of Non-Brownian suspension [4; 5] was seen to retain the memory of the amplitude of the oscillating drive [6; 7]. Similar memory was also found in the much more complex case of glasses. In these systems, memory might take the form of the system "remembering" the mechanical drive (the amplitude of the oscillations applied) [8; 9] or the thermal perturbations that it was subjected to. The case of ageing and rejuvenation is another interesting aspect where the material seems to remember the temperature at which it was aged [10]. The nature of memory that a system can be made to store, along with key attributes like the speed of encoding and reading, the possibility of storing multiple memories, its persistence, etc., all offer valuable insights towards understanding the underlying landscape. In this article, we focus on two crucial aspects of memory in glasses: the speed of reading and writing and fault tolerance. In particular, we provide ways that can be used to encode and read memories faster and in a robust manner. A wide range of disorder systems have been found to have the ability to store and read the memory [7; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Specifically, there has been a notable interest in amorphous solids subjected to cyclic shear deformation. The amorphous solids exhibit a non-equilibrium phase transition under the application of cyclic shear deformation [21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. When the deformation amplitude is smaller than a critical value known as the yielding amplitude (\(\gamma_{Y}\)), the system reaches an absorbing state where the configurations do not change when looked at stroboscopically (at the end of the cycle at zero deformation \(\gamma=0\)). Conversely, when the deformation amplitude exceeds \(\gamma_{Y}\), the configurations change, and the system reaches a diffusive state. 
In the absorbing phase, following a transient period, the system converges to a limit cycle, cycling through the same sequence of states, effectively retaining a'memory' of the deformation amplitude [8; 9; 18; 19; 31; 32; 33; 34]. Upon analyzing these states, Ref.[35] showed that the memory observed in glasses exhibits partial Return Point Memory (RPM). These encoded memories exhibit robustness when subjected to various reading protocols [9]. Furthermore, a recent study in Ref.[36] suggests that the microscopic signals associated with this form of memory can manifest in macroscopic quantities such as energy dissipation. These memory signals can be enhanced by introducing asymmetric shear deformation protocol during training and reading, as was observed in a different model system studied in Ref.[37]. However, one common theme that emerges in all these protocols is that the correct reading of the memory relies on the underlying assumption that the shear direction during reading and training is the same. This points towards the possible practical limitations of storing memory in these solids. In this work, we want to investigate how to circumvent some of these issues by employing multiple shear directions during memory encoding. We would also like to understand the effects of system size on memory formation, especially the time taken to encode the memory in the sample, and whether large systems or smaller systems are better for encoding memory. We show via extensive computer simulations that encoding memory using a multi-directional oscillatory shear protocol is a much more robust method. This is because it does not require any prior knowledge to read the information. We also see that smaller systems are more suitable for practical reasons associated with the time taken for encoding, as encoding time seems to diverge with system size as a power law. Furthermore, we show that encoding and reading memory in the presence of open surfaces is possible using amorphous nano-rods, thus going from three-dimensional bulk storage to pseudo one-dimensional storage devices. **Simulation details:** We have studied a well-known glass-forming model, the Kob-Andersen Binary mixture (\(A_{80}B_{20}\)) with Lennard Jones interactions between particles (BMLJ) [38]. The details of the interaction potential is given in the SM. We simulate BMLJ samples consisting of \(N=500-50000\) particles in 3 dimensions while keeping the number density fixed, \(\rho=1.2\) (\(\rho=N/V\), \(V\) is the volume of the simulation box). Initially, the system is equilibrated at a reduced temperature \(T=1.0\) following a constant temperature molecular dynamics simulation(NVT). After equilibration, the systems are subjected to energy minimization using the conjugate-gradient algorithm to obtain inherent structures [39]. We use the Athermal Quasi-Static (AQS) procedure [21; 40] to perform cyclic shear deformation as described below. Samples undergo oscillatory shear deformation, where a single cycle is defined as follows 0 \(\rightarrow\gamma_{max}\to 0\rightarrow-\gamma_{max}\to 0\), with \(\gamma_{max}\) representing the deformation amplitude. A large number of cycles are applied repeatedly until they reach a steady state. This phase is referred to as training. After training, we perform a reading operation which implies performing a single cycle of shear deformation with varying amplitude: \(0\rightarrow\gamma_{read}\to 0\rightarrow-\gamma_{read}\to 0\). 
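To make the protocol concrete, a minimal sketch of a single AQS cycle is given below (our illustration; the actual simulations, described next, use LAMMPS or an in-house MD code with conjugate-gradient minimization). The strain is changed in small steps \(\delta\gamma\), each step is applied as an affine shear of the particle coordinates, and the configuration is then relaxed to the nearest energy minimum; the minimizer and the handling of the sheared periodic boundaries are delegated to a user-supplied callable.

```python
import numpy as np

def aqs_cycle(pos, gamma_max, d_gamma, minimize, plane="xz"):
    """One AQS cycle 0 -> +gamma_max -> 0 -> -gamma_max -> 0.

    pos       : (N, 3) particle coordinates
    gamma_max : strain amplitude of the cycle
    d_gamma   : elementary strain step of the quasi-static protocol
    minimize  : callable(pos, gamma) -> energy-minimized coordinates
                (e.g. conjugate gradient, including periodic-image handling)
    plane     : "xy", "xz" or "yz"; for "xz" the x coordinate is shifted
                in proportion to z
    """
    i, j = {"xy": (0, 1), "xz": (0, 2), "yz": (1, 2)}[plane]
    up = np.arange(d_gamma, gamma_max + d_gamma / 2, d_gamma)
    down = np.arange(gamma_max - d_gamma, -gamma_max - d_gamma / 2, -d_gamma)
    back = np.arange(-gamma_max + d_gamma, d_gamma / 2, d_gamma)
    gamma = 0.0
    for g in np.concatenate([up, down, back]):
        pos[:, i] += (g - gamma) * pos[:, j]   # affine shear increment
        gamma = g
        pos = minimize(pos, gamma)             # relax to the nearest minimum
    return pos                                 # stroboscopic configuration at gamma = 0
```

Training repeats this cycle at \(\gamma_{max}\) until the stroboscopic configuration returned at \(\gamma=0\) stops changing, and a reading operation is a single such cycle at \(\gamma_{read}\).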
The simulations for bulk systems with periodic boundary conditions (PBC) reported here are performed in LAMMPS [41]. The simulations of nanorods are done using our in-house parallel molecular dynamics code. For the reading protocol, our measured quantity is the mean squared displacement (MSD). We denote \(MSD_{o}\) as the mean squared displacement when measured with respect to the original configuration. Further details of the simulation protocol can be found in the SM. **Results:** We investigate the stability of memory under the different shear directions. We have taken a system that is trained at \(\gamma_{max}=0.03\) with the shear direction being \(xz\). Previous studies [8; 9] show when the direction of shear during training and reading is the same, a kink is observed at the training amplitude. We now ask what happens if the shear direction during reading differs from the shear direction during training. In the inset of Fig. 1, we show \(MSD_{o}\) as a function of \(\gamma_{read}\). When the shear direction of training and reading is the same, which is \(xz\) direction, as expected, \(MSD_{o}\) shows a kink. However, no kink is observed if the shear direction is chosen differently than the training. In this case, we choose \(xy\). This result signifies that the memory encoded with fixed amplitude and a single direction of shear deformation is not robust under perturbations and possible reading mistakes; one needs to know the shear direction of training beforehand to read the memory correctly and indestructibly. The encoded memory is permanently lost if the reading is mistakenly done in the wrong shear direction. We want to explore a training protocol where memory could be stable in any shear direction and thus be robust during the reading protocol. Recently, it has been found that a system reaches a steady state with lower energy when shear is applied in all orthogonal directions of shear, \(xy\), \(xz\) and \(yz\)[42] for a three-dimensional system. We want to understand whether the steady state obtained via oscillatory shear applied on all the orthogonal shear directions sequentially encodes memory; if the answer is yes, then how stable those memories are. We have taken a sample that is trained at an amplitude of 0.03. First, we apply an oscillatory shear cycle in the \(xy\) plane, then \(xz\) and \(yz\). We repeat this sequence for a large number of cycles until the absorbing state is reached. In this protocol, we could use three different stroboscopic configurations (stroboscopic under any of the three orthogonal directions) for reading. We have taken a configuration where the \(xz\) direction of shear is applied last. Then, we perform reading operations with varying directions of shear. When the direction of shear during reading is \(xz\), the expected kink is observed at \(\gamma_{read}=0.03\). Interestingly, \(MSD_{o}\) shows a kink even when the shear direction during reading is \(xy\) or \(yz\) as shown in Fig. 1 (main panel). Similar behaviour can be observed for any starting configuration. Thus, training with multi-direction of shear is more stable for encoding any memory. We need not worry about the shear direction of training while performing reading. Figure 1: \(MSD_{o}\) as a function of \(\gamma_{read}\). Inset: The system is trained at \(\gamma_{max}=0.03\) with the shear direction being \(xz\) for two different reading protocols where the shear direction is varied. 
When the direction of shear during reading and training is the same, the MSD shows a kink at the training amplitude, whereas there is no kink when the direction of shear during reading and training is different. Main Panel: \(MSD_{o}\) as a function of \(\gamma_{read}\) for the system trained with three different shear directions: \(xz\), \(xy\), \(yz\). The starting configuration for the reading operation is the sample where the \(xz\) direction of shear is applied last. The training amplitude is 0.03. For all reading protocols, \(MSD_{o}\) shows a kink at the encoded training amplitude irrespective of the shear directions. Next, we want to systematically investigate the effect of system size on memory formation in amorphous solids, to ascertain the suitability of different system sizes both in terms of stability and in terms of the operational difficulty of memory encoding. We prepare systems of different sizes ranging from \(N=500\) to \(50000\). We apply shear deformation with fixed amplitude in each system for many cycles. In Fig. 2 (a), we show the stroboscopic energy as a function of the number of cycles for different system sizes. We observe that with increasing system size, the number of cycles to reach the steady state increases. The final energy also decreases with increasing system size. When we plot the number of cycles to reach the steady state, \(\tau\), we observe that it grows with system size as a power law, \(\tau\sim N^{\beta}\), with \(\beta\) changing systematically from 0.45 for the small amplitude \(\gamma_{max}=0.02\) to 0.90 for \(\gamma_{max}=0.06\), as shown in Fig. 2(b). Note that the estimated yield strain for this system is \(\gamma_{Y}\simeq 0.07\). This observation implies that the system will never find a steady state in the limit of an infinitely large system; correspondingly, it will be more difficult to store memory in a large system than in a small one. To understand the reason for the increase in the number of cycles \(\tau\) with system size, we look at the number of plastic events the system encounters during the oscillatory shear cycles. It is well known that the number of plastic events increases with increasing system size [43]. Under oscillatory shear deformation, the stroboscopic configurations become the same when the system reaches a steady state. However, within a cycle, the system jumps from one minimum to another, and correspondingly many plastic drops can be observed. Over a full cycle these rearrangements cancel, so that the endpoints of the loop coincide and the energy loop is closed. As we can see in Fig. 2 (d), with increasing system size the system visits a larger number of minima than at small system size. A pertinent question is what happens if the system size is large enough that it has to visit a nearly infinite number of minima; can it still retrace the path to having a closed loop? Interestingly, the number of drops in the steady state also increases with system size as a power law with an exponent close to 0.6, as shown in Fig. 2(d), and this exponent remains close to 0.6 for all the studied strain amplitudes \(\gamma_{max}\). Figure 2: (a) The system is trained with \(\gamma_{max}=0.06\). Stroboscopic energy, \(E(\gamma=0)\), is plotted as a function of \(N_{cycles}\). The steady state for different system sizes is reached at an increasingly larger number of cycles with increasing system size. 
(b) The relaxation time to reach the steady state is plotted as a function of system size. We observe that \(\tau\sim N^{\beta}\), where \(\beta\) is different for different \(\gamma_{max}\). (c) \(\Delta E\) is plotted as a function of \(\gamma\) in the steady state for two different system sizes, as shown in the legend. \(\Delta E=E_{\gamma}-E_{fit}\), where \(E_{\gamma}\) is the actual energy in the steady state in a cycle and \(E_{fit}\) is obtained by a parabolic fit of \(E_{\gamma}\). (d) The number of drops is plotted as a function of system size at the steady state. The number of drops also shows a power law, and the slope is the same for every \(\gamma\). (e) Number of drops (\(N_{drop}^{trans}\)) to reach the steady state as a function of system size \(N\) for different \(\gamma_{max}\). (f) \(\tau\) is plotted as a function of the total number of drops to reach the steady state for all \(\gamma\) and different \(N\). The line has a slope of around 0.5. It is very tempting to invoke the well-known system-size scaling result obtained in uniform shear and oscillatory shear deformation studies on various model systems across spatial dimensions. In the steady state, it has been shown that the average strain between two successive plastic drops shows a subextensive scaling behaviour with system size, \(\langle\Delta\gamma\rangle\sim N^{-2/3}\), in both two- and three-dimensional systems. This suggests that the average strain between two successive plastic drops will vanish in the asymptotic thermodynamic limit. Now, if we assume the same scaling behaviour for the number of drops in the absorbing state as a function of system size for a given strain amplitude \(\gamma_{max}\), then it is natural to expect that the number of plastic drops will grow with system size as \(\sim N^{2/3}\), which is very similar to our observation. One may question the validity of this argument for the absorbing state, since the \(N^{-2/3}\) scaling of the average strain interval between two successive plastic events was established in the steady state; however, a similar scaling with a slightly different exponent also holds for the statistics of the first plastic event [43]. Nevertheless, it demonstrates that encoding memory into a large system will be difficult compared to a smaller system. Surprisingly, we observe that the number of cycles (\(\tau\)) needed to reach a steady absorbing state for all system sizes and strain amplitudes is universally related to the total number of states the system visits (or equivalently, the total number of plastic drops, \(N_{drop}^{trans}\), it encounters) until reaching the absorbing state, as \(\tau\sim(N_{drop}^{trans})^{\alpha}\) with \(\alpha\simeq 0.5\), as shown in Fig. 2(f). This observation is indeed very intriguing, but a microscopic understanding of the exponent is still lacking. 
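The exponents quoted above (\(\beta\) for \(\tau\sim N^{\beta}\), the exponent close to 0.6 for the number of drops, and \(\alpha\simeq 0.5\) relating \(\tau\) to \(N_{drop}^{trans}\)) can be extracted with a simple least-squares fit in log-log space; a minimal sketch is given below. The input arrays are whatever values come out of the cyclic-shear runs, and no specific numbers are assumed here.

```python
import numpy as np

def powerlaw_exponent(x, y):
    """Least-squares fit of y ~ A * x**k in log-log space; returns (k, A)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    k, log_a = np.polyfit(np.log(x), np.log(y), deg=1)
    return k, np.exp(log_a)

# Usage (arrays taken from the simulations):
#   beta, _  = powerlaw_exponent(system_sizes, cycles_to_steady_state)        # tau ~ N**beta
#   alpha, _ = powerlaw_exponent(n_drops_transient, cycles_to_steady_state)   # tau ~ (N_drop^trans)**alpha
```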
**Memory effects in Nanorods:** We have just demonstrated that a smaller system size is preferable for encoding memory. However, real-world finite systems inevitably have open boundaries. Thus, it becomes crucial to establish the existence of memory encoding and possible reading protocols for such systems. Here, we take amorphous nano-rods and encode memory by applying tensile and compressive cycles along the long axis. The choice of this particular geometry and protocol serves a two-fold purpose. Firstly, breaking the symmetry by hand (since the rod has a long axis) ensures that the reading can always be done in the correct direction. This bypasses the need to encode robust memories by training samples in all orthogonal directions, as required for a symmetric cube under oscillatory shear. Thus, this eliminates the risk of inadvertently erasing a memory by applying a reading cycle in the "wrong" direction, as seen in the inset of Fig. 1. Additionally, using a varied deformation protocol like tension and compression cycles, we show that memory effects in dense amorphous solids are not confined to just the extensively studied oscillatory shear case. Instead, this highlights the predisposition of such systems to form memories under very general (cyclic) perturbations. We create the rod geometry by implementing free boundary conditions in the \(y\) and \(z\) dimensions while taking periodic boundary conditions along \(x\) (the long axis of the rod). Each step of the deformation involves scaling the rod length by a factor of \((1+\epsilon)\) along with providing the corresponding affine displacements to the \(x\) coordinates of the particles: \(L_{x}=L_{x}(1+\epsilon)\) and \(x_{i}=x_{i}(1+\epsilon)\). An energy minimization step follows this. As before, one cycle involves going from strain \(0\rightarrow\gamma_{max}\to 0\rightarrow-\gamma_{max}\to 0\), where \(\gamma_{max}\) is the deformation amplitude. This cycle is demonstrated in panel (a.) of Figure 3. Here \(\epsilon=5\times 10^{-5}\), and the rod dimensions along the \(x\), \(y\) and \(z\) axes are taken to be in the ratio \(4:1:1\). Figure 3: Panel (a.) shows the visualisation of one tension-compression cycle applied to a glass nano-rod. Configurations of 0, maximum and minimum strain are also shown. In panel (b.), we see the approach to the limit cycle in a system driven at \(\gamma_{max}=0.02\). Panels (c.) and (d.) show the absorbing state transition for \(\gamma_{max}=0.03\) and \(\gamma_{max}=0.02\), respectively. The insets in each of the panels show the parallel reading protocol. In both cases, the training amplitude can be read off by the sudden large change in the slope of the \(MSD_{o}\) curve. Sequential reading protocols for the same are included in the supplementary material. System size was taken to be \(N=32,000\). We find that an absorbing state transition occurs in amorphous nanorods cyclically driven with an amplitude less than the yielding amplitude \(\gamma_{Y}\). This allows for encoding memory of the training amplitude. We read the memory using parallel and sequential reading protocols. Parallel reading is shown in Fig. 3 panels (c.) & (d.), for \(\gamma_{max}=0.03\) and \(0.02\) respectively. The clear kinks in the \(MSD_{o}\) curves in both cases clearly demonstrate the memory encoding and reading in nanorods. Again, the system reaches a diffusive state for amplitude greater than \(\gamma_{Y}\) and rejuvenates. We see that going from bulk (3D system) to using nanorods (pseudo 1D system) paves the way for a much more efficient way to store and retrieve memory in glassy systems. The efficiency results from using less material, resulting in faster reading and writing times (or faster memory access), a desirable feature in any real-world computational system. Furthermore, less material would, in general, be less dissipative and hence more energy efficient, as there are fewer atomic rearrangements (that is, fewer stress drops), which are all considerations to be kept in mind while fabricating new devices. 
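In practice, the reading operation described above amounts to locating the kink in the \(MSD_{o}(\gamma_{read})\) curve, i.e. the point where its slope changes most abruptly. A minimal sketch of such a detector is given below; the function name and the finite-difference scheme are our own choices for illustration, not part of the analysis code used in this work.

```python
import numpy as np

def locate_kink(gamma_read, msd_o):
    """Estimate the encoded training amplitude from a reading curve.

    gamma_read : 1D array of reading amplitudes (sorted, increasing)
    msd_o      : MSD measured with respect to the original (trained) configuration
    """
    g = np.asarray(gamma_read, float)
    m = np.asarray(msd_o, float)
    slope = np.gradient(m, g)           # d(MSD_o)/d(gamma_read)
    curvature = np.gradient(slope, g)   # the kink is where the slope increases most abruptly
    return g[np.argmax(curvature)]
```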
Moreover, a smaller system does not seem to introduce more noise in encoding or reading memory; thus, we feel that our results will encourage future experiments to ascertain the possibility of memory encoding in amorphous nanorods. Degree of annealing [44; 45; 46; 25; 47; 48; 49] is another aspect that might make such systems even better, as our preliminary results suggest faster encodings in better-annealed rods. **Conclusions:** We have shown that the multi-directional shear can be a much better method to encode memory, which will be more stable for reading as one does not have to worry about the shear direction of training beforehand during the reading process and inadvertent memory loss by applying wrong shear protocol during a reading cycle. Our results also show that the large system takes a long time to reach a steady state, which is essential to encode the memory. Thus, reaching a steady state and corresponding memory encoding in a bulk system can be practically impossible. The fundamental relation between the number of shear cycles taken to reach an absorbing state and the total number of states the system visits (equivalently, the number of plastic drops) during the process indicates a deep relation with the underlying complexity of the landscape. It can be an interesting way to study the landscape properties itself. Finally, our results on the existence of absorbing states in amorphous nanorods and demonstration of memory encoding in a few training cycles suggest that amorphous solids at the nanoscale can be an ideal material for encoding memory for industrial applications. **Acknowledgement:** We acknowledge funding by intramural funds at TIFR Hyderabad from the Department of Atomic Energy (DAE) under Project Identification No. RTI 4007. SK acknowledges Swarna Jayanti Fellowship grants DST/SJF/PSA01/2018-19 and SB/SFJ/2019-20/05 from the Science and Engineering Research Board (SERB) and Department of Science and Technology (DST) and the National Super Computing Mission (NSM) grant DST/NSM/R&D_HPC_Applications/2021/29 for generous funding. Most of the computations are done using the HPC clusters procured using Swarna Jayanti Fellowship grants DST/SJF/PSA01/2018-19, SB/SFJ/2019-20/05 and Core Research Grant CRG/2019/005373. MA acknowledges support from NSM grant DST/NSM/R&D_HPC_Applications/2021/29 for financial support. ## References * Keim _et al._ (2019)N. C. Keim, J. D. Paulsen, Z. Zeravcic, S. Sastry, and S. R. Nagel, Reviews of Modern Physics **91**, 035002 (2019). * Bhattacharya (2003)K. Bhattacharya, _Microstructure of martensite: why it forms and how it gives rise to the shape-memory effect_, Vol. 2 (Oxford University Press, 2003). * Pierce _et al._ (2005)M. S. Pierce, C. R. Buechler, L. B. Sorensen, J. J. Turner, S. D. Kevan, E. A. Jagla, J. M. Deutsch, T. Mai, O. Narayan, J. E. Davies, K. Liu, J. H. Dunn, K. M. Chesnel, J. B. Kortright, O. Hellwig, and E. E. Fullerton, Phys. Rev. Lett. **94**, 017202 (2005). * Corte _et al._ (2008)L. Corte, P. Chaikin, J. P. Gollub, and D. Pine, Nature Physics **4**, 420 (2008). * Pine _et al._ (2005)D. Pine, J. P. Gollub, J. Brady, and A. Leshansky, Nature **438**, 997 (2005). * Keim and Nagel (2011)N. C. Keim and S. R. Nagel, Phys. Rev. Lett. **107**, 010603 (2011). * Paulsen _et al._ (2014)J. D. Paulsen, N. C. Keim, and S. R. Nagel, Phys. Rev. Lett. **113**, 068301 (2014). * Fiocco _et al._ (2014)D. Fiocco, G. Foffi, and S. Sastry, Phys. Rev. Lett. **112**, 025702 (2014). * Adhikari and Sastry (2018)M. Adhikari and S. 
Sastry, The European Physical Journal E **41**, 1 (2018). * Scalliet and Berthier (2019)C. Scalliet and L. Berthier, Physical review letters **122**, 255502 (2019). * Sethna _et al._ (1993)J. P. Sethna, K. Dahmen, S. Kartha, J. A. Krumhansl, B. W. Roberts, and J. D. Shore, Phys. Rev. Lett. **70**, 3347 (1993). * Littlewood (1987)P. B. Littlewood, Japanese Journal of Applied Physics **26**, 1901 (1987). * Middleton (1992)A. A. Middleton, Phys. Rev. Lett. **68**, 670 (1992). * Lilly _et al._ (1993)M. P. Lilly, P. T. Finley, and R. B. Hallock, Phys. Rev. Lett. **71**, 4186 (1993). * Gilbert _et al._ (2015)I. Gilbert, G.-W. Chern, B. Fore, Y. Lao, S. Zhang, C. Nisoli, and P. Schiffer, Phys. Rev. B **92**, 104417 (2015). * Bandi _et al._ (2017)M. Bandi, H. G. E. Hentschel, I. Procaccia, S. Roy, and J. Zylberg, arXiv preprint arXiv:1711.09382 (2017). * Lahini _et al._ (2017)Y. Lahini, O. Gottesman, A. Amir, and S. M. Rubinstein, Phys. Rev. Lett. **118**, 085501 (2017). * Keim _et al._ (2020)N. C. Keim, J. Hass, B. Kroger, and D. Wieker, Physical Review Research **2**, 012004 (2020). * Lindeman and Nagel (2021)C. W. Lindeman and S. R. Nagel, Science Advances **7**, eabg7133 (2021). * Chattopadhyay and Majumdar (2022)S. Chattopadhyay and S. Majumdar, The Journal of Chemical Physics **156** (2022). * Fiocco _et al._ (2013)D. Fiocco, G. Foffi, and S. Sastry, Phys. Rev. E **88**, 020301 (2013). * Regev _et al._ (2013)I. Regev, T. Lookman, and C. Reichhardt, Phys. Rev. E **88**, 062401 (2013). * Stat. Nonlinear, Soft Matter Phys. **87**, 1 (2013), arXiv:arXiv:1301.1666v1. * Leishangthem _et al._ (2017)P. Leishangthem, A. D. Parmar, and S. Sastry, Nature Communications **8**, 14653 (2017). * Bhaumik _et al._ (2021)H. Bhaumik, G. Foffi, and S. Sastry, Proceedings of the National Academy of Sciences **118** (2021). * Yeh _et al._ (2020)W.-T. Yeh, M. Ozawa, K. Miyazaki, T. Kawasaki, and L. Berthier, Physical review letters **124**, 225502 (2020). * Bhaumik _et al._ (2022)H. Bhaumik, G. Foffi, and S. Sastry, Physical Review Letters **128**, 098001 (2022). * Adhikari _et al._ (2022)M. Adhikari, M. Mungan, and S. Sastry, arXiv preprint arXiv:2201.06535 (2022). * Ozawa _et al._ (2023)M. Ozawa, Y. Iwashita, W. Kob, and F. Zamponi, Nature Communications **14**, 113 (2023). * Mutneja _et al._ (2023)A. Mutneja, B. P. Bhowmik, and S. Karmakar, arXiv preprint arXiv:2307.01002 (2023). * Mukherji _et al._ (2019)S. Mukherji, N. Kandula, A. Sood, and R. Ganapathy, Physical review letters **122**, 158001 (2019). * Schwen _et al._ (2020)E. M. Schwen, M. Ramaswamy, C.-M. Cheng, L. Jan, and I. Cohen, Soft matter **16**, 3746 (2020). * Arceri _et al._ (2021)F. Arceri, E. I. Corwin, and V. F. Hagh, Physical Review E **104**, 044907 (2021). * Benson _et al._ (2021)Z. A. Benson, A. Peshkov, D. C. Richardson, and W. Losert, Physical Review E **103**, 062906 (2021). * Mungan _et al._ (2019)M. Mungan, S. Sastry, K. Dahmen, and I. Regev, Physical review letters **123**, 178002 (2019). * Shohat and Lahini (2023)D. Shohat and Y. Lahini, Physical Review Letters **130**, 048202 (2023). * Jalowiec _et al._ (2023)T. R. Jalowiec, C. W. Lindeman, and N. C. Keim, arXiv preprint arXiv:2306.07177 (2023). * Kob and Andersen (1995)W. Kob and H. C. Andersen, Phys. Rev. E **52**, 4134 (1995). * Sastry _et al._ (1998)S. Sastry, P. G. Debenedetti, and F. H. Stillinger, Nature **393**, 554 (1998). * Maloney and Lemaitre (2004)C. Maloney and A. Lemaitre, Physical review letters **93**, 016001 (2004). * Plimpton (1995)S. 
Plimpton, Journal of computational physics **117**, 1 (1995). * Krishnan _et al._ (2023)V. V. Krishnan, K. Ramola, and S. Karmakar, Physical Review Applied **19**, 024004 (2023). * Karmakar _et al._ (2010)S. Karmakar, E. Lerner, and I. Procaccia, Physical Review E **82**, 055103 (2010). * Sastry (2021)S. Sastry, Physical Review Letters **126**, 255501 (2021). * Liu _et al._ (2021)C. Liu, E. Ferrero, E. Jagla, K. Martens, A. Rosso, and L. Talon, arXiv preprint arXiv:2012.15310 (2021). * Khirallah _et al._ (2021)K. Khirallah, B. Tyukodi, D. Vandembroucq, and C. E. Maloney, Phys. Rev. Lett. **126**, 218005 (2021). * Mungan and Sastry (2021)M. Mungan and S. Sastry, Phys. Rev. Lett. **127**, 248002 (2021). * Parley _et al._ (2021)J. T. Parley, S. Sastry, and P. Sollich, (2021), arXiv:2112.11578 [cond-mat.soft]. * Lamp _et al._ (2022)K. Lamp, N. Kuchler, and J. Horbach, The Journal of Chemical Physics **157** (2022). # Encoding Robust and Fast Memories in Bulk and Nanoscale Amorphous Solids (Supplementary Material) Monoj Adhikari [email protected] Tata Institute of Fundamental Research, 36/P, Gopapally Village, Serilingampally Mandal, Ranga Reddy District, Hyderabad 500046, Telangana, India Rishabh Sharma [email protected] Tata Institute of Fundamental Research, 36/P, Gopapally Village, Serilingampally Mandal, Ranga Reddy District, Hyderabad 500046, Telangana, India Smarajit Karmakar [email protected] Tata Institute of Fundamental Research, 36/P, Gopapally Village, Serilingampally Mandal, Ranga Reddy District, Hyderabad 500046, Telangana, India **Details of BMLJ model:** The interaction potential, with a quadratic cut-off, is given by \[V_{\alpha\beta}(r)=4\epsilon_{\alpha\beta}\left[\left(\frac{\sigma_{\alpha\beta}}{r}\right)^{12}-\left(\frac{\sigma_{\alpha\beta}}{r}\right)^{6}+c_{0}+c_{2}\left(\frac{r}{\sigma_{\alpha\beta}}\right)^{2}\right],\quad r_{\alpha\beta}\leq r_{c,\alpha\beta}\] (S1) \[V_{\alpha\beta}(r)=0,\quad r_{\alpha\beta}>r_{c,\alpha\beta}\] where \(\alpha,\beta\in\) (A,B), \(\epsilon_{AB}/\epsilon_{AA}=\epsilon_{BA}/\epsilon_{AA}=1.5\), \(\epsilon_{BB}/\epsilon_{AA}=0.5\), and \(\sigma_{AB}/\sigma_{AA}=\sigma_{BA}/\sigma_{AA}=0.8\), \(\sigma_{BB}/\sigma_{AA}=0.88\). The interaction potential has a cutoff at \(r_{c,\alpha\beta}=2.5\sigma_{\alpha\beta}\). We present results in reduced units, with the units of length, energy and time being \(\sigma_{AA}\), \(\epsilon_{AA}\) and \(\sqrt{\frac{\sigma_{AA}^{2}m_{AA}}{\epsilon_{AA}}}\) respectively. **Preparation protocol for nanorods:** The glass rods in this study were formed by taking the KA mixture at a density of \(\rho=1.2\) and equilibrating it at a high temperature of 1.0. This liquid was then cooled down to a low temperature of 0.01 using a cooling rate of \(\dot{T}=10^{-1}\) in LJ time units. This was followed by an annealing phase assisted by active dynamics, using the protocol demonstrated in [1] with an active forcing \(f_{0}=1.9\). Following this, the system was run in an NPT ensemble for an additional 1000 MD steps at zero pressure. This was ultimately followed by removing the periodic boundaries in the \(y\) and \(z\) directions and minimizing the energy using the conjugate-gradient algorithm. Nose-Hoover thermostats and barostats were used. The time step for integration was taken to be 0.005.
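For completeness, Eq. (S1) can be evaluated with a few lines of code. The smoothing constants \(c_{0}\) and \(c_{2}\) are not quoted above, so the sketch below fixes them by requiring that both the potential and its first derivative vanish at the cutoff; this is an assumption on our part (the usual convention for this quadratic truncation), not a value taken from the text.

```python
import numpy as np

def smoothing_coefficients(sigma, r_c):
    """c0, c2 of Eq. (S1), assuming V and dV/dr both vanish at the cutoff r_c."""
    u = sigma / r_c
    # [ 1   (r_c/sigma)^2  ] [c0]   [ -(u^12 - u^6)            ]
    # [ 0   2 r_c/sigma^2  ] [c2] = [ (12 u^12 - 6 u^6) / r_c  ]
    A = np.array([[1.0, (r_c / sigma) ** 2],
                  [0.0, 2.0 * r_c / sigma ** 2]])
    b = np.array([-(u ** 12 - u ** 6),
                  (12.0 * u ** 12 - 6.0 * u ** 6) / r_c])
    c0, c2 = np.linalg.solve(A, b)
    return c0, c2

def v_pair(r, eps, sigma):
    """BMLJ pair potential with quadratic cutoff smoothing, Eq. (S1)."""
    r_c = 2.5 * sigma
    c0, c2 = smoothing_coefficients(sigma, r_c)
    u = sigma / r
    v = 4.0 * eps * (u ** 12 - u ** 6 + c0 + c2 * (r / sigma) ** 2)
    return np.where(r <= r_c, v, 0.0)

# Example: AA interaction in reduced units (eps_AA = sigma_AA = 1).
r = np.linspace(0.9, 3.0, 200)
v_aa = v_pair(r, eps=1.0, sigma=1.0)
```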
2306.17768
Autonomous and asymptotically quasiconvex functionals with general growth conditions
We obtain local regularity for minimizers of autonomous vectorial integrals of the Calculus of Variations, assuming a $\psi$-growth hypothesis and imposing $\varphi$-quasiconvexity assumptions only in an asymptotic sense, both in the sub-quadratic and the super-quadratic case. In particular, we obtain $C^{1,\alpha}$ regularity at points $x_0$ such that $Du$ is large enough around $x_0$, and consequently Lipschitz regularity on a dense set. The results hold for every pair of Young functions $(\varphi,\psi)$ satisfying the $\Delta_2$ condition.
Francesca Angrisani
2023-06-30T16:20:59Z
http://arxiv.org/abs/2306.17768v1
# Autonomous and Asymptotic Quasiconvex Functionals with General Growth Conditions. ###### Abstract. We obtain local regularity for minimizers of autonomous vectorial integrals of the Calculus of Variations, assuming \(\psi\)-growth hypothesis and imposing \(\varphi\) - quasiconvexity assumptions only in asymptotic sense, both in the sub-quadratic and the super-quadratic case. In particular we obtain \(C^{1,\alpha}\) regularity at points \(x_{0}\) such that \(Du\) is large enough around \(x_{0}\) and clearly Lipschitz regularity on a dense set. The results hold for all couple of Young functions \((\varphi,\psi)\) with \(\Delta_{2}\) condition. **Keywords:**\((\varphi,\psi)\)-growth conditions, asymptotic quasiconvexity, Orlicz growth conditions. **MSC:**\(35J47,35B65,46E30\). ## 1. Introduction In this paper we study multidimensional variational integrals of the type \[\mathcal{F}(u)=\int_{\Omega}f(Du(x))\,dx\quad\text{ for }u:\Omega\to\mathbb{R}^{N}\] where \(\Omega\) is an open bounded set in \(\mathbb{R}^{n}\), \(n\geq 2\), \(N\geq 1\). We consider Young functions \(\varphi\) and \(\psi\) of class \(C^{1}([0,+\infty))\cap C^{2}(0,+\infty)\) such that if \(h\in\{\varphi,\psi\}\): (H.0) \[p_{1}\frac{h^{\prime}(t)}{t}\leq h^{\prime\prime}(t)\leq q_{1}\frac{h^{\prime }(t)}{t}\] for all \(t\geq 0\), where \(p_{1},q_{1}>0\) are positive constants and a Lagrangian function \(f\) s.t. for both \(h\in\{\varphi,\psi\}\) the following assumptions hold: (H.1) **Regularity-**\(f\in C^{2}(\mathbb{R}^{nN},\mathbb{R})\). * **Asymptotical \(W^{1,\varphi}\)-quasiconvexity [12]-** There exists \(M>>0\), \(\gamma>0\) and a continuous function \(g\in W^{1,\varphi}(\mathbb{R}^{nN},\mathbb{R})\) such that \[f(z)=g(z),\quad\forall z:\,|z|>M\] and such that \(g\) is strictly \(W^{1,\varphi}\)-quasiconvex, i.e. satisfies \[\fint_{B_{1}}g(z+D\phi)\geq g(z)+\gamma\fint_{B_{1}}\varphi_{1+|z|}(|D\phi|), \quad\forall z,\,\forall\phi\in C_{0}^{\infty}(B_{1},\mathbb{R}^{N})\] where \(\varphi_{a}(t)\) is defined for any \(0<a\in\mathbb{R}\) via the following equality \[\varphi_{a}^{\prime}(t)=\varphi^{\prime}(a+t)\frac{t}{a+t}\quad\text{ and }\quad\varphi_{a}(0)=0\] and it was shown in [25] to satisfy the property \[\varphi_{a}(t)\sim t^{2}\varphi^{\prime\prime}(a+t).\] * **Growth conditions**- The following inequalities hold \[\Gamma^{\prime}\varphi(|z|)\leq f(z)\leq\Gamma^{\prime\prime}(1+\psi(|z|))\] \[|D^{2}f(z)|\leq\Gamma^{\prime\prime}(1+\psi^{\prime\prime}(|z|))\] for all \(z\in\mathbb{R}^{nN}\) for some positive constants \(\Gamma^{\prime},\Gamma^{\prime\prime}>0\). * **Range of anisotropy** We assume that, for any \(a>M\), the function \(\mathcal{N}_{a}=\phi_{a}\circ(\psi_{a}^{\prime})^{-1}\) is a Young function and the following inequality regarding its complementary Young function \(\mathcal{N}_{a}^{*}\) holds \[[\mathcal{N}_{a}]^{*}(t)\leq c\varphi_{a}^{\beta}(t)\] for all \(t>>1\) and some \(1\leq\beta<\frac{n}{n-1}\) Notice that from \((H.4)\) the inequality \[\psi(t)\leq c\varphi^{\beta}(t),\quad\forall t>>1\] follows. In the particular case \(\varphi(t)=t^{p}\) and \(\psi(t)=t^{q}\), \((H.4)\) is equivalent to \(q<p+\frac{1}{n}\). In [25], it was proven that condition \((H.2)\) is equivalent to: * There exists \(M>>0\), \(\gamma>0\) such that \[\forall z:\,|z|>M\] \[\fint_{B_{1}}f(z+D\phi)\geq f(z)+\gamma\fint_{B_{1}}\varphi_{1+|z|}(|D \phi|),\quad\forall\phi\in C_{0}^{\infty}(B_{1},\mathbb{R}^{N})\] in our contest. In particular, it follows from the fact that \(f\) is locally bounded from below. 
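For orientation, it is worth recording what the hypotheses reduce to in the pure power case \(\varphi(t)=t^{p}\), \(\psi(t)=t^{q}\) with \(1<p\leq q\); the following is only a restatement of the conditions above. For \(h(t)=t^{p}\) one has \[h^{\prime\prime}(t)=p(p-1)t^{p-2}=(p-1)\,\frac{h^{\prime}(t)}{t},\] so \((H.0)\) holds with \(p_{1}=q_{1}=p-1\), and the shifted function satisfies \[\varphi_{a}(t)\simeq t^{2}\varphi^{\prime\prime}(a+t)\simeq t^{2}(a+t)^{p-2}.\] Moreover, as already observed, \((H.4)\) becomes in this case the gap condition \(q<p+\frac{1}{n}\), which is consistent with \(\psi(t)\leq c\,\varphi^{\beta}(t)\) for \(t>>1\) with \(\beta=q/p<\frac{n}{n-1}\).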
We will study local \(W^{1,\varphi}\)-minimizers of \(\mathcal{F}\), i.e. functions \(u\) such that \[\mathcal{F}(u)\leq\mathcal{F}(u+\phi)\quad\forall\phi\in W^{1,\varphi}_{0}(\Omega,\mathbb{R}^{N}).\] In the case of a globally quasiconvex functional and with the superquadratic hypothesis \(\varphi(t)>a(t^{2}-1)\), D. Breit and A. Verde in [5] proved that if \(u\) is a local minimizer of \(\mathcal{F}\), then \(u\) is \(C^{1,\alpha}\) in an open dense subset of \(\Omega\). Adapting and generalizing their arguments, we prove partial \(C^{1,\alpha}\) regularity of a local minimizer \(u\) of \(\mathcal{F}\) around points \(x_{0}\) such that there exists a ball \(B_{r}(x_{0})\) where \(|Du|\) is greater than \(M\). The subject of asymptotically regular problems has often been dealt with in recent years: regularity theory for integrals with a particular structure in a neighborhood of infinity was first investigated by M. Chipot in [9] and then by many others, for example by T. Isernia, C. Leone and A. Verde in [25]. It often happens that, instead of requiring global quasiconvexity, simply localizing the natural assumptions at infinity, both in the scalar and in the vectorial case, is sufficient to obtain Hölder regularity near points where the gradient is close to the value \(z_{0}\) (see also [26], [31], [11], [15], [33], [30], [20], [21], [14], [17]). More precisely, we will obtain the following result: **Theorem 1**.: _Let \(f,\varphi,\psi\) satisfy hypotheses \((H.0)\), \((H.1)\), \((H.2)\), \((H.3)\) and \((H.4)\) and let \(u\) be a local minimizer of the corresponding functional \(\mathcal{F}\). Let \(z_{0}\in\mathbb{R}^{nN}\) be such that \(|z_{0}|>M+1\) and assume there is an \(x_{0}\in\mathbb{R}^{n}\) with the property that_ \[\lim_{\rho\to 0^{+}}\fint_{B_{\rho}(x_{0})}|V(Du(x))-V(z_{0})|^{2}\,dx=0,\] _then \(x_{0}\in\text{Reg}(u)\), where \(\text{Reg}(u)=\{x\in\Omega:\,u\text{ is locally }C^{1,\alpha}(\Omega)\text{ around }x\}\) and \(V(z)\) is defined in Section 3._ This theorem has the following interesting corollary: **Corollary 2**.: _In the hypotheses and notation of Theorem 1, the set of points of local Lipschitzianity of \(u\) is a dense open subset of \(\Omega\)._ In the power case the most common approach in this kind of context is the blow-up argument. Since in the case of general growth one lacks the homogeneity that is fundamental for the blow-up method, we use the so-called \(\mathcal{A}\)-harmonic approximation proved in [12]. Thanks to this approach, one can compare the solution of our problem with the solution of the regular one in terms of closeness of the gradients. A partial regularity result for our type of integrals (i.e. the minimizers are Lipschitz continuous on an open and dense subset of \(\Omega\)) has been obtained in [31]. Our result is optimal in the following sense: it is not possible to establish regularity outside a negligible set, so the statement that our minimizers are \(C^{1,\alpha}\) on a dense subset of \(\Omega\) is optimal. ## 2. 
Young functions and their properties **Definition**.: A real function \(\varphi:[0,+\infty)\to[0,+\infty)\) is called a **Young function** if 1. \(\varphi(0)=0\) 2. \(\varphi\) is differentiable and \(\varphi^{\prime}\) is right-continuous, non-decreasing, and satisfies \[\varphi^{\prime}(0)=0,\quad\varphi^{\prime}(t)>0\ \forall t>0\] 3. \(\lim_{t\to+\infty}\varphi^{\prime}(t)=+\infty\) In this paper, we will also assume Young functions to be \(C^{2}(0,\infty)\cap C^{1}([0,+\infty))\), which is not a substantially restrictive assumption. Notice it easily follows from the definition that a Young function is necessarily convex. **Definition**.: A Young function \(\varphi\) satisfies the \(\Delta_{2}\) condition if there exists a positive constant \(k_{1}>0\) such that, for all \(t>0\) the following holds: \[\varphi(2t)\leq k_{1}\varphi(t)\] and \(\Delta_{2}(\varphi):=\sup_{t>0}\frac{\varphi(2t)}{\varphi(t)}\) If \(\varphi\) satisfies the \(\Delta_{2}\) condition, then \(\varphi(at)\sim\varphi(t)\) for all \(a>1\). **Definition**.: The space \(L^{\varphi}\) of functions defined by \[L^{\varphi}(\Omega):=\{f:\Omega\to\mathbb{R}\ \text{measurable and s.t.}\int_{\Omega}\varphi(|f|)\,dx<+\infty\}\] is called **Orlicz space** related to the Young function \(\varphi\). The **Orlicz-Sobolev space**\(W^{1,\varphi}\) is the space of \(L^{\varphi}\) functions admitting a weak derivative that is itself in the space \(L^{\varphi}\). Also, by \(W^{1,\varphi}_{0}\) we mean the closure in \(W^{1,\varphi}\) of the space of \(C^{\infty}\) functions with compact support. For any Young function \(\varphi\), since \(\varphi^{\prime}\) is non-decreasing, the following generalized inverse is well-defined \[(\varphi^{\prime})^{-1}(t):=\sup\{s\in[0,+\infty):\varphi^{\prime}(s)\leq t\}\] and this allows the following definition **Definition**.: For any Young function \(\varphi\), the complementary function \(\varphi^{*}\), which is also a Young function, is implicitely defined by: \[\varphi^{*}(t)^{\prime}=(\varphi^{\prime})^{-1},\quad\varphi^{*}(0)=0\] ### Examples An example of a family of functions \(f\) satisfying hypotheses \((H.0)\) through \((H.4)\): \[f(z):=\tilde{f}(|z|),\quad\text{where }\tilde{f}(t):=t^{p}\log^{\alpha}(1+t)[ \sin^{2}(t)+\cos^{2}(t)t^{\beta}],\] \[0\leq\alpha,\ 0\leq\beta<\frac{1}{n}\] with choice of \(\varphi\) and \(\psi\) as \[\varphi:=t^{p}\log^{\alpha}(1+t)\quad\text{ and }\quad\psi:=t^{p+\beta}\log^{ \alpha}(1+t)\] which collapses to simpler examples if \(\alpha=0\) or \(\beta=0\). For more examples of Orlicz Spaces and in particular Orlicz Spaces generated by functions with \(\Delta_{2}\) condition see [3]. ## 3. Technical lemmas and definitions Next lemma is a technical tool to prove the Caccioppoli estimate (for the proof see [19]): **Lemma 3**.: _Let \(-\infty<r<s<+\infty\) and a continuous nondecreasing function \(\Xi:[r,s]\to\mathbb{R}\) be given. Then there are \(\tilde{r}\in[r,\frac{2r+s}{3}]\) and \(\tilde{s}\in[\frac{r+2s}{3},s]\), for which hold:_ \[\frac{\Xi(t)-\Xi(\tilde{r})}{t-\tilde{r}}\leq 3\frac{\Xi(s)-\Xi(r)}{s-r}\] _and_ \[\frac{\Xi(\tilde{s})-\Xi(t)}{\tilde{s}-t}\leq 3\frac{\Xi(s)-\Xi(r)}{s-r}\] _for every \(t\in(\tilde{r},\tilde{s})\). In particular, we have \(\frac{s-r}{3}\leq\tilde{s}-\tilde{r}\leq s-r.\)_ The following lemma is related to Young functions satisfying hypothesis \((H.0)\): **Lemma 4**.: _Let \(h\) be a Young function satisfying \((H.0)\). 
Then we have_ * \(h\) _satisfies_ \(\Delta_{2}(h)<+\infty\) _and_ \(\Delta_{2}(h^{*})<+\infty\); * _For all_ \(t>0\) _the following inequality holds:_ \[h(1)(t^{p}-1)\leq h(t)\leq h(1)(t^{q}+1)\] _where_ \(p=p_{1}+1\) _and_ \(q=q_{1}+1\); * _For all_ \(t>0\)_,_ \(h^{\prime}(t)t\) _is equivalent to_ \(h(t)\)_._ For the proof, see Lemma 3.1 in [16]. Let us now introduce the _excess function_ appropriate to our growth hypotheses: **Definition** (Excess).: For any \(z\in\mathbb{R}^{nN}\) let us define the quantity \[V(z):=\sqrt{\frac{\varphi^{\prime}(|z|)}{|z|}}z\] and let us notice that, with our hypotheses, we have \[|V(z_{1})-V(z_{2})|^{2}\simeq\varphi_{|z_{1}|}(|z_{1}-z_{2}|).\] We also define the excess function \[\Phi_{\varphi}(u,x_{0},\rho,z):=\fint_{B_{\rho}(x_{0})}|V(Du)-V(z)|^{2}\,dx\] and \[\Phi_{\varphi}(u,x_{0},\rho):=\fint_{B_{\rho}(x_{0})}|V(Du)-V[(Du)_{B_{\rho}(x_{0})}]|^{2}\,dx\] where by putting a set as a subscript to a function we refer to the integral average of the function over the set, i.e. \([V(Du)]_{B_{\rho}(x_{0})}=\fint_{B_{\rho}(x_{0})}V(Du)\,dx\). We immediately notice that \[\Phi_{\varphi}(u,x_{0},\rho,z)\simeq\fint_{B_{\rho}(x_{0})}\varphi_{|z|}(|Du-z|)\,dx \tag{1}\] _Remark 1_.: Under hypotheses \((H.3)\), assumptions \((H.2)\) and \((H.2^{\prime})\) are known to be equivalent. ## 4. Outline of proof of main theorem and preliminary lemmas In [5], D. Breit and A. Verde proved that if \(u\) is a \(W^{1,\varphi}\)-minimizer of \(\mathcal{F}\) on \(B_{\rho}(x_{0})\), then for all \(L>0\) and \(\alpha\in(0,1)\) there exists \(\varepsilon_{0}>0\) such that if \[\Phi_{\varphi}(u,x_{0},\rho)\leq\varepsilon_{0}\quad\text{ and }\quad\left|\fint_{B_{\rho}(x_{0})}Du\right|\leq\frac{L}{2}\] then \(u\in C^{1,\alpha}_{loc}(B_{\rho}(x_{0});\mathbb{R}^{N})\). We will now replicate their reasoning under the weaker hypothesis of asymptotic \(\varphi\)-quasiconvexity and without the assumption of superquadratic growth behaviour of \(\varphi\). Once the Caccioppoli inequality is proved and the \(\mathcal{A}\)-harmonic approximation is used, it is essential to acquire growth estimates for the excess, because the perturbation terms from the Caccioppoli inequality can be handled using the smallness of the excess. Using Campanato's integral characterization of Hölder continuity ([6]) we are able to obtain the thesis. Let us start by proving the following preliminary lemma: **Lemma 5**.: _If there exists \(z_{0}\), \(|z_{0}|>M+1\) and \(x_{0}\) such that:_ \[\fint_{B_{\rho}(x_{0})}\left|V\left(Du\right)-V(z_{0})\right|^{2}\to 0\text{ as }\rho\to 0^{+}\] _then there exists \(r_{1}=r_{1}(x_{0},z_{0})\) such that for all \(r<r_{1}\)_ \[\left|\fint_{B_{r}(x_{0})}Du\right|>M+1.\] Proof.: Let \(|z_{0}|=M+1+\varepsilon\). 
Then by definition of limit there must be a \(r_{1}\) (depending on the specific values of \(x_{0}\) and \(z_{0}\)) such that for all \(r<r_{1}\) we have, thanks to (1),that: \[\fint_{B_{r}(x_{0})}\varphi_{|z_{0}|}\left(|Du-z_{0}|\right)\leq\varphi_{|z_{0 }|}\left(\frac{\varepsilon}{2}\right),\quad\forall r<r_{1}\] Using Jensen inequality we obtain also that: \[\left|\fint_{B_{r}(x_{0})}Du-z_{0}\right|\leq\frac{\varepsilon}{2},\quad \forall r<r_{1}\] which gives: \[\left|\fint_{B_{r}(x_{0})}Du\right|\geq|z_{0}|-\frac{\varepsilon}{2}=M+1+ \varepsilon-\frac{\varepsilon}{2}>M+1,\quad\forall r<r_{1}.\] Now we enunciate Lemma 2.5 of [5], that is a generalization of the extension operator from [19] to Orlicz spaces, because it is an useful tool also for as: **Lemma 6**.: _Let \(0<r<s\) and \(\alpha\geq p\). Then there exists a linear operator_ \[T_{r,s}:W^{1,\varphi}(\Omega)\to W^{1,\varphi}(\Omega)\] _defined as_ \[T_{r,s}u(x)=\fint_{B_{1}(0)}u(x+\xi(x)y)\,dy,\quad\text{ with }\xi(x):=\frac{ \max\{0,\min\{|x|-r,s-|x|\}\}}{2}\] _such that_ * \(T_{r,s}u=u\) _on_ \(B_{r}\) _and outside_ \(\overline{B}_{s}\)_;_ * \(T_{r,s}u\in u+W_{0}^{1,\varphi}(B_{s}\setminus\overline{B}_{r},\mathbb{R}^{n})\)_;_ * \(|DT_{r,s}u|\leq cT_{r,s}|Du|\)__ * _The following estimates holds:_ \[\int_{B_{s}\setminus B_{r}}\varphi(|T_{r,s}u|)\,dx \leq c\int_{B_{s}\setminus B_{r}}\varphi(|u|)\,dx\] \[\int_{B_{s}\setminus B_{r}}\varphi(|DT_{r,s}u|)\,dx \leq c\int_{B_{s}\setminus B_{r}}\varphi(|Du|)\,dx\] \[\int_{B_{s}\setminus B_{r}}\varphi^{\beta}(|T_{r,s}u|)\,dx\leq c(s-r)^{-n\beta+n+ \beta}\left[\sup_{r\leq t\leq s}\frac{\theta(t)-\theta(r)}{t-r}+\sup_{r\leq t \leq s}\frac{\theta(s)-\theta(t)}{s-t}\right]\] \[\int_{B_{s}\setminus B_{r}}\varphi^{\beta}(|DT_{r,s}u|)\,dx\leq c(s-r)^{-n \beta+n+\beta}\left[\sup_{r\leq t\leq s}\frac{\Theta(t)-\Theta(r)}{t-r}+\sup_ {r\leq t\leq s}\frac{\Theta(s)-\Theta(t)}{s-t}\right]\] _where_ \[\theta(t):=\int_{B_{t}}\varphi(|u|)\,dx,\quad\Theta(t):=\int_{B_{t}}\varphi(| Du|)\,dx.\] Last preliminary lemma is an estimate that will be useful for use to prove Caccioppoli inequality handling both subquadratic and superquadratic case: **Lemma 7**.: _Choose any positive constant \(L\) larger than \(M\) and let \(a\) be a real number in \((M,L)\). Then, for any \(t>0\) we have_ \[\psi_{a}(t)\leq K\cdot H(\varphi_{a}(t))\] _where \(K=K(M,L,\beta,\psi,\varphi)\) is a positive real constant depending on \(L\) and \(H(t):=t+t^{\beta}\)_ Proof.: Let us start with the case \(t\leq 1\). In this case, by \((H.2)\), \[\psi_{a}(t)\simeq\psi^{\prime\prime}(a+t)t^{2}\simeq\psi(a+t)\frac{t^{2}}{(a+t )^{2}}\leq\max_{[M,L+1]}\psi\cdot\frac{t^{2}}{(a+t)^{2}}\leq\\ \leq K_{1}\min_{[M,L+1]}\phi\cdot\frac{t^{2}}{(a+t)^{2}}\leq K_{1} \phi(a+t)\frac{t^{2}}{(a+t)^{2}}\simeq K_{1}\phi_{a}(t)\] where \[K_{1}=\frac{\max_{[M,L+1]}\psi}{\min_{[M,L+1]}\phi}\in(0,+\infty)\] depends only on \(M,L,\psi\) and \(\varphi\). On the other hand, if \(t>1\), \[\psi_{a}(t)\simeq\psi^{\prime\prime}(a+t)t^{2}\simeq\psi(a+t)\frac{t^{2}}{(a +t)^{2}}\leq\varphi^{\beta}(a+t)[\frac{t^{2}}{(a+t)^{2}}]^{\beta}\cdot\left(1+ \frac{a}{t}\right)^{2\beta-2}\leq\\ \leq K_{2}\left[\frac{\varphi(a+t)t^{2}}{(a+t)^{2}}\right]^{\beta }\simeq K_{2}\varphi_{a}(t)^{\beta}\] where \(K_{2}=(1+L)^{2\beta-2}\). The thesis follows with \(K=\max\{K_{1},K_{2}\}\). ## 5. 
Caccioppoli Inequality We are now ready to prove the Caccioppoli inequality This section is dedicated to the Caccioppoli inequality, which is the main tool to prove partial regularity of solutions of this type of problem : **Lemma 8**.: _Let the assumptions \((H.0)-(H.4)\) hold for a given \(M\). Choose any positive constant \(L>M>0\) and a consider \(W^{1,\varphi}\)-minimizer \(u\in W^{1,\varphi}(B_{\rho}(x_{0});\mathbb{R}^{N})\) of \(\mathcal{F}\) on a ball \(B_{\rho}(x_{0})\) contained in \(\Omega\). Then, for all \(z\in\mathbb{R}^{nN}\) with \(M<|z|<L+1\), let \(q(x)\) be an affine function with gradient \(z\) and \(v(x)=u(x)-q(x)\). We have:_ \[\fint_{B_{\frac{\rho}{2}}}\varphi_{|z|}(|Dv|)\,dx\leq c\fint_{B_{ \rho}}\varphi_{|z|}\left(\frac{|v|}{\rho}\right)\,dx+\\ +c\left\{\fint_{B_{\rho}}\left[\varphi_{|z|}(|Dv|)+\varphi_{|z|} \left(\frac{|v|}{\rho}\right)\right]\,dx\right\}^{\beta}. \tag{2}\] Proof.: Let us assume for simplicity \(x_{0}=0\) and choose \[\frac{\rho}{2}\leq r<s\leq\rho.\] Let us define: \[\Xi(t):=\int_{B_{t}}\left[\varphi_{|z|}(|Dv|)+\varphi_{|z|}\left(\left|\frac{v }{\tilde{s}-\tilde{r}}\right|\right)\right]\,dx.\] We choose in addition \(r\leq\tilde{r}<\tilde{s}\leq s\) as in Lemma 3. Let \(\eta\) denote a smooth cut-off functions with support in \(B_{\tilde{s}}\) satisfying \(\eta\equiv 1\) in \(\overline{B_{\tilde{r}}}\) and \(0\leq\eta\leq 1\), \(|\nabla\eta|\leq\frac{2}{\tilde{s}-\tilde{r}}\) on \(B_{\rho}\). Using the extension operator from Lemma 6, we set: \[\zeta:=T_{\tilde{r},\tilde{s}}[(1-\eta)v]\text{ and }\xi:=v-\zeta.\] By \(W^{1,\varphi}\)quasiconvexity we have: \[\gamma\int_{B_{\tilde{s}}}\varphi_{|z|}(|D\xi|)\leq\int_{B_{ \tilde{s}}}f(z+D\xi)-f(z)=\\ =\int_{B_{\tilde{s}}}f(z+D\xi)-f(Du)+f(Du)-f(Du-D\xi)+f(Du-D\xi)-f (z)\] Since \(f(Du)-f(Du-D\xi)\leq 0\) and \(Du=z+D\xi+D\zeta\), we obtain: \[\int_{B_{\tilde{s}}}f(z+D\xi)-f(Du)+f(Du)-f(Du-D\xi)+f(Du-D\xi)-f (z)\leq\\ \leq\int_{B_{\tilde{s}}}f(z+D\xi)-f(z+D\xi+D\zeta)+\int_{B_{ \tilde{s}}}f(z+D\xi)-f(z)\leq\\ \leq\int_{B_{\tilde{s}}}\int_{0}^{1}|Df(z+\theta D\zeta)-Df(z)||D \zeta|\,d\theta,dx+\\ +\int_{B_{\tilde{s}}}\int_{0}^{1}|Df(z+D\xi+\theta D\zeta)-Df(z)|| D\zeta|\,d\theta,dx=:\mathcal{I}_{1}+\mathcal{I}_{2}.\] Let us start reasoning on \(\mathcal{I}_{1}\), remembering our growth hypotheses \((H.3)\) and using also Lemma 3.2 from [25]. 
We deduce: \[\mathcal{I}_{1} \leq\int_{B_{\delta}}\int_{0}^{1}\int_{0}^{1}|D^{2}f(tz+(1-t)(z+ \theta D\zeta))||\theta D\zeta|D\zeta|\,dt\,d\theta,dx\leq\] \[\leq\Gamma^{\prime\prime}\int_{B_{\delta}}\int_{0}^{1}\int_{0}^{1 }|\psi^{\prime\prime}(|tz+(1-t)(z+\theta D\zeta|))||D\zeta|^{2}\,dt\,d\theta,dx\leq\] \[\leq c\int_{B_{\delta}}\frac{\psi^{\prime}(2|z|+|z+D\zeta|)}{2|z|+ |z+D\zeta|}\,dx\leq\] \[\leq c\int_{B_{\delta}}\psi_{|z|}(|D\zeta|)\,dx.\] Regarding \(\mathcal{I}_{2}\) we can deduce: \[\mathcal{I}_{2}\leq\int_{B_{\delta}}\int_{0}^{1}\int_{0}^{1}|D^{ 2}f(t(z+D\xi+\theta D\zeta)+(1-t)z)||D\xi+\theta D\zeta||D\zeta|\,dt\,d\theta \,dx\leq\] \[\leq c\int_{B_{\delta}}\int_{0}^{1}\int_{0}^{1}|\psi^{\prime \prime}(t(z+D\xi+\theta D\zeta)+(1-t)z)||D\xi+\theta D\zeta||D\zeta|\,dt\,d \theta\,dx\leq\] \[\leq c\int_{B_{\delta}}\psi^{\prime\prime}(|z|+|D\xi|+|D\zeta|)(|D \xi|+|D\zeta|)|D\zeta|\,dx\leq\] \[\leq\int_{B_{\delta}}\psi^{\prime}_{|z|}(|D\xi|+|D\zeta|)|D\zeta| \,dx\leq\] \[\leq c\int_{B_{\delta}}\psi^{\prime}_{|z|}(|D\xi|)|D\zeta|+c\int_{ B_{\delta}}\psi^{\prime}_{|z|}(|D\zeta|)|D\zeta|\leq\] \[\leq c\int_{B_{\delta}}\psi^{\prime}_{|z|}(|D\xi|)|D\zeta|+c\int_{ B_{\delta}}\psi_{|z|}(|D\zeta|)\,dx\] Combining our estimates, we have: \[\gamma\int_{B_{\delta}}\varphi_{|z|}(|D\xi|)\leq c\int_{B_{\delta}}\psi^{ \prime}_{|z|}(|D\xi|)|D\zeta|+c\int_{B_{\delta}\setminus B_{\delta}}\psi_{|z| }(|D\zeta|)\,dx\] and using our anisotropy assumption \((H.3)\) and our Lemma 7 we obtain the following estimate: \[\gamma\int_{B_{\hat{s}}}\varphi_{|z|}(|D\xi|)\leq c\int_{B_{\hat{s}}} H[\varphi_{|z|}(|D\zeta|)]\,dx+\\ +c\left[\int_{B_{\hat{s}}\setminus B_{\hat{r}}}\varphi_{|z|}(|D \xi|)\,dx+\int_{B_{\hat{s}}\setminus B_{\hat{r}}}\varphi_{|z|}^{\beta}(|D\zeta| )\,dx\right]\leq\\ \leq c\left[\int_{B_{\hat{s}}\setminus B_{\hat{r}}}\varphi_{|z|}( |D\zeta|)\,dx+\int_{B_{\hat{s}}\setminus B_{\hat{r}}}\varphi_{|z|}^{\beta}(|D \zeta|)\,dx+\int_{B_{\hat{s}}\setminus B_{\hat{r}}}\varphi_{|z|}(|D\xi|)\right]= \\ =\int_{B_{\hat{s}}\setminus B_{\hat{r}}}\varphi_{|z|}(|DT_{\tilde{ r},\tilde{s}}[(1-\eta)v]|)\,dx+\int_{B_{\hat{s}}\setminus B_{\hat{r}}}\varphi_{|z|}^{ \beta}(|DT_{\tilde{r},\tilde{s}}[(1-\eta)v]|)+\\ +c\int_{B_{\hat{s}}\setminus B_{\hat{r}}}\phi_{|z|}(|Dv|)\leq \\ \leq c\int_{B_{\hat{s}}\setminus B_{\hat{r}}}\varphi_{|z|}(|D \eta||v|+|Dv|)+c(\tilde{s}-\tilde{r})^{-n\beta+n+\beta}\left[\sup_{[\tilde{r}, \tilde{s}]}\frac{\Xi(t)-\Xi(\tilde{r})}{t-\tilde{r}}+\sup_{[\tilde{r},\tilde{s} ]}\frac{\Xi(\tilde{s})-\Xi(t)}{\tilde{s}-t}\right]^{\beta}+\\ +c\int_{B_{\hat{s}}\setminus B_{\hat{r}}}\phi_{|z|}(|Dv|)\,dx\leq \\ \leq c^{\prime}\int_{B_{\hat{s}}\setminus B_{\hat{r}}}\phi_{|z|} \left(\left|\frac{v}{\tilde{s}-\tilde{r}}\right|\right)+\varphi_{|z|}(|Dv|)\,dx+ \\ +c(s-r)^{-n\beta+n}[\Xi(s)-\Xi(r)]^{\beta}\] where we have used point (d) of Lemma 6 and Lemma 3. 
Now, starting again from the left, we obtain: \[\int_{B_{r}}\varphi_{|z|}(|Dv|)\,dx \leq c\int_{B_{\rho}}\varphi_{|z|}\left(\frac{|v|}{\tilde{s}- \tilde{r}}\right)+\] \[+c^{\prime}\int_{B_{s}\setminus B_{r}}\varphi_{|z|}(|Dv|)\,dx+c( s-r)^{n}\left[\frac{\Xi(\rho)}{(s-r)^{n}}\right]^{\beta}\] Using the hole-filling method we have: \[\int_{B_{r}}\varphi_{|z|}(|Dv|)\,dx\leq\frac{c^{\prime}}{1+c^{ \prime}}\int_{B_{s}}\varphi_{|z|}(|Dv|)+\\ +c(s-r)^{n}\left[(s-r)^{-n}\int_{B_{\rho}}\varphi_{|z|}(|Dv|)+ \varphi_{|z|}\left(\frac{|v|}{\tilde{s}-\tilde{r}}\right)\right]^{\beta}+\\ +c\int_{B_{\rho}}\varphi_{|z|}\left(\frac{|v|}{s-r}\right)\,dx\] A well-known lemma of Giaquinta (see [21], Chapter V, Lemma 3.1) concludes the proof. ## 6. \(\mathcal{A}\)-harmonicity Let us consider a bilinear form \(\mathcal{A}\) on \(\mathbb{R}^{nN}\) and assume that the upper bound \[|\mathcal{A}|\leq\Lambda \tag{3}\] with a constant \(\Lambda>0\) holds and that the Legendre-Hadamard condition \[\mathcal{A}(yx^{T},yx^{T})\geq\lambda|x|^{2}|y|^{2}\quad\text{ for all }x\in\mathbb{R}^{n},y\in\mathbb{R}^{N} \tag{4}\] with ellipticity constant \(\lambda>0\) is satisfied. We say that \(h\in W^{1,1}_{loc}(\Omega,\mathbb{R}^{N})\) is \(\mathcal{A}\)-harmonic on \(\Omega\) iff \[\int_{\Omega}\mathcal{A}(Dh,D\phi)\,dx=0\] holds for all smooth \(\phi:\Omega\to\mathbb{R}^{N}\) with compact support in \(\Omega\). The following lemma will ensure that, for large \(z\), the bilinear form \(\mathcal{A}=D^{2}f(z)\) satisfies the Legendre-Hadamard condition. **Lemma 9**.: _Let \(f\) satisfy \((H.0)\) and \((H.2^{\prime})\) for a given \(M>0\). Then, for any given \(z\) such that \(|z|>M\), we have that \(\mathcal{A}=D^{2}f(z)\) satisfies the Legendre-Hadamard condition_ \[\mathcal{A}(\zeta x^{T},\zeta x^{T})\geq\lambda|x|^{2}|\zeta|^{2}\quad\text{ for all }x\in\mathbb{R}^{n}\text{ and }\zeta\in\mathbb{R}^{N}\] _with ellipticity constant \(\lambda=2\gamma\)._ Proof.: Let \(u\) be the affine function \(u(x)=zx\) with \(z\) such that \(|z|>M\). \(W^{1,\varphi}\)-quasiconvexity in \(z\) ensures that \(u\) is a \(W^{1,\varphi}\)-minimizer of the functional \(\mathcal{F}\) induced by \(f\) and that the function: \[G(t)=G_{\Phi}(t):=\mathcal{F}_{B_{1}}(u+t\Phi)-\gamma\int_{B_{1}}\varphi_{1+|z |}(|tD\Phi|)\,dx\] has a minimum in \(t=0\) for any \(\phi\in W^{1,\varphi}_{0}(B_{1},\mathbb{R}^{N})\) and, in the same way as it is done in ([23], Prop. 5.2), from \(G^{\prime}_{\Phi}(0)=0\) and \(G^{\prime\prime}_{\Phi}(0)\geq 0\) the Legendre-Hadamard condition will follow. As a matter of fact, from \(G^{\prime\prime}(0)\geq 0\), we obtain: \[\int_{B_{1}}\frac{\partial^{2}F}{\partial z_{k}^{\alpha}\partial z_{j}^{\beta }}(z_{0})D_{k}\phi^{\alpha}D_{j}\phi^{\beta}\,dx\geq 2\gamma\int_{B_{1}}|D \phi^{2}|\,dx \tag{5}\] for every \(\phi\in C^{1}_{c}(B_{1},\mathbb{R}^{N})\). Let us choose \(\phi=\nu+i\mu\) and write (5) for \(\nu\) and for \(\mu\), i.e.: \[\int_{B_{1}}\frac{\partial^{2}F}{\partial z_{k}^{\alpha}\partial z_{j}^{\beta }}(z_{0})D_{k}\nu^{\alpha}D_{j}\nu^{\beta}\,dx\geq 2\gamma\int_{B_{1}}|D\nu^{2}| \,dx \tag{6}\] and \[\int_{B_{1}}\frac{\partial^{2}F}{\partial z_{k}^{\alpha}\partial z_{j}^{\beta }}(z_{0})D_{k}\mu^{\alpha}D_{j}\mu^{\beta}\,dx\geq 2\gamma\int_{B_{1}}|D\mu^{2}| \,dx. 
\tag{7}\] We obtain: \[\int_{B_{1}}\frac{\partial^{2}F}{\partial z_{k}^{\alpha}\partial z_{j}^{\beta}}(z_{0})\left[D_{k}\nu^{\alpha}D_{j}\nu^{\beta}+D_{k}\mu^{\alpha}D_{j}\mu^{\beta}\right]\,dx\geq 2\gamma\int_{B_{1}}|D\nu|^{2}+|D\mu|^{2}\,dx \tag{8}\] and hence: \[\operatorname{Re}\int_{B_{1}}\frac{\partial^{2}F}{\partial z_{k}^{\alpha}\partial z_{j}^{\beta}}(z_{0})D_{k}\phi^{\alpha}D_{j}\overline{\phi}^{\beta}\,dx\geq 2\gamma\int_{B_{1}}|D\phi|^{2}\,dx\] Now, let us consider any \(\xi\in\mathbb{R}^{n}\), \(\eta\in\mathbb{R}^{N}\), \(\tau\in\mathbb{R}\) and \(\Psi(x)\in C_{c}^{\infty}(B_{1},\mathbb{R})\) and choose \(\phi\) such that \(\phi(x)=\eta e^{i\tau(\xi\cdot x)}\Psi(x)\). Since \(\phi^{\alpha}(x)=\eta^{\alpha}\Psi(x)e^{i\tau\xi\cdot x}\), we have \[\int_{B_{1}}\frac{\partial^{2}F}{\partial z_{k}^{\alpha}\partial z_{j}^{\beta}}(z_{0})\eta^{\alpha}\eta^{\beta}[\tau^{2}\xi_{k}\xi_{j}\Psi^{2}+D_{k}\Psi D_{j}\Psi]\,dx\geq 2\gamma|\eta|^{2}\int_{B_{1}}(|D\Psi|^{2}+\tau^{2}|\xi|^{2}|\Psi(x)|^{2})\,dx.\] Dividing by \(\tau^{2}\) and letting \(\tau\to\infty\) we get: \[\int_{B_{1}}\frac{\partial^{2}F}{\partial z_{k}^{\alpha}\partial z_{j}^{\beta}}(z_{0})\xi_{k}\xi_{j}\eta^{\alpha}\eta^{\beta}\Psi^{2}(x)\,dx\geq 2\gamma|\eta|^{2}|\xi|^{2}\int_{B_{1}}\Psi^{2}(x)\,dx\] and since this holds for all \(\Psi\in C_{c}^{\infty}(B_{1},\mathbb{R})\) the proposition is proved. _Remark 2_.: If \(f\in C_{\text{loc}}^{2}(\mathbb{R}^{nN})\), for each \(L>0\), there exists a modulus of continuity \(\omega_{L}:[0,+\infty[\to[0,+\infty[\) satisfying \(\lim\limits_{z\to 0}\omega_{L}(z)=0\) such that for all \(z_{1},z_{2}\in\mathbb{R}^{nN}\) we have: \[|z_{1}|\leq L,\ |z_{2}|\leq L+1\Rightarrow|D^{2}f(z_{1})-D^{2}f(z_{2})|\leq\omega_{L}(|z_{1}-z_{2}|^{2}).\] Moreover, \(\omega_{L}\) can be chosen such that the following properties hold: 1. \(\omega_{L}\) is non-decreasing, 2. \(\omega_{L}^{2}\) is concave, 3. \(\omega_{L}^{2}(z)\geq z\) for all \(z\geq 0\). The following lemma essentially proves that if \(Du\) is close to \(z\), subtracting from \(u\) an affine function with gradient \(z\) gives a function "almost" \(D^{2}f(z)\)-harmonic. **Lemma 10**.: _Let \(f\) satisfy \((H.0)\) through \((H.4)\) for a given \(M>0\). Let us choose any \(L>M>0\) and take \(u\in W^{1,\varphi}(\mathbb{R}^{n},\mathbb{R}^{N})\) to be a \(W^{1,\varphi}\)-minimizer of \(\mathcal{F}\) on some ball \(B_{\rho}(x_{0})\). Then for all \(z:\ M<|z|\leq L\) and \(\phi\in C_{c}^{\infty}(B_{\rho}(x_{0}))\) we have_ \[\left|\fint_{B_{\rho}(x_{0})}D^{2}f(z)(Du-z,D\phi)\,dx\right|\leq c\sqrt{\Phi_{\varphi}}\omega_{L}(\Phi_{\varphi})\sup_{B_{\rho}(x_{0})}|D\phi|. \tag{9}\] _where \(\Phi_{\varphi}:=\Phi_{\varphi}(u,x_{0},\rho,z)\), the constant \(c\) depends only on \(n\), \(N\), \(\Gamma^{\prime}\), \(\Gamma^{\prime\prime}\), \(L\), and \(\omega_{L}\) is the modulus of continuity of Remark 2 (see also [32])._ Proof.: Setting \(v(x):=u(x)-zx\), the Euler equation of \(\mathcal{F}\) gives \[\left|\fint_{B_{\rho}}D^{2}f(z)(Dv,D\phi)\,dx\right|=\\ =\left|\fint_{B_{\rho}}\left[D^{2}f(z)(Dv,D\phi)+Df(z)D\phi-Df(Du)D\phi\right]\,dx\right|.\] If \(|Dv|\leq 1\) we have \[|D^{2}f(z)(Dv,D\phi)+Df(z)D\phi-Df(Du)D\phi|\leq\\ \leq\int_{0}^{1}\left|D^{2}f(z)-D^{2}f(z+tDv)\right|\,dt|Dv|\|D\phi\|_{\infty}\leq\\ \leq\omega_{L}(|Dv|^{2})|Dv|\|D\phi\|_{\infty}\leq\\ \leq c\omega_{L}(\varphi_{|z|}(|Dv|))\varphi_{|z|}(|Dv|)\|D\phi\|_{\infty}\] where in the last step we have used that \[|Dv|^{2}<\inf_{t\in[M,L+1]}\varphi^{\prime\prime}(t)|Dv|^{2}\leq\varphi^{\prime\prime}(|z|+|Dv|)|Dv|^{2}\simeq\varphi_{|z|}(|Dv|)\] from \((H.1)\). 
If, instead, \(|Dv|>1\), since \(M\leq|z|\leq L\), the \((H.1)\) imply that \(\psi^{\prime}(t)>ct\) on \(t>1\) and \((H.3)\) holds, we obtain that: \[|D^{2}f(z)(Dv,D\phi)+Df(z)D\phi-Df(Du)D\phi|\leq\\ \leq c\left(|Dv|+|Dv|\int_{0}^{1}D^{2}f(|z+t(Du-z)|)\,dt\right)\|D \phi\|_{\infty}\leq\\ c\left(|Dv|+\int_{0}^{1}\frac{\psi^{\prime}(|z+t(Du-z)|)}{|z+t( Du-z)|}\,dt\right)\|D\phi\|_{\infty}\leq\\ \leq c\left[\psi^{\prime}(|Dv|)+\frac{\psi^{\prime}(|z|+|Du-z|)}{|z |+|Du-z|}\right]\|D\phi\|_{\infty}\leq\\ \leq c\left[\psi^{\prime}(|Dv|)+\frac{\psi^{\prime}(2|z|)}{|z|}+ \frac{\psi^{\prime}(2|Du-z|)}{2|Du-z|}\right]\|D\phi\|_{\infty}\leq\\ \leq c\left[\psi^{\prime}(|Dv|)+1\right]\|D\phi\|_{\infty}\leq\\ \leq c\varphi(|Dv|)\|D\phi\|_{\infty}\leq c\varphi_{|z|}(|Dv|)\|D \phi\|_{\infty}.\] Now, by the fact that \(\omega_{L}^{2}(t)\geq t\) for \(t\geq 0\) we get \[\left|\fint_{B_{\rho}}D^{2}f(z)(Dv,D\phi)\,dx\right|\leq c\|D\phi\|_{\infty} \fint_{B_{\rho}}\omega_{L}(\varphi_{|z|}(|Dv|))\sqrt{\varphi_{|z|}(|Dv|)}\,dx\] and since \(\omega_{L}\) is non-decreasing, using Cauchy-Schwartz and Jensen inequalities we get \[\left|\fint_{B_{\rho}}D^{2}f(z)(Dv,D\phi)\,dx\right|\leq c\sqrt{\Phi_{\varphi }}\omega_{L}(\Phi_{\varphi})\|D\psi\|_{\infty}\] which concludes the proof. Now we give the statement of Theorem 3.3 from [24], because it is useful also for us: **Lemma 11**.: _Let \(0<\lambda\leq\Lambda<\infty\) and \(\varepsilon>0\). Then there exists a \(\delta(n,N,\varphi,\varphi^{*},\Lambda,\lambda,\varepsilon)>0\) such that the following assertion holds: for all \(\kappa>0\), for all \(\mathcal{A}\) satisfying (3) and (4) and for each \(u\in W^{1,\varphi}(B_{\rho}(x_{0});\mathbb{R}^{N})\) satisfying_ \[\left|\fint_{B_{\rho}(x_{0})}\mathcal{A}(Du,D\phi)\,dx\right|\leq\delta\kappa \sup_{B_{\rho}(x_{0})}|D\phi|\] _for all smooth \(\phi:B_{\rho}(x_{0})\to\mathbb{R}^{N}\) with compact support in \(B_{\rho}(x_{0})\) there exists an \(\mathcal{A}\)-harmonic function \(h\in C^{\infty}_{loc}(B_{\rho}(x_{0}),\mathbb{R}^{N})\) such that:_ \[\sup_{B_{\rho/2}(x_{0})}|Dh|+\rho\sup_{B_{\rho/2}(x_{0})}|D^{2}h|\leq c^{*} \varphi_{|z|}^{-1}\left(\fint_{B_{\rho}(x_{0})}\varphi_{|z|}(|Du|)\right)\] _and_ \[\fint_{B_{\rho/2}(x_{0})}\varphi_{|z|}\left(\frac{|u-h|}{\rho}\right)\,dx\leq \varepsilon\left[\fint_{B_{\rho}(x_{0})}\varphi_{|z|}(|Du|)+\varphi(\gamma) \right].\] _Here \(c^{*}\) denotes a constant depending only on \(n,N,q_{1},\Lambda,\lambda\)._ ## 7. Excess decay estimate **Proposition 12**.: _Let \(z_{0}\) be s.t. \(|z_{0}|>M+1\) and \(x_{0}\) be s.t._ \[\lim_{\rho\to 0}\fint_{B_{\rho}(x_{0})}\left|V(Du(x))-V(z_{0})\right|^{2}=0\] _then_ \[\Phi_{p}(u,x_{0},\rho)\to 0\quad\text{ as }\quad\rho\to 0.\] Proof.: Let us consider \((Du)_{\rho}:=\fint_{B_{\rho}(x_{0})}|Du|\). We have, by triangular inequality that: \[\Phi_{p}(u,x_{0},\rho)=\fint_{B_{\rho}(z_{0})}\left|V(Du(x))-V((Du)_{\rho}) \right|^{2}dx\leq\\ \leq c\left[\fint_{B_{\rho}(x_{0})}\left|V(Du(x))-V(z_{0})\right|^ {2}\,dx+\fint_{B_{\rho}(x_{0})}\left|V(Du_{\rho})-V(z_{0})\right|^{2}\,dx \right].\] First summand of the right side goes to \(0\) by hypothesis. 
The second summand is equivalent to

\[\varphi_{|z_{0}|}\left(\left|\fint_{B_{\rho}(x_{0})}Du(x)-z_{0}\,dx\right|\right).\]

By Jensen's inequality, we obtain:

\[\varphi_{|z_{0}|}\left(\left|\fint_{B_{\rho}(x_{0})}Du(x)-z_{0}\,dx\right|\right)\leq\fint_{B_{\rho}(x_{0})}\varphi_{|z_{0}|}(|Du(x)-z_{0}|)\,dx.\]

The right-hand side of this inequality is equivalent to

\[\fint_{B_{\rho}(x_{0})}|V(Du(x))-V(z_{0})|^{2}\,dx,\]

which vanishes by hypothesis. 

Finally, we can prove the excess decay:

**Proposition 13**.: _Assume \(f\), \(\varphi\) and \(\psi\) satisfy hypotheses \((H.0)\) through \((H.4)\) for given \(p_{1},q_{1}\) and \(M\). Choose any \(L>M+1>0\), \(\alpha\in(0,1)\) and \(z_{0}\in\mathbb{R}^{nN}\) such that \(L>|z_{0}|>M+1\). Then there exist constants \(\varepsilon_{0}>0\), \(\theta\in(0,1)\) and a radius \(\rho^{*}>0\) depending on \(n,N,L,p_{1},q_{1},\Gamma,\alpha,\gamma,x_{0},z_{0}\) and \(\Lambda_{L}:=\max_{B_{L+2}}|D^{2}f|\), and with \(\varepsilon_{0}\) depending additionally on \(\omega_{L}\), such that the following result holds. Consider \(u\) a \(W^{1,\varphi}\)-minimizer of \(\mathcal{F}\) on \(B_{\rho}(x_{0})\), with \(\rho<\rho^{*}\) and \(x_{0}\in\mathbb{R}^{n}\) satisfying_

\[\lim_{\rho\to 0}\fint_{B_{\rho}(x_{0})}|V(Du(x))-V(z_{0})|^{2}=0.\]

_If_

\[\Phi_{\varphi}(u,x_{0},\rho)\leq\varepsilon_{0} \tag{10}\]

_then_

\[\Phi_{\varphi}(u,x_{0},\theta\rho)\leq\theta^{2\alpha}\Phi_{\varphi}(u,x_{0},\rho).\]

Proof.: Let \(z_{0}\) be such that \(|z_{0}|>M+1\) and \(x_{0}\) any point such that

\[\lim_{\rho\to 0}\fint_{B_{\rho}(x_{0})}|V(Du(x))-V(z_{0})|^{2}=0.\]

In what follows, for simplicity of notation, we assume that \(x_{0}=0\) and we abbreviate

\[z=(Du)_{\rho}:=\fint_{B_{\rho}}Du\,dx\]

and

\[\Phi_{\varphi}(\cdot):=\Phi_{\varphi}(u,0,\cdot),\]

where \(\rho>0\) is any positive value small enough (smaller than a \(\rho^{*}\) that will be determined throughout the proof). As the claim is trivial if there exists \(\rho\) such that \(\Phi_{\varphi}(\rho)=0\), we can assume \(\Phi_{\varphi}(\rho)\neq 0\). Setting

\[w(x):=u(x)-zx\]

we have, by our equivalent definition of \(\Phi_{\varphi}(\rho)\), that

\[\fint_{B_{\rho}}\varphi_{|z|}(|Dw|)\,dx=\Phi_{\varphi}(\rho).\]

Next we will make an approximation by \(\mathcal{A}\)-harmonic functions, where \(\mathcal{A}:=D^{2}f(z)\).
If \(\rho\) is chosen sufficiently small we have \(L>|z|>M+1\), hence, from \(|\mathcal{A}|\leq\max_{B_{L+2}}|D^{2}f|=:\Lambda_{L}\) and from Lemma 6 we deduce that \(\mathcal{A}\) satisfies (4) with ellipticity constant \(2\gamma\). Lemma 10 yields the estimate: \[\left|\fint_{B_{\rho}}\mathcal{A}(Dw,D\phi)\,dx\right|\leq C_{2}\sqrt{\Phi_{ \varphi}(\rho)}\omega_{L}\left(\Phi_{\varphi}(\rho)\right)\sup_{B_{\rho}}|D\phi|\] for all \(\rho<\rho^{*}\) and for all smooth functions \(\phi:B_{\rho}\to\mathbb{R}^{N}\) with compact support in \(B_{\rho}\), where \(C_{2}\) is a positive constant depending on \(n,N,p_{1},q_{1},\Gamma,L,\Lambda_{L}\). For \(\varepsilon>0\) to be specified later, let us fix the corresponding constant \(\delta(n,N,\varphi,\Lambda_{L},\gamma,\varepsilon)>0\) from Lemma 11. Now, let \(\varepsilon_{0}=\varepsilon_{0}(n,N,\varphi,\Lambda_{L},\gamma,\varepsilon)\) be small enough so that (10) implies: \[C_{2}\omega_{L}(\Phi_{\varphi}(\rho))\leq\delta \tag{11}\] \[\kappa=\sqrt{\Phi_{\varphi}(\rho)}\leq 1. \tag{12}\] We apply Lemma 11 obtaining an \(\mathcal{A}\)-harmonic function \(h\in C_{loc}^{\infty}(B_{\rho};\mathbb{R}^{N})\) such that \[\sup_{B_{\rho/2}}|Dh|+\rho\sup_{B_{\rho/2}}|D^{2}h|\leq c^{*}\varphi_{|z|}^{-1 }\left(\Phi_{\varphi}(\rho)\right)\] where \(c^{*}=c^{*}(n,N,\varphi,\Lambda_{L},\gamma)\) and \[\fint_{B_{\rho/2}}\varphi_{|z|}\left(\frac{|w-h|}{\rho}\right)\,dx\leq \varepsilon\left[\Phi_{\varphi}(\rho)+\varphi_{|z|}(\kappa)\right]\leq c \varepsilon\Phi_{\varphi}(\rho), \tag{13}\] where this last step follows by noticing that \(\varphi_{|z|}(t)\simeq t^{2}\) when \(t<1\). Now fix \(\theta\in(0,1/4]\). Taylor expansion implies the estimate: \[\sup_{x\in B_{2\theta\rho}}|h(x)-h(0)-Dh(0)x|\leq\\ \leq\frac{1}{2}(2\theta\rho)^{2}\sup_{x\in B_{\rho/2}}|D^{2}h|\leq 2 c^{*}\theta^{2}\rho\varphi_{|z|}^{-1} \left(\Phi_{\varphi}(\rho)\right).\] It follows: \[\fint_{B_{2\theta\rho}}\varphi_{|z|}\left(\frac{|w(x)-h(0)-Dh(0)x|} {2\theta\rho}\right)\,dx\leq\\ \leq c\Big{[}\theta^{-q_{1}-1}\fint_{B_{\rho/2}}\varphi_{|z|}\left( \frac{|w-h|}{\rho}\right)\,dx+\\ +\fint_{B_{2\theta\rho}}\varphi_{|z|}\left(\frac{|h(x)-h(0)-Dh(0) x|}{2\theta\rho}\right)\,dx\Big{]}\leq\\ \leq c\left[\theta^{-q_{1}-1}\varepsilon\Phi_{\varphi}(\rho)+ \varphi_{|z|}(\theta\kappa)\right]\leq\\ \leq c\left[\theta^{-q_{1}-1}\varepsilon\Phi_{\varphi}(\rho)+ \theta^{2}\Phi_{\varphi}(\rho)\right]\leq c\theta^{2}\Phi_{\varphi}(\rho)\] where the last step is obtained by choosing \(\varepsilon:=\varepsilon(\theta)=\theta^{q_{1}+3}\) (so, remember that \(\varepsilon\) and hence \(\delta\) and \(\varepsilon_{0}\) depend on whatever \(\theta\) is) and recalling the definition of \(w\) we have: \[\fint_{B_{2\theta\rho}}\varphi_{|z|}\left(\frac{|u(x)-zx-(h(0)+Dh(0)x)|}{2 \theta\rho}\right)\,dx\leq c\theta^{2}\Phi_{\varphi}(\rho). \tag{14}\] On the other hand, we remark that, using the definition of \(s\) and properties of \(h\): \[|Dh(0)|^{2}\leq(c^{*})^{2}\left[\varphi_{|z|}^{-1}\left(\Phi_{\varphi}(\rho) \right)\right]^{2}. \tag{15}\] We can take \(\varepsilon_{0}\) small enough such that (10) implies \[|Dh(0)|^{2}\leq 1. \tag{16}\] Using this fact together with (15) we get \[\Phi_{\varphi}(2\theta\rho,z+Dh(0))\leq\\ \leq c\left[(2\theta)^{-n}\left(\fint_{B_{\rho}}|V(Du(x))-V(z)|^{ 2}\,dx+\varphi_{|z|}(|Dh(0)|)\right)\right]\leq\\ \leq c\left[\theta^{-n}\left(\Phi_{\varphi}(\rho)+\Phi_{\varphi}( \rho)\right)\right]\leq c\theta^{-n}\Phi_{\varphi}(\rho). 
\tag{17}\]

We now apply the Caccioppoli inequality (2) with \(\zeta=h(0)\) and \(z+Dh(0)\) in place of \(z\); note that \(|z+Dh(0)|>M\) because \(|Dh(0)|\leq 1\). Combining (14) and (17) with (2), we get:

\[\Phi_{\varphi}(\theta\rho,z+Dh(0))\leq c\left[\theta^{2}\Phi_{\varphi}(\rho)+\theta^{2\beta}\Phi_{\varphi}(\rho)^{\beta}+\theta^{-n\beta}\Phi_{\varphi}(\rho)^{\beta}\right]. \tag{18}\]

Moreover, the condition \(|z+Dh(0)|\leq L+1\) of Lemma 8 follows from (16). Now, if \(\varepsilon_{0}\) is chosen small enough, depending on \(\theta\), (10) implies

\[\theta^{-n\beta}\Phi_{\varphi}(\rho)^{\beta-1}\leq\theta^{2}, \tag{19}\]

and from the fact that \(\theta\leq 1\) we have

\[\Phi_{\varphi}(\theta\rho,z+Dh(0))\leq c\theta^{2}\Phi_{\varphi}(\rho).\]

Adapting Lemma 6.2 in [32] (which only uses simple ideas like those in Proposition 12) we deduce from (19):

\[\Phi_{\varphi}(\theta\rho)\leq C_{3}\theta^{2}\Phi_{\varphi}(\rho), \tag{20}\]

where \(C_{3}>0\) depends on \(n,N,\varphi,\Gamma,\gamma,\Lambda_{L},L\). Finally, we choose \(\theta\in(0,\frac{1}{4}]\) (depending on \(\alpha\) and on the quantities \(C_{3}\) depends on) small enough such that

\[C_{3}\theta^{2}\leq\theta^{2\alpha} \tag{21}\]

holds, and \(\varepsilon_{0}\) small enough such that (11), (12), (19) follow from (10). Taking into account (20) and (21), the proof of the proposition is complete. 

The following adaptation of ([32], Lemma 7.10) is then a straightforward consequence of iterating this last proposition.

**Theorem 14**.: _Let us assume \(f\), \(\varphi\) and \(\psi\) satisfy hypotheses \((H.0)\) through \((H.4)\) for given \(p_{1},q_{1}\) and \(M\). Choose any \(L>2M+2>0\), \(\alpha\in(0,1)\) and \(z_{0}\in\mathbb{R}^{nN}\) such that \(\frac{L}{2}>|z_{0}|>M+1\). Then there exist a constant \(\tilde{\varepsilon}_{0}>0\) and a radius \(\rho^{*}>0\) depending on \(n,N,L,p_{1},q_{1},\Gamma,\alpha,\gamma,x_{0},z_{0}\) and \(\Lambda_{L}:=\max\limits_{B_{L+2}}|D^{2}f|\), and with \(\tilde{\varepsilon}_{0}\) depending additionally on \(\omega_{L}\), such that the following holds. Let us consider \(u\) a \(W^{1,\varphi}\)-minimizer of \(\mathcal{F}\) on \(B_{\rho}(x_{0})\), with \(\rho<\rho^{*}\) and \(x_{0}\in\mathbb{R}^{n}\) satisfying_

\[\lim_{\rho\to 0}\fint_{B_{\rho}(x_{0})}|V(Du(x))-V(z_{0})|^{2}=0.\]

_If_

\[\Phi_{\varphi}(u,x_{0},\rho)\leq\tilde{\varepsilon}_{0} \tag{22}\]

_then there exists a constant \(c\) depending on \(n,N,L,p_{1},q_{1},\Gamma,\alpha,\gamma,x_{0},z_{0}\) such that_

\[\Phi_{\varphi}(u,x_{0},r)\leq c\left(\frac{r}{\rho}\right)^{2\alpha}\Phi_{\varphi}(u,x_{0},\rho)\]

_for any \(r<\rho\)._

Proof.: The theorem announced in the introduction follows from Campanato's integral characterization of Hölder continuity.

## 8. Acknowledgements

The author is a member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
2302.14796
Particle-based Online Bayesian Sampling
Online optimization has gained increasing interest due to its capability of tracking real-world streaming data. Although online optimization methods have been widely studied in the setting of frequentist statistics, few works have considered online optimization with the Bayesian sampling problem. In this paper, we study an Online Particle-based Variational Inference (OPVI) algorithm that uses a set of particles to represent the approximating distribution. To reduce the gradient error caused by the use of stochastic approximation, we include a sublinear increasing batch-size method to reduce the variance. To track the performance of the OPVI algorithm with respect to a sequence of dynamically changing target posterior, we provide a detailed theoretical analysis from the perspective of Wasserstein gradient flow with a dynamic regret. Synthetic and Bayesian Neural Network experiments show that the proposed algorithm achieves better results than naively applying existing Bayesian sampling methods in the online setting.
Yifan Yang, Chang Liu, Zheng Zhang
2023-02-28T17:46:32Z
http://arxiv.org/abs/2302.14796v1
# Particle-based Online Bayesian Sampling ###### Abstract Online learning has gained increasing interest due to its capability of tracking real-world streaming data. Although it has been widely studied in the setting of frequentist statistics, few works have considered online learning with the Bayesian sampling problem. In this paper, we study an Online Particle-based Variational Inference (OPVI) algorithm that updates a set of particles to gradually approximate the Bayesian posterior. To reduce the gradient error caused by the use of stochastic approximation, we include a sublinear increasing batch-size method to reduce the variance. To track the performance of the OPVI algorithm with respect to a sequence of dynamically changing target posterior, we provide a detailed theoretical analysis from the perspective of Wasserstein gradient flow with a dynamic regret. Synthetic and Bayesian Neural Network experiments show that the proposed algorithm achieves better results than naively applying existing Bayesian sampling methods in the online setting. Machine Learning, Bayesian Sampling, Bayesian Sampling, Bayesian Sampling ## 1 Introduction Online learning is an indispensable paradigm for problems in the real world, as a machine learning system is often expected to adapt to newly arrived data and respond in real-time. The key challenge in this setting is that the model cannot be updated with all data in history each time, which grows linearly and would make the system unsustainable. There are quite a few online optimization methods developed over the decades that address the challenge by only taking the last arrived batch of data for each update and by using a shrinking step size to control the increase of error. They have been successfully applied to a wide range of tasks like online ranking, network scheduling and portfolio selection (Yu et al., 2017; Pang et al., 2022). Online optimization methods can directly be applied to update models that are fully specified by a certain value of its parameters. Beyond such models, there is another class of models known as Bayesian models that treat the parameters as random variables, thus giving an output also as a random variable (often the expectation is taken as the final output on par with the conventional case). The stochasticity enables Bayesian models to provide diverse outputs, characterize prediction uncertainty, and be more robust to adversarial attacks (Hernandez-Lobato and Adams, 2015; Li and Gal, 2017; Yoon et al., 2018; Zhang et al., 2019; Tolpin et al., 2021; Wagner et al., 2023). Hence Bayesian models are receiving increasing attention in research and practice, and an online learning method for them is highly desired. Nevertheless, the learning procedure of Bayesian models is different from conventional models, which poses a challenge in directly applying online optimization methods in an online setting. This is because a Bayesian model is characterized by the distribution of its parameters but not a single value, and the learning task, a.k.a. Bayesian inference, is to approximate the posterior distribution of the parameters given received data. A tractable solution is Variational Inference (VI) (Jordan et al., 1999; Blundell et al., 2015), which approaches the posterior using a parameterized approximating distribution, which enables optimization methods again (Hoffman et al., 2010; Broderick et al., 2013; Foti et al., 2014; Cherief-Abdellatif et al., 2019). 
However, the accuracy is restricted by the expressiveness of the approximating distribution which is not systematically improvable. A more accurate method is Monte Carlo which aims to draw samples from the posterior. As the posterior is only known with an unnormalized density function, direct sampling is intractable, and Markov chain Monte Carlo (MCMC) is employed. While it makes sampling tractable, it comes with the issue of sample efficiency due to the correlation among the samples. Recently, a new class of Bayesian inference methods is developed, known as particle-based variational inference (ParVI) (Liu and Wang, 2016; Chen et al., 2018; Liu et al., 2019; Zhu et al., 2020; Zhang et al., 2020; Korba et al., 2021; Liu and Zhu, 2022). They try to approximate the posterior using a set of particles (i.e., samples) of a given size, which are iteratively updated to minimize the difference between the particle distribution from the posterior. The accuracy of the method can be systematically improved with more particles, and due to the limited number of particles, sample efficiency is enforced so as to minimize the difference. While ParVI methods have been successfully applied to the full-batch and mini-batch settings, to our knowledge there is no online version of ParVI. In this work, we develop an Online Particle-based Variational Inference (OPVI) method to meet this desideratum and also provide an analysis on its regret bound which can achieve a sublinear order in the number of iterations. The method and analysis are inspired by the distribution optimization view of ParVI on the Wasserstein space, under which we could leverage techniques and theory of conventional online optimization methods. To do this, we first extend existing Maximum a Posterior (MAP) methods to better handle the prior term, and give the regret bound analysis for the online MAP algorithm. We then extend the results to the Wasserstein space as an online sampling method by leveraging the Riemannian structure of the space. Notably, we leverage techniques from online optimization that improves upon naively applying existing ParVI methods in an online setting. Here, we bound the dynamic regret under the Wasserstein space by using a trigonometric distance inequality for the inexact gradient descent method. We study the empirical performance of the method on a 2-dimensional synthetic setting which allows easy visualization, and real-world applications using Bayesian neural networks for image classification. The results suggest better posterior approximation and classification accuracy than naive online ParVI methods and online MCMC methods, which is even comparable to full batch results. ## 2 Related Work Since (Cesa-Bianchi and Lugosi, 2006) study the online properties of VI, there are a couple of works showing online VI gives good performance in practice cases (Hoffman et al., 2010, 2013; Broderick et al., 2013). Furthermore, researchers in (Cherief-Abdellatif et al., 2019) derive the theoretical results for the generalization properties of the Online VI algorithm. Even though online VI is well studied, few papers pay attention to the problem of online MCMC, except (Chopin, 2002; Kantas et al., 2009; Christensen et al., 2012) study a series of sequential Monte Carlo methods that combine importance sampling with Monte Carlo schemes to track the changing distribution. Unfortunately, no previous work considers an online MCMC method from the perspective of optimization methods, not to mention the theory behind them. 
Our method employs a gradient descent-based optimization strategy to update particles toward the target posterior. However, the target posterior is dynamically changing with streaming data arriving in the system, which makes the optimal solutions change. To solve this problem, we consider a performance metric called dynamic regret in our analysis. In previous research, it has been proved that algorithms that achieve low regret under the traditional regret may perform poorly in dynamic environment (Besbes et al., 2015) and it's impossible to achieve a sublinear dynamic regret for an arbitrary sequence of loss functions (Yang et al., 2016). To achieve a sublinear regret, researchers propose different constraints on the sequence of loss functions, like the functional variation (Besbes et al., 2015), gradient variation (Rakhlin and Sridharan, 2013; Yang et al., 2014) and path variation (Yang et al., 2016; Bedi et al., 2018; Cesa-Bianchi et al., 2012). However, even though this dynamic problem is essential to be considered in the analysis of Bayesian inference algorithms, no previous papers considered this. As a result, existing theoretical guarantees regarding the online VI (e.g. (Cherief-Abdellatif et al., 2019)) may be insufficient under the dynamic changing online environment. The stochastic gradient descent algorithm is widely used as an incremental gradient algorithm that offers inexpensive iterations by approximating the gradient with a mini-batch of observations. Through the past decade, it has been used in a wide variety of problems with different variations, like network optimization (Pang et al., 2022; Zhou et al., 2022) reinforcement learning (Liu et al., 2021, 2021), federated learning (Sun and Wei, 2022) and recommendation system (Yang et al., 2020). However, this method, at the same time, incurs gradient error when approximating the gradient. In most of the novel sampling methods, we normally obtain diverse solutions by injecting diffusion noises (e.g. Langevin Dynamic (LD) (Neal et al., 2011), Stochastic Gradient Langevin Dynamics (SGLD) (Welling and Teh, 2011), which makes this type of algorithm sensitive to the noise. For Stein Variational Gradient Descent (SVGD) (Liu and Wang, 2016), there is also a similar instability observed in the experiments, even though the reason is still unknown. This instability makes reducing the stochastic gradient error important. To reduce the gradient error, researchers studied multiple variance reduction methods, like using adaptive learning rates and increasing batch size. In the previous work, an adaptive learning rate was used to adapt the optimization to the most informative features with Adagrad (Ward et al., 2019) and estimate the momentum for Adam(Kingma and Ba, 2014). Compared with the adaptive methods, the increasing batch size methods have greater parallelism and shorter training times (Smith et al., 2017) and are also studied in offline and online cases (Friedlander and Schmidt, 2012; Zhou et al., 2018), which shows great importance to achieve applicable convergence rate and sublinear regret bound. Especially, (Bedi et al., 2018; Yang et al., 2016), give algorithms that consider a more general case of optimization with inexact gradient. ## 3 The online Maximum a Posterior on Euclidean Space \(\mathcal{W}\) In this section, we first introduce an online MAP algorithm on Euclidean decision space \(\mathcal{W}\) with gradient descent method, which helps the reader to understand our OPVI sampling method on Wasserstein space. 
Here, we give some prior knowledge about the online MAP problem and the dynamic regret metric. Then, we give a detailed policy using an online stochastic gradient descent algorithm to solve the online MAP problem and a detailed theoretical analysis based on the dynamic regret metric. ### Preliminaries For an online MAP algorithm run with time slots \(t\in[1,T]\), let \(\mathcal{W}\in\mathcal{R}^{d}\) denote a convex set, set \(w_{t}\in\mathcal{W}\) be some parameter of interest and \(\mathcal{N}_{T}=\{d_{1},\cdots,d_{N_{T}}\}\) be the set of i.i.d. observations. In a typical problem of MAP, we aim to maximize a target posterior \(p(w):=p_{0}(w)\prod_{k=1}^{N_{T}}p(d_{k}\mid w)\), where we usually take logarithm on both sides to simplify the computation as \(\log p(w)=\log p_{0}(w)+\sum_{k}\log p(d_{k}|w)\). Different from the offline MAP, we set a \(\eta_{t}=\frac{6}{\pi^{2}t^{2}}\) adaptive weight for the prior in our online setting, which divides the whole prior for each update with \(\sum_{t=1}^{T}\eta_{t}=1\) when \(T\rightarrow\infty\). Then, the goal of the online MAP problem on \(\mathcal{W}\) is to find parameter \(w_{t}\) that maximizes the cumulative of a linear combination of minus likelihood and partial prior, which can be given as: \[\max_{w_{t}\in\mathcal{W}}\sum_{t=1}^{T}(\sum_{k=1}^{N_{T}}\log p(d_{k}\mid w_ {t})+\eta_{t}\log p_{0}(w_{t})), \tag{1}\] To simplify the notation, we use \(c_{t}^{k}(w_{t}):=-\log p(d_{k}\mid w)\) to denote the log-likelihood with data \(d_{k}\) and \(c_{0}(w_{t}):=-\log p_{0}(w_{t})\) to denote the log-prior, where \(c\) is called the cost function in the literature of optimization and we take minus logarithm since we want to make sure the cost function to be positive all the time. We denote \(c_{t}(w_{t})=\sum_{k=1}^{N_{T}}c_{t}^{k}(w_{t})\) as the true likelihood considering all data in the dataset. Then, we can formulate the goal eq. (1) to be an optimization problem with \(c_{t}+\eta_{t}c_{0}\) as the objective function and follow the goal of: \[\min_{w_{t}\in\mathcal{W}}\sum_{t=1}^{T}c_{t}(w_{t})+\eta_{t}c_{0}(w_{t})\] As we have mentioned in Section 2, the target posterior is dynamically changing with the new observations, we are interested in using dynamic regret as the performance metric for our problem, which is defined as the difference between the total cost incurred at each time slot and a sequence of optimal solutions \(\{w_{t}^{*}\}\) in hindsight, i.e., \[R(T)=\sum_{t=1}^{T}c_{t}(w_{t})+\eta_{t}c_{0}(w_{t})-c_{t}(w_{t}^{*})-\eta_{t}c _{0}(w_{t}^{*}). \tag{2}\] In this paper, instead of using all data in \(\mathcal{N}_{T}\), we consider using mini-batch \(\mathcal{B}_{t}\) to approximate the gradient \(\nabla c_{t}(w_{t})\) as \(\nabla\hat{c}_{t}\), where the approximation leads to a gradient error \(e_{t}:=\nabla\hat{c}_{t}(w_{t})-\nabla c_{t}(w_{t})\) that can calculated by: \[e_{t}=\frac{1}{B_{t}}\sum_{k\in\mathcal{B}_{t}}\nabla c_{t}^{k}(w_{t})-\nabla c _{t}(w_{t}), \tag{3}\] where \(B_{t}=|\mathcal{B}_{t}|\) is the batch size. Note that the gradient error \(e_{t}\) can be deterministic or stochastic, depending on the way we set up the mini-batch. In this paper, we choose to select samples for mini-batch \(\mathcal{B}_{t}\) arbitrarily from \(\mathcal{N}_{T}\), which makes the gradient error to be stochastic in this paper. As a result, the expectation of \(\|e_{t}\|\) can be bounded by some time-varying variable \(\epsilon_{t}\) as \(\mathbb{E}[\|e_{t}\|]\leq\epsilon_{t}\). 
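To make the mini-batch gradient error in eq. (3) concrete, the following sketch estimates \(\|e_{t}\|\) for a toy Gaussian model; the model, parameter value, and batch sizes are illustrative choices, not taken from this paper. In this sketch the full-data gradient is taken as the per-sample average, so that both terms of the error are on the same scale as the sampling-variance formula that appears later in eq. (6).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): N_T scalar observations d_k ~ N(w_true, 1),
# negative log-likelihood c^k(w) = 0.5 * (w - d_k)^2 + const, so grad c^k(w) = w - d_k.
N_T, w_true, w = 10_000, 1.0, 0.3
data = rng.normal(w_true, 1.0, size=N_T)

def grad_per_sample(w, d):
    return w - d

full_grad = grad_per_sample(w, data).mean()   # full-data gradient (per-sample average)

for B_t in [10, 100, 1000]:
    errs = []
    for _ in range(500):
        batch = rng.choice(data, size=B_t, replace=False)
        e_t = grad_per_sample(w, batch).mean() - full_grad   # cf. eq. (3)
        errs.append(abs(e_t))
    predicted = np.sqrt(1.0 / B_t - 1.0 / N_T)               # square root of eq. (6), up to Lambda
    print(f"B_t={B_t:5d}  mean|e_t|={np.mean(errs):.4f}  sqrt(1/B_t - 1/N_T)={predicted:.4f}")
```

With \(\Lambda\approx 1\) for this toy model, the measured error tracks the \(\sqrt{1/B_{t}-1/N_{T}}\) scaling, which is exactly the quantity the time-varying bound \(\epsilon_{t}\) is meant to capture.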
Then, we introduce an error bound \(E_{T}\) to measure the cumulative gradient error lead by the stochastic gradient approximation over \(t\in[1,T]\), which is given by: \[E_{T}:=\sum_{t=1}^{T}\epsilon_{t} \tag{4}\] We will show a sublinear increasing batch size is enough to keep \(E_{T}\) growing sublinear, which enables the online MAP algorithm to enjoy a sublinear dynamic regret. ### Dynamic Algorithm for Online Maximum a Posterior It is well known that the online gradient descent algorithm can be used to solve online optimization problems (Zinkevich, 2003; Besbes et al., 2015; Yang et al., 2022). Here, we give an online stochastic gradient descent algorithm with increasing batch size for the online MAP problem in the following updating policy: \[\mathbf{w}_{t}=\begin{cases}\mathbf{w}_{1}\in\mathcal{W}&t=1\\ \Pi_{\mathcal{W}}[w_{t-1}-\alpha v_{t}(w_{t-1})]&t>1\end{cases}, \tag{5}\] where \(v_{t}(w_{t-1})=\nabla\hat{c}_{t-1}(w_{t-1})+\eta_{t}\nabla c_{0}(w_{t-1})\) where \(\Pi_{\mathcal{W}}\) is the projection back to the convex set \(\mathcal{W}\). We will illustrate the relationship between \(e_{t}\) and \(B_{t}\) in the following analysis. Next, we first introduce some widely used assumptions required for the theoretical analysis. **Assumption 1**.: (Bounded Convex Set) For any two decisions \(w_{1},w_{2}\in\mathcal{W}\), we have \(d(w_{1},w_{2})\leq R\). **Assumption 2**.: (Convexity and Lipschitz smooth) The function \(c_{t}+\eta_{t}c_{0}\) is convex and Lipschitz smooth, so its derivatives are Lipschitz continuous with constant \(L\) with a constant \(L\), i.e., for two real \(w_{1},w_{2}\in\mathcal{W}\), we have: \[\|\nabla c_{t}(w_{1})+\eta_{t}\nabla c_{0}(w_{1})-\nabla c_{t}(w_{ 2})-\eta_{t}\nabla c_{0}(w_{2})\|\] \[\leq L\|w_{1}-w_{2}\|\quad t\in[1,T].\] **Assumption 3**.: (Vanishing gradient) We assume the optimal solutions \(w_{t}^{*}\) lie in the interior of the convex set \(\mathcal{W}\), where we assume there exists \(w_{t}^{*}\) such that \(\nabla c_{t}(w_{t}^{*})+\eta_{t}\nabla c_{0}(w_{t}^{*})=0\) We give a sublinear regret upper bound in the next subsection, which means \(\|w_{t}-w_{t}^{*}\|\) is decreasing and the parameter of interest \(w_{t}\) can converge to the dynamic changing optimal solutions \(w_{t}^{*}\) when \(T\) is large enough. That indicates we can obtain a promising MAP result with the policy in eq. (15). Furthermore, we also give an analysis of the influence of the increasing batch-size setting on the performance of the algorithm. ### Theoretical Analysis for Online MAP In this section, we begin with the proof of the online MAP algorithm following the policy in eq. (15) over the Euclidean space \(\mathcal{W}\). As we mentioned in Section 2, it is impossible to achieve a sublinear regret bound for any sequence of cost functions. To solve this problem, we consider a path variation budget \(V_{T}\) for the sequence of optimal solutions \(\{w_{t}^{*}\}\), which bound the cumulative path length of the optimal solutions as \(V_{T}:=\sum_{t=1}^{T}\|w_{t}^{*}-w_{t-1}^{*}\|\). We give the following Theorem following the proof of (Bedi et al., 2018, Theorem 2). Note that following eq. 3, we use true gradient \(\nabla c_{t}(w_{t})\) and a gradient error \(e_{t}\) to represent the approximated gradient \(\nabla\hat{c}_{t}(w_{t})\) to highlight the influence of the gradient error and simplify the proof. The result is summarized in Theorem 4, which gives the sublinear bound for the dynamic regret \(\mathcal{R}(T)\). 
**Theorem 4**.: _(Regret Bound under \(\mathcal{W}\); Bedi et al., 2018, Theorem 2) Under Assumptions 1-3, given a sequence of optimal solutions \(\{w_{t}^{*}\}\), a variational budget \(V_{T}\) and a gradient error bound \(E_{T}\), and following the updating policy in eq. (15) on the Euclidean space \(\mathcal{W}\subseteq\mathbb{R}^{n}\), the dynamic regret satisfies:_

\[\mathbb{E}[\mathcal{R}(T)]\leq\mathcal{O}(\max(1,E_{T},V_{T})).\]

Proof.: Details of the proof can be found in Appendix A. 

To find the relationship between \(E_{T}\) and \(B_{t}\), and thereby bound \(E_{T}\), we now analyze the gradient error induced by stochastic mini-batch sampling with a sublinearly increasing batch size. Based on Section 2.8 in (Lohr, 2021), we have:

\[\mathbb{E}[\|e_{t}\|^{2}]=\frac{N_{T}-B_{t}}{N_{T}B_{t}}\Lambda^{2}, \tag{6}\]

where \(N_{T}\) is the total number of data samples and \(\Lambda\) is a bound on the sample variance of the gradients, defined by:

\[\frac{1}{N_{T}-1}\sum_{i=1}^{N_{T}}\left\|\nabla c_{t}^{i}(\mathbf{w})-\nabla c_{t}(\mathbf{w})\right\|^{2}\leq\Lambda^{2}\quad\forall\,\mathbf{w}\in\mathcal{W}.\]

To fulfill the requirement on \(\mathbb{E}[\|e_{t}\|^{2}]\) in eq. (6), we take \(\epsilon_{t}=\sqrt{\frac{1}{B_{t}}-\frac{1}{N_{T}}}\) (up to the constant \(\Lambda\)) and the sublinearly increasing batch size \(B_{t}=\frac{N_{T}t^{\rho}}{N_{T}+t^{\rho}}\), \(\rho>0\). Then, for \(0<\rho<2\), we can bound \(E_{T}\) as:

\[E_{T}=\sum_{t=1}^{T}\epsilon_{t}\leq\sum_{t=1}^{T}\sqrt{\frac{1}{t^{\rho}}}\leq\frac{2}{2-\rho}T^{1-\frac{\rho}{2}}. \tag{7}\]

We can see that when the batch size \(B_{t}\) grows sublinearly, the gradient error bound \(E_{T}\) also grows sublinearly. Thus, if the variational budget \(V_{T}\) is also constrained to grow sublinearly, the regret bound is sublinear. Note that in the regret analysis we set a static stepsize \(\alpha\) for convenience; the algorithm can also achieve a sublinear regret bound when the stepsize is diminishing, e.g., \(\alpha_{t}\propto t^{-0.55}\). Next, we illustrate why a static batch size fails to achieve a sublinear regret.

**Remark:** Suppose the batch size is fixed at \(B\). The agent updates \(w_{t}\) over \(T\) rounds and uses a total of \(N_{T}\) data samples, where \(N_{T}=\sum_{t=1}^{T}B=BT\). Following the same setting as in eq. (7), the cumulative gradient error over \(t\in[1,T]\) is

\[E_{T}=\sum_{t=1}^{T}\epsilon_{t}=\sum_{t=1}^{T}\sqrt{\frac{1}{B}-\frac{1}{N_{T}}}=\sum_{t=1}^{T}\sqrt{\frac{1}{B}\left(1-\frac{1}{T}\right)}=\Theta(T),\]

so the gradient error bound \(E_{T}\) grows linearly in \(T\). That makes it impossible to guarantee a sublinear regret bound, which is necessary to ensure that the algorithm eventually converges to the dynamically changing optimal solutions.

## 4 Online Particle-based Variational Inference on Wasserstein Space \(\mathcal{P}_{2}(\mathcal{W})\)

In this section, we propose the OPVI algorithm on \(\mathcal{P}_{2}(\mathcal{W})\), which formulates the online MAP problem in Section 3 as an online sampling method from the perspective of Wasserstein gradient flow. To begin with, we first introduce some preliminary knowledge about the \(2\)-Wasserstein space \(\mathcal{P}_{2}(\mathcal{W})\), as well as its Riemannian structure and the gradient flow on it. Then, we give a brief introduction to a well-known ParVI method, called SVGD (Liu and Wang, 2016), and take it as an example to illustrate how to simulate a ParVI problem as a gradient flow on \(\mathcal{P}_{2}(\mathcal{W})\).
Based on this idea, we give the theoretical analysis for OPVI as a distribution optimization flow on \(\mathcal{P}_{2}(\mathcal{W})\) to show a sublinear dynamic regret. For convenience, we only consider Wasserstein Space supported on the Euclidean space \(\mathcal{W}\) in our analysis. Here, we first clarify the notation used in this section. We use \(\mathcal{C}_{c}^{\infty}\) as a set of compactly supported \(R^{D}-\)valued functions on \(\mathcal{W}\) and use \(\mathcal{C}_{c}^{\infty}\) to denote the scalar-valued functions in \(\mathcal{C}_{c}^{\infty}\). Except for the Euclidean space \(\mathcal{W}\) and Wasserstein space \(\mathcal{P}_{2}(\mathcal{W})\) we just mentioned, we consider two other types of space in this paper, the Hilbert space \(\mathcal{L}_{q}^{2}\) and the vector-valued Reproducing Kernel Hilbert Space (RKHS) \(\mathcal{H}^{D}\) of a kernel \(K\). The Hilbert space \(\mathcal{L}_{q}^{2}\), is a space of \(\mathbf{R}^{D}\)-valued functions \(\left\{u:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\mid\int\|u(x)\|_{2}^{2}\, \mathrm{d}q<\infty\right\}\) with inner product \(\left\langle u,v\right\rangle_{\mathcal{L}_{q}^{2}}:=\int u(x)\cdot v(x) \mathrm{d}q\). The RKHS \(\mathcal{H}\) is a kernel version of the Hilbert space \(\mathcal{L}_{q}^{2}\), which is the closure of linear span \(\left\{f:f(x)=\sum_{i=1}^{m}a_{i}k\left(w,w_{i}\right),a_{i}\in\mathbb{R},m \in\mathbb{N},w_{i}\in\mathcal{W}\right\}\) equipped with inner products \(\left\langle f,g\right\rangle_{\mathcal{L}_{q}^{2}}=\sum_{ij}a_{i}b_{j}k \left(w_{i},w_{j}\right)\) for \(g(w)=\sum_{i}b_{i}k\left(w,w_{i}\right)\). The Wasserstein Space \(\mathcal{P}_{2}(\mathcal{W})\), its Riemannian Structure and the Gradient Flow Generally, the Wasserstein space is a metric space equipped with Wasserstein distance \(d(\cdot,\cdot)\). Set \(P(\mathcal{W})\) as the space of probability measures on the Euclidean support space \(\mathcal{W}\). The \(2\)-Wasserstein space on \(\mathcal{W}\) can be defined as \(\mathcal{P}_{2}(\mathcal{W}):=\left\{\mu\in P(\mathcal{W}):\int_{\mathcal{W}} \|w\|^{2}d\mu(w)<\infty\right\}\). Since the Riemannian structure of Wasserstein space is discovered (Otto, 2001; Benamou and Brenier, 2000), several interesting quantities have been defined, like the gradient and the inner product on it. To define the gradient of a smooth curve \((q_{t})_{t}\) on \(\mathcal{P}_{2}(\mathcal{W})\), we can set a time-dependent vector field \(v_{t}(w)\) on \(\mathcal{W}\), such that for a.e. \(t\in\mathbb{R},\partial_{t}q_{t}+\nabla\cdot(v_{t}q_{t})=0\) and \(v_{t}\in\overline{\left\{\nabla\varphi:\varphi\in C_{c}^{\infty}\right\}}^{ \mathcal{L}_{q_{t}}^{2}}\), where the overline means closure (Villani, 2009). Note that the vector field \(v_{t}\) here is the so-called tangent vector of the curve \((q_{t})_{t}\) at \(q_{t}\) and the closure is denoted as tangent space \(T_{q_{t}}\mathcal{P}_{2}\) at \(q_{t}\), whose elements are the tangent vectors for the curves passing through the point \(q_{t}\). The relation between \(T_{q_{t}}\mathcal{P}_{2}\), \(v_{t}\) and \(\mathcal{P}_{2}(\mathcal{W})\) can be found in Fig. 1. The inner product in the tangent space \(T_{q_{t}}\mathcal{P}_{2}\) is defined on \(\mathcal{L}_{q}^{2}\), which defines the Riemannian structure on \(\mathcal{P}_{2}(\mathcal{W})\) and is consistent with the Wasserstein distance due to the Benamou-Brenier formula (Benamou and Brenier, 2000). 
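As a small numerical illustration of the 2-Wasserstein distance on which \(\mathcal{P}_{2}(\mathcal{W})\) is built, the snippet below compares the closed-form distance between two one-dimensional Gaussians, \(W_{2}^{2}=(m_{1}-m_{2})^{2}+(\sigma_{1}-\sigma_{2})^{2}\), with a sample-based estimate from the monotone (sorted) coupling, which is optimal in one dimension; the particular distributions and sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D Gaussians (illustrative choice). In 1-D the optimal transport map is the
# monotone rearrangement, so W_2 can be estimated by pairing sorted samples.
m1, s1 = 0.0, 1.0
m2, s2 = 2.0, 0.5
n = 100_000

x = np.sort(rng.normal(m1, s1, n))
y = np.sort(rng.normal(m2, s2, n))

w2_empirical = np.sqrt(np.mean((x - y) ** 2))
w2_closed_form = np.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

print(f"empirical   W_2 ~ {w2_empirical:.3f}")
print(f"closed form W_2 = {w2_closed_form:.3f}")
```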
An important role of the vector field representation is that we can approximate the change of distribution \(q_{t}\) within a distribution curve \((q_{t})_{t}\). For a single update in each time slot, we can set \((\mathrm{id}+\varepsilon v_{t})_{\#}q_{t}\) as a first-order approximation of the updated distribution \(q_{t+1}\) in the next time slot (Ambrosio et al., 2005). Therefore, for a set of particles \(\{x_{t}^{(i)}\}_{i}\) that obey distribution \(q_{t}\) at time \(t\), we can update these particles with a stepsize of \(\varepsilon\) as \(\{x_{t}^{(i)}+\varepsilon v_{t}(x_{t}^{(i)})\}_{i}\), to approximate distribution \(q_{t+1}\) in time \(t+1\), when \(\varepsilon\) is small. We show this approximation as a red arrow in Fig. 1. Another important concept on \(\mathcal{P}(\mathcal{W})\) is the definition of the gradient flow. Given a function \(F\), the gradient flow can be described as the family of descending curves \(\{(q_{t})_{t}\}\) that maximize the decreasing rate of the derivative of \(F\). In \(\mathcal{P}_{2}(\mathcal{W})\), the tangent vector of the gradient flow \((q_{t})_{t}\) can be defined by the gradient of \(F\) at \(q_{t}\), which is given by: \[\mathrm{grad}\,F\left(q_{t}\right):=\max_{v:\|v\|_{\mathcal{T}_{q_{t}}}p_{2}=1 }\frac{\mathrm{d}}{\mathrm{d}\varepsilon}F\left((\mathrm{id}+\varepsilon v)_{ \#}q_{t}\right)\Biggr{|}_{\varepsilon=0},\] where we define a measurable transformation \(\mathcal{T}:\mathcal{W}\rightarrow\mathcal{W}\) and denote \(\mathcal{T}_{\#q_{t}}\) as the \(\mathcal{T}\)-transformed distribution for \(q_{t}\). In the task of Bayesian inference, our goal is to minimize the KL-divergence between a current estimated distribution \(q_{t}\) and the target posterior \(p\) as \(KL_{p}(q_{t}):=\int_{\mathcal{W}}\log(q_{t}|p)dq_{t}\), which has the tangent vector for its gradient flow \((q_{t})_{t}\) as a vector field of: \[v_{t}=-\nabla_{q_{t}}\mathrm{KL}_{p}(q_{t})=\nabla\log p-\nabla\log q_{t},\] ### Particle-based Variational Inference Methods In this section, we first use SVGD as an example to illustrate the ParVI methods. Then we show how to simulate SVGD as the gradient flow on Wasserstein space \(\mathcal{P}_{2}(\mathcal{W})\), which can help the analysis of OPVI in the following subsection. For SVGD, let \(\{x_{t}^{(i)}\}_{i=1}^{n}\) be a set of particles that obey an empirical measure of distribution \(q_{t}\). We initialize \(q_{t}\) as some simple distribution \(q_{0}\), then use a vector field \(v\) to update these particles toward the target posterior \(p\): \(x_{t+1}^{(i)}=x_{t}^{(i)}+\varepsilon v(x_{t}^{(i)})\), where \(v\) should be chosen to maximize the decreasing of the KL-divergence Figure 1: Illustration for the updating of \(q_{t}\) over the gradient flow \((q_{t})_{t}\) on \(\mathcal{P}_{2}(\mathcal{W})\), and the relationship between the update of particles \(\{x_{t}^{(i)}\}\) over \(\mathcal{H}\) and the update of distribution \(q_{t}\) over \(\mathcal{P}_{2}(\mathcal{W})\). \(-\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\mathrm{KL}_{p}\left((\mathrm{id}+ \varepsilon v)_{\#}q\right)\big{|}_{\varepsilon=0}\). 
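The pushforward approximation described above can be seen directly in a one-dimensional Gaussian example, sketched below with purely illustrative parameters: for a Gaussian \(q_{t}\) and Gaussian target \(p\) the field \(v_{t}=\nabla\log p-\nabla\log q_{t}\) is affine, so the map \((\mathrm{id}+\varepsilon v_{t})_{\#}\) keeps \(q_{t}\) Gaussian and its mean and standard deviation can be tracked in closed form alongside a particle cloud.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p = N(0, 1); initial q_0 = N(3, 2^2) (illustrative choices).  Because the
# field v_t = grad log p - grad log q_t is affine for Gaussians, the pushforward
# (id + eps * v_t)_# q_t stays Gaussian, so we can update (m, s) exactly while
# moving a particle cloud with the same map.
m, s = 3.0, 2.0
particles = rng.normal(m, s, size=2000)
eps = 0.1

for t in range(60):
    v = -particles + (particles - m) / s**2   # grad log p - grad log q_t at each particle
    particles = particles + eps * v           # particle update x <- x + eps * v(x)
    m = m * (1.0 - eps)                       # exact mean of (id + eps v)_# q_t
    s = abs(s * (1.0 - eps) + eps / s)        # exact std of (id + eps v)_# q_t

print(f"analytic  q_T : mean={m:.3f}, std={s:.3f}")
print(f"particles     : mean={particles.mean():.3f}, std={particles.std():.3f}")
```

Both the analytic parameters and the particle cloud approach the target \(\mathcal{N}(0,1)\). In general, however, \(\nabla\log q_{t}\) is not available for an empirical particle cloud, which is the gap the kernelized construction discussed next is designed to fill.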
In SVGD, the vector field is chosen to be optimized over RKHS \(\mathcal{H}\) with a closed-form solution: \[v_{\mathcal{H}}^{\text{SVGD}}(\cdot):=\nabla\log p(x)k(x,\cdot)+\nabla k(x,\cdot) \tag{8}\] Note that the updating of SVGD particles is actually an approximation of the \(\mathcal{P}_{2}(\mathcal{W})\) gradient flow by taking \(\mathcal{H}\) as its tangent space instead of \(\mathcal{L}_{q_{t}}^{2}\), since the function in \(\mathcal{H}\) is roughly a kernel smoothed function in \(\mathcal{L}_{q_{t}}^{2}\)(Liu and Zhang, 2019). Thus, the vector field \(v_{\mathcal{H}}^{\text{SVGD}}\) in eq. (8) can be used to approximate the vector field \(v_{\mathcal{C}_{q_{t}}^{2}}^{\text{VGD}}\) in \(\mathcal{L}_{q_{t}}^{2}\) on \(P_{2}(\mathcal{W})\)(Liu et al., 2019, Theorem 2), where the solution gives: \[v_{\mathcal{H}}^{\text{SVGD}}=\max_{v\in\mathcal{H},\|v\|_{\mathcal{H}=1}} \langle v_{\mathcal{L}_{q_{t}}^{2}}^{\text{SVGD}},v\rangle_{\mathcal{L}_{q_{t }}^{2}} \tag{9}\] That enables us to use \(v_{\mathcal{C}_{q_{t}}^{2}}^{\text{SVGD}}\) to approximate the vector field \(v_{\mathcal{H}}^{\text{SVGD}}\) on \(P_{2}(\mathcal{W})\) in the following analysis, which like doing a projection from \(\mathcal{H}\) to \(\mathcal{L}_{q_{t}}^{2}\). ### Online Particle-based Variational Inference on \(\mathcal{P}_{2}(\mathcal{W})\) In this section, we aim to develop an online sampling method on \(\mathcal{P}_{2}(\mathcal{W})\) and proposed the OPVI algorithm. We first illustrate the policy of OPVI over RKHS \(\mathcal{H}\). Then, we interpret the OPVI as the gradient flow on \(\mathcal{P}_{2}(\mathcal{W})\) and conduct the theoretical analysis by transferring the proof in 3 from Euclidean space \(\mathcal{W}\) to Wasserstein space \(\mathcal{P}_{2}(\mathcal{W})\). Note that we use \(v_{t}^{\text{OPVI-}\mathcal{H}}\) as the vector field on RKHS \(\mathcal{H}\) and \(v_{t}^{\text{OPVI-}\mathcal{L}^{2}}\) as the vector field on \(\mathcal{L}_{q_{t}}^{2}\). We begin with reviewing the KL-divergence in an offline setting, which is given as: \[\mathrm{KL}_{p}(q_{t})=\mathbb{E}_{q_{t}}[\log p]-\mathbb{E}_{q_{ t}}[\log q_{t}]\] \[= \sum_{k=1}^{N_{T}}\mathbb{E}_{q_{t}}[\log p(d_{k}|\cdot)]+\mathbb{ E}_{q_{t}}[\log p_{0}]-\mathbb{E}_{q_{t}}[\log q_{t}],\] where \(N_{T}\) is the number of data samples in the dataset. Following a similar idea as the online MAP algorithm, we set a \(\eta_{t}=\frac{6}{\pi^{2}t^{2}}\) adaptive weight for the prior in our online setting and using mini-batch with batch size \(B_{t}\) to approximate the likelihood. Thus, we give an online stochastic version of KL-divergence between \(q_{t}\) and the dynamic changing posterior \(p_{t}\) as: \[\text{O-KL}_{p_{t}}(q_{t})\] \[= \sum_{k=1}^{B_{t}}\mathbb{E}_{q_{t}}[\log p(d_{k}|\cdot)]+\eta_{t }\mathbb{E}_{q_{t}}[\log p_{0}]-\mathbb{E}_{q_{t}}[\log q_{t}] \tag{10}\] Similar to SVGD, we first draw a set of particles \(\{x_{0}^{(i)}\}_{i=1}^{n}\) that obey some simple initial distribution \(q_{0}\). 
Then, we update these particles with a gradient descent updating scheme with step size \(\alpha\): \[x_{t+1}^{(i)}=x_{t}^{(i)}+\alpha v_{t}^{\text{OPVI-}\mathcal{H}},\] where \(v_{t}^{\text{OPVI-}\mathcal{H}}\) is the vector field on \(\mathcal{H}\) that maximizes the decrease of online stochastic KL-divergence \(-\frac{d}{d\alpha}\text{O-KL}_{p_{t}}((id+\alpha v_{t})_{\#q})|_{\alpha=0}\) to give a closed-form solution: \[v_{t}^{\text{OPVI-}\mathcal{H}}(\cdot)=\mathbb{E}_{q(x)}[K(x, \cdot)\nabla\sum_{k=1}^{B_{t}}\log p(d_{k}|x)\] \[+\eta_{t}K(x,\cdot)p_{0}(x)+\nabla K(x,\cdot)],\] where \(K(x,x^{\prime})\) is satisfied by commonly used kernels like the exponential kernel \(K(x,x^{\prime})=\exp(-\frac{1}{h}\|x-x^{\prime}\|_{2}^{2})\)and the general workflow of the OPVI algorithm is summarized in Alg. 1. ``` Initialize particles \(\{x_{0}^{(i)}\}_{i=1}^{N}\) for\(t=1,\cdots,T\)do \(x_{t+1}^{(i)}=x_{t}^{(i)}+\alpha v_{t}^{\text{OPVI-}\mathcal{H}}(x_{t}^{(i)})\) where: \(v_{t}^{\text{OPVI-}\mathcal{H}}(x_{t}^{(i)})=\mathbb{E}_{q(x)}[K(x,x_{t}^{(i) })\nabla\sum_{k=1}^{B_{t}}\log p(d_{k}|x)\) \(+\eta_{t}K(x,x_{t}^{(i)})\nabla p_{0}(x)+\nabla K(x,x_{t}^{(i)})]\) endfor ``` **Algorithm 1** Online Particle-based Variational Inference ### Proof of Dynamic Regret Bound under \(\mathcal{P}_{2}(\mathcal{W})\) To begin with, we first formulate the updating rule in Alg. 1 as a Wasserstein gradient flow. Here, we ignore the kernel smooth used in the implementation of the algorithm by approximating the vector field \(v_{t}^{\text{OPVI-}\mathcal{H}}\) on RKHS \(\mathcal{H}\) with the vector field \(v_{t}^{\text{OPVI-}\mathcal{L}^{2}}\) on Hilbert space \(\mathcal{L}^{2}\). To simplify the proof, we denote \(c_{t}^{k}(q_{t})=-\mathbb{E}_{q_{t}}[\log p(d_{k}|\cdot)]\) and \(c_{t}^{0}(q_{t})=-\eta_{t}\mathbb{E}_{q_{t}}[\log p_{0}]+\mathbb{E}_{q_{t}}[ \log q_{t}]\) in eq. (10) and follow eq. (3) to represent the stochastic approximation as the sum of the true gradient and a gradient error \(e_{t}\), which gives: \[v_{t}^{\text{OPVI-}\mathcal{L}^{2}}(q_{t})=-(c_{t}(q_{t})+e_{t}+c_{t}^{0}(q_{t}))\] Then, the updating of the particles can be formulated as an optimal transport for distribution \(q_{t}\) over \(\mathcal{P}_{2}(\mathcal{W})\) as: \[q_{t+1}=\mathrm{Exp}_{q_{t}}(-\alpha(c_{t}(q_{t})+e_{t}+c_{t}^{0}(q_{t}))) \tag{11}\] Before we give the proof for the regret bound, we first re-assume some assumption under the \(\mathcal{P}_{2}(\mathcal{W})\). **Assumption 5**.: (Bounded geodesically-convex (g-convex) set on \(\mathcal{P}_{2}(\mathcal{W})\) ) Assume \(\mathcal{K}\) to be a g-convex set on some Wasserstein space \(\mathcal{P}_{2}(\mathcal{W})\) supported on \(\mathcal{W}\). From Theorem 2 of (Gibbs and Su, 2002), we can establish a bound for the maximum Wasserstein distance in a bounded support space with \(\dim(\mathcal{W})<R\). Then \(\forall q_{1},q_{2}\in\mathcal{P}_{2}(\mathcal{W})\), we have: \[d_{\mathcal{K}}(q_{1},q_{2})\leq 1+R\] which bound the geodesically convex set \(\mathcal{K}\). **Assumption 6**.: (Geodesically-L-Lipschitz (g-L-Lipschitz)). 
Similar to the definition over \(\mathcal{W}\), we assume \(c_{t}(q_{1})+c_{t}^{0}(q_{1})\) to be a g-convex function and has a geodesically L-Lipschitz continuous gradient on \(\mathcal{P}_{2}\mathcal{W}\) if there exists a constant \(L>0\) that: \[|\nabla c_{t}(q_{1})+\nabla c_{t}^{0}(q_{1})-\nabla c_{t}(q_{2})- \nabla c_{t}^{0}(q_{1})|\leq L\cdot d(q_{1},q_{2}),\] \[\forall q_{1},q_{2}\in\mathcal{P}_{2}(\mathcal{W}),\] where \(d(a,b)\) should be some Wasserstein distance. Compared with the proof on \(\mathcal{W}\), the key difference is the way to obtain Lemma 9. Instead of updating a set of parameters of interest over \(\mathcal{W}\), we update the distribution \(q_{t}\) by optimal transport over \(\mathcal{P}_{2}(\mathcal{W})\). **Lemma 7**.: _Suppose that \(\mathcal{P}_{2}(\mathcal{W})\) is a Wasserstein space supported on Euclidean space \(\mathcal{W}\) with the sectional curvature lower bounded by \(-\kappa(\kappa>0)\). Under Assumption 3, 5, 6, for any \(q_{t}\in\mathcal{K}\), following the updating rule in eq. (11), we have:_ \[\mathbb{E}[d(q_{t+1},q_{t}^{*})]\leq\mathbb{E}[d(q_{t},q_{t}^{*})]\] \[-\frac{\Phi}{R}\mathbb{E}[(c_{t}(q_{t})+c_{t}^{0}(q_{t})-c_{t}(q_ {t}^{*})-c_{t}^{0}(q_{t}^{*}))]\] \[+\sqrt{2\alpha\epsilon_{t}^{2}\zeta(\kappa,R)+2\alpha\epsilon_{t} R},\] _where \(\Phi=2\alpha-3L\alpha^{2}\zeta(\kappa,R)\)._ Proof.: The proof can be found in Appendix B Using Lemma 7 and the definition of dynamic regret in eq. (2), we give the dynamic regret bound on \(\mathcal{P}_{2}(\mathcal{W})\) in the following Theorem. **Theorem 8**.: _(Regret Bound over \(\mathcal{P}_{2}(\mathcal{W})\)) Under the Assumption 3, 5, 6, given a sequence of optimal solutions \(\{q_{t}^{*}\}\), define the variational budget \(V_{T}:=\sum_{t=1}^{T}d(q_{t}^{*},q_{t+1}^{*})\) and the error bound \(E_{T}\). Following the updating rule in eq. (11), we have the dynamic regret bound:_ \[\mathcal{R}_{\mathcal{P}_{2}(\mathcal{W})}\leq\mathcal{O}(\max(1,E_{T},V_{T})) \tag{12}\] Proof.: The detail of the proof can be found in Appendix C. Different from the proof of the inexact gradient descent on Euclidean space, we include the trigonometric distance inequality introduced in (Zhang et al., 2016) and give the first glance. Since the gradient error is denied in \(\mathbb{R}^{D}\), we can follow the same analysis as Section 3.3 to bound the gradient error bound \(E_{T}\), which gives a sublinear error bound. As a result, by setting a sublinear increasing constraint for the variational budget \(V_{T}\), we can make sure \(R_{\mathcal{P}_{2}(\mathcal{W})}(T)\) is increasing sublinear. That means the OPVI methods can converge to the dynamic changing target posterior \(p_{t}\) when \(T\) is large enough. In SVGD, the author didn't consider this gradient error in their algorithm. However, since the gradient error can be viewed as a part of noise added into the updating process, we should not use the whole diffusion noise \(\nabla K(x,\cdot)\) in eq. (8). In the experiment, we set the diffusion term as \(0.1\cdot\nabla K(x,\cdot)\) for OPVI. We observe that this trick gives tremendous improvements in performance, especially in some high-dimensional tasks like image classification. ## 5 Experiments In this section, we test the performance of the proposed OPVI algorithm, and compare it with two famous Bayesian sampling methods, the LD (Welling and Teh, 2011) and SVGD (Liu and Wang, 2016). 
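For concreteness, the sketch below spells out one way the particle update of Algorithm 1 might be implemented on a toy one-dimensional model with a Gaussian prior and likelihood. The model, kernel bandwidth, stepsize, and particle count are illustrative choices rather than the paper's experimental settings; the expectation over \(q\) is approximated by the particle average, as is standard for ParVI methods; the mini-batch likelihood gradient is averaged rather than summed, purely to keep the update scale stable in this sketch; and the \(0.1\) scaling of the repulsive term follows the remark at the end of the preceding subsection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy streaming model (illustrative): prior w ~ N(0, 10), likelihood d_k ~ N(w, 1).
def grad_log_lik(w, batch):            # average_k grad_w log p(d_k | w)
    return np.mean(batch[None, :] - w[:, None], axis=1)

def grad_log_prior(w):                 # grad_w log p_0(w) for p_0 = N(0, 10)
    return -w / 10.0

def rbf(w, h=1.0):                     # kernel matrix K and sum_j grad_{w_j} K(w_j, w_i)
    diff = w[:, None] - w[None, :]
    K = np.exp(-diff**2 / h)
    repulsion = ((-2.0 / h) * diff * K).sum(axis=0)
    return K, repulsion

n_particles, alpha, T = 50, 0.05, 1000
w = rng.normal(0.0, 3.0, size=n_particles)         # particles x_t^{(i)}
stream = rng.normal(1.0, 1.0, size=100_000)        # observations generated with w = 1

for t in range(1, T + 1):
    B_t = int(np.ceil(t**0.55))                    # sublinearly increasing batch size
    batch = rng.choice(stream, size=B_t, replace=False)
    eta_t = 6.0 / (np.pi**2 * t**2)                # adaptive prior weight
    K, repulsion = rbf(w)
    drift = grad_log_lik(w, batch) + eta_t * grad_log_prior(w)
    v = (K @ drift + 0.1 * repulsion) / n_particles
    w = w + alpha * v                              # x_{t+1} = x_t + alpha * v_t(x_t)

print(f"particle mean={w.mean():.3f} (data generated with w=1.0), particle std={w.std():.3f}")
```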
We run these methods with three types of batch settings, mini-batch with increasing batch size, mini-batch with static batch size, and full batch. To make the comparison fair, we set a Fixed Iterations and Total Data Samples (FITDS) policy for experiments under the mini-batch setting, which means we set the total number of data samples \(N_{T}\) and the total number of time slots \(T\) to be same for each experiment. Figure 2: Synthetic experiments for different methods. All methods run 500 rounds. Except for the full batch methods (which use much more data samples), other methods use the same number of data samples. Except for the full-batch methods, all algorithms follow the FITDS policy. For a dataset of nearly 10k data samples, we run all methods for 500 rounds and set \(B=20\) for the static batch size methods and \(B_{t}=t^{0.55}\) for the increasing batch size methods to keep \(N_{T}\) same. For full batch methods, we use all 10k data samples in each round to show the best possible results. All experiments are run under the same setting (unless otherwise stated), codes for these experiments are available at [https://github.com/yifanycc/OPVI](https://github.com/yifanycc/OPVI). ### Synthetic Experiments The synthetic experiments follow the setting in (Welling and Teh, 2011) that conduct a simple example with two parameters, based on the mixture Gaussian distribution: \[(\theta_{1},\theta_{2})\sim\mathcal{N}\left((0,0),\operatorname{ diag}\left(\sigma_{1}^{2},\sigma_{2}^{2}\right)\right)\] \[x_{i}\sim 0.5\cdot\mathcal{N}\left(\theta_{1},\sigma_{x}^{2} \right)+0.5\cdot\mathcal{N}\left(\theta_{1}+\theta_{2},\sigma_{x}^{2}\right),\] where \(\sigma_{1}^{2}=10\), \(\sigma_{2}^{2}=1\) and \(\sigma_{x}=2\). Here, we draw approximately 10,000 data samples from the above distribution with \(\theta_{1}=0\) and \(\theta_{2}=1\). Except for the full-batch methods, all algorithms follow the FITDS policy. Fig. 2 shows the results for the OPVI, SVGD, and LD with 100 particles, where the true posteriors are shown as contour and the inference results are represented by the particles. As we can observe from the result, the proposed increasing batch size OPVI gives a better result than the static batch size OPVI, which is caused by the use of increasing batch size as a variance reduction method. Compared with previous SVGD and LD, the OPVI method shows much better performance for tracking the posterior. That should be led by the influence of the gradient noise on the noise injection process of the LD method since we use a smaller diffusion term to offset the gradient error. In the last two figures, we can see the performance of OPVI is approaching or even better than the full batch methods. ### Bayesian Neural Network (BNN) Experiment In this subsection, we further compare our work with SVGD and LD on some Bayesian Neural Networks (BNN) tasks. We follow the experiment setting in (Liu and Tao, 2015), which uses a single hidden layer BNN with 50 hidden units. We use a Gamma(1, 0.1) function in the prior distribution, Kin8nm as the dataset and divide the dataset randomly 90% for training and 10% for testing. For all methods, we set the number of particles to 20. All ParVI methods use the same stepsize, except for LD, which uses a smaller but best possible stepsize. We test the Root Mean Squared Error (RMSE) and the test Log-Likelihood (LL). The experiment results are shown in Table. 1. 
The OPVI algorithm can achieve an **11.8% and 20.1% improvement** compared with SVGD and LD with the same total number of data \(N_{T}\) and the same total time slots \(T\) respectively. This result is even comparable to the full batch SVGD algorithm. Note that the running time for OPVI is the same as the SVGD algorithm, which is less than half of the full batch methods. ### Image classification Task Finally, we conduct experiments to test the performance of the proposed algorithm on a high-dimensional image classification problem. The dataset we used is the MNIST dataset, which contains 60,000 training cases and 10,000 test cases. We consider a two-layer BNN model with 100 hidden variables, with a sigmoid input layer and a softmax output layer. All experiments are using 20 particles. The comparison result is shown in Fig. 3. As we can see from the figure, except for the full batch LD algorithm, the OPVI algorithm with an increasing batch size achieves the best result. However, the full batch LD method uses much more time (30 times) and data samples (500 times), and the result is similar. We can observe that the noise of the increasing batch size OPVI is decreasing with \(t\) increase, which verifies our analysis for the gradient error. An interesting thing is that SVGD shows poor performance in this high-dimensional task, which may lead by an incorrect approximation for the diffusion term with limited particle numbers. Instead, we improve the diffusion term in OPVI, which solves this problem. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline Methods & Avg. RMSE & Avg. LL & Time \\ \hline OPVI \(B_{t}=t^{0.55}\) & \(.127\pm.008\) & \(.653\pm.060\) & 2.4 \\ OPVI \(B=20\) & \(.145\pm.003\) & \(.516\pm.021\) & 2.4 \\ SVGD \(B=20\) & \(.144\pm.003\) & \(.525\pm.019\) & 2.4 \\ SVGD \(B=10k\) & \(.112\pm.002\) & \(.783\pm.017\) & 5.8 \\ LD \(B=20\) & \(.159\pm.004\) & \(.425\pm.024\) & 1.7 \\ LD \(B=10k\) & \(.143\pm.002\) & \(.527\pm.015\) & 5.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Results on a BNN classification task on the Kin8nm dataset, averaged over 20 tries. Figure 3: Learning Curve ## 6 Conclusion In this paper, we consider the OPVI algorithm as a possible sampling method for the intractable posterior under the online setting. To reduce the variance, we include an increasing batch size scheme and analyze the influence of the choice of batch size on the performance of the algorithm. Furthermore, we develop a detailed analysis by understanding the algorithm as a Wasserstein gradient flow. Experiments show the proposed algorithm outperforms other naive online particle-based VI and online MCMC methods.
2309.12095
Bayesian sparsification for deep neural networks with Bayesian model reduction
Deep learning's immense capabilities are often constrained by the complexity of its models, leading to an increasing demand for effective sparsification techniques. Bayesian sparsification for deep learning emerges as a crucial approach, facilitating the design of models that are both computationally efficient and competitive in terms of performance across various deep learning applications. The state-of-the-art -- in Bayesian sparsification of deep neural networks -- combines structural shrinkage priors on model weights with an approximate inference scheme based on stochastic variational inference. However, model inversion of the full generative model is exceptionally computationally demanding, especially when compared to standard deep learning of point estimates. In this context, we advocate for the use of Bayesian model reduction (BMR) as a more efficient alternative for pruning of model weights. As a generalization of the Savage-Dickey ratio, BMR allows a post-hoc elimination of redundant model weights based on the posterior estimates under a straightforward (non-hierarchical) generative model. Our comparative study highlights the advantages of the BMR method relative to established approaches based on hierarchical horseshoe priors over model weights. We illustrate the potential of BMR across various deep learning architectures, from classical networks like LeNet to modern frameworks such as Vision Transformers and MLP-Mixers.
Dimitrije Marković, Karl J. Friston, Stefan J. Kiebel
2023-09-21T14:10:47Z
http://arxiv.org/abs/2309.12095v2
# Bayesian sparsification for deep neural networks with Bayesian model reduction ###### Abstract Deep learning's immense capabilities are often constrained by the complexity of its models, leading to an increasing demand for effective sparsification techniques. Bayesian sparsification for deep learning emerges as a crucial approach, facilitating the design of models that are both computationally efficient and competitive in terms of performance across various deep learning applications. The state-of-the-art - in Bayesian sparsification of deep neural networks - combines structural shrinkage priors on model weights with an approximate inference scheme based on stochastic variational inference. However, model inversion of the full generative model is exceptionally computationally demanding, especially when compared to standard deep learning of point estimates. In this context, we advocate for the use of Bayesian model reduction (BMR) as a more efficient alternative for pruning of model weights. As a generalization of the Savage-Dickey ratio, BMR allows a post-hoc elimination of redundant model weights based on the posterior estimates under a straightforward (non-hierarchical) generative model. Our comparative study highlights the advantages of the BMR method relative to established approaches based on hierarchical horseshoe priors over model weights. We illustrate the potential of BMR across various deep learning architectures, from classical networks like LeNet to modern frameworks such as Vision Transformers and MLP-Mixers. Bayesian model reduction, Stochastic variational inference, Deep neural networks ## 1 Introduction Bayesian deep learning integrates the principles of Bayesian methodology with the objectives of deep learning, facilitating the training of expansive parametric models tailored for classifying and generating intricate audio-visual data, including images, text, and speech (Wang and Yeung, 2020; Wilson, 2020; Wang and Yeung, 2016). Notably, the Bayesian approach frames the challenge of model optimization as an inference problem. This perspective is especially apt for scenarios necessitating decision-making under uncertainty (Murphy, 2022; Ghahramani, 2015). As a result, Bayesian formulations in deep learning have proven advantageous in various respects, offering enhancements in generalization (Wilson and Izmailov, 2020), accuracy, calibration (Izmailov et al., 2020; Luo and Kareem, 2020), and model compression (Louizos et al., 2017). These functional enhancements are intrinsically tied to judiciously chosen structural priors (Fortuin, 2022). The priors, integral to the probabilistic generative model, scaffold the architecture of the network, thereby reducing the data required for the inference of optimal parametric solutions. Recent studies have highlighted the efficacy of hierarchical shrinkage priors over model weights, a specific category of structural priors, in achieving highly-sparse network representations (Nalisnick et al., 2019; Louizos et al., 2017; Seto et al., 2021; Ghosh et al., 2018). Sparse representations not only reduce redundancy but also evince additional performance benefits. However, the adoption of shrinkage priors in all deep learning models presents a conundrum: the ballooning space of latent parameters and the diminishing scalability of prevailing approximate inference schemes (Snoek et al., 2015; Krishnan et al., 2019; Izmailov et al., 2020; Daxberger et al., 2021). 
In line with ongoing research on scalable Bayesian inference, we introduce an approximate inference scheme rooted in Bayesian model reduction (BMR). In essence, BMR extends the foundational principles of the Savage-Dickey Density Ratio method (Cameron, 2013). BMR is typically conceptualized as a combinatorial model comparison framework, enabling swift estimations of model evidence across an extensive array of models, that differ in their prior assumptions, to identify the most probable one. Originally conceived for model comparison within the dynamical causal modeling framework (Rosa et al., 2012; Friston and Penny, 2011), the scope of BMR has since broadened. Subsequent works expanded its methodology (Friston et al., 2016, 2017, 2018) and adapted it for structure learning (Smith et al., 2020). More recently, BMR has found applications in Bayesian nonlinear regression and classification tasks using Bayesian neural networks with variance backpropagation (Beckers et al., 2022; Haussmann et al., 2020). The BMR method is intimately connected with the spike-and-slab prior, a type of shrinkage prior (Mitchell and Beauchamp, 1988). Intriguingly, this specific structured shrinkage prior has parallels with Dropout regularization (Nalisnick et al., 2019). Such an association spurred researchers in Bayesian deep learning to formulate sparsification methods based on a different type of shrinkage prior--the hierarchical horseshoe prior (Piironen and Vehtari, 2017)--as a tool for automated depth determination. Subsequent studies suggested that merging horseshoe priors with structured variational approximations yields robust, highly sparse representations (Ghosh et al., 2018). The allure of continuous shrinkage priors (e.g., horseshoe priors) stems from the computational challenges associated with model inversion reliant on spike-and-slab priors (Nalisnick et al., 2019; Piironen and Vehtari, 2017). However, continuous shrinkage priors necessitate a considerably more expansive parameter space, to represent the approximate posterior, compared to optimizing neural networks using the traditional point estimate method. In this work, we reexamine the spike-and-slab prior within the framework of BMR-based sparsification, highlighting its efficiency. Notably, this approach circumvents the need to expand the approximate posterior beyond the conventional fully factorised mean-field ap proximation, making it more scalable than structured variational approximations (Ghosh et al., 2018). In this light, BMR can be seen as a layered stochastic and black-box variational inference technique, which we term _stochastic BMR_. We subject the stochastic BMR to rigorous validation across various image classification tasks and network architectures, including LeNet-5 (LeCun et al., 1989), Vision Transformers (Dosovitskiy et al., 2020), and MLP-Mixers (Tolstikhin et al., 2021). Central to our study is an empirical comparison of stochastic BMR with methods anchored in hierarchical horseshoe priors. Through multiple metrics - from Top-1 accuracy to expected calibration error and negative log-likelihood - we establish the competitive performance of stochastic BMR. We argue its computational efficiency, and remarkable sparsification rate, position BMR as an appealing choice, enhancing the scalability and proficiency of contemporary deep learning networks across diverse machine learning challenges, extending well beyond computer vision. 
We conclude with a discussion on potential avenues of future research that could further facilitate of BMR based pruning of deep neural networks. ## 2 Methods In this section, we first describe the methods and techniques used in our research to address the problem of efficient Bayesian sparsification of deep neural networks. We provide a detailed overview of our approach, starting with variational inference methods, followed by the formulation of the Bayesian model reduction (BMR), Bayesian neural networks with shrinkage priors, and the description of corresponding approximate posterior. ### Variational inference Given a joint density of latent variables, represented as \(\mathbf{z}=\left(z_{1},\ldots,z_{k}\right)\), and a dataset of \(n\) observations \(\mathbf{\mathcal{D}}=\left(y_{1},\ldots,y_{n}\right)\) we can express the joint density, that is, the generative model, as \[p\left(\mathbf{\mathcal{D}},\mathbf{z}\right)=p\left(\mathbf{z}\right)p\left(\mathbf{\mathcal{ D}}|\mathbf{z}\right).\] The posterior density is then obtained, following the Bayes rule, as \[p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\propto p\left(\mathbf{z}\right)p\left(\mathbf{ \mathcal{D}}|\mathbf{z}\right). \tag{1}\] For complex generative models, direct inference as described above becomes computationally prohibitive. To circumvent this, we approximate the exact posterior \(p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\), constraining it to a distribution \(q\left(z\right)\) that belongs to a named distribution family \(\mathcal{Q}\). We then seek \(q^{*}\left(z\right)\in\mathcal{Q}\), an approximate solution that minimizes the following Kullback-Leibler divergence (Blei et al., 2017) \[q^{*}\left(z\right)=\underset{q\in\mathcal{Q}}{\text{argmin}}D_{KL}\left(q \left(\mathbf{z}\right)||p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\right)=\underset{ q\in\mathcal{Q}}{\text{argmin}}F\left[q\right],\] where \(F\left[q\right]\) stands for the variational free energy (VFE), defined as \[F\left[q\right]=E_{q\left(\mathbf{z}\right)}\left[\ln q\left(\mathbf{z}\right)-\ln p \left(\mathbf{\mathcal{D}},\mathbf{z}\right)\right]\] VFE serves as an upper bound on the marginal log-likelihood \[F\left[q\right]=D_{KL}\left(q\left(\mathbf{z}\right)||p\left(\mathbf{z}|\mathbf{\mathcal{ D}}\right)\right)-\ln p\left(\mathbf{\mathcal{D}}\right)\geq-\ln p\left(\mathbf{ \mathcal{D}}\right)\] As KL-divergence is always greater or equal to zero, minimizing VFE brings the approximate solution as close as possible to the true posterior, without having to compute the exact posterior. The most straightforward way to obtain the approximate posterior \(q^{*}\left(\mathbf{z}\right)\), is to minimize the VFE along its negative gradient: \[\dot{\mathbf{\phi}}=-\nabla_{\mathbf{\phi}}F\left[q\right]\] where \(\mathbf{\phi}\) signifies the parameters of the approximate posterior \(q_{\mathbf{\phi}}\left(\mathbf{z}\right)=q\left(\mathbf{z}|\mathbf{\phi}\right)\). Thus, variational inference reframes the inference problem highlighted in eq. (1) as an optimization problem Beal (2003). ### Stochastic and black-box variational inference _Stochastic variational inference_ (SVI) improves the computational efficiency of gradient descent by approximating the variational free energy using a subset--\(\mathbf{\mathcal{K}}_{i}=\left(y_{s_{1}^{i}},\ldots,y_{s_{k}^{i}}\right);\,k\ll n\)--of the entire data set \(\mathbf{\mathcal{D}}\). This approach fosters a stochastic gradient descent (SGD) mechanism, capable of managing large datasets (Hoffman et al., 2013). 
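To make the minibatch estimate of the variational free energy concrete, the following toy sketch (ours, not the paper's) fits a Gaussian posterior over a single latent mean with JAX. For brevity it uses the reparameterization trick rather than the score-function estimator discussed below, but it illustrates the two ingredients of SVI: rescaling the likelihood term by \(n/k\) and re-sampling the minibatch at every gradient step.

```python
import jax
import jax.numpy as jnp

# Toy model: z ~ N(0, 1), y_i ~ N(z, 1); approximate posterior q(z) = N(mu, softplus(rho)^2).
def vfe_minibatch(phi, y_batch, n, key, num_mc=8):
    mu, rho = phi
    sd = jax.nn.softplus(rho)
    z = mu + sd * jax.random.normal(key, (num_mc,))           # reparameterized samples from q
    log_q = jax.scipy.stats.norm.logpdf(z, mu, sd)
    log_prior = jax.scipy.stats.norm.logpdf(z, 0.0, 1.0)
    log_lik = jax.scipy.stats.norm.logpdf(y_batch[None, :], z[:, None], 1.0).sum(-1)
    # F[q] ~ E_q[log q - log p(z)] - (n / k) * sum_{i in batch} E_q[log p(y_i | z)]
    return jnp.mean(log_q - log_prior - (n / y_batch.size) * log_lik)

vfe_grad = jax.jit(jax.grad(vfe_minibatch))

key = jax.random.PRNGKey(0)
y = 2.0 + jax.random.normal(key, (1000,))                     # synthetic data set
phi = (jnp.array(0.0), jnp.array(0.0))
for step in range(100):
    key, k1, k2 = jax.random.split(key, 3)
    idx = jax.random.choice(k1, 1000, (128,), replace=False)  # freshly re-sampled minibatch K_i
    grads = vfe_grad(phi, y[idx], 1000, k2)
    phi = tuple(p - 1e-2 * g for p, g in zip(phi, grads))
```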
Crucially, at every iteration step \(i\) of the SGD process, the subset \(\mathbf{\mathcal{K}}_{i}\) undergoes re-sampling. _Black-box Variational Inference_ (BBVI) facilitates the optimization of any (named or unnamed) posterior density \(q_{\mathbf{\phi}}\left(\mathbf{z}\right)\), through the integration of Monte Carlo estimates for variational gradients (Ranganath et al., 2014). This can be formulated as the following relation \[\nabla_{\mathbf{\phi}}F\left[q\right]\approx\nabla_{\mathbf{\phi}}\tilde{F}\left[q \right]=\frac{1}{S}\sum_{s=1}^{S}\nabla_{\mathbf{\phi}}\ln q_{\mathbf{\phi}}\left(\bm {z}\right)\left[\ln\frac{q_{\mathbf{\phi}}\left(\mathbf{z}\right)}{p\left(\mathbf{ \mathcal{D}},\mathbf{z}\right)}+1\right];\quad\mathbf{z}_{s}\sim q\left(\mathbf{z}|\mathbf{ \phi}\right) \tag{2}\] which is known as the REINFORCE estimator (Williams, 1992). To mitigate the variance inherent to Monte Carlo gradient estimations, we employ Rao-Blackwellization (Schulman et al., 2015), with an implementation sourced from NumPyro (Bingham et al., 2019). For optimizing the variational objective stochastically, we leverage the AdaBelief optimizer (Zhuang et al., 2020). As an adaptive algorithm, AdaBelief ensures swift convergence, robust generalization, and steady optimization. Notably, we use AdaBelief's implementation from the Optax package within the JAX ecosystem (Babuschkin et al., 2020). ### Bayesian model reduction Let us consider two generative processes for the data: a full model \[p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\propto p\left(\mathbf{\mathcal{D}}|\mathbf{z} \right)p\left(\mathbf{z}\right)\] and a reduced model 1 in which the original prior \(p\left(\mathbf{z}\right)\) is replaced with a more informative prior \(\tilde{p}\left(\mathbf{z}\right)=p\left(\mathbf{z}|\mathbf{\theta}\right)\) that depends on hyper-parameters \(\mathbf{\theta}\). This change leads to a different posterior Footnote 1: the reduction here implies applying constraints of any form to the prior to obtain a posterior with reduced entropy. \[\tilde{p}\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\propto p\left(\mathbf{\mathcal{D}}| \mathbf{z}\right)\tilde{p}\left(\mathbf{z}\right)\] Noting that as the following relation holds: \[1=\int\mathrm{d}\mathbf{z}\tilde{p}\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)=\int\mathrm{d }\mathbf{z}p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\frac{\tilde{p}\left(\mathbf{z}\right)p \left(\mathbf{\mathcal{D}}\right)}{p\left(\mathbf{z}\right)\tilde{p}\left(\mathbf{ \mathcal{D}}\right)},\] we can express the link between the models as: \[\begin{split}-\ln\tilde{p}\left(\mathbf{\mathcal{D}}\right)& =-\ln p\left(\mathbf{\mathcal{D}}\right)-\ln\int d\mathbf{z}p\left(\mathbf{z}| \mathbf{\mathcal{D}}\right)\frac{\tilde{p}\left(\mathbf{z}\right)}{p\left(\mathbf{z} \right)}\\ &\approx F\left(\mathbf{\phi}^{*}\right)-\ln\int d\mathbf{z}q_{\mathbf{\phi}^ {*}}\left(\mathbf{z}\right)\frac{\tilde{p}\left(\mathbf{z}\right)}{p\left(\mathbf{z} \right)}\end{split} \tag{3}\] where we assumed the approximate posterior for the full model corresponds to \(p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\approx q_{\mathbf{\phi}^{*}}\left(\mathbf{z}\right)\), and that \(-\ln p\left(\mathbf{\mathcal{D}}\right)\approx F\left(\mathbf{\phi}^{*}\right)\). From eq. 
(3) we obtain the free energy of the reduced model as \[-\ln\tilde{p}\left(\mathbf{\mathcal{D}}\right)\approx-\ln E_{q}\left[\frac{\tilde{p}\left(\mathbf{z}\right)}{p\left(\mathbf{z}\right)}\right]+F\left(\mathbf{\phi}^{*}\right)=-\Delta F\left(\mathbf{\theta}\right). \tag{4}\] where \(\Delta F\left(\mathbf{\theta}\right)\) denotes the change in the free energy when going from the full model to the reduced model, given hyper-parameters \(\mathbf{\theta}\). Note that for \(\Delta F\left(\mathbf{\theta}\right)>0\) the reduced model has a better variational free energy compared to the full model. Consequently, the reduced model offers a model with a greater marginal likelihood; i.e., a better explanation for the data and improved generalization capabilities. Heuristically, this can be understood as minimising model complexity, without sacrificing accuracy (because log evidence can be expressed as accuracy minus complexity, where complexity is the KL divergence between posterior and prior beliefs). This relationship is pivotal in formulating efficient pruning criteria, especially for extensive parametric models commonly employed in deep learning. ### Bayesian neural networks In a general (nonlinear) regression problem, we model the relationship between predictors \(\mathbf{X}=\left(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\right)\) and target variables \(\mathbf{Y}=\left(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\right)\) using a likelihood distribution from an exponential family as \[\mathbf{y}_{i}\sim p\left(\mathbf{y}|\mathbf{\mathcal{W}},\mathbf{x}_{i}\right)=h(\mathbf{y})\exp\left[\mathbf{\eta}\left(\mathbf{f}\left(\mathbf{\mathcal{W}},\mathbf{x}_{i}\right)\right)\cdot\mathbf{T}\left(\mathbf{y}\right)-A\left(\mathbf{f}\left(\mathbf{\mathcal{W}},\mathbf{x}_{i}\right)\right)\right]. \tag{5}\] The functions \(h(\cdot),\mathbf{\eta}(\cdot),\mathbf{T}(\cdot),A(\cdot)\) are known and selected depending on the task. For example, in a regression problem the likelihood will correspond to a multivariate normal distribution and in a classification problem to a categorical distribution. In this work, we will only consider a categorical likelihood, as it is the most suitable for image classification tasks. The mapping \(\mathbf{f}\left(\mathbf{\mathcal{W}},\mathbf{x}_{i}\right)\) represents a generic deep neural network of depth \(L\) defined as \[\mathbf{\mathcal{W}} =\left(\mathbf{W}_{1},\ldots,\mathbf{W}_{L}\right)\] \[\mathbf{h}_{i}^{0} =\mathbf{x}_{i}\] \[\mathbf{h}_{i}^{l} =\mathbf{g}\left(\mathbf{W}_{l}\cdot\left[\mathbf{h}_{i}^{l-1};1\right]\right)\] \[\mathbf{f}\left(\mathbf{\mathcal{W}},\mathbf{x}_{i}\right) =\mathbf{W}_{L}\cdot\left[\mathbf{h}_{i}^{L-1};1\right]\] A probabilistic formulation of the deep learning task, that is, inferring model weights, introduces implicit bias to the parameters \(\boldsymbol{\mathcal{W}}\) of an artificial neural network in the form of a prior distribution \(p\left(\boldsymbol{\mathcal{W}}\right)\). Hence, parameter estimation is cast as an inference problem where \[p\left(\boldsymbol{\mathcal{W}}|\boldsymbol{\mathcal{D}}\right)\propto p\left(\boldsymbol{\mathcal{W}}\right)\prod_{i=1}^{n}p\left(\boldsymbol{y}_{i}|\boldsymbol{\mathcal{W}},\boldsymbol{x}_{i}\right)\] The choice of the prior distribution is crucial for optimal task performance, and a prior assumption of structural sparsity is essential for inferring sparse representations of over-parameterised models, such as deep neural networks.
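For concreteness, the forward mapping defined above translates directly into code. The following is a minimal sketch (not taken from the paper) in JAX, assuming a single input vector \(\mathbf{x}_{i}\) and the Swish nonlinearity used for the MLP experiments in Appendix A; the bias is handled through the \([\mathbf{h};1]\) augmentation exactly as in the equations above.

```python
import jax.numpy as jnp
from jax.nn import swish, log_softmax

def forward(weights, x):
    # weights = (W_1, ..., W_L); each W_l has shape (out_l, in_l + 1)
    h = x
    for W in weights[:-1]:
        h = swish(W @ jnp.append(h, 1.0))    # h^l = g(W_l . [h^{l-1}; 1])
    return weights[-1] @ jnp.append(h, 1.0)  # f(W, x) = W_L . [h^{L-1}; 1]

def categorical_loglik(weights, x, y):
    # log p(y | W, x) for the categorical likelihood used in the classification tasks
    return log_softmax(forward(weights, x))[y]
```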
### Bayesian neural networks with shrinkage priors Shrinkage priors instantiate a prior belief about the sparse structure of model parameters. Here, we will investigate two well-established forms of shrinkage priors for network weight parameters, a canonical spike-and-slab prior (George and McCulloch, 1993; Mitchell and Beauchamp, 1988) defined as \[w_{ijl} \sim\mathcal{N}\left(0,\lambda_{ijl}^{2}\gamma_{0}^{2}\right)\] \[\lambda_{ijl} \sim\text{Bernoulli}\left(\pi_{l}\right)\] \[\pi_{l} \sim\mathcal{B}e\left(\alpha_{0},\beta_{0}\right)\] and a regularised-horseshoe prior (Piironen and Vehtari, 2017) \[w_{ijl} \sim\mathcal{N}\left(0,\gamma_{il}^{2}\right)\] \[\gamma_{il}^{2} =\frac{c_{l}^{2}v_{l}^{2}\tau_{il}^{2}}{c_{l}^{2}+\tau_{il}^{2}v_ {l}^{2}} \tag{6}\] \[c_{l}^{-2} \sim\Gamma\left(2,6\right)\] \[\tau_{il} \sim\mathcal{C}^{+}(0,1)\] \[v_{l} \sim\mathcal{C}^{+}(0,\tau_{0})\] where \(i\in\{1,\ldots,K_{l}\}\), \(j\in[1,\ldots,K_{l-1}+1]\), and where \(w_{ijl}\) denotes \(ij\)th element of the weight matrix at depth \(l\). The symbols \(\mathcal{B}e\), and \(\mathcal{C}^{+}\) denote a Beta distribution and a half-Cauchy distribution, respectively. Importantly, the spike-and-slab prior relates to dropout regularisation, which is commonly introduced as a sparsification method in deep learning (Nalisnick et al., 2019; Mobiny et al., 2021). This type of prior is considered the gold standard in shrinkage priors and has been used in many recent applications of Bayesian sparsification on neuronal networks (Bai et al., 2020; Hubin and Storvik, 2023; Jantre et al., 2021; Sun et al., 2022; Ke and Fan, 2022) showing excellent sparsification rates. However, the inversion of the resulting hierarchical model is challenging and requires carefully constructed posterior approximations. Moreover, their dependence on discrete random variables renders them unsuitable for Markov-Chain Monte Carlo-based sampling schemes. As a result, researchers often use continuous formulations of the shrinkage-prior, with the horseshoe prior being a notable example. In contexts that involve sparse learning with scant data, the regularised horseshoe prior has emerged as one of the preferred choices within shrinkage prior families (Ghosh et al., 2019). A distinct advantage of this prior is its ability to define both the magnitude of regularisation for prominent coefficients and convey information about sparsity. It is worth noting a dependency highlighted in Ghosh et al. (2018): for \(v_{l}\tau_{il}\ll 1\) the equation simplifies to \(\gamma_{il}\approx v_{l}\tau_{il}\) recovering the original horseshoe prior. In contrast, for \(v_{l}\tau_{il}\gg 1\), the equation becomes \(\gamma_{il}^{2}\approx c_{l}^{2}\). In this latter scenario, the prior over the weights is defined as \(w_{ijl}\sim\mathcal{N}\left(0,c_{l}^{2}\right)_{i}\), with \(c_{l}\) serving as a weight decay hyper-parameter for layer \(l\). ### Approximate posterior for Bayesian neural networks To benchmark stochastic BMR, we explore two forms of prior distribution \(p\left(\boldsymbol{\mathcal{W}}\right)\)--a flat and a hierarchical structure--in conjunction with a fully factorised mean-field approximation. Firstly, let us consider the flat prior over model weights, represented in a non-centered parameterization: \[\begin{split} c_{l}^{-2}&\sim\Gamma(2,2)\\ \hat{w}_{ijl}&\sim\mathcal{N}\left(0,1\right)\\ w_{ijl}&=\gamma_{0}c_{l}\hat{w}_{ijl}\end{split} \tag{7}\] where we set \(\gamma_{0}=0.1\). 
Note that in the flat prior we incorporate a layer specific scale parameter, which we found to stabilise variational inference. Based on this, we describe a fully factorised approximate posterior as a composite of Normal and Log-Normal distributed random variables. Hence, \[\begin{split} q\left(\boldsymbol{\hat{\mathcal{W}}},\boldsymbol {c}\right)&=\prod_{l}q\left(c_{l}^{-2}\right)\prod_{i}\prod_{j}q \left(\hat{w}_{ijl}\right)\\ q\left(\hat{w}_{ijl}\right)&=\mathcal{N}\left( \mu_{ijl},\sigma_{ijl}^{2}\right)\\ q\left(c_{l}^{-2}\right)&=\mathcal{L}\mathcal{N} \left(\mu_{c,l},\sigma_{c,l}^{2}\right).\end{split} \tag{8}\] When inverting a hierarchical generative model over weights of artificial neural network, we exclusively apply stochastic black-box variational inference to the model variant with the regularised horseshoe prior. This choice is motivated by its documented superiority over the spike-and-slab prior, as established in Ghosh et al. (2018). We express the hierarchical prior in the non-centered parameterization as: \[a_{il},b_{il} \sim\Gamma\left(\frac{1}{2},1\right)\] \[\hat{a}_{l},\hat{b}_{l} \sim\Gamma\left(\frac{1}{2},1\right)\] \[\tau_{il} =\sqrt{\frac{a_{il}}{b_{il}}}\] \[v_{l} =\tau_{0}\sqrt{\frac{\hat{a}_{l}}{\hat{b}_{l}}}\] \[\hat{w}_{ijl} \sim\mathcal{N}\left(0,1\right)\] \[w_{ijl} =\gamma_{il}\hat{w}_{ijl}\] Note that the expressions above involve a reparameterization of Half-Cauchy distributed random variables as the square-root of the quotient of two Gamma distributed random variables, a strategy drawn from Wand et al. (2011) (see Appendix B for additional details). Such a reparameterization of the Half-Cauchy ensures capturing of fat-tails in the posterior, even when leveraging a fully-factorised mean-field posterior approximation, as referenced in Ghosh et al. (2018). For the fully-factorised mean-field approximation, the approximate posterior is portrayed as a composite of Normal and Log-Normal distributed random variables, expressed as: \[q\left(\hat{\mathbf{\mathcal{W}}},\mathbf{a},\mathbf{b},\mathbf{\hat{a}},\mathbf{ \hat{b}},\mathbf{c}\right) =\prod_{l}q\left(c_{l}^{-2}\right)q\left(\hat{a}_{l}\right)q \left(\hat{b}_{l}\right)\prod_{i}q\left(a_{il}\right)q\left(b_{il}\right)\prod _{j}q\left(\hat{w}_{ijl}\right)\] \[q\left(c_{l}\right) =\mathcal{LN}\left(\mu_{c,l},\sigma_{c,l}^{2}\right)\] \[q\left(\hat{a}_{l}\right) =\mathcal{LN}\left(\hat{\mu}_{a,l},\hat{\sigma}_{a,l}^{2}\right)\] \[q\left(\hat{b}_{l}\right) =\mathcal{LN}\left(\hat{\mu}_{b,l},\hat{\sigma}_{b,l}^{2}\right)\] \[q\left(a_{il}\right) =\mathcal{LN}\left(\mu_{a,il},\sigma_{a,il}^{2}\right)\] \[q\left(b_{il}\right) =\mathcal{LN}\left(\mu_{b,il},\sigma_{b,il}^{2}\right)\] \[q\left(\hat{w}_{ijl}\right) =\mathcal{N}\left(\mu_{w,ijl},\sigma_{w,ijl}^{2}\right)\] ### Application of stochastic BMR to Bayesian neural networks To apply BMR to Bayesian neural networks, we commence by estimating an approximate posterior for the flat model, as detailed in eq. (7). To retain high computational efficiency, we pair BMR solely with the fully factorised approximate posterior, as presented in eq. (8). While it is feasible to use this method alongside the structured posterior (Ghosh et al., 2018), it requires considerably more computationally intensive estimations of the reduced free energy. As shown below, we obtain satisfactory results with a fully factorised posterior. Therefore, we defer the exploration of BMR with a structured posterior to future endeavours. 
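The flat model of eq. (7) and its mean-field inversion can be written compactly in NumPyro. The sketch below is an illustration under our reading of eq. (7), not the released implementation: for brevity it uses a single hidden layer, an AutoNormal mean-field guide, a standard Adam optimizer in place of the AdaBelief/REINFORCE setup described above, and assumes a shape-rate parameterization of \(\Gamma(2,2)\).

```python
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoNormal

def flat_model(x, y=None, hidden=64, n_classes=10, gamma0=0.1):
    shapes = [(hidden, x.shape[-1] + 1), (n_classes, hidden + 1)]
    h = x
    for l, shape in enumerate(shapes):
        # eq. (7): c_l^{-2} ~ Gamma(2, 2), w_hat ~ N(0, 1), w = gamma0 * c_l * w_hat
        c_inv2 = numpyro.sample(f"c_inv2_{l}", dist.Gamma(2.0, 2.0))
        w_hat = numpyro.sample(f"w_hat_{l}", dist.Normal(0.0, 1.0).expand(shape).to_event(2))
        w = gamma0 * c_inv2 ** -0.5 * w_hat
        h = jnp.concatenate([h, jnp.ones(h.shape[:-1] + (1,))], axis=-1) @ w.T
        if l < len(shapes) - 1:
            h = jax.nn.swish(h)
    with numpyro.plate("data", x.shape[0]):
        numpyro.sample("y", dist.Categorical(logits=h), obs=y)

guide = AutoNormal(flat_model)  # fully factorised mean-field posterior
svi = SVI(flat_model, guide, numpyro.optim.Adam(5e-3), Trace_ELBO())
# svi_result = svi.run(jax.random.PRNGKey(0), 2_000, x_batch, y_batch)  # x_batch, y_batch: your data
```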
Given a fully factorised approximate posterior, we can determine the change in variational free energy, \(\Delta F\)--after substituting the prior \(\mathcal{N}\left(0,1\right)\) with \(\mathcal{N}\left(0,\theta_{ijl}^{2}\right)\) for the weight \(\hat{w}_{ijl}\)--as: \[\Delta F\left(\theta_{ijl}\right) =-\frac{1}{2}\ln\rho_{ijl}^{2}-\frac{1}{2}\frac{\mu_{ijl}^{2}}{ \sigma_{ijl}^{2}}\left(1-\frac{\theta_{ijl}^{2}}{\rho_{ijl}^{2}}\right)\] \[\rho_{ijl}^{2} =\theta_{ijl}^{2}+\sigma_{ijl}^{2}-\theta_{ijl}^{2}\sigma_{ijl}^{2}\] For the second hierarchical level of the approximate posterior, we aim to minimize the following form for the variational free energy: \[F=\sum_{l=1}^{L}E_{q\left(\boldsymbol{\theta}_{l}\right)}\left[-\sum_{i,j} \Delta F(\theta_{ijl})+\ln\frac{q\left(\boldsymbol{\theta}_{l}\right)}{p\left( \boldsymbol{\theta}_{l}\right)}\right] \tag{9}\] This minimization is done with respect to \(q\left(\boldsymbol{\Theta}\right)=\prod_{l}q\left(\boldsymbol{\theta}_{l}\right)\), the approximate posterior over hyper-parameters. Note the application of eq.4 in substituting the marginal log-likelihood with the change in the variational free energy. For the spike-and-slab prior we can write the following relation: \[\boldsymbol{\theta}_{l} =\left[\pi_{l},\lambda_{ijl}\right]\text{ for }i\in\left\{1,\ldots,K_{l}\right\}, \text{ and }j\in\left\{1,\ldots,K_{l-1}+1\right\}\right]\] \[\theta_{ijl} =\lambda_{ijl}\] Consequently, the approximate posterior at the second level of the hierarchy can be approximated as: \[q\left(\boldsymbol{\Theta}\right) =\prod_{l}q\left(\pi_{l}\right)\prod_{ij}q\left(\lambda_{ijl}\right)\] \[q\left(\lambda_{ijl}\right) =q_{ijl}^{\lambda_{ijl}}\left(1-q_{ijl}\right)^{1-\lambda_{ijl}}\] \[q\left(\pi_{l}\right) =\mathcal{B}\left(\alpha_{l},\beta_{l}\right)\] The iterative update to obtain the minimum of the simplified variational free energy (eq.9) is then: \[q_{ijl}^{k+1} =\frac{1}{1+e^{-\left[\zeta_{l}^{k}-\Delta F\left(\lambda_{ijl}= 0\right)\right]}}\] \[\zeta_{l}^{k} =\psi(\alpha_{l}^{k})-\psi(\beta_{l}^{k})\] \[\alpha_{l}^{k+1} =\sum_{i,j}q_{ijl}^{k+1}+\alpha_{0}\] \[\beta_{l}^{k+1} =\sum_{i,j}\left(1-q_{ijl}^{k+1}\right)+\beta_{0}\] Here, \(\alpha_{l}^{0}=\alpha_{0}\), \(\beta_{l}^{0}=\beta_{0}\), \(\Delta F\left(\lambda_{ijl}=0\right)=-\frac{1}{2}\left[\ln\sigma_{ijl}^{2}+ \frac{\mu_{ijl}^{2}}{\sigma_{ijl}^{2}}\right]\), and \(\psi\left(\cdot\right)\) refers to the digamma function. The efficiency of this inference scheme is remarkable, typically achieving convergence after a few iterations. In practice, we cap the maximum number of iterations at \(k_{max}=4\). Finally, we use the following pruning heuristics to eliminate model weights and sparsify network structure \[\text{if }q_{ijl}^{k_{max}}<\frac{1}{2}\text{, set }\hat{w}_{ijl}=0.\] To achieve the high sparsification rate presented in the next section, we adopt an iterative optimisation and pruning approach proposed in Beckers et al. (2022). We perform weight pruning at the beginning of each epoch (except the first one), and further optimisation for 500 iterations, completing one epoch. In total, we apply iterative pruning and optimisation for fifty epochs in all examples below. The complete implementation of stochastic BMR is available at an online repository github.com/dimarkov/bmr4pml with notebooks and scripts necessary to recreate all result figures. 
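The spike-and-slab update rules above reduce to a few lines of NumPy. The sketch below is a per-layer illustration of the iterative BMR step, taking the mean-field posterior means and variances of the standardized weights \(\hat{w}_{ijl}\) as input; the default values of \(\alpha_{0}\) and \(\beta_{0}\) are placeholders, since the text does not fix them.

```python
import numpy as np
from scipy.special import digamma, expit

def bmr_prune_layer(mu, sigma2, alpha0=1.0, beta0=1.0, k_max=4):
    """Iterative BMR update for one layer; mu, sigma2 are posterior means/variances of w_hat."""
    # Delta F for replacing the N(0, 1) prior of a weight with a spike at zero (theta -> 0)
    dF0 = -0.5 * (np.log(sigma2) + mu**2 / sigma2)
    alpha, beta = alpha0, beta0
    for _ in range(k_max):
        zeta = digamma(alpha) - digamma(beta)
        q = expit(zeta - dF0)        # posterior inclusion probabilities q_ijl
        alpha = q.sum() + alpha0
        beta = (1.0 - q).sum() + beta0
    keep = q >= 0.5                  # pruning heuristic: set w_hat_ijl = 0 if q < 1/2
    return q, keep

# example: prune a 400 x 401 weight matrix given its mean-field posterior statistics
mu = np.random.randn(400, 401) * 0.05
sigma2 = np.full_like(mu, 0.01)
q, keep = bmr_prune_layer(mu, sigma2)
print("fraction pruned:", 1.0 - keep.mean())
```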
## 3 Results In this section, we present the outcomes of our experiments and analyses conducted to evaluate the performance and efficiency of the stochastic Bayesian model reduction in the context of Bayesian sparsification of deep neural networks. Our results are structured to provide insights into the capabilities and advantages of our approach. ### Performance Comparison The training regimen used a batch size of \(N_{B}=128\) and the AdaBelief algorithm with learning rate set to \(\alpha=10^{-3}\) in the case of the MAP estimate, \(\alpha=5\cdot 10^{-3}\) in the case of the mean-field methods, and \(\alpha=10^{-2}\) in the case of stochastic BMR (the exponential decay rates were kept at default values \(\beta_{1}=0.9\), and \(\beta_{2}=0.999\)). Figure 1: Classification performance comparison on the FashionMNIST dataset for different neuronal architectures and approximate inference schemes. Figure 1 charts the epoch-wise evolution of ACC, ECE, and NLL for each architecture, under four distinct approximate inference strategies: (i) Maximum a posteriori (MAP) estimate for the flat generative model, akin to traditional deep learning point estimates coupled with weight decay. (ii) A fully factorised posterior approximation for the flat generative model (Flat-FF). (iii) A fully factorised posterior approximation of the hierarchical generative model with a regularised horseshoe prior (Tiered-FF). (iv) The stochastic BMR algorithm augmented with a spike-and-slab prior (BMR-S&S). Each epoch is defined by 500 stochastic gradient steps, with each step randomly drawing \(N_{B}\) data instances from the training pool. Interestingly, all approximate inference methods demonstrate comparable top-1 accuracy scores. However, the stochastic BMR method, followed by the Tiered-FF approximation (with a single exception), consistently resulted in the lowest ECE and NLL scores across the majority of DNN architectures and datasets (see Figure S1 for the CIFAR10 dataset and Figure S2 for the CIFAR100 dataset). The implicit reduction in model complexity suggests that--as anticipated--stochastic BMR furnishes a model of the data that has the greatest evidence or marginal likelihood (not shown). In this setting, the NLL of the test data can be regarded as a proxy for the (negative log) marginal likelihood. ### Learning of sparse representations Figure 2 depicts the fraction of pruned model parameters for different DNN architectures and datasets. The substantive sparsity achieved by the stochastic BMR algorithm is noteworthy. This sparsity is consistent across datasets and architectures, with the exception of the LeNet-5 structure when used for the FashionMNIST dataset, because by default the LeNet-5 architecture is already sparse and contains a relatively low number of model weights (for other data sets we substantially increased the dimensionality of hidden layers, as detailed in Appendix A). To delve deeper into the pruning behavior across varying network depths, Figure 3 presents a per-layer cumulative distribution function (CDF) for model parameters, highlighting the proportion of parameters whose absolute mean posterior estimate falls below a given threshold. When juxtaposing the BMR CDF trajectories with those obtained from the Tiered-FF method (where sparsification is induced by the regularised horseshoe prior), it is evident that BMR furnishes more pronounced sparsification.
This distinction is crucial, as the stochastic BMR not only matches or surpasses the performance of the Tiered-FF algorithm but also averages a 30% faster stochastic gradient descent. Figure 2: Total fraction of pruned model parameters obtained with the stochastic BMR algorithm across different DNN architectures and datasets. Figure 3: Cumulative Distribution Function (CDF) of absolute posterior parameter expectations at different layers of the MLP (top row) and LeNet architectures (bottom row). The y-axis represents the fraction of parameters with values less than or equal to the value on the x-axis. To illustrate the structural learning variations among algorithms, Figure 4 presents heatmaps of posterior expectations obtained using the four different methods. Figure 4: Posterior expectations (color coded) over model parameters obtained using different approximate inference schemes at the first layer of (a) the MLP architecture, and (b) the LeNet architecture. Figure 4 reveals subtle differences between the inferred representations of the MLP and LeNet-5 architectures' input layers trained on the FashionMNIST dataset. Divergent compression rates among the algorithms indicate inherent trade-offs between efficiency and performance. It is evident that the stochastic BMR strikes a balance between compression advantages and performance, as it is less prone to over-pruning compared to the Tiered-FF method (two features of the LeNet-5 input layer are effectively removed - see Figure 4(b)). ## 4 Discussion In this study, we presented a novel algorithm--stochastic Bayesian model reduction--designed for efficient Bayesian sparsification of deep neural networks. Our proposed method seamlessly integrates stochastic and black-box variational inference with Bayesian model reduction (BMR), a generalisation of the Savage-Dickey ratio. Through the stochastic BMR strategy, we enable iterative pruning of model parameters, relying on posterior estimates acquired from a straightforward variational mean-field approximation to the generative model. This model is characterized by Gaussian priors over individual parameters and layer-specific scale parameters. The result is an efficient pruning algorithm for which the computational demand of the pruning step is negligible compared to the direct stochastic black-box optimization of the full hierarchical model. Over recent years, the Bayesian sparsification of neural networks has gained momentum, primarily driven by the spike-and-slab prior (Bai et al., 2020; Hubin and Storvik, 2023; Jantre et al., 2021; Sun et al., 2022; Ke and Fan, 2022). These works have showcased the remarkable sparsification capabilities inherent to such shrinkage priors. Nevertheless, when juxtaposed with the stochastic BMR algorithm, they often necessitate supplementary assumptions related to the approximate posterior. These assumptions, in turn, lead to a more computation-intensive model inversion. Moreover, in contrast to related approaches, the versatility of stochastic BMR allows its integration with more efficient optimization techniques, like variational Laplace (Daxberger et al., 2021) and proximal-gradient methods (Khan et al., 2018), provided the resulting approximate posterior in the form of a normal distribution is apt for the application at hand.
The insights obtained here pave the way for a deeper exploration of the potential applications of Bayesian model reduction across a wider array of architectures and tasks in probabilistic machine learning, such as audiovisual and natural language processing tasks. A more detailed fine tuning of the core dynamics of these algorithms, in terms of iterations steps, learning rates, and other free-parameters, might be the key to unveiling even more proficient Bayesian deep learning methodologies in the near future. We thank Conor Heins, Magnus Koudahl, and Beren Millidge for valuable discussions during the initial stages of this work. SK acknowledges support by DFG TRR 265/1 (Project ID 402170461, B09) and Germany's Excellence Strategy--EXC 2050/1 (Project ID 390696704)--Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI) of Technische Universitat Dresden. ## Appendix A For the simple multi-layer perceptron, we configure the architecture with five hidden layers, each comprising 400 neurons. The chosen activation function is the Swish activation function (Ramachandran et al., 2017). For the LeNet-5 architecture, we adhere to the original design, which includes three convolutional layers, average pooling following the initial two convolutional layers, and two linear layers. The activation function used is the hyperbolic tangent. The convolutional layers employ a kernel size of \(5\times 5\), while the average pooling uses a window of shape \(2\times 2\). For the FashionMNIST dataset, the feature counts of the convolutional layers are designated as (6, 16, 120), and the two linear layers have neuron counts of (84, 10). However, for the CIFAR10 and CIFAR100 datasets, we elevate the feature counts of the convolutional layers to (18, 48, 360), with linear layer neuron counts set to (256, 10) for CIFAR10 and (256, 100) for CIFAR100. For the MlpMixer architecture we employ six layers and a patch resolution of \(4\times 4\). Across all datasets, we maintain constant values for hidden size (\(C\)), sequence length (\(S\)), MLP channel dimension (\(D_{C}\)), and MLP token dimension (\(D_{S}\)); specifically \(C=256\), \(S=64\), \(D_{C}=512\) and \(D_{S}=512\) for all datasets. For the VisionTransformer architecture, we adopt a slightly modified version of the ViT-Tiny setup: we use six layers, eight heads for each attention block, an embedding dimension of 256, and a hidden dimension of 512. The patch resolution of \(4\times 4\) is consistent with the MlpMixer. In both MlpMixer and VisionTransformer architectures, the GeLU activation function is used (Hendrycks and Gimpel, 2016). For training using the maximum a posteriori estimate (Flat-MAP), dropout regularization, with dropout probability set to 0.2, is applied to all linear layers across all architectures, with the exception of the MlpMixer. ## Appendix B In the centered parameterization of a generative model, Stochastic Variational Inference (SVI) with a fully factorized posterior yields a non-sparse solution, undermining the objective of employing shrinkage priors (Ghosh et al., 2019). Typically, this limitation is addressed by adopting the non-centered parameterization of the prior. Consider the unique property of the half-Cauchy distribution: given \(x\sim C^{+}(0,1)\), and \(z=bx\) the resulting probability distribution for \(z\) is \(z\sim C^{+}(0,b)\). 
Therefore, the non-centered parameterization is formulated as \[\hat{\tau}_{i}^{l} \sim\mathcal{C}^{+}(0,1)\] \[\hat{\lambda}_{ij}^{l} \sim\mathcal{C}^{+}(0,1)\] \[\hat{w}_{ij}^{l} \sim\mathcal{N}\left(0,1\right)\] \[\left[\gamma_{ij}^{l}\right]^{2} =\frac{\left[c^{l}\tau_{0}^{l}\hat{\tau}_{i}^{l}\hat{\lambda}_{ij}^{l}\right]^{2}}{\left[c^{l}\right]^{2}+\left[\tau_{0}^{l}\hat{\tau}_{i}^{l}\hat{\lambda}_{ij}^{l}\right]^{2}}\] \[w_{ij}^{l} =\gamma_{ij}^{l}\hat{w}_{ij}^{l}\] However, while the half-Cauchy distribution is frequently chosen for sampling-based inference, it poses challenges in variational inference (Piironen and Vehtari, 2017). Firstly, exponential family-based approximate posteriors (e.g., Gamma or log-Normal distributions) inadequately capture the half-Cauchy distribution's fat tails. Secondly, using a Cauchy approximating family for the posterior results in high variance gradients during stochastic variational inference (Ghosh et al., 2019). Hence, in the context of stochastic variational inference, the half-Cauchy distribution undergoes a reparameterization, as described in (Ghosh et al., 2018): \[x\sim\mathcal{C}^{+}(0,b)\equiv x=\sqrt{\frac{1}{u}},\quad u\sim\Gamma\left(\frac{1}{2},\frac{1}{v}\right),\quad v\sim\Gamma\left(\frac{1}{2},b^{2}\right)\] or, when represented in the non-centered parameterization: \[x=b\sqrt{\frac{v}{u}},\quad u\sim\Gamma\left(\frac{1}{2},1\right),\quad v\sim\Gamma\left(\frac{1}{2},1\right) \tag{10}\]
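As a quick numerical sanity check (ours, not the paper's), eq. (10) can be verified directly: the square root of the ratio of two \(\Gamma(1/2,1)\) variables, scaled by \(b\), reproduces the quantiles of \(\mathcal{C}^{+}(0,b)\).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
b = 2.5                                              # target scale of C+(0, b)
u = rng.gamma(shape=0.5, scale=1.0, size=200_000)    # Gamma(1/2, 1)
v = rng.gamma(shape=0.5, scale=1.0, size=200_000)
x = b * np.sqrt(v / u)                               # eq. (10)

qs = [0.25, 0.5, 0.75, 0.9]
print(np.quantile(x, qs))                            # empirical quantiles of the reparameterized draw
print(stats.halfcauchy(scale=b).ppf(qs))             # quantiles of C+(0, b) -- should closely match
```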
2309.14382
Agree To Disagree
How frequently do individuals thoroughly review terms and conditions before proceeding to register for a service, install software, or access a website? The majority of internet users do not engage in this practice. This trend is not surprising, given that terms and conditions typically consist of lengthy documents replete with intricate legal terminology and convoluted sentences. In this paper, we introduce a Machine Learning-powered approach designed to automatically parse and summarize critical information in a user-friendly manner. This technology focuses on distilling the pertinent details that users should contemplate before committing to an agreement.
Abhinav Raghuvanshi, Siddhesh Pawar, Anirudh Mittal
2023-09-24T18:06:45Z
http://arxiv.org/abs/2309.14382v1
# Agree To Disagree ###### Abstract How often do you read the terms and conditions before actually signing up for something, installing a software or entering a website? Most internet users don't. The reason behind that shouldn't be very hard to find. The Terms and conditions are usually many pages long filled with legal jargon and complex sentences. Through this paper we present a Machine Learning based method that will read the document for you and summarize the important points in simple language that actually matter to you and which you might want to consider before signing up. Machine Learning, Natural Language Processing, legal, AI, BERT, Text Summarizer, Web extension, Web Scraping ## I Introduction In the ever growing era of networking and proportionately swift upcoming of new websites everyday has led to creation of newer methods for providing better user experience on a website that might involve usage of user data. A website Privacy policy is a statement or a legal document that discloses some or all of the ways a party (the website) gathers, uses, discloses and manages a customer or client's data. Every person has the right to privacy which means he is the owner of his own data and can choose what they wish to share and with whom. As per the Law every organization that has control over its users data is obligated to maintain standards for complete data privacy and security, but everyday users do not posses a modus operandi to know what kind of data collection does the website do in the background or what does it exactly do with the data. Even if one out of a many users do try to navigate through to the 'Terms of Service' Page, most people do not posses the technical or legal knowledge to understand what does the long and detailed policies mean. Ideally every policy should present data in a human understandable way and should clearly mention how they plan to collect,store and use it but all the details seem to be shadowed by the complicated language websites use to unknowingly make user take the easier way out by not bothering what lies between the lines of the text. To assist the user in making informed and smarter choices by not only evaluating the text in the policies but also showing meaningful summaries of the policies, we aim to provide a smoother and safer web surfing experience for people. We perform a complete comprehensive analysis of each part of the policy mentioned on the website and the algorithms could be modified easily for 'Cookie Policies', 'Terms of Service' and/or 'Privacy Policies'. We not only analyse the text, we have also implemented a scoring algorithm that calculates a Score for each Policy and categorises the policy based on the score into classes like 'Good' or 'Bad' etc. Our classes are defined and we process each paragraph of the policy starting with prepossessing the scraped data for removal of unwanted characters and summarization. We extract out the meaningful information present in each paragraph in stages and then finally classify at the last stage. Our classes are predefined and on top of that the final scoring mechanism work. At the implementation side of things, it becomes crucial to address,advise and aware the user of the results that the backend algorithm generate. Our chrome extension works as the first contact of user and they interact solely with the extension which intern interacts with the hosted back-end algorithm through API calls. The Chrome Extension also detects the pages which have their policies referred to different links. 
It scrapes data off of those pages and sends it at the required API address. It is at the top most priority to interfere with the users work at the slightest and to complete the whole process of evaluation maximizing the throughput. In the upcoming sections we discuss the depth of the algorithms. Namely Section [2] discusses some related work done by others in the domain, Section [3] covers the implementation of the Chrome Extension, Section [4] covers the hosting techniques and Restful APIs that were setup, Section [5] covers the detailed Language Processing throught Machine Learning, Section [6] is about the details of the results and analysis that we did, Section [7] talks about the future work and Section [8] concludes the paper. ## II Related Work At this time there are no significant solution to this _people's problem_ in the market. The most successful and accurate of all is a website TosDr [Terms of Service; Didn't Read] which manually generates safety ratings for all major websites, i.e. they have people who review the policies and score them and they use the scores to grade the website, but a major limitation is that there can be human errors anywhere along the way and the finitely available human resource could cater for only so many websites to be reviewed by them. Our work majorly draws inspiration from them but we tend to make the Machine Learning Algorithms that could work with any general policy on the web. ### _Polisis_ An automated framework for privacy policy analysis (Polisis). They've enabled scalable, dynamic, and multi-dimensional queries on natural language privacy policies. At the core of Polisis is a privacy-centric language model, built with 130K privacy policies, and a novel hierarchy of neural-network classifiers that accounts for both high-level aspects and fine-grained details of privacy practices. Mining based application draws information from the fixed corpus of policies, incompetent with live data. Polisis' modularity and utility is demonstrated with two applications supporting structured and free-form querying. The structured querying application is the automated assignment of privacy icons from privacy policies. The second application, PriBot, is the first freeform question-answering system for privacy policies. ### _PrivacyCheck_ The two previous versions of PrivacyCheck, another chrome extension based product, incorporated the use of machine learning models to automatically answer 20 questions about the content of any given privacy policy, ten questions rooted in User Control and another ten in the European General Data Protection Regulation (GDPR). One setback was the fixed corpus of question against which the policies were evaluated, which was a certain loss of generality. The first two versions were used by about one thousand actual users over the past six years, since the first release in May 2015. In PrivacyCheck v3, they provide the capability to follow privacy policies and notify the user when policies change. Their work is the first to provide a bird's-eye view of privacy policies to which the user has agreed. ## III Chrome Extension Aimed at providing user with a easy and smooth user experience, but the key feature that we present is the automatic detection and scraping of data from the websites for analysis. To keep the notifications to the user minimum, we notify the user only when the back-end processing is done. 
Along with the final results, the extension shows a warning whenever user is on a page that is making them 'agree' to something, it can be a policy, which most people chose to ignore, while signing up or it can be any website that is designed to forcefully and subtly trick users into automatically agreeing to their terms by continuing on the website, some websites go to the extremes of not letting the user access the content unless they consent to their conditions. But most importantly between all these cases, what is common is that almost none of the users actually check out the policy by navigating to the required page. ### _Scraping Data using the Extension_ The extension's scope for processing pages is the currently active page. As mentioned above the extension gets activated only if the website tries to make user agree to something and skips the gore details by providing links and ensuring the course of action of the user is to simply make a one shot agreement to their policies without detailed examination of what's within the lines. The extension scrapes of all the links on the page and looks for relevant links from within those \(i.e\) the ones which take user to the _Privacy Policy or Terms and Conditions_ page. It then scrapes of each paragraph from there to be sent to the backend Machine Learning Model to process the policy and generate results. ``` Data:\(allLinks\leftarrow\{\}\) \(relevantWords\leftarrow\{\)words help identity useful links\(\}\) \(CheckWords\leftarrow\){words pointing to consent\(\}\) ; /* Check if the page is trying to make user agree to something */ if\(CheckPage(CheckWords)\)then Data:\(i\gets 0\); \(j\gets 0\); \(links\leftarrow\{\}\)\(\$("a").each(function()\)\(\{\)\(allLinks.push(this.href);\)\(\}\)); for\(i<allLinks.length\)do for\(j<relevantWords.length\)do if\(allLinks[i].includes(relevant_{i}ink_{w}ords[j])\)then \(links.push(all_{i}inks[i])\); else \(j++\); \(continue\); end if end for /* variable 'links' contains all the links to Privacy Policy and Terms of Service Pages */ Call function to scrape all the data from the links: \(scape(links)\); else \(exit\) ; end for CheckPage\(CheckWords\): Data:\(pageText\leftarrow\$(^{\prime}doc^{\prime}).text().split("")\) \(i\gets 0\); for\(i<relevantWords.length\)do if\(pageText.includes(relevantWords[i])\)then return True; else \(i++\); end if end for return False; ``` **Algorithm 1**Scrape Policy Data from the page The scraped data is sent to the back-end model by the extension itself using an API call (discussed in Section IV, which in-turn returns the processed policies and scores based on the algorithms discussed in Section V. The extension must convey the results in a human understandable easy way which is at the very core of our work. As shown in the Fig 1, we display - * **Score** : The calculated Good, Bad, Neutral and Blocker points present in the policies are shown under this column * **Overall Grade** : Using the scoring mechanism as mentioned in the paper [1] we score the website in a similar fashion where the good policies carry a positive relative weight and the bad ones carry a negative relative weight * **Summary** : Summary contains the paragraph wise summarised, easy to understand textual information retrieved from each of the paragraphs of the policy page ### _Communicating with backend_ After the links are obtained which contain the policies we go to each page and scrape of each paragraph of text present there. 
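The extension-side logic is plain JavaScript, as in Algorithm 1. For readers who want to reproduce the data collection outside the browser, the following Python sketch performs the equivalent steps - fetch a page, filter the policy links, collect their paragraphs, and post them to the back-end. The endpoint URL and the word list are illustrative placeholders, not values from the paper.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

RELEVANT_WORDS = ["privacy", "terms", "policy", "conditions"]   # illustrative filter words
API_URL = "http://localhost:5000/analyze"                       # placeholder back-end address

def policy_links(page_url):
    soup = BeautifulSoup(requests.get(page_url, timeout=10).text, "html.parser")
    links = [a.get("href", "") for a in soup.find_all("a")]
    return [urljoin(page_url, l) for l in links if any(w in l.lower() for w in RELEVANT_WORDS)]

def scrape_paragraphs(policy_url):
    soup = BeautifulSoup(requests.get(policy_url, timeout=10).text, "html.parser")
    return [p.get_text(" ", strip=True) for p in soup.find_all("p") if p.get_text(strip=True)]

def analyze(page_url):
    paragraphs = [para for link in policy_links(page_url) for para in scrape_paragraphs(link)]
    response = requests.post(API_URL, json={"url": page_url, "paragraphs": paragraphs}, timeout=120)
    return response.json()   # scores and summaries generated by the model
```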
Then we send a request to the API we have set up, which communicates with the Machine Learning model's wrapper and in turn does all the processing. The scraping of data from each link can be done through - * **Injecting Payload** : In this technique we inject a payload script into the new URL, which scrapes off the data and stores it as JSON for usage. * **Creating a new tab** : Here we follow a naive approach of opening a new tab with the link, scraping the data using the same script, storing the text, and then immediately closing the tab automatically. After making the call to the API and receiving the response, we display the information returned by the ML model by simply switching the state of the extension from '_processing_' to '_processed_' and replacing the content with relevant information. ## IV Hosting Techniques and API setups After scraping the data off the policy pages we convert it to JSON and send it to the backend via an API call. The hosted NLP models require a GPU for optimum performance, but with a few added seconds our extension generates results without a GPU as well. ``` Data:\(message\leftarrow\) JSON.stringify(data from payload); \(postUrl\leftarrow\) API Url ; \(response\leftarrow\)\(\$.post(postUrl,message)\); /* response contains all the scores and summaries generated by the model */ ``` **Algorithm 2**Send request through API The API can be set up using Flask on local machines (a minimal sketch of such an endpoint is given after the model list below). Any virtual hosting service that provides enough space to store the heavy (but non-GPU-based) models can be used to deploy them. Storage on the server is therefore the primary requirement for the host, while processing speed is an add-on that is use-case specific. At the receiving end, the extension processes the JSON, extracts the relevant information, and displays the appropriate data in the extension pop-up. Fig. 1: Version 1 (right) with just the scores for the processed policies of a website. Version 2 (left) is the latest extension which in addition also displays the summarised policies. ## V Language Processing In this section, we dive deep into the details of the algorithms that work at the backend. From pre-processing the incoming data to the generation of scores, there are mainly three models that help us do so: * **Summarizer**: After pre-processing the incoming policies we pass each individual chunk through a summarizer which abstractively summarises the large chunks into smaller ones. * **Tokenizer**: The BERT-based model is used to generate BERT embeddings of the summarized chunks. * **Classifier**: The embeddings are classified into one of the four classes - Good, Neutral, Bad, and Blocker - using which the scores are calculated.
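To make the bridge between the extension and the three models above concrete, here is a minimal sketch of the kind of Flask endpoint described in Section IV. The route name and the `summarize`, `embed`, and `classify` stubs are illustrative assumptions standing in for the pipeline detailed in the rest of this section.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# placeholder stubs; in the real pipeline these are the T5 summarizer, BERT embedder and classifier
def summarize(text): return text[:200]
def embed(summaries): return summaries
def classify(embeddings): return ["neutral" for _ in embeddings]

@app.route("/analyze", methods=["POST"])
def analyze():
    data = request.get_json(force=True)
    paragraphs = data.get("paragraphs", [])
    summaries = [summarize(p) for p in paragraphs]
    labels = classify(embed(summaries))
    counts = {c: int(sum(l == c for l in labels)) for c in ("good", "neutral", "bad", "blocker")}
    return jsonify({"summaries": summaries, "labels": list(labels), "counts": counts})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```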
So, our first filter tries to clean up the text, taking out all the not needed characters and converting all the text to lowercase as we will be using uncased BERT so it becomes irrelevant to have different tokens for capital and non-capital words, it can also be easily justified that the Capital words are used mostly for nouns, which usually happen to be an ineffective contributor to the overall sentiment of the legal policy. We have used _BeautifulSoup's_ HTML parser to get rid of the tags along with another _unicodedata normalizer_ that removes the accented characters making the policies ready to be summarized. We use the **t5-base** summarizer provided by _Hugging Face_ pipeline respresented by \(H_{\zeta}^{2}\) in Fig 2. But before that, we split the message containing the policy and based on the number of words in each paragraph we summarize each small chunk. As explained in pseudocode 3 for each chunk of variable size we accordingly summarize them into the appropriate maximum length of summaries. T5 is a large-scale transformer-based model that was trained by Google and provided by Hugging Face. T5 can perform abstractive summarization, which means that it can generate a summary of a given text by understanding the content of the text and generating a new, shorter version that conveys the most important information from the original. This is a crucial step in the pipeline as we don't want to miss out on any of the information that might decide whether the policy is good or bad. T5 summarizer would extract important information along with significantly reducing the size of the text, which makes it faster to process in the further steps of the pipeline. The next step is rather simple we stack up all the summaries into a list and then we tokenize them and pass it to the **BERT** model to generate embeddings corresponding to each summary (shown by \(H_{\lambda}^{3}\) in Fig 2). Developed by Google, BERT is designed to pre-process text by assigning a "fixed-length" vector representation, which is what an embedding signifies Fig. 2: Policies Scraped by the extension is pre-processed for HTML tags and other non-alphanumeric noise-like characters carefully. They are then batched up according to their chunk size.\(H_{\zeta}^{2}\) is the summarizer model. Summaries are then converted to vectors through \(H_{\lambda}^{3}\) which is the pre-trained BERT-base model, followed by \(H_{\theta}^{1}\) which is the final classifier model that is used to predict whether the policy is good or bad etc. Finally data score is generated and the summaries are passed back to the extension to display it also contains the contextual information of the text in the form of a vector that we utilise for our classification task that follows. After that, we use a Machine Learning model like KNN (represented by \(H_{\theta}^{1}\) in Fig 2) with \(k=3\) to classify the embeddings into the 4 classes namely good, bad, blocker or neutral. Note that \(H_{\theta}^{1}\) is one Model from the family of models with trained parameters \(\theta\), in general, the classifier should be the best classifier available it could also be a combination of more than one model but what should be kept in mind is that we are trying to reduce the time taken for the user to get the evaluation results so we keep things simple and fast which is our priority, but if higher accuracy is desired then heavier models can be employed. 
A detailed analysis of the model training is done in Section V-B meanwhile a detailed analysis of our experiments and conclusions with different ML models is done in Section VI. The scoring algorithm can be subjective and can be modified as per the need. We assign a score as- ``` Data:\(paragraphs\leftarrow\) all the paragraphs of the policy; \(summarizer\gets t5-Base\); for\(para\in paragraphs\)do if\(para.length>400\)then \(summarize(maxLength=200)\); else pass; end if if\(para.length>200\)then \(summarize(maxLength=100)\); else pass; end if if\(para.length>100\)then \(summarize(maxLength=75)\); else pass; end if if\(para.length>75\)then \(summarize(maxLength=50)\); else pass; end if end if /* summaries are then stacked up in a list */ ``` **Algorithm 3**Summarization ### _Supervised Training Models_ We use 3 Models in total, 2 of them namely the T5-summarizer (\(H_{\zeta}^{2}\) in Fig 2) and BERT (\(H_{\lambda}^{3}\) in Fig 2) were made available through Hugging Face Pipelines. Both T5 and BERT are trained on a wide variety of data, including text from books, news articles, and web pages. We did not find any need to fine-tune any of them so we have deployed them directly. But the \(3^{rd}\) and the most important Model of them, which is the 'classifier model'(\(H_{\theta}^{1}\) in Fig 2) needs to be trained to predict whether a given policy can be categorized as good or bad. Once we have the embeddings vector containing the contextual information of the policies, it boils down to a very simple supervised learning task. **Dataset** - We have used the data from ToS;DR which aims at creating a transparent and peer-reviewed process to rate and analyse Terms of Service and Privacy Policies in order to create a rating from Grade A to Grade E. But they do this manually through human interaction. Terms of service are reviewed by contributors and divided into small points that we can discuss, compare and ultimately assign a score with a badge. Once a service has enough badges to assess the fairness of its terms for users, a class is assigned automatically by pondering the average scores. Somewhat similar to what we do with the help of AI. We took around 4160 policies from their API which were labelled by the reviewers and assigned a score as shown in the Table I. Fig 3 shows the distribution of words in each quoted text which was recieved from the ToS;DR's API. It can be concluded from the plot that even though majority of the policiy paragraphs contain around 50-60 words, there are still some policy paragraphs that contain more than 400 words as well. This is why it becomes important to summarize them as well. Summarizing them into even shorter text would make bring uniformity in terms of length of each policy since we are not losing information due to abstractive nature of summarization. It also means that while generating embeddings we would have lesser padding, since in a batch all other sentences get padded to become of the same size as that of the largest sentence Fig. 3: Plot showing the number of words in a quoted text in the dataset generated from ToS;DR’s API in the batch, which in turn means lesser loss of information (as zeros in a vector would not be conveying any meaningful information about the policy to the ML model). As can be seen in Table I, **quoteText** contains a lot of noise and the clearly the quoted document lines are not pre-processed for removing HTML tags etc, also some quoted documents are very large. 
We follow the same routine with the training data to make it model-ready- * Quoted document text is pre-processed to remove all non-alphanumeric tokens * According to the size of the document, the paragraphs are summarised in batches * The summaries are stacked up and then embeddings are generated ## VI Results Vectorization of policies gives us the freedom to try to use the contextual meaning of the text present in the vector for various prediction and classification tasks. Since our goal is to rate a website, we classified the embeddings into 4 classes. With a train-test split of 80-20 % we have tried different classifiers, and the best result was shown by **KNN** with \(k=3\), which gave an **accuracy of 71.39%**. After that, **Linear SVM** performed well; we achieved an **accuracy of 64.54%**. We got **60.69 % classification accuracy** with the **Quadratic Discriminant Analysis** model provided by SKLearn. **Naive Bayes** achieved an **accuracy of 51.80 %**, very closely matched by **Ada-Boost**, which could correctly classify **51.68 %** of the test data, a figure also matched by the **Decision Tree** classifier. Finally, the least performing model turned out to be **Random Forest** with an **accuracy of 48.91 %**. Other accuracy metrics have been tabulated in Table II. Using dimensionality reduction techniques we have reduced each embedding in the test set to a vector in \(\mathbb{R}^{2}\). On colouring the points in 2-D space and plotting them through a scatterplot, we can see that even after a significant reduction in dimensions, which is likely to cause loss of information, there are still clusters of different types formed. As shown in Fig 5, the blue points denoting the 'bad' policies concentrate towards the left, and similarly in Fig 4 we can see that the 'good' points in the test set (denoted by red dots) surround the blue points. The results align well with the model performances and make it clear that clustering is the better approach while dealing with the semantics of legal text. ## VII Future Work As a first attempt at real-time processing of huge policies, a lot of ground has been covered, but a lot more can be done to improve the efficiency of the overall process. * **Sampling Topics -** Some topics regarding privacy are more important than others; for example, policies regarding your location access might be of more importance than policies regarding which browser you are using. The plan is to extract topics, sample from a Dirichlet distribution of topics, rank them according to relevance to the user, and then process those parts of the policies for a better analysis of the website. * **Better classifiers -** Currently we take a very naive approach of classification using clustering; better methods to cluster can be employed. Exploring beyond supervised methods, unsupervised or zero-shot techniques can be tried to classify on the go, keeping in mind that accuracy of prediction is of primary importance. * **Speeding up scraping -** The faster the extension scrapes the policies, the more time is saved overall. Currently the code scrapes all paragraphs; a better approach can be to selectively pick the relevant paragraphs and send them to the backend model. This would also reduce overheads in the backend pre-processing. * **Speeding up API calls -** Data can be split into smaller chunks, sent via multiple requests through the API bridge, and then reassembled at the backend.
It is an approach similar to data packets travelling from one node to another in a network. * **Worker threads at backend -** A multi-threaded backend implementation could speed up the rate at which multiple users can send requests to the ML model. It can make sure that requests do not pile up for long and that maximum efficiency is maintained at the backend, utilizing the full computational capacity of the host. Fig. 4: tSNE of embeddings representing point clusters. Fig. 5: PCA of embeddings representing point clusters. ## VIII Conclusion Through our work we want to show that legal text does have a sentiment. The goal is to make Artificial Intelligence learn and extract meaning from policy data, and more so to help build safety evaluation systems for future use. Not only would human effort in policy evaluation reduce significantly, we would also be able to increase public engagement and participation in the right to information and the right to privacy. Privacy policies are written in legal language and can often be very misleading for individuals to understand, which makes it easier for companies to hide potentially harmful or controversial practices in their privacy policies. As a result, a lot of people do not actually understand how a website can potentially harm them in unknown ways. Growing privacy concerns across the globe call for reliable integration of Machine Learning to ease the process of safeguarding the interests of individuals on the web. The robustness of such systems continues to be in question, and more reliability is desired as we, as a community, go further. We present an end-to-end pipeline blended with a Chrome extension on the front end that could help every internet user be more aware of the Terms and Conditions that they unknowingly agree to while being on the web.
2310.20144
EELBERT: Tiny Models through Dynamic Embeddings
We introduce EELBERT, an approach for compression of transformer-based models (e.g., BERT), with minimal impact on the accuracy of downstream tasks. This is achieved by replacing the input embedding layer of the model with dynamic, i.e. on-the-fly, embedding computations. Since the input embedding layer accounts for a significant fraction of the model size, especially for the smaller BERT variants, replacing this layer with an embedding computation function helps us reduce the model size significantly. Empirical evaluation on the GLUE benchmark shows that our BERT variants (EELBERT) suffer minimal regression compared to the traditional BERT models. Through this approach, we are able to develop our smallest model UNO-EELBERT, which achieves a GLUE score within 4% of fully trained BERT-tiny, while being 15x smaller (1.2 MB) in size.
Gabrielle Cohn, Rishika Agarwal, Deepanshu Gupta, Siddharth Patwardhan
2023-10-31T03:28:08Z
http://arxiv.org/abs/2310.20144v1
# EELBERT: Tiny Models through Dynamic Embeddings ###### Abstract We introduce EELBERT, an approach for compression of transformer-based models (e.g., BERT), with minimal impact on the accuracy of downstream tasks. This is achieved by replacing the input embedding layer of the model with dynamic, i.e. on-the-fly, embedding computations. Since the input embedding layer accounts for a significant fraction of the model size, especially for the smaller BERT variants, replacing this layer with an embedding computation function helps us reduce the model size significantly. Empirical evaluation on the GLUE benchmark shows that our BERT variants (EELBERT) suffer minimal regression compared to the traditional BERT models. Through this approach, we are able to develop our smallest model UNO-EELBERT, which achieves a GLUE score within 4% of fully trained BERT-tiny, while being 15x smaller (1.2 MB) in size. ## 1 Introduction It has been standard practice for the past several years for natural language understanding systems to be built upon powerful pre-trained language models, such as BERT Devlin et al. (2019), T5 Raffel et al. (2020), mT5 Xue et al. (2021), and RoBERTa Liu et al. (2019). These language models are comprised of a series of transformer-based layers, each transforming the representation at its input into a new representation at its output. Such transformers act as the "backbone" for solving several natural language tasks, like text classification, sequence labeling, and text generation, and are primarily used to map (or _encode_) natural language text into a multidimensional vector space representing the semantics of that language. Experiments in prior work Kaplan et al. (2020) have demonstrated that the size of the language model (i.e., the number of parameters) has a direct impact on task performance, and that increasing a language model's size improves its language understanding capabilities. Most of the recent state-of-art results in NLP tasks have been obtained with very large models. At the same time as massive language models are gaining popularity, however, there has been a parallel push to create much smaller models, which could be deployed in resource-constrained environments such as smart phones or watches. Some key questions that arise when considering such environments: _How does one leverage the power of such large language models on these low-power devices? Is it possible to get the benefits of large language models without the massive disk, memory and compute requirements?_ Much recent work in the areas of model pruning Gordon et al. (2020), quantization Zafrir et al. (2019), distillation Jiao et al. (2020); Sanh et al. (2020) and more targeted approaches like the _lottery ticket hypothesis_Chen et al. (2020) aim to produce smaller yet effective models. Our work takes a different approach by reclaiming resources required for representing the model's large vocabulary. The inspiration for our work comes from Ravi and Kozareva (2018), who introduced dynamic embeddings, i.e. embeddings computed on-the-fly via hash functions. We extend the usage of dynamic embeddings to transformer-based language models. We observe that 21% of the trainable parameters in BERT-base Turc et al. (2019) are in the embedding lookup layer. By replacing this input embedding layer with embeddings computed at run-time, we can reduce model size by the same percentage. 
In this paper, we introduce an "embeddingless" model - EELBERT - that uses a dynamic embedding computation strategy to achieve a smaller size. We conduct a set of experiments to empirically assess the quality of these "embeddingless" models along with the relative size reduction. A size reduction of up to 88% is observed in our experiments, with minimal regression in model quality, and this approach is entirely complementary to other model compression techniques. Since EELBERT calculates embeddings at run-time, we do incur additional latency, which we measure in our experiments. We find that EELBERT's latency increases relative to BERT's as model size decreases, but could be mitigated through careful architectural and engineering optimizations. Considering the gains in model compression that EELBERT provides, this is not an unreasonable trade-off. ## 2 Related Work There is a large body of work describing strategies for optimizing memory and performance of the BERT models (Ganesh et al., 2021). In this section, we highlight the studies most relevant to our work, which focus on reducing the size of the token embeddings used to map input tokens to a real-valued vector representation. We also look at past research on hash embeddings or randomized embeddings used in language applications (e.g., Tito Svenstrup et al. (2017)). Much prior work has been done to reduce the size of pre-trained static embeddings like GloVe and Word2Vec. Lebret and Collobert (2014) apply Principal Component Analysis (PCA) to reduce the dimensionality of word embeddings. For compressing GloVe embeddings, Arora et al. (2018) proposed LASPE, which leverages matrix factorization to represent the original embeddings as a combination of basis embeddings and linear transformations. Lam (2018) proposed a method called Word2Bits that uses quantization to compress Word2Vec embeddings. Similarly, Kim et al. (2020) proposed using variable-size code-blocks to represent each word, where the codes are learned via a feedforward network with a binary constraint. However, the most relevant works to this paper are by Ravi and Kozareva (2018) and Ravi (2017). The key idea in the approach by Ravi and Kozareva (2018) is the use of projection networks as a deterministic function to generate an embedding vector from a string of text, where this generator function replaces the embedding layer. That idea has been extended to word-level embeddings by Sankar et al. (2021) and Ravi and Kozareva (2021), using an LSH-based technique for the projection function. These papers demonstrate the effectiveness of projection embeddings, combined with a stacked layer of CNN, BiLSTM and CRF, on a small text classification task. In our work, we investigate the potential of these projection and hash embedding methods to achieve compression in transformer models like BERT. ## 3 Modeling EELBERT EELBERT is designed with the goal of reducing the size (and thus the memory requirement) of the input embedding layers of BERT and other transformer-based models. In this section, we first describe our observations about BERT which inform our architecture choices in EELBERT, and then present the EELBERT model in detail. ### Observations about BERT BERT-like language models take a sequence of tokens as input, encoding them into a semantic vector space representation. The input tokens are generated by a tokenizer, which segments a natural language sentence into discrete sub-string units \(w_{1},w_{2},\dots,w_{n}\).
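As a small illustration of this tokenization step, the bert-base-uncased tokenizer (the same one named later in Section 4.1) can be queried directly; the example sentence below is our own.
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# WordPiece segments the sentence into sub-string units w_1, ..., w_n
print(tokenizer.tokenize("dynamic embeddings shrink tiny models"))
```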
In BERT, each token in the model's vocabulary is mapped to an index, corresponding to a row in the input embedding table (also referred to as the input embedding layer). This row represents the token's \(d\)-size embedding vector \(\mathbf{e_{w_{i}}}\in\mathbb{R}^{d}\), for a given token \(w_{i}\). The table-lookup-like process of mapping tokens in the vocabulary to numerical vector representations using the input embedding layer is a "non-trainable" operation, and is therefore unaffected by standard model compression techniques, which typically target the model's trainable parameters. This results in a compression bottleneck, since a profiling of BERT-like models reveals that the input embedding layer occupies a large portion of the model's parameters. We consider three publicly available BERT models of different sizes, all pre-trained for English (Turc et al., 2019) - _BERT-base_, _BERT-mini_ and _BERT-tiny_. BERT-base has 12 layers with a hidden layer size of 768, resulting in about 110M trainable parameters. BERT-mini has 4 layers and a hidden layer size of 256, with around 11M parameters, and BERT-tiny has 2 layers and a hidden layer size of 128, totaling about 4.4M parameters. Figure 1 shows the proportion of model size occupied by the input embedding layer (blue shaded portion of the bars) versus the encoder layers (unshaded portion of the bars). Note that in the smallest of these BERT variants, BERT-tiny, the input embedding layer occupies almost 90% of the model. By taking a different approach to model compression, focusing not on reducing the trainable parameters but instead on eliminating the input embedding layer, one could potentially deliver up to 9x model size reduction. Figure 1: Embedding table in BERT ### EELBERT Architecture EELBERT differs from BERT only in the process of going from input token to input embedding. Rather than looking up each input token in the input embedding layer as our first step, we dynamically compute an embedding for a token \(w_{i}\) by using an \(n\)-gram pooling hash function. The output is a \(d\)-size vector representation, \(\mathbf{e_{w_{i}}}\in\mathbb{R}^{d}\), just as we would get from the embedding layer in standard BERT. Keep in mind that EELBERT only impacts token embeddings, not the segment or position embeddings, and that all mentions of "embeddings" hereafter refer to token embeddings. The key aspect of this method is that it does not rely on an input embedding table stored in memory, instead using the hash function to map input tokens to embedding vectors at runtime. This technique is not intended to produce embeddings that approximate BERT embeddings. Unlike BERT's input embeddings, dynamic embeddings do not update during training. Our \(n\)-gram pooling hash function methodology is shown in Figure 2, with operations in black boxes, and black lines going from the input to the output of those operations. Input and output values are boxed in blue. For ease of notation, we refer to the \(n\)-grams of length \(i\) as \(i\)-grams, where \(i=1,...,N\), and \(N\) is the maximum \(n\)-gram size. The steps of the algorithm are as follows: **1. Initialize random hash seeds \(\mathbf{h}\in\mathbb{Z}^{d}\).** There are \(d\) hash seeds in total, where \(d\) is the size of the embedding we wish to obtain, e.g. 768 for BERT-base. The \(d\) hash seeds are generated via a fixed random state, so we only need to save a single integer specifying the random state.
**2. Hash \(i\)-grams to get \(i\)-gram signatures \(\mathbf{s_{i}}\).** There are \(k_{i}=l-i+1\) \(i\)-grams, where \(l\) is the length of the token. Using a rolling hash function (Wikipedia contributors, 2023), we compute the \(i\)-gram signature vectors, \(\mathbf{s_{i}}\in\mathbb{Z}^{k_{i}}\). **3. Compute projection matrix for \(i\)-grams.** For each \(i\), we compute a projection matrix \(\mathbf{P_{i}}\) using a subset of the hash seeds. The hash seed vector \(\mathbf{h}\) is partitioned into \(N\) vectors, boxed in pink in the diagram. Each partition \(\mathbf{h_{i}}\) is of length \(d_{i}\), where \(\sum_{i=1}^{N}d_{i}=d\), with larger values of \(i\) corresponding to a larger \(d_{i}\). Given the hash seed vector \(\mathbf{h_{i}}\) and the \(i\)-gram signature vector \(\mathbf{s_{i}}\), the projection matrix \(\mathbf{P_{i}}\in\mathbb{Z}^{k_{i}\times d_{i}}\) is the outer product \(\mathbf{s_{i}}\times\mathbf{h_{i}}\). To ensure that the matrix values are bounded between \([-1,1]\), we perform a sequence of transformations on \(\mathbf{P_{i}}\): \[\mathbf{P_{i}} =\mathbf{P_{i}}\:\%\:B\] \[\mathbf{P_{i}} =\mathbf{P_{i}}-(\mathbf{P_{i}}>\frac{B}{2})*B\] \[\mathbf{P_{i}} =\mathbf{P_{i}}\:/\:\frac{B}{2}\] where \(B\) is our bucket size (scalar). **4. Compute embedding, \(\mathbf{e_{i}}\), for each \(i\)-gram.** We obtain \(\mathbf{e_{i}}\in\mathbb{R}^{d_{i}}\) by averaging \(\mathbf{P_{i}}\) across its \(k_{i}\) rows to produce a single \(d_{i}\)-dimensional vector. **5. Concatenate \(\mathbf{e_{i}}\) to get token embedding \(e\).** We concatenate the \(N\) vectors \(\{\mathbf{e_{i}}\}_{i=1}^{N}\), to get the token's final embedding vector, \(\mathbf{e}\in\mathbb{R}^{d}\). For a fixed embedding size \(d\), the tunable hyperparameters of this algorithm are: \(N\), \(B\), and the choice of the hashing function. We used \(N=3\), \(B=10^{9}+7\) and a rolling hash function (an illustrative sketch of these steps is given below). Figure 2: Computing dynamic hash embeddings Since EELBERT replaces the input embedding layer with dynamic embeddings, the exported model size is reduced by the size of the input embedding layer: \(O(d\times V)\) where \(V\) is the vocabulary size, and \(d\) is the embedding size. We specifically refer to the _exported size_ here, because during pre-training, the model also uses an output embedding layer which maps embedding vectors back into tokens. In typical BERT pre-training, weights are shared between the input and output embedding layer, so the output embedding layer does not contribute to model size. For EELBERT, however, there is no input embedding layer to share weights with, so the output embedding layer does contribute to model size. Even if we pre-compute and store the dynamic token embeddings as an embedding lookup table, using the transposed dynamic embeddings as a frozen output layer would defeat the purpose of learning contextualized representations. In short, using coupled input and output embedding layers in EELBERT is infeasible, so BERT and EELBERT are the same size during pre-training. When pre-training is completed, the output embedding layer in both models is discarded, and the exported models are used for downstream tasks, which is when we see the size advantages of EELBERT. ## 4 Experimental Setup In this section, we assess the effectiveness of EELBERT.
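Before turning to the experiments, the following minimal NumPy sketch illustrates steps 1-5 of the \(n\)-gram pooling hash embedding from Section 3.2. The specific polynomial rolling hash, the equal partitioning of the seed vector, and the function name are our own simplifying assumptions rather than the exact implementation.
```
import numpy as np

def ngram_hash_embedding(token, d=128, N=3, B=10**9 + 7, seed_state=0):
    rng = np.random.RandomState(seed_state)
    h = rng.randint(1, B, size=d)              # step 1: d hash seeds from a fixed random state
    # split the seed vector into N partitions (an equal split is assumed here)
    bounds = np.linspace(0, d, N + 1, dtype=int)
    parts = []
    for i in range(1, N + 1):
        hi = h[bounds[i - 1]:bounds[i]]        # seed partition of length d_i
        k_i = max(len(token) - i + 1, 1)       # guard for tokens shorter than i
        # step 2: rolling-hash signatures of all i-grams (simple polynomial hash assumed)
        s_i = np.array([
            sum(ord(c) * 31**p for p, c in enumerate(token[j:j + i])) % B
            for j in range(k_i)
        ], dtype=np.int64)
        # step 3: outer product with the seeds, then bound values to [-1, 1]
        P_i = np.outer(s_i, hi) % B
        P_i = P_i - (P_i > B / 2) * B
        P_i = P_i / (B / 2)
        # step 4: average over the k_i rows
        parts.append(P_i.mean(axis=0))
    # step 5: concatenate the per-n-gram-size embeddings
    return np.concatenate(parts)

print(ngram_hash_embedding("running").shape)   # (128,)
```
Because the seeds are regenerated from a single stored random state, nothing besides that integer (and the hyperparameters) needs to be shipped with the model.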
The key questions that interest us are: _how much model compression can we achieve_ and _what is the impact of such compression on model quality for language understanding?_ We conduct experiments on a set of benchmark NLP tasks to empirically answer these questions. In each of our experiments, we compare EELBERT to the corresponding standard BERT model - i.e., a model with the same configuration but with the standard trainable input embedding layer instead of our dynamic embeddings. This standard model serves as the baseline for comparison, to observe the impact of our approach. ### Pre-training For our experiments, we pre-train both BERT and EELBERT from scratch on the OpenWebText dataset (Radford et al., 2019; Gokaslan and Cohen, 2019), using the pre-training pipeline released by Hugging Face Transformers (Wolf et al., 2019). Each of our models is pre-trained for 900,000 steps with a maximum token length of 128 using the _bert-base-uncased_ tokenizer. We follow the pre-training procedure described in Devlin et al. (2019), with a few differences. Specifically, (a) we use the OpenWeb Corpus for pre-training, while the original work used the combined dataset of Wikipedia and BookCorpus, and (b) we only use the _masked language model_ pre-training objective, while the original work employed both _masked language model_ and _next sentence prediction_ objectives. For BERT, the input and output embedding layers are coupled and trainable. Since EELBERT has no input embedding layer, its output embedding layer is decoupled and trainable. ### Fine-tuning For downstream fine-tuning and evaluation, we choose the GLUE benchmark (Wang et al., 2018) to assess the quality of our models. GLUE is a collection of nine language understanding tasks, including single-sentence tasks (sentiment analysis, linguistic acceptability), similarity/paraphrase tasks, and natural language inference tasks. Using each of our models as a backbone, we fine-tune individually for each of the GLUE tasks under a setting similar to that described in Devlin et al. (2019). The metrics on these tasks serve as a proxy for the quality of the embedding models. Since GLUE metrics are known to have high variance, we run each experiment 5 times using 5 different seeds, and report the median of the metrics on all the runs, as done in Lan et al. (2020). We calculate an overall GLUE score for each model. For BERT-base and EELBERT-base we use the following equation: * AVERAGE(SST-2 accuracy, MRPC accuracy, CoLA Matthews corr., STSB Pearson corr., QQP accuracy, AVERAGE(MNLI match accuracy, MNLI mismatch accuracy), QNLI accuracy, RTE accuracy) Like Devlin et al. (2019), we do not include the WNLI task in our calculations. \begin{table} \begin{tabular}{|c|c|c|} \hline & **BERT-base** & **EELBERT-base** \\ \hline Trainable Parameters & 109,514,298 & **86,073,402** \\ Exported Model Size & 438 MB & **344 MB** \\ \hline SST-2 (Acc.) & 0.899 & 0.900 \\ QNLI (Acc.) & 0.866 & 0.864 \\ RTE (Acc.) & 0.625 & 0.563 \\ WNLI* (Acc.) & 0.521 & 0.563 \\ MRPC (Acc., F1) & 0.833, 0.882 & 0.838, 0.887 \\ QQP* (Acc., F1) & 0.898, 0.864 & 0.895, 0.861 \\ MNLI (M, MM Acc.) & 0.799, 0.802 & 0.790, 0.795 \\ STSB (P, S Corr.) & 0.870, 0.867 & 0.851, 0.849 \\ CoLA (M Corr.) & 0.410 & 0.373 \\ \hline GLUE Score & 0.775 & 0.760 \\ \hline \end{tabular} \end{table} Table 1: GLUE results for BERT-base and EELBERT-base. For all the smaller BERT variants, i.e.
BERT-mini, BERT-tiny, EELBERT-mini, EELBERT-tiny, and UNO-EELBERT, we use: * AVERAGE(SST-2 accuracy, MRPC accuracy, QQP accuracy, AVERAGE(MNLI match accuracy, MNLI mismatch accuracy), QNLI accuracy, RTE accuracy) Note that we exclude CoLA and STSB from the smaller models' score, because the models (both baseline and EELBERT) appear to be unstable on these tasks. We see a similar exclusion of these tasks in Sun et al. (2019). Also note that in the tables we abbreviate MNLI match and mismatch accuracy as MNLI (M, MM Acc.), CoLA Matthews correlation as CoLA (M Corr.), and STSB Pearson and Spearman correlation as STSB (P, S Corr.). ## 5 Results We present results of experiments assessing various aspects of the model with a view towards deployment and production use. ### Model Size vs. Quality Our first experiment directly assesses our dynamic embeddings by comparing the EELBERT models to their corresponding standard BERT baselines on GLUE benchmark tasks. We start by pre-training the models as described in Section 4.1 and fine-tune the models on downstream GLUE tasks, as described in Section 4.2. Table 1 summarizes the results of this experiment. Note that replacing the trainable embedding layer with dynamic embeddings does have a relatively small impact on the GLUE score. EELBERT-base achieves \(\sim\)21% reduction in parameter count while regressing by just 1.5% on the GLUE score. As a followup to this, we investigate the impact of dynamic embeddings on significantly smaller models. Table 2 shows the results for BERT-mini and BERT-tiny, which have 11 million and 4.4 million trainable parameters, respectively. The corresponding EELBERT-mini and EELBERT-tiny models have 3.4 million and 0.5 million trainable parameters, respectively. EELBERT-mini has just 0.7% absolute regression compared to BERT-mini, while being \(\sim\)3x smaller. Similarly, EELBERT-tiny is almost on-par with BERT-tiny, with 0.5% absolute regression, while being \(\sim\)9x smaller. Additionally, when we compare EELBERT-mini and BERT-tiny models, which have roughly the same number of trainable parameters, we notice that EELBERT-mini has a substantially higher GLUE score than BERT-tiny. This leads us to conclude that under space-limited conditions, it would be better to train a model with dynamic embeddings and a larger number of hidden layers rather than a shallower model with a trainable embedding layer and fewer hidden layers. ### Pushing the Limits: UNO-EELBERT The results discussed in the previous section suggest that our dynamic embeddings have the most utility for extremely small models, where they perform comparably to standard BERT while providing drastic compression. Following this line of thought, we try to push the boundaries of model compression. We train UNO-EELBERT, a model with a similar configuration as EELBERT-tiny, but a reduced intermediate size of 128. We note that this model is almost 15 times smaller than BERT-tiny, with an absolute GLUE score regression of \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & **BERT-mini** & **EELBERT-mini** & **BERT-tiny** & **EELBERT-tiny** & **UNO-EELBERT** \\ \hline Trainable Parameters & 11,171,074 & **3,357,442** & 4,386,178 & **479,362** & **312,506** \\ \hline Exported Model Size & 44.8 MB & **13.4 MB** & 17.7 MB & **2.04 MB** & **1.24 MB** \\ \hline SST-2 (Acc.) & 0.851 & 0.835 & 0.821 & 0.749 & 0.701 \\ QNLI (Acc.) & 0.827 & 0.821 & 0.616 & 0.705 & 0.609 \\ RTE (Acc.) & 0.552 & 0.560 & 0.545 & 0.516 & 0.527 \\ WNLI* (Acc.)
& 0.563 & 0.549 & 0.521 & 0.535 & 0.479 \\ MRPC (Acc., F1) & 0.701, 0.814 & 0.721, 0.814 & 0.684, 0.812 & 0.684, 0.812 & 0.684, 0.812 \\ QQP\({}^{\prime}\) (Acc., F1) & 0.864, 0.815 & 0.850, 0.803 & 0.780, 0.661 & 0.752, 0.712 & 0.728, 0.628 \\ MNLI (M, M Acc.) & 0.719, 0.730 & 0.688, 0.697 & 0.577, 0.581 & 0.582, 0.598 & 0.539, 0.552 \\ CoLA (M Corr.) & 0.103 & 0 & 0 & 0 & 0 \\ \hline GLUE score & 0.753 & 0.746 & 0.671 & 0.666 & 0.632 \\ \hline \end{tabular} \end{table} Table 2: EELBERT with smaller models less than 4%. It is also 350 times smaller than BERT-base, with an absolute regression of less than 20%. Note that for these regression calculations, all GLUE scores were calculated using the small-model GLUE score equation, which excludes CoLA and STSB, so that the scores would be comparable. We believe that with a model size of 1.2 MB, UNO-EELBERT could be a powerful candidate for low-memory edge devices like IoT, and other memory critical applications. ### Impact of Hash Function Our results thus far suggest that the trainable embedding layer can be replaced by a deterministic hash function with minimal impact on downstream quality. The hash function we used pools the \(n\)-gram features of a word to generate its embedding, so words with similar morphology, like "running" and "runner", will result in similar embeddings. In this experiment, we investigate whether our particular choice of hash function plays an important role in the model quality, or whether a completely random hash function which preserves no morphological information would yield similar results. To simulate a random hash function, we initialize the embedding layer of BERT with a random normal distribution (BERT's default initialization scheme), and then freeze the embedding layer, so each word in the vocabulary is mapped to a random embedding. The results presented in Table 3 indicate that for larger models like BERT-base, the hashing function doesn't have much significance, as the models trained with random vs \(n\)-gram pooling hash functions perform similarly on the GLUE tasks. However, for the smaller BERT-mini model, our \(n\)-gram pooling hash function results in a better score. These results suggest that the importance of the \(n\)-gram pooling hash function, as compared to a completely random hash function, increases as the model size decreases. This is a useful finding, since the primary benefit of dynamic hashing is to develop small models that can be run on device. ### Hash Function as Initializer Based on the results of the previous experiment, we consider a potential alternative role for the embeddings generated by our hash function. We investigate whether our \(n\)-gram pooling hash function could be a better _initializer_ for a trainable embedding layer, compared to the commonly used random normal distribution initializer. To answer this question, we conduct an experiment with BERT-base, by intializing one model with the default random normal initialization and the other model with the embeddings generated using our \(n\)-gram pooling hash function (_hash_ column in Table 4). Note that in this experiment the input and output embedding layers are coupled, and embedding layers are trainable for both initialization schemes. The results of this experiment are shown in Table 4. 
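A minimal sketch of how such a hash-based initialization could be wired up with the Hugging Face API is shown below; `ngram_hash_embedding` is the illustrative function from the earlier sketch, and the small configuration is an assumption for brevity, not the exact experimental setup (which used BERT-base and BERT-mini).
```
import numpy as np
import torch
from transformers import AutoTokenizer, BertConfig, BertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
config = BertConfig(vocab_size=tokenizer.vocab_size, hidden_size=128,
                    num_hidden_layers=2, num_attention_heads=2,
                    intermediate_size=512)
model = BertForMaskedLM(config)

# Build the initial embedding matrix row by row from the hash function.
matrix = np.zeros((config.vocab_size, config.hidden_size), dtype=np.float32)
for token, idx in tokenizer.get_vocab().items():
    matrix[idx] = ngram_hash_embedding(token, d=config.hidden_size)

# Copy into the trainable (weight-tied) input embedding layer before pre-training.
with torch.no_grad():
    model.get_input_embeddings().weight.copy_(torch.from_numpy(matrix))
```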
The hash-initialized model shows a 0.5% absolute increase in GLUE score compared to the \begin{table} \begin{tabular}{|c|c|c|} \hline & \multicolumn{2}{c|}{**BERT-base**} \\ \hline Initialization Method & random & hash \\ \hline Trainable Parameters & 109,514,298 & 109,514,298 \\ \hline Exported Model Size & 438 MB & 438 MB \\ \hline SST-2 (Acc.) & 0.899 & 0.904 \\ QNLI (Acc.) & 0.866 & 0.876 \\ RTE (Acc.) & 0.625 & 0.614 \\ WNLI* (Acc.) & 0.521 & 0.563 \\ MRPC (Acc., F1) & 0.833, 0.882 & 0.850, 0.896 \\ QQP* (Acc., F1) & 0.898, 0.864 & 0.901, 0.867 \\ MNLI (M, MM Acc.) & 0.799, 0.802 & 0.807, 0.809 \\ STSB (P, S Corr.) & 0.870, 0.867 & 0.869, 0.867 \\ CoLA (M Corr.) & 0.410 & 0.417 \\ \hline GLUE score & 0.775 & 0.780 \\ \hline \end{tabular} \end{table} Table 4: Initialization of trainable embeddings \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{**BERT-base**} & \multicolumn{2}{c|}{**BERT-mini**} \\ \hline Initialization Method & \(n\)-gram pooling & random & \(n\)-gram pooling & random \\ \hline Trainable Parameters & 86,073,402 & 86,073,402 & 3,387,962 & 3,387,962 \\ \hline Exported Model Size & 344 MB & 344 MB & 13.4 MB & 13.4 MB \\ \hline SST-2 (Acc.) & 0.900 & 0.897 & 0.835 & 0.823 \\ QNLI (Acc.) & 0.864 & 0.862 & 0.821 & 0.639 \\ RTE (Acc.) & 0.563 & 0.574 & 0.560 & 0.569 \\ WNLI* (Acc.) & 0.563 & 0.507 & 0.549 & 0.507 \\ MRPC (Acc., F1) & 0.838, 0.887 & 0.806, 0.868 & 0.721, 0.814 & 0.690, 0.805 \\ QQP* (Acc., F1) & 0.895, 0.861 & 0.893, 0.858 & 0.850, 0.803 & 0.800, 0.759 \\ MNLI (M, MM Acc.) & 0.791, 0.795 & 0.786, 0.794 & 0.688, 0.697 & 0.647, 0.660 \\ STSB (P, S Corr.) & 0.851, 0.849 & 0.849, 0.847 & -,- & -,- \\ CoLA (M Corr.) & 0.373 & 0.389 & 0 & 0 \\ \hline GLUE score & 0.760 & 0.757 & 0.746 & 0.696 \\ \hline \end{tabular} \end{table} Table 3: Impact of varying hash functions randomly-initialized model. We also perform this comparison for BERT-mini (not shown in the table), and observe a similar result. In fact, for BERT-mini, the hash-initialized model had an absolute increase of 1.6% in overall GLUE score, suggesting that the advantage of \(n\)-gram pooling hash-initialization may be even greater for smaller models. ### Memory vs. Latency Trade-off One consequence of using dynamic embeddings is that we are essentially trading off computation time for memory. The embedding lookup time for a token is \(O(1)\) in BERT models. In EELBERT, token embedding depends on the number of character \(n\)-grams in the token, as well as the size of the hash seed partitions. Due to the outer product between the \(n\)-gram signatures and the partitioned hash seeds, the overall time complexity is dominated by \(l\times d\), where \(l\) is the length of a token, and \(d\) is the embedding size, leading to \(O(l\times d)\) time complexity to compute the dynamic hash embedding for a token. For English, the average number of letters in a word follows a somewhat Poisson distribution, with the mean being \(\sim\)4.79 (Norvig, 2012), and the embedding size \(d\) for BERT models typically ranging between 128 to 768. The inference time for BERT-base vs EELBERT-base is practically unchanged, as the bulk of the computation time goes in the encoder blocks for big models with multiple encoder blocks. However, our experiments in Table 5 indicate that EELBERT-tiny has \(\sim\)2.3x the inference time of BERT-tiny, as the computation time in the encoder blocks decreases for smaller models, and embedding computation starts constituting a sizeable portion of the overall latency. 
These latency measurements were done on a standard M1 MacBook Pro with 32GB RAM. We performed inference on a set of 10 sentences (with average word length of 4.8) for each of the models, reporting the average latency of obtaining the embeddings for a sentence (tokenization latency is same for all the models, and is excluded from the measurements). To improve the inference latency, we suggest some architectural and engineering optimizations. The outer product between the \(O(l)\) dimensional \(n\)-gram hash values and \(O(d)\) dimensional hash seeds, resulting in a matrix of size \(O(l\times d)\), is the computational bottle-neck in the dynamic embedding computation. A sparse mask with a fixed number of 1's in every row could reduce the complexity of this step to \(O(l\times s)\), where \(s\) is the number of ones in each row, and \(s\ll d\). This means every \(n\)-gram will only attend to some of the hash seeds. This mask can be learned during training, and saved with the model parameters without much memory overhead, as it would be of size \(O(k\times s)\), \(k\) being the max number of \(n\)-grams expected from a token. Future work could explore the effect of this approach on model quality. The hash embedding of tokens could also be computed in parallel, since they are independent of each other. Additionally, we observe that the 1, 2 and 3-grams follow a Zipf-ian distribution. By using a small cache of the embeddings for the most common \(n\)-grams, we could speed up the computation at the cost of a small increase in memory footprint. ## 6 Conclusions In this work we explored the application of dynamic embeddings to the BERT model architecture, as an alternative to the standard, trainable input embedding layer. Our experiments show that replacing the input embedding layer with dynamically computed embeddings is an effective method of model compression, with minimal regression on downstream tasks. Dynamic embeddings appear to be particularly effective for the smaller BERT variants, where the input embedding layer comprises a larger percentage of trainable parameters. We also find that for smaller BERT models, a deeper model with dynamic embeddings yield better results than a shallower model of comparable size with a trainable embedding layer. Since the dynamic embeddings technique used in EELBERT is complementary to existing model compression techniques, we can apply it in combination with other compression methods to produce extremely tiny models. Notably, our smallest model, UNO-EELBERT, is just 1.2 MB in size, but achieves a GLUE score within 4% of that of a standard fully trained model almost 15 times its size. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & **BERT-base** & **EELBERT-base** & **BERT-mini** & **EELBERT-mini** & **BERT-tiny** & **EELBERT-tiny** \\ \hline Model Size (MB) & 428.00 & 344.00 & 44.80 & 13.40 & 17.40 & 2.04 \\ \hline Latency (ms) & 162.0 & 165.0 & 7.0 & 9.9 & 1.7 & 3.9 \\ \hline \end{tabular} \end{table} Table 5: Latency, on MacBookPro M1 32GB RAM
2309.03826
Gænice: a general model for magnon band structure of artificial spin ices
Arrays of artificial spin ices exhibit reconfigurable ferromagnetic resonance frequencies that can be leveraged and designed for potential applications. However, analytical and numerical studies of the frequency response of artificial spin ices have remained somewhat limited due to the need to take into account nonlocal dipole fields in theoretical calculations or by long computation times in micromagnetic simulations. Here, we introduce Gaenice, a framework to compute magnon dispersion relations of arbitrary artificial spin ice configurations. Gaenice makes use of a tight-binding approach to compute the magnon bands. It also provides the user complete control of the interaction terms included, e.g., external field, anisotropy, exchange, and dipole, making it useful also to compute ferromagnetic resonances for a variety of structures, such as multilayers and ensembles of weakly or non-interacting nanoparticles. Because it relies on a semi-analytical model, Gaenice is computationally inexpensive and efficient, making it an attractive tool for the exploration of large parameter spaces.
Ghanem Alatteili, Victoria Martinez, Alison Roxburgh, Jack C. Gartside, Olle G. Heinonen, Sebastian Gliga, Ezio Iacocca
2023-09-07T16:36:53Z
http://arxiv.org/abs/2309.03826v2
# Gænice: a general model for magnon band structure of artificial spin ices ###### Abstract Arrays of artificial spin ices exhibit reconfigurable ferromagnetic resonance frequencies that can be leveraged and designed for potential applications. However, analytical and numerical studies of the frequency response of artificial spin ices have remained somewhat limited due to the need to take into account nonlocal dipole fields in theoretical calculations or by long computation times in micromagnetic simulations. Here, we introduce Gænice, a framework to compute magnon dispersion relations of arbitrary artificial spin ice configurations. Gænice makes use of a tight-binding approach to compute the magnon bands. It also provides the user complete control of the interaction terms included, e.g., external field, anisotropy, exchange, and dipole, making it useful also to compute ferromagnetic resonances for a variety of structures, such as multilayers and ensembles of weakly or non-interacting nanoparticles. Because it relies on a semi-analytical model, Gænice is computationally inexpensive and efficient, making it an attractive tool for the exploration of large parameter spaces. + Footnote †: preprint: AIP/123-QED ## I Introduction Artificial spin ices (ASIs) are systems of structured nanomagnets arranged in periodic patterns that are magneto-statically coupled. ASIs were originally designed to mimic the behavior of natural spin ice materials[1], in order to explore the fundamental principles of frustrated magnetism. Frustration arises from competing magnetic interactions that cannot all be simultaneously minimized[2], leading to highly degenerate states. ASIs can also be considered as magnonic crystals[3; 4; 5] exhibiting reconfigurable magnonic modes[3; 5; 6; 7; 8; 9; 10; 11], nonlinear scattering[12], band structure[13; 14; 15], and hybrid modes[16; 17]. The arrangement of magnetic elements in a square lattice, known as square ice[18], has been a test bed for the investigation of ASIs as magnonic crystals because its relative simplicity allows for the understanding of the fundamental physical phenomena. Analytically, square ices have proven promising for reconfigurable magnonics because of the mode-dependent magnon modes predicted[13; 14] as well as evidence of topological modes[19]. However, the study of similar effects in other geometries remains limited to date. Experimentally, this is partly because of the large number of geometries to explore[1] and the technical challenges of investigating wavevector- or spatially-resolved magnons in a nanopatterned structure, e.g., by Brillouin light scattering[20]. From a numerical point of view, simulations using micromagnetic modeling[21] are very time-consuming and often require large memory allocations to investigate long-wavelength magnons that are easily excited by microwave antennas in experiments. This means that more exotic geometries, both in 2D[1] and 3D[22; 23; 24], have been slower to materialize due to the lack of an efficient predictive tool for magnetization dynamics. Here, we present Gænice, a general formalism to compute the magnon dispersion relation for arbitrary ASI geometries. The formalism is based on a Holstein-Primakoff transformation[25] to obtain an eigenvalue problem that can be solved numerically with little computational cost[13].
The main difference between Ganice and other analytical methods is its generalization to arbitrary nanomagnet orientations and magnetization states with an automatic determination of the first Brillouin zone (FBZ). We expect that Ganice can serve as a numerically efficient and computationally accurate tool to predict magnonic functionality for ASIs and to direct more detailed studies of promising geometries using micromagnetic simulations and experiments. In other words, we envision Ganice as a tool to quickly explore the parameter space of distinct ASI geometries and identify potentially interesting regimes that can be then further explored with traditional computational and experimental methods[26; 27; 28; 29; 30] The remainder of the paper is organized as follows. In section II, we describe the general formulation of the problem. The energy terms considered and their implementation are detailed in section III. In section IV, we demonstrate the functionality of Ganice by computing simple Kittel modes and ferromagnetic resonance (FMR) modes in a variety of linear arrays of nanomagnets and the two fundamental ASI configurations: square ice and Kagome ice. Generalized analytical model We begin our description from the conservative Larmor torque equation \[\frac{\partial\mathbf{m}}{\partial t}=-\gamma\mu_{0}\mathbf{m}\times\mathbf{H}_{ \text{eff}}, \tag{1}\] where \(\mathbf{m}\) is the normalized magnetization vector with \(|\mathbf{m}|=1\), \(\gamma\) is the gyromagnetic ratio, and \(\mu_{0}\) is the vacuum permeability. The effective field \(\mathbf{H}_{\text{eff}}\) contains physical terms and phenomena relevant to the magnetic material and interfaces which are described within the context of our eigenvalue solver in section III. Note that we neglect damping here given that we are interested in resonant, propagating modes. For small-amplitude excitations, such as magnons, the Larmor torque equation can be rewritten as a Hamiltonian set of equations using a Holstein-Primakoff transformation of the _complex_ small amplitude \(a\)[25] \[a=\frac{m_{1}+im_{2}}{\sqrt{2(1+m_{3}^{2})}}, \tag{2}\] where \(\mathbf{m}=m_{1}\mathbf{\hat{e}}_{1}+m_{2}\mathbf{\hat{e}}_{2}+m_{3}\mathbf{ \hat{e}}_{3}\) is the magnetization vector expressed in a coordinate system where \(\mathbf{\hat{e}}_{3}\) defines the equilibrium orientation of the magnetization vector and \(\mathbf{\hat{e}}_{1}\times\mathbf{\hat{e}}_{2}=\mathbf{\hat{e}}_{3}\). An illustration of this basis is shown in Fig. 1(a). From Eq. 2, we can relate the complex amplitudes to the magnetization vector in the \(\mathbf{\hat{e}}\) basis as \[m_{1} = \sqrt{1-|a|^{2}}(a+a^{*})\approx\left(1-\frac{|a|^{2}}{2}\right) (a+a^{*}) \tag{3a}\] \[m_{2} = i\sqrt{1-|a|^{2}}(a-a^{*})\approx i\left(1-\frac{|a|^{2}}{2} \right)(a-a^{*})\] (3b) \[m_{3} = (1-2|a|^{2}) \tag{3c}\] Using the transformation of Eq. 2, we can approximately rewrite Eq. 1 as a Hamiltonian system for the complex amplitude \(a\) \[\frac{\partial a}{\partial t}=-i\frac{\partial}{\partial a^{*}}a\mathcal{H}a^ {\dagger} \tag{4}\] and the Hamiltonian is defined over a magnetic volume as \[\mathcal{H}=-\mu_{0}M_{s}\int\mathbf{H}(\mathbf{m})\cdot\mathbf{m}dA, \tag{5}\] To describe the magnon band structure of an ensemble of nanomagnets in an ASI, the auto-oscillator model can be generalized to an array of complex amplitudes, as shown in Ref. [13]. To account for the bending of the magnetization at edges of the nanomagnets [31], we divide each nanomagnet into 3 _macrospins_. 
This is an important assumption in our model, making it valid for magnetic elements with sizes of the order of hundreds of nanometers. Therefore, given \(N\) nanomagnets in the unit cell of the ASI, we define the complex amplitude array \(\underline{a}=[a_{1}\ a_{2}\...\ a_{3N}]\) and the \(2(3N)\times 2(3N)\) Hamiltonian matrix \(\mathcal{H}\) so that the generalized Hamiltonian becomes \[\frac{d}{dt}\underline{a}=-i\frac{d}{d\underline{a}^{*}}\left[\underline{a}\ \ \ a^{*}\right]\mathcal{H}\left[\frac{\underline{a}}{a^{*}}\right], \tag{6}\] The Hamiltonian matrix is further divided as \[\mathcal{H}=\begin{bmatrix}\mathcal{H}^{(1,2)}&\mathcal{H}^{(2,2)}\\ \mathcal{H}^{(1,1)}&\mathcal{H}^{(2,1)}\end{bmatrix}, \tag{7}\] where \(\mathcal{H}^{(1,1)}=(\mathcal{H}^{(2,2)})^{*}\) and \(\mathcal{H}^{(1,2)}=(\mathcal{H}^{(2,1)})^{*}\) by symmetry of the Hamiltonian equations. As further discussed below, this system describes bosonic excitation (magnons) so that the eigenvalue problem can be solved using Colpa's grand dynamical matrix that ensures complex conjugate eigenvalues [32]. ### Coordinate system Our framework relies on a Cartesian coordinate system where the polar angle \(\theta=0\) and the azimuth angle \(\varphi=0\) define the \(z\)-axis, as shown in Fig. 1(a). Once this coordinate system is established, applying Eq. 1 requires a rotation to the coordinate system defined by \((\mathbf{\hat{e}}_{1},\mathbf{\hat{e}}_{2},\mathbf{\hat{e}}_{3})\), where the direction of \(\mathbf{\hat{e}}_{3}\) is parallel to the equilibrium orientation of the magnetization vector at any given point in space. This implies that coordinate transformations must be performed locally for both the magnetization vector and the effective field for an arbitrary array of magnetization vectors. Figure 1: (a) The magnetization vector rotated by the polar and azimuthal angles \((\theta_{m},\varphi_{m})\). The rotated frame \((\mathbf{\hat{e}}_{1},\mathbf{\hat{e}}_{2},\mathbf{\hat{e}}_{3})\) is defined relative to the equilibrium orientation of the magnetization vector \(\mathbf{m}\) (b) Nanomagnet orientation relative to the Cartesian coordinate system. The unit vector \(\mathbf{\hat{D}}\) indicates the direction of the nanomagnet’s long axis and is defined by the angles \(\theta_{d}\) and \(\varphi_{d}\). We define the local rotation matrix \[R(\theta,\varphi)=\begin{bmatrix}\cos\varphi\cos\theta&\sin\varphi\cos\theta&- \sin\theta\\ -\sin\varphi&\cos\varphi&0\\ \cos\varphi\sin\theta&\sin\varphi\sin\theta&\cos\theta\end{bmatrix}. \tag{8}\] It is important to note that \(R^{-1}(\theta,\varphi)=R(\theta,\varphi)\). An arbitrary orientation of nanomagnets is considered, defined by a unit vector \(\hat{\mathbf{D}}\). The direction of \(\hat{\mathbf{D}}\) is parameterized by the polar and azimuth angles \(\theta_{d}\) and \(\varphi_{d}\) as shown in Fig. 1(b). When \(\theta_{d}=0\) and \(\varphi_{d}=0\), the nanomagnet is aligned along the \(z\)-axis, and its thickness is aligned along the \(x\)-axis. In this case, the angles \(\theta_{d}\) and \(\varphi_{d}\) represent pitch and yaw, respectively. ### Eigenvalue problem The magnon dispersion relation \(\omega(k)\) is obtained from Eq. 6 by invoking Bloch's theorem \(a\to ae^{i\omega}\) and Colpa's grand dynamical matrix [32] \[\omega\underline{\Psi}\propto\begin{bmatrix}\mathscr{H}^{(1,2)}&-(\mathscr{H }^{(1,1)})^{*}\\ \mathscr{H}^{(1,1)}&-(\mathscr{H}^{(1,2)})^{*}\end{bmatrix}\underline{\Psi}, \tag{9}\] where \(\underline{\Psi}\) is an array of eigenvectors. 
By introducing the proportionality factor \(\gamma/(2VM_{s})\), where \(V\) is a volume and \(M_{s}\) is the saturation magnetization, Eq. 9 becomes an equality. The volume and saturation magnetization will be associated with a magnetization vector to allow for maximal flexibility of the model. Therefore, we will use the convention \(\Omega=\gamma/(2VM_{s})\mathscr{H}\) to obtain \[\omega\underline{\Psi}=\begin{bmatrix}\underline{\Omega}^{(1,2)}&-( \underline{\Omega}^{(1,1)})^{*}\\ \underline{\Omega}^{(1,1)}&-(\underline{\Omega}^{(1,2)})^{*}\end{bmatrix} \underline{\Psi}, \tag{10}\] The eigenvalue problem of Eq. 10 can be solved numerically by standard methods. This formulation has been implemented for square ice [13; 19] and it has been referred to as semi-analytical because the Hamiltonian matrix \(\mathscr{H}\) is derived analytically and only the eigenvalue problem is solved numerically. Here, we derive the Hamiltonian matrices for arbitrary ASI configurations, so that the matrix is built in an automated way for any number of nanomagnets in a unit cell. ## III Effective field The effective field is Geenice's core, which gives rise to the Hamiltonian matrix. In essence, the effective field is divided into two groups of physical effects; local and non-local \[\mathbf{H}_{\text{eff}}=\mathbf{H}_{\text{l}}+\mathbf{H}_{\text{nl}}. \tag{11}\] In its current implementation, Geenice includes a uniform external magnetic field and anisotropy field as local contributions; and exchange interaction and dipole-dipole interaction as non-local fields from a point of view that the macrospins of the nanomagnets are coupled. In other words, these fields lead to finite non-diagonal elements in the Hamiltonian block matrices. The dipole-dipole contribution is fundamental to ASIs and is detailed below. In the same manner, it is possible to extend Geenice to include other field contributions, such as magnetocrystalline anisotropy, Dzyaloshinskii-Moriya interaction, and RKKY exchange. In the following subsections, we express these field contributions in the form given in Eq. 10. All fields are defined in the Cartesian coordinate system and rotated to the local magnetization coordinates described in section II.1. ### External Field A uniform external field, \(\mathbf{H}_{0}\), leads to the energy \[E_{0}=-V\mu_{0}M_{s}(R(\theta_{m},\varphi_{m})\cdot\mathbf{H}_{0})^{T}\cdot \mathbf{m}. \tag{12}\] The only quadratic term in \(a\) in Eq. 12 is parallel to \(\hat{\mathbf{e}}_{3}\). Therefore, we can express the frequency contribution due to an external field as \[\Omega_{0}=\gamma\mu_{0}|a|^{2}(R(\theta_{m},\varphi_{m})\cdot\mathbf{H}_{0})^ {T}\cdot\hat{\mathbf{e}}_{3}, \tag{13}\] with diagonal and off-diagonal blocks \[\underline{\Omega}^{(1,1)}_{0} = 0 \tag{14a}\] \[\underline{\Omega}^{(1,2)}_{0} = \frac{\gamma\mu_{0}}{2}\left[(R(\theta_{m},\varphi_{m})\cdot \mathbf{H}_{0})^{T}\cdot\hat{\mathbf{e}}_{3}\right]\mathbf{I}, \tag{14b}\] where \(\mathbf{I}\) is the identity matrix. ### Demagnetization Field The demagnetization (demag) field is determined from the shape of the magnetic element, which is an accurate approximation for soft magnets, such as Permalloy. We consider a demag tensor \(\underline{D}\) that we approximate with diagonal demagnetizing factors \(\underline{D}_{1}<\underline{D}_{2}<\underline{D}_{3}\) such that \[\underline{D}=\begin{bmatrix}D_{3}&0&0\\ 0&D_{2}&0\\ 0&0&D_{1}\end{bmatrix}. 
\tag{15}\] This approximation is consistent with the notion of macrospin elements, i.e., similar to the general ellipsoid [33]. This definition follows from the nanomagnet's orientation in the Cartesian coordinate system whereby the easy axis lies along the \(z\)-axis and the hard axis along the \(x\)-axis. The demagnetizing factors can be found in a variety of ways. Analytical expressions are available for oblate nanomagnets [33] and for rectangular prisms [34]. Demag factors can also be obtained by fitting FMR from simulated nanomagnets and empirical expressions can be found for a range of aspect ratios [35] Geenice currently supports the oblate nanomagnet analytical expres sions given by. \[D_{1} = \frac{t\sqrt{1-e^{2}}(K-E)}{le^{2}}, \tag{16a}\] \[D_{2} = \frac{t\left(E-(1-e^{2})K\right)}{le^{2}\sqrt{1-e^{2}}},\] (16b) \[D_{3} = \frac{1-tE}{l\sqrt{1-e^{2}}}, \tag{16c}\] where \(K\) and \(E\) are the complete elliptic integrals of the first and second kind and \(e=\sqrt{1-(w/l)^{2}}\). In the limit of a circular nanomagnet, one must consider \(D_{1}=D_{2}=0\) and \(D_{3}=1\) to avoid a numerical singularity. The anisotropy energy is expressed as \[E_{\text{an}}=V\mu_{0}M_{s}^{2}\vec{m}\cdot\underline{D}\cdot\vec{m}^{T}, \tag{17}\] We rotate the demagnetization tensor using Eq. 8 and the director vector \(\hat{\mathbf{D}}\) to align it with the magnetization direction in the Cartesian reference frame \[\underline{C}=R(\theta_{m}-\theta_{d},\varphi_{m}-\varphi_{d})\cdot \underline{D}\cdot R(\theta_{m}-\theta_{d},\varphi_{m}-\varphi_{d})^{-1}, \tag{18}\] resulting in the nanomagnet-dependent Hamiltonian matrix \[\mathcal{H}_{\text{an}}=VM_{s}^{2}\vec{m}\cdot\underline{C}\cdot\vec{m}^{T}. \tag{19}\] We note that this matrix is a \(3\times 3\) block that is defined for each nanomagnet. Expressing Eq. 19 as a function of the complex amplitudes \(a\), ultimately results in the diagonal Hamiltonian block matrices \[\underline{\Omega}_{\text{an}}^{(1,1)} = \frac{\gamma\mu_{0}M_{s}}{2}\left[C_{11}-C_{22}+i(C_{12}+C_{21}) \right]\mathbf{I}, \tag{20a}\] \[\underline{\Omega}_{\text{an}}^{(1,2)} = \frac{\gamma\mu_{0}M_{s}}{2}\left[C_{11}-C_{22}-2C_{33}\right] \mathbf{I}, \tag{20b}\] where the factors \(C_{ij}\) are the coefficients of the matrix \(\underline{C}\). ### Exchange We include exchange interaction as a minimal model for edge bending in the magnetization of tightly packed nanomagnets [31; 13]. The nanomagnet is split into three regions, and we use an effective exchange energy to parameterize the exchange interaction. The nanomagnet splitting is shown in Fig. 2 for the cases where the edge modes are (a) larger or (b) smaller than the stadium's semi-circular edges. It is assumed that the edge modes are symmetric in volume. We refer to Appendix A for details. Here, we report the final form of the block matrices used in the eigenvalue problem. We consider that the nanomagnet is split in a bulk macrospin with volume \(V_{b}\) and two edge macrospins with identical volumes \(V_{e}\), satisfying \(V=V_{b}+2V_{e}\). The volumes are uniquely determined by the parameter \(\Delta l\) defined as the length from the geometric center of the nanomagnet to the center of the edge volume. The default value \(\Delta l=(2l-w)/4\) is defined when the edge volume is exactly contained at the semi-circular edges of stadium-shaped nanomagnets. However, this parameter can be tuned. 
The exchange energy is therefore defined as a pair-wise interaction between a bulk macrospin \(b\) and an edge macrospin \(e+\) (upper) and \(e-\) (lower) \[E_{\text{ex}}^{(b,e)}=-\frac{J}{2}\mathbf{m}_{b}^{T}\cdot R(\theta_{m_{e}}- \theta_{m_{b}},\varphi_{m_{e}}-\varphi_{m_{b}})\cdot\mathbf{m}_{e}, \tag{21}\] where the exchange factor \(J\) is given by \[J=\frac{2}{\Delta l^{2}}\left(V_{e}+\frac{V_{b}}{2}\right). \tag{22}\] Recognizing that the exchange interaction only occurs for neighboring macrospins within a nanomagnet, we define the \(3\times 3\) exchange energy blocks per nanomagnet \(N\) \[\underline{E}_{ex,N}^{(1,1)} = -\frac{J}{2}\begin{bmatrix}0&E_{1}^{(b,e^{+})}/V_{e}&0\\ E_{1}^{(e^{+},b)}/V_{b}&0&E_{1}^{(b,e^{-})}/V_{b}\\ 0&E_{1}^{(e^{-},b)}/V_{e}&0\end{bmatrix}, \tag{23a}\] \[\underline{E}_{ex,N}^{(1,2)} = -\frac{J}{2}\begin{bmatrix}-\underline{\Sigma}E^{+}&E_{2}^{(b,e^ {+})}/V_{e}&0\\ E_{2}^{(e^{+},b)}/V_{b}&-\underline{\Sigma}E^{+}-\underline{\Sigma}E^{-}&E_{ 2}^{(b,e^{-})}/V_{b}\\ 0&E_{2}^{(e^{-},b)}/V_{b}&-\underline{\Sigma}E^{-}\end{bmatrix}, \tag{23b}\] where we define \(\Sigma E^{e}=E_{3}^{b,e}/V_{e}+E_{3}^{e,b}/V_{b}\), and \[E_{1}^{b,e} = R^{1,1}-R^{2,2}+i(R_{1,2}+R^{2,1}), \tag{24a}\] \[E_{2}^{b,e} = R^{1,1}+R^{2,2}-i(R_{1,2}-R^{2,1}),\] (24b) \[E_{3}^{b,e} = R^{3,3}, \tag{24c}\] With these blocks, the frequency contribution due to ex Figure 2: Symmetric splitting of stadium-shaped nanomagnets, where we discern between a large edge volume (a) and a small edge volume (b), relative to the semi-circular edges. The bulk and edge volumes, \(V_{b}\) and \(V_{e}\), respectively, are uniquely determined by the parameter \(\Delta l\). change is written with the block-diagonal matrices as \[\underline{\Omega}_{ex}^{(1,1)} = \frac{\gamma}{2M_{s}}\begin{bmatrix}\underline{E}_{ex,1}^{(1,1)}&&\\ &\ddots&\\ &&\underline{E}_{ex,N}^{(1,1)}\end{bmatrix} \tag{25a}\] \[\underline{\Omega}_{ex}^{(1,2)} = \frac{\gamma}{2M_{s}}\begin{bmatrix}\underline{E}_{ex,1}^{(1,2)}&&\\ &\ddots&\\ &&\underline{E}_{ex,N}^{(1,2)}\end{bmatrix} \tag{25b}\] ### Dipole field The dipole field is essential to compute the spin-wave band structure for ASIs. We distinguish two contributions to the dipole field: a static contribution originating from the equilibrium magnetization, and a dynamic contribution originating from the long-range dynamics of macrospins. An analogous way to phrase this, is that we consider a perturbation to the dynamical matrix where the zeroth order term is the static stray field from the magnetization and the first-order correction is the dipole-dipole contribution. #### ii.4.1 Static contribution To compute the static contribution of the dipole, we consider the stray field from each nanomagnet in the ASI on a macrospin \(i\). As an approximation, we implemented the analytical expressions of the stray field from a rectangular prism derived by R. Engel-Herbert and T. Hesjedal [36]. The resulting field due to nanomagnet \(n\) is \(\mathbf{H}_{stray,N}^{(i)}\) and is computed as a function of the center position of the nanomagnet \(n\) and the position of the _macrospin \(i\)_. The analytical expressions derived in Ref. [36] are written in Appendix B. Essentially, this computation provides a local field source for macrospin \(i\) so that it contributes to the Hamiltonian matrix as an external magnetic field. A subtle difference between a truly external field and the stray field is that we need to scale the latter to the fractional volume of the macrospin it is acting upon. 
In other words, we impose that the total energy on the target nanomagnet due to nanomagnet \(n\) is conserved \[E=\mu_{0}M_{s}\frac{\sum_{i=1}^{3}V_{i}\mathbf{H}_{stray,n}^{(i)} \cdot\mathbf{m}_{i}}{V}. \tag{26}\] As a consequence, the contribution to the matrix becomes \[\underline{\Omega}_{stray}^{(1,1)} = 0, \tag{27a}\] \[\underline{\Omega}_{stray}^{(1,2)} = \frac{\gamma\mu_{0}}{2}\sum_{\tau_{1},\tau_{2}}^{\text{ASI}} \left[\sum_{i=1}^{3}\frac{V_{i}}{V}R(\theta_{m},\varphi_{m})\cdot\mathbf{H}_{stray,n}^{(i)}\right]^{T}\cdot\mathbf{\varepsilon}_{3} \tag{27b}\] where \(\tau_{1}\) and \(\tau_{2}\) are integers such that \(\tau_{1}=\tau_{2}=0\) denotes the unit cell. The maximum value of \(\tau_{1}\) and \(\tau_{2}\) is capped so that the long-range dipole contributions converge with sufficient numerical accuracy. Génice permits the user either to specify the maximum values of \(\tau_{1}\) and \(\tau_{2}\) or to expand the lattice until numerical accuracy is achieved. #### ii.4.2 Dynamic contribution The dipole field of macrospin \(j\) acting on macrospin \(i\) is calculated using the following expression: \[\mathbf{H}_{\text{d},ij}=\frac{V_{j}M_{s,j}}{4\pi}\left[\frac{3 \mathbf{r}_{i,j}(\mathbf{r}_{i,j}\cdot\mathbf{m}_{j})}{|\mathbf{r}_{i,j}|^{5} }-\frac{\mathbf{m}_{j}}{|\mathbf{r}_{i,j}|^{3}}\right], \tag{28}\] where \(\mathbf{r}_{i,j}\) is the displacement vector between the two macrospins \(i\) and \(j\). We adopt a tight-binding-like approach for periodic structures whereby the dipole field is computed within and between unit cells. Therefore, the long-range terms collapse into a single Hamiltonian matrix. Identifying each macrospin's spatial position within a nanomagnet is critical. Hence, we adopt the convention that the bulk macrospin is located at the nanomagnet's geometric center, denoted by \(\mathbf{X}_{b}\). The positions of the edge macrospins are given by \[\mathbf{X}_{e^{\pm}}=\mathbf{X}_{b}\pm\Delta l\,\mathbf{\hat{D}}. \tag{29}\] We consider the periodic structure established by the translation vectors \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\). In general, the displacement between macrospins \(i\) and \(j\) is given by \[\mathbf{r}_{ij}^{(\tau_{1},\tau_{2})}=\mathbf{X}_{i}-\mathbf{X}_{j}-(\tau_{1} \mathbf{a}_{1}+\tau_{2}\mathbf{a}_{2}). \tag{30}\] Therefore, the total nonlocal dipole field acting on a macrospin \(i\) can be written as \[\mathbf{H}_{\text{d},ij} = \frac{1}{4\pi}\sum_{\tau_{1},\tau_{2}}^{\text{ASI}}\sum_{j}^{\text{U.C.}}V_{j}M_ {s,j}\Big{[}\frac{3\mathbf{r}_{ij}^{(\tau_{1},\tau_{2})}(\mathbf{r}_{ij}^{( \tau_{1},\tau_{2})}\cdot\mathbf{m}_{j}^{(\tau_{1},\tau_{2})})}{|\mathbf{r}_{ ij}^{(\tau_{1},\tau_{2})}|^{5}} \tag{31}\] \[- \frac{\mathbf{m}_{j}^{(\tau_{1},\tau_{2})}}{|\mathbf{r}_{ij}^{(\tau_{1},\tau_{2})}|^{ 3}}\Big{]},\] Because the displacements in Eq. (30) are computed in the natural Cartesian coordinates, we need to rotate into the basis of each macrospin to compute the products as a function of coupled complex amplitudes. For this, we define two rotated position vectors in the reference frame of the macrospins \[\rho_{ij} = R(\theta_{j},\phi_{j})\cdot\mathbf{r_{ij}}, \tag{32a}\] \[\alpha_{ij} = R(\theta_{j}-\theta_{i},\phi_{j}-\phi_{i})\cdot\mathbf{r_{ij}}. 
\tag{32b}\] Therefore, the net field on macrospin \(i\), expressed in the basis of \(i\), is \[\mathbf{H}_{\text{d},ij}^{(i)} = \frac{1}{4\pi}\sum_{\tau_{1},\tau_{2}}^{\text{ASI}}\sum_{j}^{ \text{U.C.}}V_{j}M_{s,j}\Big{[}\frac{\alpha_{ij}(\rho_{ij}\cdot\mathbf{m}_{j})}{ r_{ij}^{5}} \tag{33}\] \[-\frac{R(\theta_{j}-\theta_{i},\phi_{j}-\phi_{i})\cdot\mathbf{m} _{j}}{r_{ij}^{3}}\Big{]}\] The last step to collapse the sums into a single Hamiltonian matrix is to incorporate a tight-binding approach. We invoke Bloch's theorem but we make the assumption that the phase between the complex amplitudes is solely given by the translation vectors between unit cells. This is the main approximation in our model and ensures that the resulting band structure is periodic within the FBZ. If macrospin-to-macrospin phases were to be included, then length scales smaller than the FBZ would be resolved, which is outside the model's scope. Therefore, we apply Bloch's theorem as \[\mathbf{m}_{j}^{(\tau_{1},\tau_{2})}=\mathbf{m}_{j}e^{\Phi}=\mathbf{m}_{j}e^{- i\mathbf{r}_{ij}^{(\tau_{1},\tau_{2})}\cdot\mathbf{k}} \tag{34}\] and compute the dipole energy, \[\mathcal{H}^{(\mathrm{I})}=\mu_{0}M_{s,i}\mathbf{m}_{i}^{T}\cdot\mathbf{H}_{ \mathrm{d},ij}^{(i)}. \tag{35}\] Rewriting the energy as a function of the complex amplitudes \(a\) and rescaling to units of frequency, we obtain the block Hamiltonian matrices \[\underline{\Omega}_{d}^{1,1} = \sum_{\tau_{1},\tau_{2}}^{\mathrm{ASI}}e^{\Phi}\begin{bmatrix}0&C _{12}&0\\ C_{21}&0&C_{23}\\ 0&C_{32}&0\end{bmatrix} \tag{36a}\] \[\underline{\Omega}_{d}^{1,2} = \sum_{\tau_{1},\tau_{2}}^{\mathrm{ASI}}e^{\Phi}\begin{bmatrix}G _{11}&D_{12}&0\\ D_{21}&G_{22}&D_{23}\\ 0&D_{32}&G_{33}\end{bmatrix}, \tag{36b}\] where \[C_{ij} = \frac{3}{|\mathbf{r}_{ij}|^{5}}[\alpha_{1}\rho_{1}+i(\alpha_{1} \rho_{2}+\alpha_{2}\rho_{1})-\alpha_{2}\rho_{2}] \tag{37a}\] \[-\frac{1}{|\mathbf{r}_{ij}|^{3}}[R^{(1,1)}-R^{(2,2)}+i(R^{(1,2)} +R^{(2,1)})],\] \[D_{ij} = \frac{3}{|\mathbf{r}_{ij}|^{5}}[\alpha_{1}\rho_{1}+i(\alpha_{1} \rho_{2}-\alpha_{2}\rho_{1})+\alpha_{2}\rho_{2}]\] (37b) \[-\frac{1}{|\mathbf{r}_{ij}|^{3}}[R^{(1,1)}+R^{(2,2)}-i(R^{(1,2)} -R^{(2,1)})],\] \[G_{ij} = -\frac{3}{|\mathbf{r}_{ij}|^{5}}[\alpha_{3}\rho_{3}]-\frac{2}{| \mathbf{r}_{ij}|^{5}}R^{(3,3)}, \tag{37c}\] and we have used a shorthand notation in which \(\rho_{ij}=(\rho_{1},\rho_{2},\rho_{3})\), \(\alpha_{ij}=(\alpha_{1},\alpha_{2},\alpha_{3})\), the \(R^{(a,b)}\) are the components of the \(3\times 3\) matrix \(R(\theta_{j}-\theta_{i},\phi_{j}-\phi_{i})\), and the sums over the ASI enter through the phase \(\Phi\). ## IV Validation Génice is implemented in MATLAB and can be obtained from [http://doi.org/10.17605/OSF.IO/YUNHD](http://doi.org/10.17605/OSF.IO/YUNHD), together with a script that reproduces the results presented below. ### Local fields The validity of the implementation of local fields can be verified by means of the Kittel equation. For the purposes of the model presented here, it is imperative to verify the field-magnitude- and angle-dependent ferromagnetic resonance, as well as its independence from the coordinate system. We first model a circular thin film, which can be treated as a single macrospin because only the hard axis contributes to the demag tensor, i.e., \(D_{3}=1\) and \(D_{1}=D_{2}=0\). We use a saturation magnetization of \(M_{s}=800\) kA/m. 
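The two nonlocal ingredients introduced above are not needed for the local-field checks that follow, but they are easy to state compactly. The sketch below (illustrative Python, not part of the MATLAB package; function names are placeholders) implements the point-dipole field of Eq. (28) and the tight-binding Bloch phase, taken to depend only on the inter-cell translation \(\tau_{1}\mathbf{a}_{1}+\tau_{2}\mathbf{a}_{2}\), as stated above:

```python
import numpy as np

def dipole_field(r, m_j, V_j, Ms_j):
    """Field (A/m) of macrospin j, with unit moment direction m_j, volume V_j (m^3), and
    saturation magnetization Ms_j (A/m), at displacement r (m) from it; Eq. (28)."""
    rn = np.linalg.norm(r)
    return V_j * Ms_j / (4 * np.pi) * (3 * r * np.dot(r, m_j) / rn**5 - m_j / rn**3)

def bloch_phase(tau1, tau2, a1, a2, k):
    """Phase factor between the complex amplitudes of unit cells separated by
    tau1*a1 + tau2*a2, following the tight-binding assumption of Eq. (34)."""
    return np.exp(-1j * np.dot(tau1 * np.asarray(a1) + tau2 * np.asarray(a2), k))
```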
Kittel's equation as a function of the magnetic field amplitude \(H\) is thus: \[\omega=\gamma\mu_{0}M_{s}\left(\frac{H}{M_{s}}-1\right), \tag{38}\] valid for \(H>M_{s}\). To model this scenario, we use an applied field oriented along the \(z\)-axis with magnitude in the range \(800\) kA/m \(<H<1,200\) kA/m. The magnetization is parallel to the applied field so that \(\theta_{m}=0\) and \(\varphi_{m}=0\). The magnetic film must be rotated so that the hard axis is also oriented along the \(z\)-axis. In other words, \(\theta_{d}=\pi/2\) and \(\varphi_{d}=0\). The numerical results are shown in Fig. 3(a) by blue circles. The solution of the Kittel equation is shown at the top by a solid black line. The agreement is within numerical error (\(<4\times 10^{-15}\)). Validation of the rotation matrix is achieved by computing the same field dependence when the magnetization, external field, and nanomagnet are rotated by arbitrary polar and azimuth angles. For example, selecting a rotation \(\theta=33\) deg and \(\varphi=233\) deg, we recover the correct solution within numerical error (\(<9\times 10^{-15}\)), shown in Fig. 3(a) by gold asterisks. We now set the external field magnitude to \(H=1,000\) kA/m and vary its angle, \(\theta_{0}\). The frequency as a function of angle is obtained from the Kittel equation expressed as \[\omega=\gamma\mu_{0}\sqrt{H_{i}\left(H_{i}+M_{s}\cos^{2}\left(\theta_{0} \right)\right)}, \tag{39}\] where \(H_{i}\) is the internal magnetic field magnitude obtained by solving the magnetostatic equations \[\left(H_{i}+M_{s}\right)\cos\theta_{i} = H\cos\left(\theta_{0}\right), \tag{40a}\] \[H_{i}\sin\left(\theta_{i}\right) = H\sin\left(\theta_{0}\right). \tag{40b}\] The magnetization vector is oriented along the internal magnetic field angle for a saturating field, \(\theta_{m}=\theta_{i}\) and \(\varphi_{m}=0\). We define \(\theta_{0}=0\) as the out-of-plane component so that \(\theta_{d}=\pi/2\) and \(\varphi_{d}=0\) for all cases. The results shown in Fig. 3(b) further validate the implementation of the external field and demag fields. Finally, we vary the size of the magnetic element so that all three demagnetizing factors are computed. We consider three different oblate spheres: "Large" (\(10,000\) nm \(\times 1,000\) nm \(\times\) 5 nm), "Medium" (\(1,000\) nm \(\times\) 100 nm \(\times\) 5 nm), and "Small" (\(100\) nm \(\times\) 10 nm \(\times\) 5 nm). The field is once again considered to be oriented along the \(z\)-axis and its magnitude is varied between \(800\) kA/m \(<H<1,200\) kA/m. The frequency dependence as a function of field is given by Kittel's equation \[\omega=\gamma\mu_{0}M_{s}\sqrt{\left(\frac{H}{M_{s}}+D_{1}-D_{3}\right)\left( \frac{H}{M_{s}}+D_{2}-D_{3}\right)}. \tag{41}\] The results shown in Fig. 3(c) validate the demagnetization field and its nanomagnet-dependent implementation because all three nanomagnets are concurrently simulated. This also shows that Ganice can be used as a tool to quickly compute FMR for an ensemble of uncoupled nanomagnets. ### Nonlocal field: exchange The dynamic contribution of the exchange energy introduces the splitting of the resonant frequencies within a single nanomagnet. As a test case, we set a stadium-shaped nanomagnet with dimensions \(l=280\) nm, \(w=100\) nm, and \(t=10\) nm. The nanomagnet is oriented along the \(x\) axis, \(\theta_{d}=\pi/2\) and \(\varphi_{d}=0\). We first explore the effect of the magnetization's relative angles. 
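These local-field checks are straightforward to reproduce outside Génice. The sketch below (Python with SciPy; it assumes that \(K\) and \(E\) in Eq. (16) are evaluated at the parameter \(e^{2}\), SciPy's convention, and uses an assumed free-electron gyromagnetic ratio) evaluates the demagnetizing factors and the Kittel frequency of Eq. (41) for the three test sizes:

```python
import numpy as np
from scipy.special import ellipe, ellipk

GAMMA = 1.76e11          # gyromagnetic ratio in rad s^-1 T^-1 (assumed free-electron value)
MU0 = 4e-7 * np.pi       # vacuum permeability

def demag_oblate(l, w, t):
    """Demagnetizing factors of Eq. (16); K and E are taken at the parameter e^2 (SciPy's
    convention). For a circular element, return (0, 0, 1) to avoid the e -> 0 singularity."""
    if np.isclose(l, w):
        return 0.0, 0.0, 1.0
    e2 = 1.0 - (w / l) ** 2
    K, E = ellipk(e2), ellipe(e2)
    D1 = t * np.sqrt(1 - e2) * (K - E) / (l * e2)
    D2 = t * (E - (1 - e2) * K) / (l * e2 * np.sqrt(1 - e2))
    D3 = 1.0 - t * E / (l * np.sqrt(1 - e2))
    return D1, D2, D3

def kittel(H, Ms, D1, D2, D3):
    """Field-dependent FMR of Eq. (41) for the field applied along the easy (z) axis."""
    h = H / Ms
    return GAMMA * MU0 * Ms * np.sqrt((h + D1 - D3) * (h + D2 - D3))

Ms = 800e3                                            # A/m, as in the tests above
sizes = {"Large": (10000e-9, 1000e-9, 5e-9),
         "Medium": (1000e-9, 100e-9, 5e-9),
         "Small": (100e-9, 10e-9, 5e-9)}
freqs = {name: kittel(1.0e6, Ms, *demag_oblate(*dims)) / (2 * np.pi)   # Hz at H = 1,000 kA/m
         for name, dims in sizes.items()}
```

Equation (38) is recovered as the \(D_{1}=D_{2}=0\), \(D_{3}=1\) limit of Eq. (41).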
For this, we set the magnetization parallel to the nanomagnet orientation, and we vary the azimuth angle of the magnetization at one extremum, \(\varphi_{1}\). The computed frequencies are shown in Fig. 4, where different colors and dashed curves were used for each branch for clarity. One frequency branch exhibits a sinusoidal variation, consistent with one magnetization being rotated and modifying the exchange energy. The maximum occurs at 45 deg, implying that the maximum exchange contribution occurs when the adjacent magnetization vectors dynamically couple in both \(x\) and \(y\). Indeed, when \(\varphi_{1}=0\) deg or 90 deg, the \(z\) components of the magnetization vectors are coupled, leading to identical energy contributions to the eigenmodes. Note that this is different than the static exchange energy computed in Eq. (A2). We now explore the influence of the bulk and edge volume ratios. From Eq. (23), the exchange energy diverges as either the bulk or edge volume tends to zero. This is expected because of the underlying assumption that the nanomagnet is separated in three macrospins. In other words, such a divergence has no physical origin. The frequencies computed as a function of \(\Delta l\) when all the magnetization vectors are aligned with the nanomagnet are shown in Fig. 4(b). Clearly, the frequencies diverge when the bulk and edge volumes tend to zero towards the left and right extrema of the figure, respectively. The frequencies are relatively constant close to the default distance \(\Delta l=(2l-w)/4=110\) nm. As the size of the nanomagnet increases, the effect of Figure 4: Frequencies computed by adding the exchange Hamiltonian. (a) A single macrospin is azimuthally rotated, resulting in a sizeable variation of the frequency in one band. The minima occur at \(\varphi_{1}=0\) deg and \(\varphi_{1}=90\) deg, consistent with a dynamic coupling mediated only by the magnetization’s \(z\) component. (b) Frequency variation as a function of \(\Delta l\), showing divergence as either the bulk or edge volumes tend to zero. The frequencies in the vicinity of the default value \(\Delta l=(2l-w)/4\) (the transition between the white and gray areas) are approximately constant. Each band is displayed in different colors for clarity. Figure 3: Comparison of numerical computation of ferromagnetic resonance (FMR) and Kittel’s equation to validate the geometry and implementation of the rotation matrix. (a) Field-dependent FMR of a perpendicularly magnetized easy-plane ferromagnet. The blue circles represent calculations for a magnet in the \(x-y\) plane and magnetized along the \(z\) direction. The calculations agree with Kittel’s equation (38). The gold asterisks are obtained when the magnet and the magnetization are oriented along \(\theta_{d}=\theta=33\) deg and \(\varphi_{d}=\varphi=233\) deg. The field is perpendicular to this orientation, and we recover the same field-dependent frequency. (b) Angle dependence FMR of an in-plane magnetic saturated at \(H=1,000\) kA/m. Both numerical computations (blue circles) and Kittel’s equation (39) agree. (c) Validation of the anisotropy field implementation. Field-dependent FMR for magnets of different sizes (colored symbols) and the corresponding Kittel’s equation (41). the exchange interaction in the frequencies must necessarily decrease insofar as the nanomagnet is split into three macrospins. We compute this test scenario by locking \(\Delta l\) to its default value and varying the nanomagnet's size. 
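The expected trend can be estimated in a few lines (a rough scaling argument, not a Génice computation): at fixed aspect ratio the default \(\Delta l\) grows linearly with \(l\), so the per-volume exchange factor of Eq. (22) falls off as \(1/\Delta l^{2}\sim 1/l^{2}\).

```python
import numpy as np

lengths = np.linspace(100e-9, 1000e-9, 10)      # nanomagnet lengths (m), fixed aspect ratio
widths = lengths * (100 / 280)                   # keep l : w = 280 : 100
dl = (2 * lengths - widths) / 4                  # default splitting distance for each size
# J/V from Eq. (22) scales as 1/dl^2 at fixed aspect ratio, so the exchange-induced
# frequency splitting relative to the smallest element decays roughly as (l_min/l)^2
relative_splitting = (dl[0] / dl) ** 2
```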
We maintain the aspect ratio of the nanomagnet so that the length is representative of the nanomagnet's volume. The results are shown by solid and dashed curves in Fig. 5, where the x axis is shown in natural logarithmic scale and the colors represent different branches for clarity. As expected, the frequency splitting diminishes as the nanomagnet's size increases. A final test for the exchange interaction, is to verify that its implementation is independent of the number of nanomagnets. For this, we specify five nanomagnets with lengths 100, 200, 300, 400, and 1000 nm and dimensions consistent with the aspect ratio of the test case considered in this section. The resulting eigenvalue problem requires solving for a matrix of dimension \(30\times 30\). The frequencies are shown by circles in Fig. 5, color-coded according to the branches of the single-nanomagnet calculations. We note that the frequencies are not automatically sorted for each nanomagnet: only the computation of the eigenvectors can return such type of sorting which is not currently computed in our implementation, as discussed in the conclusions. In this case, the frequencies were manually sorted. The results are in agreement with the calculations done for each nanomagnets, validating that the exchange interaction is nonlocal but intrinsic to each nanomagnet, i.e, there is no coupling between nanomagnets. ### Nonlocal field: dipole We now investigate the interaction between two identical nanomagnets of dimensions \(l=280\) nm, \(w=100\) nm, and \(t=10\) nm, as used in the previous section. We focus here on collective excitation, so that \(|\mathbf{k}|=0\) and the phase contributions in Eqs. (36) simplify to 1. The first test ensures that the nonlocal dipole field's strength depends on the distance between the nanomagnets. For this, we consider a varying distance \(d\) along the \(y\) axis ranging from \(100\) nm to \(1,000\) nm. The computed frequencies considering only nonlocal dipole fields are shown in Fig. 6(a). The relative magnetization orientation is parallel for the solid black curves and antiparallel for the dashed red curves. In both cases, the modes are degenerate at long distances. This is a clear indication that nonlocal dipole field does not affect the internal modes of non-interacting (or weakly interacting) nanomagnets. Modes are visibly split under a distance of \(\approx 400\) nm. Red and blue-shifts are observed for the parallel and antiparallel cases, respectively, in agreement with the static dipole energy for each. Including exchange energy,shown in Fig. 6(b), Figure 5: Frequency dependence on the nanomagnet’s size. The aspect ratio is maintained in the calculations so that the length is a representative metric. The solid lines are frequencies obtained from computations on a single nanomagnet. The circles are obtained from a single computation including five non-interacting nanomagnets. Each band is displayed in different colors for clarity. Figure 6: Frequencies as a function of distance between two identical nanomagnets interacting via (a) nonlocal dipole field and (b) both nonlocal dipole field and exchange interaction within each nanomagnet. The magnets are parallel to one another, and we distinguish the relative magnetization being parallel (solid black curves) and anti-parallel (dashed red curves). In all cases, the nonlocal dipole field becomes negligible at large distances and the bands become degenerate, as expected for non-interacting identical nanomagnets. 
Figure 7: Geometrical variations between two interacting nanomagnets. The nanomagnet with director \(\hat{D}_{1}\) is located at a distance \(d=300\) nm along the \(\hat{y}\) direction. The director’s orientation is varied by the polar and azimuth angles \(\theta_{1}\) and \(\varphi_{1}\), respectively. naturally leads to larger split bands because of the additional energy. As expected in all cases, the modes converge towards degenerate values at large distances indicating a negligible interaction mediated by the nonlocal dipole field. We next explore the frequency dependence on the relative orientation between the two nanomagnets. For this, we consider a nanomagnet located at the global origin of the Cartesian coordinate and a second nanomagnet located at a distance of \(d=300\) nm in the \(y\) direction with a varying unit vector \(\hat{\mathbf{D}}_{1}\). We consider both polar and azimuth rotations parametrized by the angles \(\theta_{1}\) and \(\varphi_{1}\), respectively, as shown Fig. 7. Note that these angles are measured relative to the orientation of the fixed nanomagnet. The computed frequencies are shown in Fig. 8. The frequency variation as a function of the polar angle \(\theta_{1}\) is shown in (a). In these computations, we disabled the exchange interaction to focus on the symmetry of the static dipole field. There is a modest change in the frequency that is maximal at \(\theta_{1}=180\) deg. The symmetry is also consistent with the fact that 90 deg and 270 deg are degenerate. The frequency variation as a function of the azimuth angle \(\varphi_{1}\) is shown in (b). There are again clear symmetries consistent with the rotation of the nanomagnet despite the increased number of modes originating from the non-collinear magnetization orientations. Notably, at 90 deg and 270 deg, the rotated nanomagnet is perpendicular to the fixed nanomagnet and the spacing is just 110 nm. Strong variations are observed close to these conditions. As expected, the computed frequencies are periodic for both \(\theta_{1}\) and \(\varphi_{1}\). This concludes the verification of the static dipole field, which follows the qualitative expectations of decay with distance and symmetries due to different types of rotations. ### Band structure #### iv.4.1 Nanomagnet chain A one-dimensional chain of nanomagnets is modeled by a single nanomagnet with dimensions \(l=280\) nm, \(w=100\) nm, and \(t=10\) nm subject to a translation vector \(\mathbf{a}_{1}\) oriented at an azimuth \(\varphi_{a}\) and lattice constant \(|\mathbf{d}|=300\) nm. Magnons with wave vectors \(\mathbf{k}\) oriented at an azimuth \(\varphi_{k}\) are computed, as shown schematically in Fig. 9. The magnon dispersion is computed for cases where we set the wavevector parallel to the \(x\)-axis and rotating the translation vector. In other words, \(\mathbf{a}_{1}=d\hat{\mathbf{a}}_{1}=d[\cos{(\varphi_{a})}\hat{x}+\sin{( \varphi_{a})}\hat{y}]\). The resulting dispersion relations as a function of \(k=2\pi/d(\mathbf{k}\cdot\hat{\mathbf{a}}_{1})\) within the first Brillouin zone for selected azimuths are shown in Fig. 10(a). Three bands are observed because a single nanomagnet composes the unit cell of the chain. The band structures show a pronounced periodic behavior in the FBZ that is symmetric relative to \(\varphi_{a}=90\) deg, stemming from the product \(\mathbf{k}\cdot\mathbf{a}_{1}\). This symmetry validates the implementation of the sums performed in Eqs. (36). It is also shown in the mid panel of Fig. 
10(a) that the band structure perpendicular to the chain orientation at \(\varphi_{a}=90\) deg, is flat. This is because the phase in Eqs. (36) is exactly zero when the translation vector and wavevector are perpendicular. It is also possible to consider another case of \(\varphi_{a}\neq\varphi_{k}\). We set the translation vector along the \(x\) axis, i.e., \(\mathbf{a}_{1}=d\hat{x}\) and we vary \(\varphi_{k}\), such that \(\mathbf{k}=|\mathbf{k}|[\cos{(\varphi_{k})}\hat{x}+\sin{(\varphi_{k})}\hat{y}]\). The dispersion relations for \(k=|\mathbf{k}|\) up to the FBZ are shown in Fig. 10(b). As for Fig. 10(a), the expected symmetries are respected, e.g. the band structure perpendicular to the chain orientation is flat; see \(\varphi_{k}=90\) and 270 deg, for the same reasons outlined above.. Note that here we extend the rotation of \(\varphi_{k}\) to a full cycle. #### iv.4.2 Multiple interacting chains We now calculate the band structure of interacting nanomagnet chains. Each nanomagnet is oriented at \(\phi_{d}=0\) with respect to the x-axis and the array is generated from the single nanomagnet unit cell due to translation vectors \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\) with a lattice constant of \(d=300\) nm. A visualization of this configuration is produced by Gennice to ensure the correct geometry definition, shown in Fig. 11(a). The magnon band structure is computed by an automatic determination of the FBZ and its subsequent Delaunay triangulation to produce an array of wavevectors \(\mathbf{k}=k_{x}\hat{x}+k_{y}\hat{y}\). This feature allows to optimally map the FBZ and produce band surfaces, as shown in Fig. 11(b). Figure 8: Computed frequencies for a pair of interacting nanomagnets when one of the nanomagnets is rotated about (a) the polar angle \(\theta_{1}\) and (b) the azimuth \(\varphi_{1}\). The angles are shown in the schematic Fig. 7. Figure 9: A chain of nanomagnets is modeled by a single nanomagnet (green) upon which the nonlocal dipole field from an infinite chain of nanomagnets (blue) acts. The chain can be defined along an arbitrary in-plane direction by setting the translation vector \(\mathbf{a}_{1}\) oriented at an azimuth \(\varphi_{1}\). The magnon dispersion can be computed for arbitrary in-plane wavevectors \(\mathbf{k}\) given the azimuth \(\varphi_{k}\). As for the 1D nanomagnet chain, the FBZ also shows three bands because the unit cell consists of a single nanomagnet. By examining the band structure depicted in Fig. 11(b), the frequencies calculated along \(k_{x}\hat{x}\) exhibit pronounced variations while it is predominantly flat along \(k_{y}\hat{y}\). This is consistent with our tight-binding approach whereby the dynamic dipole coupling depends on the gap distance between nanomagnets. The irreducible path in the FBZ can be also directly computed in Genice. We observe that the band structure is different when the path is taken towards the \(X\) and \(X^{\prime}\) points. This is because the array is asymmetric, such that the dipole field is different along \(\hat{x}\) and \(\hat{y}\) directions. #### iv.2.3 Square ice We now use Genice to compute the magnon band structure for square ASI, where four nanomagnets are placed around a vertex, each at an angle of 90 degrees to one another and equidistant from the vertex. We maintain the previously used nanomagnet dimensions \(l=280\) nm, \(w=100\) nm, and \(t=10\) nm but now set a center-to-center distance of \(d=430\) nm. 
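The automatic FBZ sampling mentioned above can be illustrated as follows (an independent Python/SciPy construction, not the Génice routine): the reciprocal vectors are built from \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\), wavevectors closer to \(\Gamma\) than to any other reciprocal-lattice point are retained, and the surviving points are Delaunay-triangulated to produce band surfaces.

```python
import numpy as np
from scipy.spatial import Delaunay

def fbz_kpoints(a1, a2, n=40):
    """Sample wavevectors inside the first Brillouin zone of the 2D lattice spanned by a1 and
    a2 (in meters) and Delaunay-triangulate them."""
    A = np.array([a1, a2], dtype=float)
    B = 2 * np.pi * np.linalg.inv(A).T            # rows are b1, b2 (a_i . b_j = 2 pi delta_ij)
    G = np.array([i * B[0] + j * B[1]             # nearby reciprocal-lattice points, G != 0
                  for i in range(-2, 3) for j in range(-2, 3) if (i, j) != (0, 0)])
    kmax = np.abs(B).sum(axis=0).max()            # generous bound covering the zone
    kx, ky = np.meshgrid(np.linspace(-kmax, kmax, n), np.linspace(-kmax, kmax, n))
    k = np.column_stack([kx.ravel(), ky.ravel()])
    dist_to_G = np.linalg.norm(k[:, None, :] - G[None, :, :], axis=2)
    in_fbz = (np.linalg.norm(k, axis=1)[:, None] <= dist_to_G + 1e-12).all(axis=1)
    k = k[in_fbz]
    return k, Delaunay(k)

# square lattice with the d = 300 nm spacing used for the chains above
k_pts, tri = fbz_kpoints([300e-9, 0.0], [0.0, 300e-9])
```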
For simplicity, we investigate the band structure for states where the magnetization is in a homogeneous (onion) state. Edge bending in the magnetization state leads to \(S\) and \(C\) states that are known to modify the band structure [13]. We investigate both the vortex (type-I) and remanent (type-II) configuration. The vortex state has four nanomagnets in the unit cell and is defined by the translation vectors \(\mathbf{a}_{1}=2\hat{x}\) and \(\mathbf{a}_{2}=\hat{x}+\hat{y}\). The remanent state has two nanomagnets in the unit cell and is defined by \(\mathbf{a}_{1}=\hat{x}\) and \(\mathbf{a}_{2}=\hat{y}\). These configurations are shown in Fig 12(a) and (b), respectively. The band structures in the FBZ are shown in Fig. 12(c) and 12(d), for the vortex and remanent states, respectively. By dividing the nanomagnet into three macrospins, the vortex state has twelve bands. The band structure exhibits little dispersion, which may be expected by the fact that the Figure 11: (a) Génice representation of the 2D array of nanomagnet chains, depicting the translation vectors. (b) Resulting band structure where the FBZ is shown under the band structure. The FBZ is directly computed by Génice from the translation vectors and the high-symmetry points are also identified. The band structure is obtained by performing a Delaunay triangulation over the FBZ and evaluating the resulting wavevectors. The color scale represents the frequency and it is also shown in the vertical axis. (b) Irreducible path in the FBZ exhibiting the asymmetry of this geometry as well as the periodicity achieved by the tight-binding method. Figure 10: Dispersion relation for a 1D nanomagnet chain upon (a) setting the wavevector along \(\hat{x}\) and rotating \(\mathbf{a}_{1}\) and (b) setting \(\mathbf{a}_{1}\) along the \(\hat{x}\) direction and rotating the wavevector. The bands exhibit the most changes when the wavevector and translation vector are parallel and are flat when these are orthogonal. This is in agreement with our tight-binding definition of the phase in Eqs. (36). stray fields are largely compensated in a type-I configuration. The two high-frequency bands are bulk modes while the low-frequency bands are edge modes, as shown previously [13]. This band separation is clearly seen in the irreducible path shown in Fig. 12(e), exhibiting a band-gap of about \(\approx 5\) GHz. In the remanent state, there are six bands. In this case, the bands are very close together, with a visible dip at the \(\Gamma\) point. It is also evident that the band structure is skewed, which is a consequence of the likewise skewed static dipole field in this configuration. Figure 12: Génice representation of the square ice geometry for the (a) vortex and (b) remanent states. The respective band structure for each case in shown in (c) and (d) while the irreducible path in the FBZ are shown in (e) and (f). Figure 13: Génice representation of the Kagome ice geometry for (a) identical nanomagnets and (b) anisotropy modified nanomagnets. The respective band structure for each case in shown in (c) and (d) while the irreducible path in the FBZ exhibiting are shown in (e) and (f). The results in this section are in agreement with previous calculations[13] demonstrating the reconfigurability of the magnon band structure for square ices. 
However, the improved dipole field implementation in Génice showcases more subtleties in the band structure as well as asymmetries that could in principle indicate directional magnon propagation, as recently surmised in a combined experimental and micromagnetic study[12]. We also emphasize that we have only explored here the onion state, but it is well-known that the magnetization tilts at the edges of the nanomagnets due to stray fields. This adds an additional degree of freedom for tuning the band structure. #### iv.2.4 Kagome ice We now explore the band structure for Kagome ASI. The Kagome unit cell comprises three nanomagnets with lattice constant \(d=800\) nm which we define as twice the radius of the circle in which the hexagonal structure is embedded. Considering the center of the triad of nanomagnets as the origin, we define the translation vectors \(\mathbf{a}_{1}=\hat{x}\) and \(\mathbf{a}_{2}=(1/2)\hat{x}+(1/\sqrt{3}+1/4)\hat{y}\). We consider two cases: a "regular" Kagome ice where the nanomagnets have identical dimensions \(l=280\) nm, \(w=100\) nm, and \(t=10\) nm; and an anisotropy modified Kagome ice inspired by the work by T. Dion et al.[9], where we use three different widths \(w=100\) nm, \(w=180\) nm, and \(w=60\) nm for the nanomagnets in the unit cell. The geometries are shown in Fig. 13(a) and Fig. 13(b). In both cases, the array is in a degenerate ground state where the unit cell triad has a 2-in/1-out vertex. In the "regular" Kagome ice we find a modest band structure with all nine bands contributing to the band structure, shown in Fig. 13(c). However, the anisotropy modified Kagome ice exhibits only four bands, as shown in Fig. 13(d) with other five softened to exactly zero. In the context of our framework, a zero-frequency band entails a real, evanescent solution. The bands in the irreducible paths in Figs. 13(e) and (f) further confirm that the bands are relatively flat in all cases. An important distinction is the anisotropy modified Kagome ice exhibits band-gaps which is consistent with the different FMR for each nanomagnet, i.e., different demagnetization factors. While this is certainly not an in-depth investigation of the frequency response of anisotropy modified Kagome ices, it showcases the functionality of Génice to compute the band structure of relatively complex geometries with ease. ## V Conclusions We have presented Génice, a computational tool to compute the dispersion relation of arbitrary artificial spin ice geometries. The theoretical framework of Génice relies on the excitation of small-amplitude perturbations and produces the dispersion relation by computing both static and dynamic dipole contributions to the Hamiltonian matrices. Our framework also relies on a tight-binding approach to ensure the periodicity of solutions within the FBZ, which composes the main simplification of the model. Génice can be also used for FMR computations of relatively complicated geometries. For example, Génice has been recently applied for square ices based on trilayers and exhibited remarkable agreement with experiments and micromagnetic simulations of field-dependent FMR[20]. Because both the exchange and dipole interactions can be toggled, Génice can be easily be used to study the FMR of ensembles of interacting or non-interacting nanoparticles and extended to 3D structures. There are three main limitations to Génice in its current form. First, the computations are accurate for nanoparticles and nanomagnets because of the assumption of three macrospins. 
Larger nanomagnets possess higher degrees of freedom that will reduce the relative energy contributions. Therefore, Génice is likely to overestimate the frequency split when nanomagnets are brought very close together. A possible solution to this issue is to further split the nanomagnets into more macrospins, with the caveat that the number of macrospins should be kept to a minimum to maintain a computational advantage over micromagnetic simulations. Another way to solve this issue is to compute the energy of the system to actively modify the magnetization's edge bending due to stray fields, as recently shown in Ref.[37]. Second, the wavefunctions are not currently computed. As is well known, the linearization of the Hamiltonian leads to wavefunctions with large errors. This limitation will be resolved in future work. Third, a 3D band structure is not currently supported. However, the basic framework is written and a generalization in 3D will compose a simple expansion of the dipole phases in the tight-binding approximation, a method that is well-known in solid-state physics. ###### Acknowledgements. This material is based upon work supported by the National Science Foundation under Grant No. 2205796. AR and EI acknowledge support from the UCCS Committee on Research and Creative Works (CRCW). VM and EI acknowledge the Department of Physics and Energy Science at UCCS for the use of their facilities and equipment. This work was supported by the Royal Academy of Engineering Research Fellowships, awarded to JCG. JCG was supported by EPSRC grant EP/X015661/1. Work by OGH at Argonne was supported by the US Department of Energy, Basic Energy Sciences Division of Materials Sciences and Engineering. ## Data Availability Statement The data that support the findings of this study are openly available in Open Science Framework (OSF) at [http://doi.org/10.17605/OSF.IO/YUNHD](http://doi.org/10.17605/OSF.IO/YUNHD). ## Appendix A Derivation of exchange energy To compute the exchange energy, we use a simple quasi-one-dimensional spin chain model to estimate the energy along the chain and relate it to the nanomagnet's regions and their volume. Consider a chain of length \(l\) where the magnetization vector is linearly rotated, so that \[\mathbf{m}=\cos\left(k_{0}x\right)\mathbf{\hat{x}}+\sin\left(k_{0}x\right) \mathbf{\hat{y}}. \tag{10}\] It can be shown that the exchange energy is given by \(E_{\mathrm{ex}}=AVk_{0}^{2}+E_{0}\), where \(A\) is the exchange constant in units of pJ/m, \(V\) is the volume of the quasi-1D chain, and \(E_{0}\) is a constant of integration. We consider now a nanomagnet of length \(l\), width \(w\), and thickness \(d\), split in three unequal pieces with boundaries at \(l_{1}\) and \(l_{2}\) so that their volumes are \(V_{1}=wtl_{1}\), \(V_{2}=wt(l_{2}-l_{1})=wt\Delta l_{1,2}\) and \(V_{3}=wt(l_{3}-l_{2})=wt\Delta l_{2,3}\). This scenario is schematically shown in Fig. 14. The total exchange energy is \[E_{\mathrm{ex}}=J_{1}\left(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\right)+J_{2} \left(\mathbf{m}_{2}\cdot\mathbf{m}_{3}\right), \tag{11}\] where the magnetization vectors are taken in the geometric center of each piece. This leads to \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}=\cos\left(k_{0}\Delta l_{1,2}\right)\) and \(\mathbf{m}_{2}\cdot\mathbf{m}_{3}=\cos\left(k_{0}\Delta l_{2,3}\right)\). Expanding the cosine to first order in Eq. 
(11) and equating to the continuum solution, we obtain \[E_{\mathrm{ex}}=\left(J_{1}+J_{2}\right)+\left(J_{1}\frac{\Delta l_{1,2}^{2}} {2}+J_{2}\frac{\Delta l_{2,3}^{2}}{2}\right)k_{0}^{2}=E_{0}+AVk_{0}^{2}. \tag{12}\] Given that the exchange constant is uniform in the nanomagnet, we can set \(J_{1}=C_{1,2}A\) and \(J_{2}=C_{2,3}A\). From geometry, it can be shown that \[C_{1,2} = \frac{2}{\Delta l_{1,2}^{2}}\left(V_{1}+\frac{V_{2}}{2}\right), \tag{13a}\] \[C_{2,3} = \frac{2}{\Delta l_{2,3}^{2}}\left(V_{3}+\frac{V_{2}}{2}\right). \tag{13b}\] In the case of stadium-shaped nanomagnets, one can consider a symmetric splitting so that \(V_{1}=V_{3}=V_{e}\) and \(V_{2}=V_{b}\), leading to the expression shown in Eq. (22). The edge volume \(V_{e}\) and the bulk volume \(V_{b}\) can be computed as a function of \(\Delta l\). We have two cases. #### a.0.1 Case \(l/4<\Delta l<(2l-w)/4\) This corresponds to the situation where an edge mode occupies more than the half-circle in the stadium's edge at the expense of the bulk mode. The edge and bulk volumes are \[V_{e} = \frac{1}{2}\left[wtl-w^{2}t\left(1-\frac{\pi}{4}\right)-V_{b} \right], \tag{14a}\] \[V_{b} = (4\Delta l-l)wt. \tag{14b}\] #### a.0.2 Case \((2l-w)/4\leq\Delta l<l/2\) This corresponds to the situation where an edge mode is confined to the half-circle in the stadium's edge. Computing the cone angle \[\theta=2\arccos\left(1-\frac{2l-4\Delta l}{w}\right), \tag{15}\] the edge and bulk volumes are \[V_{e} = \frac{\theta-\sin\left(\theta\right)}{8}w^{2}t, \tag{16a}\] \[V_{b} = wtl-w^{2}t\left(1-\frac{\pi}{4}\right)-2V_{e}. \tag{16b}\] The edge and bulk volumes as a function of \(\Delta l\) are shown in Fig. 15. The limiting case \(\Delta l=(2l-w)/4\) is considered to be the default. Figure 14: Toy model for a quasi-1D spin chain split into unequal pieces to estimate the exchange energy. Figure 15: Fractional ratio between edge and bulk volumes. The default is considered at the edge of the gray area in the limiting case \(\Delta l=(2l-w)/4\). ## Appendix B Stray field of a rectangular prism In Ref. [36], the authors considered a rectangular prism with its geometric center at the origin of the Cartesian reference frame and sides \(2x_{b}>2y_{b}>2z_{b}\). The resulting expressions for the stray field are: \[H_{x}(x,y,z) = \frac{M_{s}}{4\pi}\sum_{k,l,m=1}^{2}(-1)^{k+l+m}\text{ln}\left\{z+( -1)^{m}z_{b}+\sqrt{L(k,l,m)}\right\}, \tag{30a}\] \[H_{y}(x,y,z) = -\frac{M_{s}}{4\pi}\sum_{k,l,m=1}^{2}(-1)^{k+l+m}\frac{\left[y+(-1 )^{l}y_{b}\right]\left[x+(-1)^{k}x_{b}\right]}{\left|y+(-1)^{l}y_{b}\right| \left|x+(-1)^{k}x_{b}\right|}\times\arctan\left\{\frac{\left|x+(-1)^{k}x_{b} \right|\left[z+(-1)^{m}z_{b}\right]}{\left|y+(-1)^{l}y_{b}\right|L(k,l,m)}\right\},\] (30b) \[H_{z}(x,y,z) = \frac{M_{s}}{4\pi}\sum_{k,l,m=1}^{2}(-1)^{k+l+m}\text{ln}\left\{x +(-1)^{k}x_{b}+\sqrt{L(k,l,m)}\right\}, \tag{30c}\] where \[L(k,l,m)=\left[x+(-1)^{k}x_{b}\right]^{2}+\left[y+(-1)^{l}y_{b}\right]^{2}+[ z+(-1)^{m}z_{b}]^{2} \tag{31}\] We note that the assumed orientation of the rectangular prism in Ref. [36] is different from that assumed in Génice. For this reason, we rotate the expressions of Eq. (30) such that the easy axis of the rectangular prism aligns with the \(z\) axis. In Fig. 16 we show the calculated stray field from rectangular prisms with different director vectors. In all cases, it is seen by inspection that the stray field is computed correctly.
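For reference, the stray-field expressions above can be evaluated directly; a minimal Python sketch follows. The prism is taken to be uniformly magnetized along \(+y\), the axis that carries the arctangent term in Eq. (30), and the arctangent argument is evaluated with \(\sqrt{L}\) in the denominator, following the form in Ref. [36]; these choices, the function name, and the example point are assumptions made for illustration.

```python
import numpy as np

def prism_stray_field(x, y, z, xb, yb, zb, Ms):
    """Stray field (A/m) at (x, y, z) from a rectangular prism with half-sides (xb, yb, zb)
    centered at the origin, evaluated from Eq. (30) after Ref. [36]. The prism is assumed to be
    uniformly magnetized along +y; evaluate at points away from the face planes."""
    Hx = Hy = Hz = 0.0
    for k in (1, 2):
        for l in (1, 2):
            for m in (1, 2):
                s = (-1) ** (k + l + m)
                X = x + (-1) ** k * xb
                Y = y + (-1) ** l * yb
                Z = z + (-1) ** m * zb
                L = np.sqrt(X**2 + Y**2 + Z**2)
                Hx += s * np.log(Z + L)
                Hy -= s * np.sign(X * Y) * np.arctan(abs(X) * Z / (abs(Y) * L))
                Hz += s * np.log(X + L)
    return Ms / (4 * np.pi) * np.array([Hx, Hy, Hz])

# field 200 nm above the center of a 280 nm x 100 nm x 10 nm prism (half-sides given)
H = prism_stray_field(0.0, 0.0, 200e-9, 140e-9, 50e-9, 5e-9, 800e3)
```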
2305.19573
Reliability of energy landscape analysis of resting-state functional MRI data
Energy landscape analysis is a data-driven method to analyze multidimensional time series, including functional magnetic resonance imaging (fMRI) data. It has been shown to be a useful characterization of fMRI data in health and disease. It fits an Ising model to the data and captures the dynamics of the data as movement of a noisy ball constrained on the energy landscape derived from the estimated Ising model. In the present study, we examine test-retest reliability of the energy landscape analysis. To this end, we construct a permutation test that assesses whether or not indices characterizing the energy landscape are more consistent across different sets of scanning sessions from the same participant (i.e., within-participant reliability) than across different sets of sessions from different participants (i.e., between-participant reliability). We show that the energy landscape analysis has significantly higher within-participant than between-participant test-retest reliability with respect to four commonly used indices. We also show that a variational Bayesian method, which enables us to estimate energy landscapes tailored to each participant, displays comparable test-retest reliability to that using the conventional likelihood maximization method. The proposed methodology paves the way to perform individual-level energy landscape analysis for given data sets with a statistically controlled reliability.
Pitambar Khanra, Johan Nakuci, Sarah Muldoon, Takamitsu Watanabe, Naoki Masuda
2023-05-31T05:49:56Z
http://arxiv.org/abs/2305.19573v2
# Reliability of energy landscape analysis of resting-state functional MRI data ###### Abstract Energy landscape analysis is a data-driven method to analyze multidimensional time series, including functional magnetic resonance imaging (fMRI) data. It has been shown to be a useful characterization of fMRI data in health and disease. It fits an Ising model to the data and captures the dynamics of the data as movement of a noisy ball constrained on the energy landscape derived from the estimated Ising model. In the present study, we examine test-retest reliability of the energy landscape analysis. To this end, we construct a permutation test that assesses whether or not indices characterizing the energy landscape are more consistent across different sets of scanning sessions from the same participant (i.e., within-participant reliability) than across different sets of sessions from different participants (i.e., between-participant reliability). We show that the energy landscape analysis has significantly higher within-participant than between-participant test-retest reliability with respect to four commonly used indices. We also show that a variational Bayesian method, which enables us to estimate energy landscapes tailored to each participant, displays comparable test-retest reliability to that using the conventional likelihood maximization method. The proposed methodology paves the way to perform individual-level energy landscape analysis for given data sets with a statistically controlled reliability. keywords: Maximum entropy model, Ising model, functional magnetic resonance imaging, Bayesian approximation, Permutation test, Fingerprinting ## 1 Introduction Brain activity is dynamic and nonlinear in nature. Such nonlinear brain dynamics are considered to underly many functions of the brain such as cognition, action, and learning [1; 2; 3], and mathematical modeling is widely accepted as a useful tool for simulating such brain dynamics on different scales [1; 2; 3; 4; 5; 6; 7]. There are also many methods for analyzing empirical data of neural dynamics, including dynamic causal modeling [8; 9], functional network analysis [10; 11], its dynamic variants [12; 13; 14], and hidden Markov models [15; 16; 17]. Population-level inferences are a common practice for analyzing brain activity in empirical data. However, both the structure and dynamics of the brain vary even among healthy individuals, let alone among individuals belonging to a disease group due to the heterogeneity of the disease. Therefore, although population-level inferences increase the data size and often help us to reach statistically significant observations, they may yield inaccurate results and loss of information when the observed data are individual-specific. To avoid population-level inferences, it is necessary to establish the reliability of individual-level inferences of collective brain dynamics. Finn et al. examined the role of individual variability in functional networks measured by functional magnetic resonance imaging (fMRI) and its ability to act as a fingerprint to identify individuals [18] (see [19; 20] for earlier studies). In order for individual fingerprinting to be successful, the test-retest reliability of the functional network must be higher across sessions obtained from the same individual (i.e., within-participant reliability) than across sessions obtained from different individuals (i.e., between-participant reliability). 
Indeed, it was found that within-participant reliability was robust and that both resting-state and task fMRI from different sessions of the same individual could be used to perform fingerprinting [21]. Other studies also confirmed the ability of functional networks from fMRI data as fingerprints of individuals, including the development of different methods to quantify and improve fingerprinting [22; 23; 24; 25; 26]. The ability of functional connectivity to act as individual fingerprints has also been confirmed with electroencephalogram (EEG) [27] and magnetoencephalogram (MEG) data [28; 29]. Functional networks or its dynamic variants are not the only tools for analyzing brain dynamics or fingerprinting individuals. One way to analyze fMRI or other multidimensional time series data from the brain is to infer dynamics of discrete states. Each state may correspond to a particular functional network [30; 31; 13; 32; 15; 33] or a spatial activation pattern [17; 34; 35], and the transition from one state to another may correspond to a regime shift in the brain. Energy landscape analysis is a method to characterize brain dynamics as a movement of a stochastic ball constrained on an energy landscape inferred from the data [36; 37; 38]. Quantifications of the estimated energy landscapes such as the height of the barrier between two local minima of the energy allow intuitive interpretations; a local minimum of the energy is a particular spatial activity pattern and defines a discrete brain state. A high barrier between two local minima implies that it is difficult for the brain dynamics to transit between the two local minima. Indices from energy landscape analysis have been shown to be associated with behavior of healthy individuals in a test of bistable visual perception task [37; 39], executive function [40], fluid intelligence [41], healthy aging [42], autism [43], Alzheimer disease [44], schizophrenia [45; 46], attention deficit hyperactivity disorder [47], and epilepsy [48]. These successful applications of energy landscape analysis are likely to owe to advantages of the method compared to other related methods such as functional network analysis and hidden Markov models. For example, with energy landscape analysis, one can borrow concepts and computational tools from statistical physics of spin systems to quantify the ease of state transition by the energy barrier [38] and complexity of the dynamics by different phases (e.g., spin-glass phase) and susceptibility indices [41]. In addition, each network state is by definition a binary activity pattern among a pre-specified set of regions of interest (ROIs) and therefore relatively easy to interpret. Despite its expanding applications, the validity of the energy landscape analysis has not been extensively studied except that one can measure the accuracy of fit of the model to the given data [38; 49; 50; 51; 52]. A high accuracy of fit does not imply that the estimated energy landscape is a reliable fingerprint for individuals. In fact, if fMRI data are nonstationary, an energy landscape estimated for the same individual in two time windows may be substantially different from each other, whereas the accuracy of fit may be high in both time windows. Furthermore, the original energy landscape analysis method requires pooling of fMRI data from different individuals unless the number of regions of interest (ROIs) to be used is relatively small (e.g., 7) or the scanning session is extremely long. 
This is because the method is relatively data hungry [38]. The concept of individual fingerprinting is unclear when pooling of data is necessary. In the present study, we assess potential utility of energy landscape analysis in individual fingerprinting by investigating its test-retest reliability. Specifically, we ask how much features of the estimated energy landscapes are reproducible across different sessions from the same individual as opposed to across sessions belonging to different sets of individuals. We hypothesize that test-retest reliability is higher between sessions for the same individual than between sessions for different individuals. Code for computing energy landscapes with the conventional and Bayesian methods is available on Github [53]. ## 2 Methods ### Midnight Scan Club data We primarily use the fMRI data in the Midnight Scan Club (MSC) data set [22]. MSC data set contains five hours of resting-state fMRI data in total recorded from each of the 10 healthy human adults across 10 consecutive nights. A resting-state fMRI scanning section lasted for 30 minutes and yielded 818 volumes. Imaging was performed with a Siemens TRIO 3T MRI scanner using an echo planar imaging (EPI) sequence (TR \(=2.2\) s, TE \(=27\) ms, flip angle \(=90^{\circ}\), voxel size \(=4\) mm \(\times\) 4 mm \(\times\) 4 mm, 36 slices). The original paper reported that the eighth participant (i.e., MSC08) fell asleep, showed frequent and prolonged eye closures, and had systematically large head motion, resulting in much less reliable data than those obtained from the other participants [22]. We also noticed that the accuracy of fitting the energy landscape, which we will explain in section 2.4, fluctuated considerably across the different sessions for the tenth participant (i.e., MSC10), suggesting unstable quality of the MSC10's data across sessions. Therefore, we excluded MSC08 and MSC10 from the analysis. We used SPM12 ([http://www.fil.ion.ucl.ac.uk/spm](http://www.fil.ion.ucl.ac.uk/spm)) to pre-process the resting-state fMRI data as follows: we first conducted realignment, unwraping, slice-timing correction, and normalization to a standard template (ICBM 152); then, we performed regression analyses to remove the effects of head motion, white matter signals, and cerebrospinal fluid signals; finally, we conducted band-pass temporal filtering (0.01-0.1 Hz). We determined the ROIs of the whole-brain network using an atlas with 264 spherical ROIs whose coordinates were set in a previous study [54]. We then removed 50 ROIs labelled 'uncertain' or'subcortical', which left us with 214 ROIs. The 214 ROIs were labeled either of the nine functionally different brain networks, i.e., auditory network, dorsal attention network (DAN), ventral attention network (VAN), cingulo-opercular network (CON), default mode network (DMN), fronto-parietal network (FPN), salience network (SAN), somatosensory and motor network (SMN), or visual network. We merged the DAN, VAN, and CON into an attention network (ATN) to reduce the number of observables from nine to seven, as we did in our previous study [43]. This is due to the relatively short length of the data and the fact that energy landscape analysis requires sufficiently long data sets if working with 9 observables. In fact, the DAN, VAN, and CON are considered to be responsible for similar attention-related cognitive activity [54], justifying the merge of the three systems into the ATN. 
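To make the construction of the seven observables concrete, the following sketch (illustrative Python; the network label strings are placeholders, not the atlas files used in the study) merges the DAN, VAN, and CON labels into the ATN and averages the ROI signals within each network:

```python
import numpy as np

def to_network_signals(x, labels):
    """Average ROI time series into network time series. x has shape (T, n_roi); labels is a
    list with one network name per ROI (e.g., "DAN", "VAN", "CON", "DMN", ...). DAN, VAN, and
    CON are merged into a single ATN, yielding the N = 7 observables of the whole-brain network."""
    merged = ["ATN" if lab in ("DAN", "VAN", "CON") else lab for lab in labels]
    networks = sorted(set(merged))   # 7 names: ATN, auditory, DMN, FPN, SAN, SMN, visual
    cols = [x[:, [i for i, lab in enumerate(merged) if lab == net]].mean(axis=1)
            for net in networks]
    return np.column_stack(cols), networks
```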
We call the obtained \(N=7\) dimensional time series of the fMRI signal the whole-brain network. We calculated the fMRI signal for each of the seven networks (i.e., ATN, auditory network, DMN, FPN, SAN, SMN, and visual network) by averaging the fMRI signal over the volumes in the sphere of radius 4 mm in the ROI and over all ROIs belonging to the network. In addition to the whole-brain network, we used a separate 30-ROI coordinate system [55] and determined the multi-ROI DMN and CON. We used a different parcellation system for the DMN and CON than the 264-ROI system used for the whole-brain network. It is because the former (i.e., 30-ROI) coordinate system provides much fewer ROIs for the DMN and CON than the 264-ROI system does, which is convenient for energy landscape analysis. The original study identified 12 and 7 ROIs for the DMN and CON, respectively [55]. To reduce the dimension of the DMN, we averaged over each pair of the symmetrically located right- and left-hemisphere ROIs in the DMN into one observable. The symmetrized DMN, which we simply call the DMN, has eight ROIs because four ROIs (i.e., amPFC, vmPFC, pCC, and retro splen) in the original DMN are almost on the midline and therefore have not undergone the averaging between the right- and left-hemisphere ROIs [42]. For the CON, we used the original seven ROIs as the observables. Note that the whole-brain network contains the DMN and CON as single observables, whereas the DMN and CON we are introducing here are themselves systems containing \(N=8\) and \(N=7\) observables, respectively. We denote the fMRI signal for the \(i\)th ROI at time \(t\) by \(x_{i}^{t}\)\((i=1,\ldots,N;t=1,\ldots,t_{\max})\), where \(N\) is the number of ROIs, and \(t_{\max}\) is the number of time points. We then removed the global signals and transformed the signals into their \(z\)-values using \(z_{i}^{t}=(x_{i}^{t}-m^{t})/s^{t}\), where \(m^{t}\) and \(s^{t}\) represent the mean and standard deviation, respectively, of \(x_{i}^{t}\) over the \(N\) ROIs at time \(t\); \(m^{t}\) is the global signal [56]. The global signal in resting-state functional MRI data is considered to be dominated by physiological noise mainly originating from the respiratory, scanner-related, and motion-related artifacts. Global signal removal improves various quality-control metrics, enhances the anatomical specificity of functional-connectivity patterns, and can increase the behavioral variance [57, 58]. The same or similar global signal removal was carried out in previous energy landscape studies [41, 42]. ### Human Connectome Project data For validation, we also analyzed fMRI data that were recorded from healthy human participants and shared as the S1200 data in the Human Connectome Project (HCP) [59]. In the data set, 1200 adults between \(22\)-\(35\) years old underwent four sessions of 15-min EPI sequence with a 3T Siemens Connectome-Skyra (\(\text{TR}=0.72\) s, \(\text{TE}=33.1\) ms, \(72\) slices, \(2.0\) mm isotropic, field of view (FOV) \(=208\times 180\) mm) and a T1-weighted sequence (\(\text{TR}=2.4\) s, \(\text{TE}=2.14\) ms, \(0.7\) mm isotropic, FOV \(=224\times 224\) mm). Here, we limited our analysis to those included in the 100 unrelated participant subset released by the HCP. We confirmed that all these 100 participants were among the subset of participants who completed diffusion weighted MRI as well as two resting-state fMRI scans. 
The resting-state fMRI data of each participant are composed of two sessions, and each session is broken down into a Left-Right (LR) and Right-Left (RL) phases. We used data from participants with at least 1150 volumes in each of the four sessions after removing volumes with motion artifacts, which left us with 87 participants. For the 87 participants, we first removed the volumes with motion artifacts. Then, we used the last 1150 volumes in each session to remove possible effects of transient. We used independent component analysis (ICA) to remove nuisance and motion signals [60]. Furthermore, any volumes with frame displacement greater than 0.2 mm [61] were excised [62] because the ICA-FIX pipeline has been found not to fully remove motion-related artifacts [63, 64]. We standardized each voxel by subtracting the temporal mean. Lastly, global signal regression of the same form as that for the MSC data (see section 2.1) was used for removing remaining noise. In each volume, we averaged the fMRI signal over all the voxels within each ROI of the AAL atlas [65]. Note that this atlas is composed of 116 ROIs. Then, we mapped each cortical ROI to either of the parcellation scheme from the Schaefer-100 atlas [66]. System assignment was based on minimizing the Euclidian distance from the centroid of an ROI in the AAL to the corresponding centroid of an ROI in the Schaefer atlas. We removed 42 ROIs labeled'subcortical' or 'cereb', which left us with 74 ROIs. These 74 ROIs were labelled either of the \(N=7\) functionally different brain networks: control network, DMN, DAN, limbic network, salience/ventral attention network, somatomotor network, and visual network, altogether defining a whole-brain network. ### Fitting of the pairwise maximum entropy model To carry out energy landscape analysis, we fit the pairwise maximum entropy model (MEM), also known as the Ising model, to the preprocessed fMRI data in essentially the same manner as in previous studies [38, 67]. For each session, we first binarized \(z_{i}^{t}\) for each \(i\)th ROI (with \(i\in\{1,\ldots,N\}\)) and time \(t\) (with \(t\in\{1,\ldots,t_{\max}\}\)) using a threshold that we set to the time average of \(z_{i}^{t}\). A computational study showed that binarization did not affect important information contained in originally continuous brain signals [68]. We denote the binarized signal at the \(i\)th ROI and time \(t\) by \(\sigma_{i}^{t}\), which is either \(+1\) or \(-1\) corresponding to whether \(z_{i}^{t}\) is larger or smaller than the threshold, respectively. The activity pattern of the entire network at time \(t\) is described by the \(N\)-dimensional vector \[V^{t}=[\sigma_{1}^{t},\ldots,\sigma_{N}^{t}]\in\{-1,1\}^{N}. \tag{1}\] It should be noted that there are \(2^{N}\) activity patterns in total, enumerated as \(V_{1}\), \(\ldots\), \(V_{2^{N}}\). The empirical mean activity at the \(i\)th ROI is denoted by \[\langle\sigma_{i}\rangle\equiv\frac{1}{t_{\max}}\sum_{t=1}^{t_{\max}}\sigma_{i }^{t}. \tag{2}\] The empirical mean pairwise joint activation for the \(i\)th and \(j\)th ROIs is defined by \[\langle\sigma_{i}\sigma_{j}\rangle\equiv\frac{1}{t_{\max}}\sum_{t=1}^{t_{\max} }\sigma_{i}^{t}\sigma_{j}^{t}. \tag{3}\] The pairwise MEM maximizes the entropy of the distribution of activity patterns under the condition that \(\langle\sigma_{i}\rangle\) and \(\langle\sigma_{i}\sigma_{j}\rangle\) (with \(1\leq i\leq j\leq N\)) are the same between the estimated model and the empirical data. 
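As a minimal illustration (Python; not the released code [53]), the binarization and the empirical moments of Eqs. (1)-(3), starting from the z-scored signals described in the preceding subsections, read:

```python
import numpy as np

def binarize_and_moments(z):
    """Binarize each ROI/network signal at its own time average (sigma = +/-1, Eq. (1)) and
    compute the empirical moments of Eqs. (2)-(3). z has shape (T, N)."""
    sigma = np.where(z > z.mean(axis=0, keepdims=True), 1.0, -1.0)
    mean_act = sigma.mean(axis=0)                       # <sigma_i>, Eq. (2)
    joint_act = sigma.T @ sigma / sigma.shape[0]        # <sigma_i sigma_j>, Eq. (3)
    return sigma, mean_act, joint_act
```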
The resulting probability distribution of activity pattern \(V=[\sigma_{1},\ldots,\sigma_{N}]\), denoted by \(P(V)\), obeys the Boltzmann distribution [69] given by \[P(V)=\frac{e^{-E(V)}}{\sum_{k=1}^{2^{N}}e^{-E(V_{k})}}, \tag{4}\] where \(E(V)\) represents the energy of activity pattern \(V\) given by \[E(V)=-\sum_{i=1}^{N}h_{i}\sigma_{i}-\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}J_{ij} \sigma_{i}\sigma_{j}. \tag{5}\] In Eq. (5), the fitting parameter \(h_{i}\) represents the tendency for the \(i\)th ROI to be active (i.e., \(\sigma_{i}=+1\)), and \(J_{ij}\) quantifies the pairwise interaction between the \(i\)th and \(j\)th ROIs. We denote the mean activity and mean pairwise activity from the estimated model by \(\langle\sigma_{i}\rangle_{\text{m}}\) and \(\langle\sigma_{i}\sigma_{j}\rangle_{\text{m}}\), respectively. By definition, we obtain \[\langle\sigma_{i}\rangle_{\text{m}}=\sum_{k=1}^{2^{N}}\sigma_{i}(V_{k})P(V_{k}) \tag{6}\] and \[\langle\sigma_{i}\sigma_{j}\rangle_{\text{m}}=\sum_{k=1}^{2^{N}}\sigma_{i}(V_ {k})\sigma_{j}(V_{k})P(V_{k}). \tag{7}\] We calculated \(h_{i}\) and \(J_{ij}\) by iteratively adjusting \(\langle\sigma_{i}\rangle_{\text{m}}\) and \(\langle\sigma_{i}\sigma_{j}\rangle_{\text{m}}\) towards the empirically values, i.e., \(\langle\sigma_{i}\rangle\) and \(\langle\sigma_{i}\sigma_{j}\rangle\), respectively, using a gradient ascent algorithm. The iteration scheme is given by \[h_{i}^{\text{new}}=h_{i}^{\text{old}}+\epsilon\log\frac{\langle\sigma_{i} \rangle}{\langle\sigma_{i}\rangle_{\text{m}}} \tag{8}\] and \[J_{ij}^{\text{new}}=J_{ij}^{\text{old}}+\epsilon\log\frac{\langle\sigma_{i} \sigma_{j}\rangle}{\langle\sigma_{i}\sigma_{j}\rangle_{\text{m}}}, \tag{9}\] where superscript new and old represent the values after and before a single updating step, respectively, and \(\epsilon\) is the learning rate. We set \(\epsilon=0.2\). ### Accuracy of fit We evaluated the accuracy of fit of the pairwise MEM to the given fMRI data [38, 42, 50]. The accuracy index is given by \[r_{D}=\frac{D_{1}-D_{2}}{D_{1}}, \tag{10}\] where \[D_{\ell}=\sum_{k=1}^{2^{N}}P_{N}(V_{k})\log_{2}\frac{P_{N}(V_{k})}{P_{\ell}(V_ {k})} \tag{11}\] is the Kullback-Leibler divergence between the probability distribution of the activity pattern in the \(\ell\)th-order \((\ell=1,2)\) MEM, \(P_{\ell}(V)\), and the empirical probability distribution of the activity pattern, denoted by \(P_{N}(V)\). Note that \(P_{2}(V)\) is equivalent to \(P(V)\) given by Eqs. (4) and (5). The first-order, or independent, MEM (i.e., \(\ell=1\)) is Eq. (4) without interaction terms, that is, \(J_{ij}=0\ \forall i,j\) in Eq. (5). We obtain \(r_{D}=1\) when the pairwise MEM perfectly fits the empirical distribution of the activity pattern, and \(r_{D}=0\) when the pairwise MEM does not fit the data any better than the independent MEM. To assess the dependency of \(r_{D}\) on the number of sessions to be concatenated for the estimation of the pairwise MEM, \(m\), the network (i.e., whole-brain, DMN, or CON), and the type of concatenation (i.e., within-participant or between-participant), we examined the multivariate linear regression model given by \[r_{D}=\beta_{0}+\beta_{1}m+\beta_{2}I_{\text{whole}}+\beta_{3}I_{\text{CON}}+ \beta_{4}I_{\text{within}}. \tag{12}\] In Eq. 
(12), \(\beta_{0}\) is the intercept, dummy variable \(I_{\text{whole}}\) is equal to \(1\) for the whole-brain network and \(0\) for the other two networks, \(I_{\text{CON}}\) is equal to \(1\) for the CON and \(0\) for the other two networks, and \(I_{\text{within}}\) is equal to \(1\) for the within-participant comparison and \(0\) for the across-participant comparison. ### Bayesian approximation method The pairwise MEM and the subsequent energy landscape analysis have mostly been restricted to analysis of group-level data. This is because the method in its original form is data-hungry, requiring concatenation of fMRI signals from different individuals. The length of fMRI data, \(t_{\text{max}}\), that is necessary for reliably estimating the pairwise MEM with \(N\) nodes is roughly proportional to the number of states, \(2^{N}\)[38]. To overcome this problem and obtain the energy landscape for each individual, we employed a recently developed variational Bayes approximation method for estimating the pairwise MEM [40, 70], which runs as follows. We denote by \(\mathcal{S}_{n}\) the \(N\)-dimensional time series obtained from the \(n\)th session of fMRI. Different fMRI sessions typically originate from different participants in the same group (e.g., control group). We denote the number of sessions available by \(D\). Let \(\mathcal{S}\) be the concatenated data, i.e., \[\mathcal{S}\equiv\cup_{n=1}^{D}\mathcal{S}_{n}. \tag{13}\] The variational Bayes approximation method estimates a pairwise MEM for each \(\mathcal{S}_{n}\) (with \(n\in\{1,\ldots,D\}\)). This method introduces a prior distribution for the set of session-specific model parameters, \(\mathbf{\theta}_{n}=(h_{1},h_{2},\ldots,h_{N},J_{12},J_{13},\ldots,J_{N-1,N})\in \mathbb{R}^{M}\), where \(n\in\{1,\ldots,D\}\) and \(M=N(N+1)/2\). We give the prior distribution for \[\Theta=[\mathbf{\theta}_{1},\ldots,\mathbf{\theta}_{D}] \tag{14}\] by \[p(\Theta|\mathbf{\eta},\mathbf{\alpha})=\prod_{n=1}^{D}\prod_{M^{\prime}=1}^{M}p(\theta_{nM^{\prime}}|\mathcal{N}(\eta_{M^{\prime}},1/\alpha_{M^{\prime}})), \tag{15}\] where \(p(x|\mathcal{N}(\mu,\sigma^{2}))\) represents the probability density of \(x\) obeying the one-dimensional normal distribution with mean and variance equal to \(\mu\) and \(\sigma^{2}\), respectively. Here, \(\mathbf{\eta}=(\eta_{1},\ldots,\eta_{M})^{\top}\in\mathbb{R}^{M}\) is the prior mean vector, \(\mathbf{\alpha}=(\alpha_{1},\ldots,\alpha_{M})^{\top}\in\mathbb{R}_{+}^{M}\) is the prior precision vector, and \({}^{\top}\) represents the transposition. In Eq. (15), we have assumed that the signals from all the \(D\) sessions are mutually independent. Now, we derive the posterior distribution of \(\Theta\). It is intractable to derive the posterior exactly because the normal distribution is not the conjugate prior for the Boltzmann distribution. Therefore, we use a variational approximation to the posterior [71] using the normal distribution as follows: \[q(\Theta|\mathcal{S},\mathbf{\eta},\mathbf{\alpha})=\prod_{n=1}^{D}\prod_{M^{\prime}=1}^{M}p(\theta_{nM^{\prime}}|\mathcal{N}(\mu_{nM^{\prime}},1/\beta_{nM^{\prime}})). \tag{16}\] We write \(\mathbf{\mu}_{n}=(\mu_{n1},\dots,\mu_{nM})^{\top}\in\mathbb{R}^{M}\) and \(\mathbf{\beta}_{n}=(\beta_{n1},\dots,\beta_{nM})^{\top}\in\mathbb{R}^{M}_{+}\), which are the posterior mean vector and the posterior precision vector for session \(n\in\{1,\dots,D\}\), respectively.
One obtains the variational approximate solution for distribution \(q\) by optimizing the evidence lower bound (ELBO), also called the free energy [40; 70]. By maximizing the free energy with respect to \(q\), we have the posterior mean and precision vectors in terms of the prior mean and precision vectors as follows: \[\mathbf{\mu}_{n} = \mathbf{\eta}+\mathrm{t_{max}}\mathbb{A}_{\mathbf{\eta},\mathbf{\alpha}}^{-1}(\langle\bar{\sigma}_{n}\rangle-\langle\bar{\sigma}\rangle_{\mathbf{\eta}}), \tag{17}\] \[\mathbf{\beta}_{n} = \mathbf{\alpha}+\mathrm{t_{max}}\mathbf{c}_{\mathbf{\eta}}, \tag{18}\] where \[\mathbb{A}_{\mathbf{\eta},\mathbf{\alpha}} = \mathrm{diag}(\mathbf{\alpha})+\mathrm{t_{max}}\mathrm{C_{\mathbf{\eta}}}, \tag{19}\] and \(\mathrm{diag}(\cdot)\) represents the diagonal matrix whose entries are given by the arguments. In Eq. (17), \(\langle\bar{\sigma}_{n}\rangle\equiv(\langle\sigma_{1}\rangle,\dots,\langle\sigma_{N}\rangle,\langle\sigma_{1}\sigma_{2}\rangle,\langle\sigma_{1}\sigma_{3}\rangle,\dots,\langle\sigma_{N-1}\sigma_{N}\rangle)^{\top}\) is the vector composed of the empirical mean activity and empirical pairwise joint activation; \(\langle\bar{\sigma}\rangle_{\mathbf{\eta}}\) is the model mean of \(\bar{\sigma}_{n}\equiv(\sigma_{1},\sigma_{2},\dots,\sigma_{N},\sigma_{1}\sigma_{2},\sigma_{1}\sigma_{3},\dots,\sigma_{N-1}\sigma_{N})^{\top}\) when the model parameters \((h_{1},h_{2},\dots,h_{N},J_{12},J_{13},\dots,J_{N-1,N})\) are given by \(\mathbf{\eta}\); \(\mathrm{C_{\mathbf{\eta}}}\equiv\mathrm{Cov_{\mathbf{\eta}}}(\bar{\mathbf{\sigma}}_{n})\) is the covariance matrix of \(\bar{\sigma}_{n}\) when the model is given by \(\mathbf{\eta}\). In Eq. (18), \(\mathbf{c}_{\mathbf{\eta}}\) is the vector composed of the diagonal elements of \(\mathrm{C_{\mathbf{\eta}}}\). In other words, the \(i\)th element of \(\mathbf{c}_{\mathbf{\eta}}\) is the variance of the \(i\)th element of \(\bar{\sigma}_{n}\) under parameters \(\mathbf{\eta}\). Now, we fix \(q\) and maximize the free energy with respect to \(\mathbf{\eta}\) and \(\mathbf{\alpha}\) to obtain the equations for updating \(\mathbf{\eta}\) and \(\mathbf{\alpha}\) as follows: \[\eta_{M^{\prime}} = \frac{1}{D}\sum_{n=1}^{D}\mu_{nM^{\prime}}, \tag{20}\] \[\alpha_{M^{\prime}} = \left[\frac{1}{D}\sum_{n=1}^{D}\left\{(\mu_{nM^{\prime}}-\eta_{M^{\prime}})^{2}+\frac{1}{\beta_{nM^{\prime}}}\right\}\right]^{-1} \tag{21}\] where \(M^{\prime}\in\{1,\dots,M\}\). Thus, we have updated the posterior distribution \(\theta_{nM^{\prime}}\sim\mathcal{N}(\mu_{nM^{\prime}},1/\beta_{nM^{\prime}})\), \(n\in\{1,\dots,D\}\), \(M^{\prime}\in\{1,\dots,M\}\) using the prior distribution \(\theta_{nM^{\prime}}\sim\mathcal{N}(\eta_{M^{\prime}},1/\alpha_{M^{\prime}})\), and then updated the prior distribution using the new posterior distribution. We summarize the steps of the variational Bayes approximation method as follows: 1. Initialize the hyperparameters by independently drawing each \(\eta_{M^{\prime}}\) (with \(M^{\prime}\in\{1,\dots,M\}\)) from the normal distribution with mean \(0\) and standard deviation \(0.1\). We also set the first \(N\) entries of the prior precision vector \(\mathbf{\alpha}\), corresponding to \(h_{i}\), \(i\in\{1,\dots,N\}\), to \(6\), and set the remaining \(M-N\) entries of \(\mathbf{\alpha}\), corresponding to \(J_{ij}\), \(1\leq i<j\leq N\), to \(30\). 2. Calculate the posterior mean vector and posterior precision vector for each \(n\in\{1,\dots,D\}\) using Eqs. (17) and (18). 3.
Update the prior mean vector, \(\mathbf{\eta}=(\eta_{1},\dots,\eta_{M})^{\top}\), and the prior precision vector, \(\mathbf{\alpha}=(\alpha_{1},\dots,\alpha_{M})^{\top}\), using Eqs. (20) and (21). 4. If \(\left|\frac{\mathrm{ELBO}(\mathrm{iter})}{\mathrm{ELBO}(\mathrm{iter}-1)}-1 \right|<10^{-8}\), we stop the iteration. Otherwise, we return to step 2. Here, \(\mathrm{ELBO}(\mathrm{iter})\) represents the ELBO value after 'iter' iterations of steps 2 and 3. ### Energy landscape and disconnectivity graph Once we had estimated the pairwise MEM, we calculated the energy landscape [36; 37; 38; 42]. The energy landscape is defined as a network with \(2^{N}\) nodes in which each node is an activity pattern. We first constructed a dendrogram called the disconnectivity graph. We show a hypothetical disconnectivity graph in Fig. 1. In the disconnectivity graph, a leaf corresponds to an activity pattern \(V_{k}\) that is a local minimum of the energy. There are four local minima in the disconnectivity graph shown in Fig. 1. The vertical position of the leaf represents the energy value of the local minimum. A low energy value corresponds to a high frequency of appearance through Eq. (4). For example, in Fig. 1, activity pattern \(\gamma_{1}\) is the one that appears with the highest frequency among all the \(2^{N}\) activity patterns. By definition, activity pattern \(V_{k}\) is a local minimum of energy if and only if \(V_{k}\) appears more frequently (thus has a lower energy) than any other activity pattern adjacent to \(V_{k}\). Two activity patterns are defined to be adjacent in the network of activity patterns if and only if they have the opposite activity \(\sigma_{i}\in\{-1,1\}\) for just one \(i\). Note that the network of activity patterns is the hypercube composed of \(2^{N}\) nodes in which each node representing an activity pattern is adjacent to \(N\) other nodes. Figure 1: Schematic of a disconnectivity graph showing the relationships between the activity patterns that are energy local minima. The arrow indicates the height of the energy barrier between local minima \(\gamma_{1}\) and \(\gamma_{2}\) from the viewpoint of \(\gamma_{1}\). To obtain the disconnectivity graph, we first enumerate the local minima. Then, for each pair of local minima \(\gamma\) and \(\gamma^{\prime}\), we determine the smallest energy value \(E_{\rm th}\) that a path connecting \(\gamma\) and \(\gamma^{\prime}\) needs to go through as follows. There may be various paths connecting \(\gamma\) and \(\gamma^{\prime}\). We sequentially remove nodes in the descending order of the energy until there is no path connecting \(\gamma\) and \(\gamma^{\prime}\). The energy of the node that we have removed the last is the \(E_{\rm th}\) value for \(\gamma\) and \(\gamma^{\prime}\). The horizontal dashed line in Fig. 1 indicates the \(E_{\rm th}\) value (\(=0.69\)) for the pair of local minima \(\gamma_{1}\) and \(\gamma_{2}\). The difference between \(E_{\rm th}\) and the energy at the local minimum represents the energy barrier that the dynamics of the brain have to overcome to move from one local minimum to another. In Fig. 1, the energy barrier between \(\gamma_{1}\) and \(\gamma_{2}\) from the viewpoint of \(\gamma_{1}\) is \(0.64\), which is indicated by the double-headed arrow. The disconnectivity graph shows \(E_{\rm th}\) and the energy barrier values for all pairs of the local minima.
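To make this construction explicit, here is a minimal Python sketch (an assumed implementation, not the authors' code) that, for given parameters \(h\) and \(J\), enumerates the activity patterns, finds the local minima of the energy, and computes \(E_{\rm th}\) for a pair of minima by removing nodes in descending order of energy; all function names are illustrative.

```python
# A minimal sketch of the energy landscape construction for estimated MEM
# parameters h (length N) and J (N x N, symmetric, zero diagonal).
import numpy as np
from itertools import product

def all_patterns(N):
    """All 2**N activity patterns, shape (2**N, N)."""
    return np.array(list(product([-1, 1], repeat=N)))

def energies(patterns, h, J):
    """Energy of every pattern, Eq. (5)."""
    return -(patterns @ h) - 0.5 * np.einsum('ki,ij,kj->k', patterns, J, patterns)

def neighbors(k, patterns, index):
    """Indices of the N patterns that differ from pattern k at exactly one ROI."""
    v = patterns[k]
    out = []
    for i in range(len(v)):
        w = v.copy(); w[i] *= -1
        out.append(index[tuple(w)])
    return out

def local_minima(patterns, E, index):
    """Patterns whose energy is lower than that of all their neighbors."""
    return [k for k in range(len(E))
            if all(E[k] < E[n] for n in neighbors(k, patterns, index))]

def connected(k1, k2, patterns, index, alive):
    """Graph search on the hypercube restricted to the 'alive' nodes."""
    stack, seen = [k1], {k1}
    while stack:
        k = stack.pop()
        if k == k2:
            return True
        for n in neighbors(k, patterns, index):
            if alive[n] and n not in seen:
                seen.add(n); stack.append(n)
    return False

def threshold_energy(k1, k2, patterns, E, index):
    """E_th: remove nodes in descending energy until k1 and k2 disconnect;
    the energy of the last removed node is E_th."""
    alive = np.ones(len(E), dtype=bool)
    for k in np.argsort(E)[::-1]:
        if k in (k1, k2):
            continue
        alive[k] = False
        if not connected(k1, k2, patterns, index, alive):
            return E[k]
    return max(E[k1], E[k2])

# Usage: patterns = all_patterns(N); E = energies(patterns, h, J)
#        index = {tuple(v): k for k, v in enumerate(patterns)}
#        minima = local_minima(patterns, E, index)
```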
### Measures of discrepancy To assess within-participant test-retest reliability of energy landscape analysis, we compared two energy landscapes that we separately estimated for two sets of fMRI data, which were from different sessions of the same participant or obtained from different participants. We decided to make within-participant versus between-participant comparisons because successful individual fingerprinting requires sufficiently high within-participant test-retest reliability, and examining this requires a baseline. Higher within-participant than between-participant test-retest reliability implies that the energy landscape analysis provides reliable fingerprints for individuals. To analyze test-retest reliability, we measured the following four indices of the discrepancy between the two energy landscapes. #### 2.7.1 Discrepancy in terms of the interaction strength The energy landscape is primarily a function of \(\{J_{ij}\}_{i,j\in\{1,\ldots,N\}}\) because \(\{h_{1},\ldots,h_{N}\}\) tend to take values close to \(0\) if we set our threshold to binarize \(z_{i}^{t}\) such that the fraction of \(\sigma_{i}=-1\) and that of \(\sigma_{i}=1\) is not heavily imbalanced [41]. Therefore, we measured the discrepancy between two energy landscapes in terms of the estimated \(\{J_{ij}\}\). We define the discrepancy as the mean absolute difference of the pairwise interaction strengths as follows: \[d_{J}=\frac{2}{N(N-1)}\sum_{i=1}^{N}\sum_{j=i+1}^{N}\left|J_{ij}^{(1)}-J_{ij}^{(2)}\right|, \tag{22}\] where \(J^{(1)}=(J_{ij}^{(1)})\) and \(J^{(2)}=(J_{ij}^{(2)})\) denote the pairwise interaction matrices according to the pairwise MEM estimated for the first and second data sets, respectively. #### 2.7.2 Discrepancy in terms of the activity patterns at the local minima of the energy A local minimum of the energy landscape is locally the most frequent activity pattern. We compared the location of the local minima in the two energy landscapes by calculating the Hamming distance between the activity patterns at the local minima from the first energy landscape and those from the second energy landscape as follows. First, we assumed that minor local minima characterized by low energy barriers with other local minima did not play important roles because the brain state would stay near such shallow local minima only briefly. Therefore, we started by removing minor local minima of the energy as follows. We generated \(N\) random binary time series of length \(4t_{\rm max}\) by independently drawing the \(N\times 4t_{\rm max}\) binary numbers, i.e., \(-1\) or \(+1\), with the same probability (i.e., \(0.5\)). The multiplication factor was set at 4 because we mainly analyzed energy landscapes of the empirical fMRI data with \(t_{\rm max}\) volumes that were concatenated over four sessions. Then, we inferred the pairwise MEM for the generated random binary time series and calculated the maximum length of the branch in the disconnectivity graph. A branch corresponds to a local minimum of the energy landscape. We define the branch length for local minimum \(\gamma\) by the smallest value of the energy barrier between \(\gamma\) and another local minimum \(\gamma^{\prime}\) among all local minima \(\gamma^{\prime}(\neq\gamma)\). In the disconnectivity graph shown in Fig. 1, the branch length for \(\gamma_{1}\) is the length of the arrow. We claim that the energy landscape estimated for the random binary time series, including the number and depth of its local minima, does not have functional meaning.
Therefore, in an energy landscape estimated for the empirical data, the local minima whose branch length is comparable with the maximum branch length for the random binary time series are not important. To implement this idea, we generated random binary time series, inferred the energy landscape, computed its maximum branch length, and repeated all these steps \(100\) times. We denote the average and standard deviation of the maximum branch length on the basis of the 100 random binary time series by \(\mu^{\prime}\) and \(\sigma^{\prime}\), respectively. We then identified the local minimum with the shortest branch length in the original disconnectivity graph. We removed that local minimum as being insignificant if its branch was shorter than \(\mu^{\prime}+2\sigma^{\prime}\). If we removed this local minimum, we recomputed the branch length of each local minimum whose branch had merged with the removed branch. Then, if the shortest branch was shorter than \(\mu^{\prime}+2\sigma^{\prime}\), we removed the branch, and we repeated these steps until all the local minima had branches whose length was at least \(\mu^{\prime}+2\sigma^{\prime}\). We refer to the local minima that survive this test as major local minima. We denote the activity patterns at the major local minima of the first energy landscape by \(\tilde{V}_{1}^{(1)}\), \(\ldots\), \(\tilde{V}_{m_{1}}^{(1)}\), where \(m_{1}\) is the number of the major local minima in the first energy landscape. Similarly, we denote the activity patterns at the major local minima of the second energy landscape by \(\tilde{V}_{1}^{(2)}\), \(\ldots\), \(\tilde{V}_{m_{2}}^{(2)}\). To examine similarity between \(\{\tilde{V}_{1}^{(1)},\ldots,\tilde{V}_{m_{1}}^{(1)}\}\) and \(\{\tilde{V}_{1}^{(2)},\ldots,\tilde{V}_{m_{2}}^{(2)}\}\), we need to match the major local minima between the two energy landscapes. To this end, we assume without loss of generality that \(m_{1}\leq m_{2}\) and pair each \(\tilde{V}_{\ell}^{(1)}\) (with \(\ell\in\{1,\ldots,m_{1}\}\)) with a \(\tilde{V}_{\ell^{\prime}}^{(2)}\) (with \(\ell^{\prime}\in\{1,\ldots,m_{2}\}\)) under the condition that different \(\tilde{V}_{\ell}^{(1)}\)'s are not matched to the same \(\tilde{V}_{\ell^{\prime}}^{(2)}\). We call the obtained correspondence between \(\{\tilde{V}_{1}^{(1)},\ldots,\tilde{V}_{m_{1}}^{(1)}\}\) and \(\{\tilde{V}_{1}^{(2)},\ldots,\tilde{V}_{m_{2}}^{(2)}\}\) a matching. Figure 2 describes how to match between the local minima of two energy landscapes. Note that \(m_{2}-m_{1}\) major local minima in the second energy landscape are not matched to any major local minimum in the first energy landscape. We quantify the quality of a matching by \[d_{\text{H}}=\frac{1}{m_{1}}\sum_{\ell=1}^{m_{1}}d_{\text{H}}^{\prime}\left(\tilde{V}_{\ell}^{(1)},\tilde{V}_{\rho(\ell)}^{(2)}\right), \tag{23}\] where \(\tilde{V}_{\rho(\ell)}^{(2)}\) is the activity pattern at the major local minimum paired with \(\tilde{V}_{\ell}^{(1)}\) in the considered matching; \(d_{\text{H}}^{\prime}\) is the Hamming distance between the \(N\)-dimensional binary vectors \(\tilde{V}_{\ell}^{(1)}\) and \(\tilde{V}_{\rho(\ell)}^{(2)}\), i.e., the number of ROIs whose binary activity (i.e., \(\sigma_{i}=-1\) or \(+1\)) is opposite between \(\tilde{V}_{\ell}^{(1)}\) and \(\tilde{V}_{\rho(\ell)}^{(2)}\). We calculate \(d_{\text{H}}\) for all the possible matchings and select the one that minimizes \(d_{\text{H}}\), which we simply refer to as \(d_{\text{H}}\) hereafter.
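As an illustration, the following is a minimal Python sketch (an assumed implementation, not taken from the paper) of this matching step: given the activity patterns at the major local minima of two landscapes as arrays of shape \((m_{1},N)\) and \((m_{2},N)\) with \(m_{1}\leq m_{2}\), it brute-forces all injective assignments and returns the one that minimizes the mean Hamming distance of Eq. (23). Brute force is feasible here because the number of major local minima is small.

```python
# A minimal sketch of the matching that minimizes d_H, Eq. (23).
import numpy as np
from itertools import permutations

def best_matching_dH(minima1, minima2):
    """minima1: (m1, N) array, minima2: (m2, N) array, with m1 <= m2.
    Returns (d_H, rho), where rho[l] is the index in minima2 matched to l."""
    m1 = len(minima1)
    best_d, best_rho = np.inf, None
    # Every ordered choice of m1 distinct minima out of the m2 candidates
    # is one injective matching.
    for rho in permutations(range(len(minima2)), m1):
        d = np.mean([np.sum(minima1[l] != minima2[rho[l]]) for l in range(m1)])
        if d < best_d:
            best_d, best_rho = d, rho
    return best_d, best_rho
```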
A small \(d_{\text{H}}\) value implies that the two energy landscapes are similar in terms of the activity patterns at the local minima of energy. #### 2.7.3 Discrepancy in terms of the activity patterns averaged over the attractive basin Brain dynamics tend to visit local minima of the energy landscape but also fluctuate around it. Therefore, we additionally measured a distance between the two energy landscapes in terms of the activity patterns averaged over the attractive basin of local minima as follows. Consider a major local minimum of the first energy landscape, \(\tilde{V}_{\ell}^{(1)}\). The attractive basin of \(\tilde{V}_{\ell}^{(1)}\) is a set of activity patterns. By definition, \(V\) is in the attractive basin of \(\tilde{V}_{\ell}^{(1)}\) if and only if the gradient-descent walk starting from \(V\) eventually reaches \(\tilde{V}_{\ell}^{(1)}\). The gradient-descent walk on the set of activity patterns is defined by a series of moves from an activity pattern to another such that the move from \(V\) is allowed only when the next activity pattern is the one that attains the smallest energy (i.e., largest probability of appearance) among the neighbors of \(V\). Intuitively, if we release a ball at \(V\), the ball following the gradient moves on the energy landscape until it reaches \(\tilde{V}_{\ell}^{(1)}\) and stops there if there is no dynamical noise. We calculate the average of the activity patterns within the attractive basin of \(\tilde{V}_{\ell}^{(1)}\), which we denote by \(\mathbf{u}_{\ell}^{(1)}\). Note that \(\mathbf{u}_{\ell}^{(1)}\) is an \(N\)-dimensional vector, which we assume to be a column vector, whose \(i\)th entry is the average of \(\sigma_{i}\in\{-1,1\}\) over all the activity patterns in the attractive basin of \(\tilde{V}_{\ell}^{(1)}\). Similarly, denote by \(\mathbf{u}_{\ell^{\prime}}^{(2)}\) the average of the activity patterns in the attractive basin of \(\tilde{V}_{\ell^{\prime}}^{(2)}\) in the second energy landscape. Then, we calculate the cosine distance between \(\mathbf{u}_{\ell}^{(1)}\) and \(\mathbf{u}_{\ell^{\prime}}^{(2)}\) given by \[d_{\text{basin}}^{\prime}\left(\mathbf{u}_{\ell}^{(1)},\mathbf{u}_{ \ell^{\prime}}^{(2)}\right)=1-\frac{\mathbf{u}_{\ell}^{(1)\top}\mathbf{u}_{\ell^{ \prime}}^{(2)}}{\left\|\mathbf{u}_{\ell}^{(1)}\right\|\cdot\left\|\mathbf{u}_{\ell^{ \prime}}^{(2)}\right\|}, \tag{24}\] where \(\|\ \|\) denotes the Euclidean norm of the vector. The \(d_{\text{basin}}^{\prime}\) value ranges between \(0\) and \(2\). A small value of \(d_{\text{basin}}^{\prime}\) indicates a stronger alignment between \(\mathbf{u}_{\ell}^{(1)}\) and \(\mathbf{u}_{\ell^{\prime}}^{(2)}\). For a given matching \(\rho\), we then define \[d_{\text{basin}}=\frac{1}{m_{1}}\sum_{\ell=1}^{m_{1}}d_{\text{basin }}^{\prime}\left(\mathbf{u}_{\ell}^{(1)},\mathbf{u}_{\rho(\ell)}^{(2)}\right), \tag{25}\] which quantifies overall discrepancy between the two energy landscapes in terms of the average activity pattern in the attractive basin of the local minimum. We calculate \(d_{\text{basin}}\) for all the possible matchings between \(\{\tilde{V}_{1}^{(1)},\ldots,\tilde{V}_{m_{1}}^{(1)}\}\) and \(\{\tilde{V}_{1}^{(2)},\ldots,\tilde{V}_{m_{2}}^{(2)}\}\) and adopt the smallest value, which we also refer to as \(d_{\text{basin}}\) for simplicity. In a majority of cases, the best matching determined by the minimization of \(d_{\text{H}}\) and that determined by the minimization of \(d_{\text{basin}}\) are the same. 
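To make the basin computation concrete, here is a minimal Python sketch (an assumed implementation, not the authors' code) that assigns every activity pattern to the attractive basin of a local minimum by a steepest-descent walk and returns the basin-averaged activity patterns entering Eqs. (24) and (25); `patterns` is the \((2^{N},N)\) array of activity patterns and `E` their energies, which can be produced as in the earlier sketch.

```python
# A minimal sketch of basin assignment by steepest descent and of d'_basin.
import numpy as np

def steepest_descent(k, patterns, E, index):
    """Walk from pattern k downhill in energy until a local minimum is reached."""
    N = patterns.shape[1]
    while True:
        v = patterns[k]
        nbrs = []
        for i in range(N):
            w = v.copy(); w[i] *= -1
            nbrs.append(index[tuple(w)])
        best = min(nbrs, key=lambda n: E[n])
        if E[best] >= E[k]:          # no neighbor is lower: k is a local minimum
            return k
        k = best

def basin_averages(patterns, E):
    """Map each local-minimum index to the mean activity pattern of its basin."""
    index = {tuple(v): k for k, v in enumerate(patterns)}
    members = {}
    for k in range(len(patterns)):
        root = steepest_descent(k, patterns, E, index)
        members.setdefault(root, []).append(patterns[k])
    return {root: np.mean(np.array(vs), axis=0) for root, vs in members.items()}

def cosine_distance(u1, u2):
    """d'_basin of Eq. (24)."""
    return 1.0 - (u1 @ u2) / (np.linalg.norm(u1) * np.linalg.norm(u2))
```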
The two matchings are, however, sometimes different from each other. #### 2.7.4 Discrepancy in terms of the branch length As a fourth measure to characterize energy landscapes, we quantified the ease with which the activity pattern switches from one major local minimum to another. We call this measure the normalized branch length. Then, we compared the normalized branch length between two energy landscapes. We compute the normalized branch length as follows. We first calculate the length of the branch corresponding to each major local minimum \(\gamma\) as the difference between the energy value of \(\gamma\) and the smallest energy value at which \(\gamma\) joins the branch of another major local minimum on the disconnectivity graph. The calculated branch length quantifies the difficulty of transitioning from \(\gamma\) to another local minimum. We assume that there are \(m_{1}\) and \(m_{2}\) major local minima from the first and second energy landscapes, respectively. We denote by \(L^{(1)}\) and \(L^{(2)}\) the average of the branch length over the \(m_{1}\) branches in the first energy landscape and over the \(m_{2}\) branches in the second energy landscape, respectively. Then, we define the normalized branch length difference between the two energy landscapes by \[d_{L}=\frac{\left|L^{(1)}-L^{(2)}\right|}{\max(L^{(1)},L^{(2)})}. \tag{26}\] ### Nonparametric statistical analysis We examine whether two energy landscapes estimated from different fMRI data from the same participant are more similar to each other than two energy landscapes estimated for two different groups of participants. We argue that, if the energy landscape analysis is useful, two energy landscapes estimated from two data sets from the same participant should be closer to each other than two energy landscapes estimated from two data sets from different participants. We consider one of the four discrepancy measures, say, \(d_{J}\). We focus on the \(p\)th participant. We first calculate \(d_{J}\) between \(J^{(1)}\) and \(J^{(2)}\), where we estimate \(J^{(1)}\) for the fMRI data concatenated over four sessions that are uniformly randomly selected out of the ten sessions, \(s\in\{1,\ldots,10\}\), and \(J^{(2)}\) from the fMRI data concatenated over another uniformly randomly selected four sessions. We impose that the second set of four sessions does not overlap with the first set. Note that we use eight out of the ten sessions, selected uniformly at random, to calculate one \(d_{J}\) value. We repeat this procedure ten times to obtain ten values of \(d_{J}\) for the \(p\)th participant. By calculating ten values of \(d_{J}\) for each of the eight participants, i.e., \(p\in\{1,\ldots,7,9\}\), we obtain \(8\times 10=80\) values of \(d_{J}\). We denote the average of the 80 values of \(d_{J}\) by \(d_{1}\) (see Fig. 3(a)). Next, we calculate \(d_{J}\), with \(J^{(1)}\) being estimated for the fMRI data concatenated over the \(s\)th sessions of four participants that are uniformly randomly selected out of the eight participants, and \(J^{(2)}\) from the fMRI data concatenated over the \(s\)th sessions of the other four participants. We repeat this procedure ten times to obtain ten values of \(d_{J}\) for the \(s\)th session. By calculating ten values of \(d_{J}\) for each of the ten sessions, i.e., \(s\in\{1,\ldots,10\}\), we obtain \(10\times 10=100\) values of \(d_{J}\). We denote the average of the 100 values of \(d_{J}\) by \(d_{2}\) (see Fig. 3(a)). We define \(\mathrm{ND}\equiv d_{2}/d_{1}\)[72, 73].
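Before turning to the permutation test, the following minimal Python sketch (assumed code with hypothetical helpers) spells out the ND computation; `fit_J(list_of_sessions)` stands for any routine that concatenates the given binarized sessions and returns the estimated interaction matrix, and `sessions[p][s]` denotes the binarized time series of participant \(p\) and session \(s\).

```python
# A minimal sketch of d_J (Eq. (22)) and of the ND statistic.
import numpy as np

def d_J(J1, J2):
    """Mean absolute difference of the upper-triangular interaction strengths."""
    iu = np.triu_indices(J1.shape[0], k=1)
    return np.mean(np.abs(J1[iu] - J2[iu]))

def within_dJ(sessions, p, fit_J, rng):
    """One within-participant d_J value: two disjoint sets of four sessions of p."""
    s = rng.permutation(10)
    return d_J(fit_J([sessions[p][k] for k in s[:4]]),
               fit_J([sessions[p][k] for k in s[4:8]]))

def between_dJ(sessions, s, participants, fit_J, rng):
    """One between-participant d_J value: session s of two disjoint groups of four."""
    p = list(rng.permutation(list(participants)))
    return d_J(fit_J([sessions[k][s] for k in p[:4]]),
               fit_J([sessions[k][s] for k in p[4:8]]))

# d1: mean of 10 within-participant values for each of the 8 participants (80 values);
# d2: mean of 10 between-participant values for each of the 10 sessions (100 values);
# ND = d2 / d1.
```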
If energy landscapes are more similar between different sets of sessions from the same participant (i.e., within-participant comparison) than between those from different participants (i.e., between-participant comparison), the ND value will be larger than \(1\). In this case, we regard the energy landscape analysis as having high within-participant test-retest reliability. In contrast, if the energy landscape from the same participant is not particularly reliable across sessions, the ND will be close to \(1\). To statistically examine whether ND is sufficiently larger than \(1\), we run a nonparametric permutation test, which is an adaptation of the test used in previous studies [72, 73]. The steps of the permutation test based on the ND are as follows. Here we use \(d_{J}\) to explain the steps. See Fig. 3(b) for a schematic. 1. Consider the binarized \(N\)-dimensional fMRI time series data for each of the eight participants and each of the ten sessions. 2. Uniformly randomly permute the 80 participant-session pairs. After the randomization, the fMRI data for the \(s\)th session from the \(p\)th participant is the fMRI data for a uniformly randomly selected session from a uniformly randomly selected participant without replacement. 3. We calculate ND for the randomized data. This step entails concatenating the fMRI data over four random sessions from the same participant \(p\) or over the \(s\)th sessions from four random participants, estimating the energy landscapes for the concatenated data, comparing two energy landscapes to calculate \(d_{J}\), repeating this 80 times to obtain \(d_{1}\) and 100 times to obtain \(d_{2}\), and computing \(\mathrm{ND}=d_{2}/d_{1}\). 4. Repeat steps (2) and (3) over \(c\) random permutations, where \(c\) is a large number. We set \(c=10^{3}\). 5. Calculate the \(p\) value, which is the fraction of the random permutations that yield an ND value larger than that for the original data. 6. If the \(p\) value is sufficiently small, then we reject the null hypothesis that \(d_{1}=d_{2}\) in the original data. In this case, we conclude the significant presence of within-participant test-retest reliability in the energy landscape analysis. In step 3, we calculate \(d_{1}\) and \(d_{2}\) as the within-participant and between-participant averages, respectively. However, for the randomized data, they are statistically the same except that \(d_{1}\) and \(d_{2}\) are averages of \(80\) and \(100\) values of \(d_{J}\), respectively. This is because \(d_{J}\) calculated for both \(d_{1}\) and \(d_{2}\) originates from the comparison of the energy landscape estimated from uniformly randomly selected four out of the 80 sessions and another uniformly randomly selected four sessions without overlapping. Therefore, \(d_{1}\) and \(d_{2}\) have the same mean, and ND is expected to be peaked approximately at 1. The present permutation test thus evaluates whether the reliability of the energy landscape analysis across sessions for the same participant is higher than that across sessions for different participants. ## 3 Results ### Accuracy of fit of the pairwise MEM We extracted \(N\) ROIs for three brain networks, i.e., the whole-brain network (\(N=7\)), DMN (\(N=8\)), and CON (\(N=7\)). For each of them, we estimated the pairwise MEM for the resting-state fMRI signals obtained from healthy adults in the MSC data set. We calculated \(r_{D}\), the accuracy of fit of the pairwise MEM, for each pair of participant and session.
We obtained \(r_{D}=69.12\pm 6.41\%\) (average \(\pm\) standard deviation) for the whole brain network, \(r_{D}=57.97\pm 8.94\%\) for the DMN, and \(r_{D}=77.65\pm 5.41\%\) for the CON (also see Table 1). Because the accuracy of fit is not high enough, as is customarily done, we concatenated the data across participants or across sessions, estimated the pairwise MEM, and calculated \(r_{D}\)[37; 38]. Specifically, we concatenated the fMRI data across \(m\) sessions, where \(m\in\{2,3,4,5\}\). The \(m\) sessions are from the same participant but from \(m\) different sessions, or have the same session ID (i.e., \(s\)) but from \(m\) different participants. We show in Table 1 the average and standard deviation of \(r_{D}\) for the three networks when we concatenated \(m\in\{2,3,4,5\}\) sessions from the same participant. Table 2 shows the \(r_{D}\) values when we concatenated \(m\) sessions from different participants. In both Tables 1 and 2, as expected, \(r_{D}\) increases as \(m\) increases (\(\beta_{1}=3.09\) in Eq. (12); \(p=4.60\times 10^{-9}\)). Furthermore, \(r_{D}\) is larger with the within-participant than across-participant concatenation (\(\beta_{4}=3.51\) in Eq. (12); \(p=6.04\times 10^{-5}\)). The latter result indicates that the energy landscape estimated through the within-participant concatenation of the fMRI data is more reliable than that estimated through the between-participant concatenation in terms of the accuracy of fit of the pairwise MEM. In both Tables 1 and 2, the accuracy for the DMN is substantially lower than that for the whole-brain network (\(\beta_{2}=10.34\) in Eq. (12); \(p=1.63\times 10^{-10}\)) and the CON (\(\beta_{3}=14.31\) in Eq. (12); \(p=5.54\times 10^{-13}\)). This is presumably because the DMN has one more ROI than the whole-brain network and the CON. The accuracy decreases as the number of ROIs increases in general [38]. In the following analyses, we use concatenation over \(m=4\) sessions and examine test-retest reliability of the energy landscape analysis. Figure 3(a) schematically explains the concatenation within each participant and that across participants. With \(m=4\), the accuracy of fit is more than 85% except for the DMN. In general, we are also interested in the test-retest reliability of fMRI data in the case of a relatively low accuracy of fit, which we test with the DMN. A concatenation over more sessions, such as with \(m=5\), would further increase the accuracy of fit (see Tables 1 and 2). Then, however, examining test-retest reliability may be more difficult because one needs to create two energy landscapes, preferably from non-overlapping data, and systematically compare them. In the present study, we use data obtained from eight participants. Therefore, if \(m=5\), one cannot avoid overlapping of the participants if we create two groups of participants for concatenating the fMRI data. Our choice of \(m=4\) balances the accuracy of fit and the tractability of the test-retest reliability analysis. Figure 3: Schematic diagram describing concatenation of the fMRI data across different sessions and calculation of \(d_{1}\) and \(d_{2}\). (a) For the original data. (b) For the randomized data. The inference of the energy landscape is based on the data concatenated across four sessions. The same four cells in the table are used for the concatenation in (a) and (b). However, because of the random permutation, any concatenation in (b) is over four sessions that are selected uniformly at random from the original data. 
Therefore, in (b), \(d_{1}^{\prime}\) and \(d_{1}^{\prime\prime}\), for example, are statistically the same, and the expectation of \(d_{1}\) and that of \(d_{2}\) are the same. ### Reliability in terms of the interaction strength We first examined the test-retest reliability of the energy landscape analysis in terms of the interaction strength parameters \(\{J_{ij}\}\). We concatenated the fMRI data over the first four sessions from the \(p\)th participant and estimated \(\{J_{ij}\}\) for each \(p\in\{1,2,3,4,5,6,7,9\}\). Similarly, for each participant \(p\), we concatenated the data over the next four sessions (i.e., sessions 5 to 8) and estimated \(\{J_{ij}\}\). For the whole-brain network, we show the relationships between \(J_{ij}\) estimated for the first four sessions against that estimated for the next four sessions for the first participant in Fig. 4(a). Each circle represents \(J_{ij}\) for a pair of \(i\) and \(j\). The values of \(\{J_{ij}\}\) are reasonably consistent between the first four sessions and the next four sessions (Pearson correlation coefficient \(=0.850\); discrepancy \(d_{J}=0.0428\)). We instead concatenated the data for a single session over the first four participants (i.e., \(p=1\), \(2\), \(3\), and \(4\)) and estimate \(\{J_{ij}\}\), did the same for the last four participants (i.e., \(p=5\), \(6\), \(7\), and \(9\)), and compared the two obtained sets of \(\{J_{ij}\}\). In this manner, we investigated the consistency of the energy landscape between participants. For the whole-brain network, we show relationships between \(\{J_{ij}\}\) for the two sets of participants in the first session in Fig. 4(b). Similar to the case of Fig. 4(a), the estimated \(\{J_{ij}\}\) was reasonably consistent between the two concatenations, consistent with previous results with other data [42, 67, 74]. However, the degree of consistency was smaller for the present between-participant comparison (Pearson correlation coefficient \(=0.773\); discrepancy \(d_{J}=0.0493\)) than the within-comparison comparison. In this particular example, the estimation of \(\{J_{ij}\}\) was more consistent between pairs of sessions from the same participant than those from different participants. To examine the generality of this result, we then calculated \(d_{J}\) between the concatenation across sessions 1 to 4 and that across sessions 5 to 8 from the same participant (i.e., within-participant comparison). The mean and standard deviation of \(d_{J}\) over the eight participants were equal to \(d_{J}=0.0464\pm 0.0082\) (mean \(\pm\) std) for the whole-brain network. We also calculated \(d_{J}\) between the concatenation of the \(s\)th section over the first four participants and that over the last four participants \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(m\) & Whole-brain & DMN & CON \\ \hline 1 & \(69.12\)\(\pm\)\(6.41\) & \(57.97\)\(\pm\)\(8.94\) & \(77.65\)\(\pm\)\(5.41\) \\ \hline 2 & \(81.26\)\(\pm\)\(4.36\) & \(71.49\)\(\pm\)\(5.77\) & \(86.26\)\(\pm\)\(3.51\) \\ \hline 3 & \(85.97\)\(\pm\)\(3.34\) & \(76.78\)\(\pm\)\(5.30\) & \(90.07\)\(\pm\)\(2.32\) \\ \hline 4 & \(88.62\)\(\pm\)\(3.49\) & \(80.80\)\(\pm\)\(4.44\) & \(92.30\)\(\pm\)\(1.70\) \\ \hline 5 & \(90.59\)\(\pm\)\(2.45\) & \(83.73\)\(\pm\)\(4.44\) & \(93.50\)\(\pm\)\(1.54\) \\ \hline \end{tabular} \end{table} Table 1: **Accuracy of fit of the pairwise MEM when we concatenate fMRI data within the same participant. 
Each cell shows the average and standard deviation of the accuracy of fit in percent when we concatenate the fMRI data across sessions from the same participant. We concatenated data from a given participant over \(m\) sessions and then fitted the pairwise MEM to the concatenated data. With \(m=2\), we partitioned the 10 sessions into 5 groups as (1,2), (3,4), (5,6), (7,8), and (9,10), concatenated the fMRI data within each group and within each participant, estimated the energy landscape, and computed the accuracy of fit, \(r_{D}\). For example, we concatenated the data from the first two scanning sessions from participant 1, estimated the energy landscape, and computed \(r_{D}\). We did the same for data from the third and fourth sessions from participant 1, the first and second sessions from participant 2, for example. With \(m=3\), we concatenated sessions \(s=1,2\), and \(3\) from the same participant into one time series, sessions \(s=4,5\) and \(6\) into one series, and sessions \(s=7,8\), and \(9\) to one series. With \(m=4\), we concatenated sessions \(s=1,2,3,4\) and \(4\) into one series and sessions \(s=5,6,7\), and \(8\) into another series. With \(m=5\), we concatenated sessions \(s=1,2,3,4\), and \(5\) into one series and sessions \(s=6,7,8,9\), and \(10\) into another series.** \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(m\) & Whole-brain & DMN & CON \\ \hline 1 & \(69.12\)\(\pm\)\(6.41\) & \(57.97\)\(\pm\)\(8.94\) & \(77.65\)\(\pm\)\(5.41\) \\ \hline 2 & \(79.64\)\(\pm\)\(3.77\) & \(64.71\)\(\pm\)\(7.45\) & \(84.81\)\(\pm\)\(3.68\) \\ \hline 3 & \(83.92\)\(\pm\)\(1.96\) & \(72.41\)\(\pm\)\(4.51\) & \(86.66\)\(\pm\)\(3.13\) \\ \hline 4 & \(86.36\)\(\pm\)\(1.85\) & \(73.46\)\(\pm\)\(3.59\) & \(90.76\)\(\pm\)\(2.07\) \\ \hline 5 & \(87.51\)\(\pm\)\(1.72\) & \(77.79\)\(\pm\)\(3.23\) & \(91.27\)\(\pm\)\(1.06\) \\ \hline \end{tabular} \end{table} Table 2: **Accuracy of fit of the pairwise MEM when we concatenate fMRI data across different participants. Each cell shows the average and standard deviation of the accuracy of fit in percent when we concatenate the fMRI data across sessions from different participants. We concatenated data from a given session over \(m\) participants and the fitted the pairwise MEM to the concatenated data. With \(m=2\), we concatenated the data for the \(s\)th session from participants \(p=1\) and \(2\) into one time series, those from participants \(p=3\) and \(4\) into another series, those from participants \(p=5\) and \(6\) into another series, and those from participants \(p=7\) and \(9\) into another series. We did this for each \(s\). With \(m=3\), we concatenated the data from participants \(p=1,2,\) and \(3\) into one series and those from participants \(p=4,5\), and \(6\) into another series. With \(m=4\), we concatenated the data from participants \(p=1,2,3\), and \(4\) into one series and those from participants \(p=5,6,7\), and \(9\) into another series. With \(m=5\), we concatenated the data from participants \(p=1,2,3,4\), and \(5\) into one series and sessions \(s=6,7,8,9\), and \(10\) into another series.** Figure 4: **Reliability of the interaction strength between two ROIs, \(J_{ij}\), for the whole-brain network. (a) Within-participant comparison. We concatenated the data for the first participant over four sessions. The horizontal and vertical axes correspond to the concatenation of sessions 1 to 4 and sessions 5 to 8, respectively. Each circle represents a pair of \(i\) and \(j\). (b) Between-participant comparison. 
We concatenated the data from the first session over four participants. The horizontal and vertical axes correspond to the concatenation of the first and last four participants, respectively. In both (a) and (b), if all the circles lay on the diagonal, which we show by the solid lines, then the discrepancy, \(d_{J}\), would be equal to 0. The \(d_{J}\) value is large if the circles tend to be far from the diagonal.** (i.e., between-participant comparison). The mean and standard deviation of \(d_{J}\) for the between-participant comparison over the ten sessions were equal to \(d_{J}=0.0527\pm 0.0098\). We show these \(d_{J}\) values and those for the DMN and CON in Table 3. The table suggests that the energy landscape is somewhat more similar between different fMRI sessions obtained from the same participant than between different participants. To statistically investigate potential differences between the within-participant and between-participant comparisons, we carried out the permutation test on \(d_{J}\). The ND values for the whole-brain network, DMN, and CON were at least \(1.3\) (see Table 4). After a random permutation of the participants and sessions, the ND value was centered around 1 by definition. We show the distribution of the ND value obtained from \(c=10^{3}\) random permutations in Fig. 5(a), (b), and (c) for the whole-brain network, DMN, and CON, respectively. We calculated the \(p\) value for the empirical data by contrasting it to the distribution of ND for the randomized data. We obtained \(p<10^{-3}\) for all the three networks, implying that no random permutation yielded an ND value larger than that for the empirical data before the random permutation. These results remained significant after correction for the multiple comparisons present in Table 4 (\(p<1.2\times 10^{-2}\), Bonferroni corrected). Therefore, we conclude that the estimated parameter values, \(\{J_{ij}\}\), are significantly more reliable in the within-participant than between-participant comparison for the three networks. ### Reliability in terms of the activity patterns at the local minima As a second index of the consistency between different energy landscapes, we compared the activity patterns at the local minima of the energy landscape between energy landscape pairs in terms of the Hamming distance, \(d_{\text{H}}\). Table 3 indicates that the average \(d_{\text{H}}\) is at least 1.6 times larger for the between-participant than within-participant comparison for the whole-brain network, DMN, and CON. The ND value was at least \(1.73\) for the three networks (see Table 4). The permutation test yielded \(p<10^{-3}\) for all the three networks; see Fig. 5(d)-(f) for the distribution of the ND values for the random permutations. These results altogether support that the reliability of the energy landscape analysis in terms of \(d_{\text{H}}\) is higher within the same participant than between different participants. ### Reliability in terms of the activity patterns averaged over the attractive basin As a third index to characterize the consistency between energy landscapes, we measured the distance between the average activity patterns belonging to the attractive basin of a local minimum in one energy landscape and that in another energy landscape, i.e., \(d_{\text{basin}}\).
Similarly to the case of \(d_{J}\) and \(d_{\text{H}}\), we found that \(d_{\text{basin}}\) is substantially smaller for the within-participant than between-participant comparison for the three networks although the standard deviation is not small (see Table 3). It should be noted that the observed \(d_{\text{basin}}\) values are close to \(0\) for both the within-participant and between-participant comparisons. This result implies the almost full agreement between a pair of energy landscapes in terms of the averaged activity pattern in the attractive basin, even for the between-participant comparison. We show in Fig. 5(g)-(i) for the distribution of the ND values for the random permutations as well as the ND values for the original energy landscapes. The permutation test yielded \(p<10^{-3}\) for the whole-brain network and the DMN and \(p=0.003\) for the CON (Table 4). These results support a significantly high test-retest reliability of the energy landscape analysis in \begin{table} \begin{tabular}{|c|c|c|c|} \hline & Whole-brain & DMN & CON \\ \hline \multirow{2}{*}{\(d_{J}\)} & ND\(=\)\(1.315\) & ND\(=\)\(1.415\) & ND\(=\)\(1.580\) \\ & \(p<10^{-3}\) & \(p<10^{-3}\) & \(p<10^{-3}\) \\ \hline \multirow{2}{*}{\(d_{\text{H}}\)} & ND\(=\)\(1.934\) & ND\(=\)\(3.200\) & ND\(=\)\(1.730\) \\ & \(p<10^{-3}\) & \(p<10^{-3}\) & \(p<10^{-3}\) \\ \hline \multirow{2}{*}{\(d_{\text{basin}}\)} & ND\(=\)\(1.491\) & ND\(=\)\(2.359\) & ND\(=\)\(1.503\) \\ & \(p<10^{-3}\) & \(p<10^{-3}\) & \(p=\)\(0.003\) \\ \hline \multirow{2}{*}{\(d_{L}\)} & ND\(=\)\(1.237\) & ND\(=\)\(1.744\) & ND\(=\)\(1.609\) \\ & \(p\)\(=\)\(0.014\) & \(p<10^{-3}\) & \(p<10^{-3}\) \\ \hline \end{tabular} \end{table} Table 4: ND values and the permutation test results for the four discrepancy measures, calculated with the conventional likelihood maximization applied to the MSC data. The \(p\) values are the uncorrected values. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Whole-brain} & \multicolumn{2}{c|}{DMN} & \multicolumn{2}{c|}{CON} \\ \hline & Within (\(d_{1}\)) & Between (\(d_{2}\)) & Within (\(d_{1}\)) & Between (\(d_{2}\)) & Within (\(d_{1}\)) & Between (\(d_{2}\)) \\ \hline \(d_{J}\) & 0.0464\(\pm\)0.0082 & 0.0527\(\pm\)0.0098 & 0.0448\(\pm\)0.0091 & 0.0634\(\pm\)0.0060 & 0.0432\(\pm\)0.0130 & 0.0684\(\pm\)0.0110 \\ \hline \(d_{\text{H}}\) & 0.5800\(\pm\)1.0956 & 0.9125\(\pm\)0.8149 & 0.5417\(\pm\)0.4777 & 1.7333\(\pm\)1.0998 & 0.3854\(\pm\)0.4606 & 0.6667\(\pm\)0.3909 \\ \hline \(d_{\text{basin}}\) & 0.0232\(\pm\)0.0239 & 0.0346\(\pm\)0.0329 & 0.0245\(\pm\)0.0220 & 0.0578\(\pm\)0.0245 & 0.0157\(\pm\)0.0098 & 0.0236\(\pm\)0.0121 \\ \hline \(d_{L}\) & 0.2852\(\pm\)0.2100 & 0.3529\(\pm\)0.1625 & 0.2022\(\pm\)0.2366 & 0.3526\(\pm\)0.3341 & 0.2281\(\pm\)0.1679 & 0.3671\(\pm\)0.1683 \\ \hline \end{tabular} \end{table} Table 3: Discrepancy between two energy landscapes estimated by the conventional likelihood maximization applied to the MSC data. “Within” and “Between” in the table stand for within-participant and between-participant, respectively. terms of \(d_{\rm basin}\) including the case of the CON after correction for multiple comparisons across the networks and indices (\(p=0.036\), Bonferroni corrected). ### Reliability in terms of the branch length As a last index of consistency of the energy landscape, we measure the normalized difference in the average branch length in the disconnectivity graph, \(d_{L}\), between two energy landscapes. 
We found that the average of \(d_{L}\) was smaller for the within-participant than between-participant comparison for the three networks (see Table 3). The permutation test yielded \(p=0.014\) for the whole-brain network, and \(p<10^{-3}\) for the DMN and CON; see Table 4 and Fig. 5(j)-(l). These results support a significantly high test-retest reliability of the energy landscape analysis in terms of \(d_{L}\) for the DMN and CON although the result for the whole-brain network did not survive correction for multiple comparison (\(p=0.17\), Bonferroni corrected). ### Accuracy and reliability of the variational Bayes approximation method The Bayesian estimation potentially allows us to reliably estimate an energy landscape without concatenating fMRI data across sessions or participants even if a single session is not long. Therefore, we repeated the same test-retest reliability analysis on the MSC data with the Bayesian estimation and without any concatenation. After running the variational Bayes approximation method to compute the hyperparameters, we calculated the accuracy of fit, \(r_{D}\), of the pairwise MEM. We obtained \(r_{D}=86.02\pm 2.79\%\), \(91.50\pm 3.21\%\), and \(93.51\pm 1.48\%\) for the whole-brain network, DMN, and CON, respectively. These high accuracy values support the effectiveness of the method. We show the mean and standard deviation of the four discrepancy indices for the within-participant and between-participant comparison in Table 5. For some combinations of the session, participant, and network, the Bayesian method yielded an energy landscape with just one local (and hence global) minimum of the energy. In this case, we set the branch length to be \(0\). Table 5 suggests that the within-participant consistency of energy landscape analysis is notably higher than the between-participant consistency in terms of the four discrepancy measures although the standard deviation is large. These results are qualitatively the same as those obtained with the conventional likelihood maximization method described in sections 3.2-3.5. However, the discrepancy values are substantially larger with the Bayesian method (see Table 5) than the likelihood maximization method (see Table 3) for both within-participant and between-participant comparisons with few exceptions. Table 6 shows the results of the permutation test for the three networks and four discrepancy measures. We find significantly higher reliability within the same participant than between different participants in terms of \(d_{J}\), \(d_{H}\) and \(d_{\rm basin}\). In terms of \(d_{L}\), the uncorrected \(p\) values were smaller than \(0.05\) but did not survive Bonferroni correction for the whole-brain network and DMN. These results were similar to those for the likelihood maximization method. However, comparison of Tables 4 and 6 reveals that the ND value with the Bayesian method was smaller than that with the likelihood maximization method for all the four discrepancy measures and all the three networks. Therefore, we conclude that the Bayesian method yields significantly higher reliability within the same participant than between different participants in most cases, whereas the reliability is somewhat weaker than in the case of the conventional likelihood maximization method. ## 4 Validation with the Human Connectome Project data As a different type of validation, we ran the test-retest reliability analysis for another fMRI data set, HCP data. We used a whole-brain network with \(N=7\) ROIs. 
We calculated the accuracy of fit, \(r_{D}\), of the pairwise MEM fitted to single-session data with the likelihood maximization method. We obtained \(r_{D}=92.49\pm 1.99\%\), where we calculated the average and standard deviation on the basis of the four sessions per participant and all the participants. Figure 5: **Histogram of ND for the randomized data and the empirical ND value.** The first, second, and third columns of the figure show the distributions for the whole-brain network, DMN, and CON, respectively. The four rows of the figure show the distributions for \(d_{J}\) (in (a), (b), and (c)), \(d_{\rm H}\) (in (d), (e), and (f)), \(d_{\rm basin}\) (in (g), (h), and (i)), and \(d_{L}\) (in (j), (k), and (l)), from the top to the bottom. In each panel, the vertical line indicates the empirical ND value. Table 7 shows the mean and standard deviation of the four discrepancy indices for the within-participant and between-participant comparisons. The results are similar to those for the MSC data. The ND values for \(d_{J}\), \(d_{\text{H}}\), \(d_{\text{basin}}\), and \(d_{L}\) are \(1.310\), \(1.152\), \(1.249\), and \(1.152\), respectively. The permutation test yielded \(p<10^{-3}\) for all the four discrepancy indices. These results confirm significantly higher within-participant than between-participant test-retest reliability of the energy landscape analysis with a different data set. ## 5 Discussion We examined test-retest reliability of the energy landscape analysis in terms of four indices. For each index, we calculated a discrepancy in the index value between two estimated energy landscapes. We then constructed and ran a permutation test on the calculated discrepancy value to statistically assess whether within-participant comparison of two energy landscapes yielded a smaller discrepancy value than between-participant comparison of two energy landscapes. For the two data sets, we found significant within-participant test-retest reliability (i.e., within-participant discrepancy being significantly smaller than between-participant discrepancy) in most cases. Furthermore, we found qualitatively the same results for a Bayesian variant of the energy landscape estimation method that enables us to estimate an energy landscape for each scanning session, mitigating the data-hungry nature of the original estimation method. The accuracy of fit measured by \(r_{D}\) was large for the variational Bayes approximation method (i.e., \(86.02\), \(91.50\), and \(93.51\)% on average for the whole-brain network, DMN, and CON, respectively) although we did not concatenate the fMRI data across different sessions. These \(r_{D}\) values are close to those with the conventional likelihood maximization method with concatenation of four or five sessions (see Tables 1 and 2). The high accuracy of the variational Bayes method is presumably due to the fact that the target empirical distribution of activity patterns, i.e., \(P_{N}(V_{i})\) in Eq. (11), is necessarily different between the two estimation methods. Specifically, \(P_{N}(V_{i})\) is the empirical distribution over all sessions for non-Bayesian estimation methods, whereas it is the empirical distribution for one session for Bayesian methods. We do not ascribe the higher accuracy of fit of the variational Bayes method to overfitting. The variational Bayes method yields a Boltzmann distribution for each session.
Therefore, it uses \(MD\) parameters, where we remind that \(M=N(N+1)/2\) is the number of parameters of the Boltzmann distribution and \(D=80\) is the number of sessions. Therefore, it uses \(D\) times more parameters than the conventional likelihood maximization method, which uses \(M\) parameters to estimate one Boltzmann distribution. However, the variational Bayes method needs to produce an accurate Boltzmann distribution tailored to a single session to attain a high \(r_{D}\) value, which is not the case for the conventional likelihood \begin{table} \begin{tabular}{|c|c|c|c|} \hline & Whole-brain & DMN & CON \\ \hline \multirow{2}{*}{\(d_{J}\)} & ND\(=\)\(1.274\) & ND\(=\)\(1.354\) & ND\(=\)\(1.381\) \\ & \(p<10^{-3}\) & \(p<10^{-3}\) & \(p<10^{-3}\) \\ \hline \multirow{2}{*}{\(d_{\text{H}}\)} & ND\(=\)\(1.351\) & ND\(=\)\(1.530\) & ND\(=\)\(1.569\) \\ & \(p<10^{-3}\) & \(p<10^{-3}\) & \(p<10^{-3}\) \\ \hline \multirow{2}{*}{\(d_{\text{basin}}\)} & ND\(=\)\(1.196\) & ND\(=\)\(1.275\) & ND\(=\)\(1.351\) \\ & \(p<10^{-3}\) & \(p<10^{-3}\) & \(p<10^{-3}\) \\ \hline \multirow{2}{*}{\(d_{L}\)} & ND\(=\)\(1.065\) & ND\(=\)\(1.057\) & ND\(=\)\(1.163\) \\ & \(p\)\(=\)\(0.0320\) & \(p\)\(=\)\(0.0230\) & \(p<10^{-3}\) \\ \hline \end{tabular} \end{table} Table 6: ND values and the permutation test results for the four discrepancy measures, calculated with the variational Bayes method applied to the MSC data. The \(p\) values are the uncorrected values. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Whole brain} & \multicolumn{2}{c|}{DMN} & \multicolumn{2}{c|}{CON} \\ \hline & Within (\(d_{1}\)) & Between (\(d_{2}\)) & Within (\(d_{1}\)) & Between (\(d_{2}\)) & Within (\(d_{1}\)) & Between (\(d_{2}\)) \\ \hline \(d_{J}\) & \(0.2748\)\(\pm\)\(0.0693\) & \(0.3500\)\(\pm\)\(0.0741\) & \(0.3291\)\(\pm\)\(0.1501\) & \(0.4455\)\(\pm\)\(0.1352\) & \(0.2482\)\(\pm\)\(0.0743\) & \(0.3428\)\(\pm\)\(0.0766\) \\ \hline \(d_{\text{H}}\) & \(1.1038\)\(\pm\)\(0.5951\) & \(1.4910\)\(\pm\)\(0.5846\) & \(1.5439\)\(\pm\)\(1.0098\) & \(2.3617\)\(\pm\)\(0.8919\) & \(0.7881\)\(\pm\)\(0.6890\) & \(1.2365\)\(\pm\)\(0.7095\) \\ \hline \(d_{\text{basin}}\) & \(0.0582\)\(\pm\)\(0.0305\) & \(0.0696\)\(\pm\)\(0.0272\) & \(0.0342\)\(\pm\)\(0.0262\) & \(0.0436\)\(\pm\)\(0.0257\) & \(0.0211\)\(\pm\)\(0.0160\) & \(0.0285\)\(\pm\)\(0.0172\) \\ \hline \(d_{L}\) & \(0.3537\)\(\pm\)\(0.2076\) & \(0.3765\)\(\pm\)\(0.2066\) & \(0.5056\)\(\pm\)\(0.3198\) & \(0.5342\)\(\pm\)\(0.3120\) & \(0.2610\)\(\pm\)\(0.2051\) & \(0.3036\)\(\pm\)\(0.2057\) \\ \hline \end{tabular} \end{table} Table 5: Discrepancy between two energy landscapes estimated by the variational Bayes method applied to the MSC data. “Within” and “Between” in the table stand for within-participant and between-participant, respectively. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & Within (\(d_{1}\)) & Between (\(d_{2}\)) \\ \hline \(d_{J}\) & \(0.0784\)\(\pm\)\(0.0194\) & \(0.1027\)\(\pm\)\(0.0232\) \\ \hline \(d_{\text{H}}\) & \(0.3145\)\(\pm\)\(0.4864\) & \(0.3623\)\(\pm\)\(0.4967\) \\ \hline \(d_{\text{basin}}\) & \(0.0309\)\(\pm\)\(0.0314\) & \(0.0386\)\(\pm\)\(0.0325\) \\ \hline \(d_{L}\) & \(0.2535\)\(\pm\)\(0.1619\) & \(0.2921\)\(\pm\)\(0.1783\) \\ \hline \end{tabular} \end{table} Table 7: Discrepancy between two energy landscapes estimated by the conventional likelihood maximization method applied to the HCP data. “Within” and “Between” in the table stand for within-participant and between-participant, respectively. maximization method. 
In general, the accuracy of the pairwise MEM simply degrades if the data are shorter (see Tables 1 and 2; also see [38] for a systematic analysis on the effect of the data length on the accuracy). Our results that the variational Bayes method yields a higher accuracy of fit and higher consistency in the within-participant than between-participant comparison both support that individual-to-individual differences are not negligible when carrying out energy landscape analysis. While such individual differences were a motivation behind the original proposals of the Bayesian methods [40; 70], further comparisons of Bayesian and non-Bayesian estimation methods as well as pursuit of the biological and medical relevance of energy landscapes estimated with the Bayesian methods remain future work. The significance of the test-retest reliability results obtained with the permutation test was similar between the likelihood maximization and variational Bayes methods. However, the ND values were larger for the likelihood maximization than the variational Bayes method. As a separate result, the discrepancy indices were overall smaller for the likelihood maximization than the Bayesian method. The latter two results are in favor of the likelihood maximization over the Bayesian method for realizing high test-retest reliability. However, we point out that the estimation of an energy landscape for the likelihood maximization requires concatenation of four sessions, whereas the Bayesian method avoids concatenation. Assessment of test-retest reliability for different Bayesian approximation methods [70] and other approximate estimation methods such as the pseudo-likelihood maximization [38; 41], including systematic analysis on the dependence of the results on the data length, is left as future work. The intraclass correlation coefficient (ICC) has been widely used for investigating test-retest reliability in functional connectivity data [21]. We did not use the ICC because our quantification of the estimated energy landscape was mostly multi-dimensional and difficult to fit to an ANOVA or similar framework based on which the ICC is calculated. Specifically, \(\{J_{ij}\}\), based on which we calculated \(d_{J}\), is an \(N(N-1)/2\)-dimensional vector. In addition, we calculated \(d_{\text{H}}\) and \(d_{\text{basin}}\) by examining the activity patterns at local minima and their average over the attractive basin, respectively, in the situation where the number of the local minima varies from one energy landscape to another. Therefore, we decided to calculate a discrepancy measure for each of the four indices between two energy landscapes and constructed a permutation test to examine test-retest reliability. We point out that the average branch length is a scalar characterization of an energy landscape, and therefore it is straightforward to use the ICC framework if we discard the normalization factor in Eq. (26). Although we proposed four discrepancy indices for pairs of energy landscapes, they are our arbitrary choices. One can apply the analysis pipeline proposed in the present study to assess test-retest reliability for other discrepancy indices. Other potential discrepancy indices are the frequency of transitioning from one particular local minimum to another and features of the transition probability matrix among the activity patterns or among the local minima.
Furthermore, our framework of the permutation test on the ND value is not limited to energy landscape analysis (e.g., application to "microstate dynamics" for fMRI data [75]). On the other hand, adapting methods for assessing the reliability of individual fingerprints of functional networks, such as the one based on the "identifiability matrix" [23], to energy landscape analysis may be interesting. Because our ND and the "differential identifiability" [23] are similarly defined, we expect similar results if we apply a variant of differential identifiability to energy landscape analysis. ## Acknowledgements T.W. acknowledges support from the Japan Society for the Promotion of Science (under grant nos. 19H03535, 21H05679, and 23H04217). N.M. acknowledges support from the Japan Science and Technology Agency (JST) Moonshot R&D (under grant no. JPMJMS2021), the National Science Foundation (under grant no. 2204936), and JSPS KAKENHI (under grant nos. JP 21H04595 and 23H03414). Two publicly available data sets were used in this work. The first data set was provided by the Midnight Scan Club (MSC) project, funded by NIH Grants NS088590, TR000448 (NUFD), MH104592 (DJG), and HD087011 (to the Intellectual and Developmental Disabilities Research Center at Washington University); the Jacobs Foundation (NUFD); the Child Neurology Foundation (NUFD); the McDonnell Center for Systems Neuroscience (NUFD, BLS); the Mallinckrodt Institute of Radiology (NUFD); the Hope Center for Neurological Disorders (NUFD, BLS, SEP); and Dart Neuroscience LLC. These data were obtained from the OpenfMRI database. Its accession number is ds000224. The second data set was provided by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University.
2309.12903
Latitudinal Propagation of Thermal Rossby Waves in Stellar Convection Zones
Using an analytic model, we derive the eigenfrequencies for thermal Rossby waves that are trapped radially and latitudinally in an isentropically stratified atmosphere. We ignore the star's curvature and work in an equatorial f-plane geometry. The propagation of inertial waves is found to be sensitive to the relative direction of the wave vector to the zonal direction. Prograde propagating thermal Rossby waves are naturally trapped in the radial direction for frequencies above a critical threshold, which depends on the angle of propagation. Below the threshold frequency, there exists a continuous spectrum of prograde and retrograde inertial waves that are untrapped in an isentropic atmosphere, but can be trapped by gradients in the specific entropy density such as occurs in a stellar convection zone. Finally, we discuss the implications of these waves on recent observations of inertial oscillations in the Sun, as well as in numerical simulations.
Rekha Jain, Bradley W. Hindman
2023-09-22T14:39:34Z
http://arxiv.org/abs/2309.12903v1
# Latitudinal Propagation of Thermal Rossby Waves in Stellar Convection Zones ###### Abstract Using an analytic model, we derive the eigenfrequencies for thermal Rossby waves that are trapped radially and latitudinally in an isentropically stratified atmosphere. We ignore the star's curvature and work in an equatorial f-plane geometry. The propagation of inertial waves is found to be sensitive to the relative direction of the wave vector to the zonal direction. Prograde propagating thermal Rossby waves are naturally trapped in the radial direction for frequencies above a critical threshold, which depends on the angle of propagation. Below the threshold frequency, there exists a continuous spectrum of prograde and retrograde inertial waves that are untrapped in an isentropic atmosphere, but can be trapped by gradients in the specific entropy density such as occurs in a stellar convection zone. Finally, we discuss the implications of these waves on recent observations of inertial oscillations in the Sun, as well as in numerical simulations. ## 1 Introduction A primary motivation for studying inertial oscillations of stars is their implications for understanding the stellar interior structure. In particular, observations of such oscillations may provide a strong constraint on superadiabaticity and other thermodynamic variables within the star's convection zone (e.g., Gilman, 1987). Recent observations in the Sun of Rossby waves and other inertial oscillations (e.g., Loptien et al., 2018; Hanasoge and Mandal, 2019; Proxauf et al., 2020; Gizon et al., 2021; Hathaway and Upton, 2021; Hanson et al., 2022) have aroused interest in using these waves as seismic probes of the solar interior and in the potential role that they play in the Sun's magnetic cycle (e.g., Dikpati and McIntosh, 2020). Through exploitation of the Sun's acoustic oscillations (\(p\) modes), helioseismology has successfully mapped the Sun's differential rotation and its thermal structure throughout the convection zone (see Christensen-Dalsgaard, 2002). However, some quantities, such as the turbulent viscosity and radial entropy gradient in the convection zone, are essentially invisible to the \(p\) mode oscillations. Further, the \(p\) modes, as measured from the ecliptic, do not sample high latitudes well; therefore, flows and thermal structures in the polar caps are poorly constrained. All of these missing pieces are important elements in theories of stellar dynamics and the dynamo. Thus, there remains a prominent gap in our understanding of the solar interior. The observation of this new class of oscillations, which are likely to have sensitivity to many of these parameters, could bridge that gap. Thermal Rossby waves (e.g., Roberts, 1968; Busse, 1970; Hindman and Jain, 2022) are a specific type of gravito-inertial wave that corresponds to convective modes that have been partially or completely stabilized by rotation. While they have yet to be detected in the Sun observationally, they are ubiquitous in laboratory experiments of convection in a rotating fluid (e.g., Mason, 1975; Busse and Hood, 1982; Smith et al., 2014; Lonner et al., 2022). Further, thermal Rossby waves are seen in numerical simulations of stellar convection and are a crucial ingredient in the maintenance of a star's differential rotation (e.g., Brun et al., 2011; Miesch et al., 2000; Hindman et al., 2020). These waves appear at convective onset and persist even when the fluid becomes turbulent.
In a Boussinesq fluid, the waves fill the spatial domain; but, in a gravitationally stratified fluid, numerical simulations have shown that the waves can be trapped in radius, being concentrated near the surface or near the bottom of the convection zone depending on the stratification (Jones et al., 2009; Hindman et al., 2020; Hindman and Jain, 2023). This strong variation in the location of the wave cavity is a clear indication that these waves are extremely sensitive to the stratification, and in particular the radial entropy gradient. Thus, if detected, such waves would serve as an excellent seismic probe. While there is a rich literature on Rossby waves in stratified astrophysical disks (e.g., Li et al., 2000; Lin, 2012), to date only a few studies have explored how thermal Rossby waves propagate through a stratified star (Glatzmaier and Gilman, 1981; Gilman, 1987; Hindman and Jain, 2022, 2023). Further, for simplicity, all of these have ignored propagation and reflection in the latitudinal direction. Studying the radial and latitudinal wave cavity of low-mass stars with a near-surface convection zone will be our main aim here. The present paper considers the propagation of thermal Rossby waves and their kin in all three directions (zonal, latitudinal and radial) within an adiabatically stratified background atmosphere. Section 2 describes a model to this effect and in Section 3 we derive the resulting governing equation and discuss the nature of the solutions. In section 4, we consider the eigenmodes of a semi-infinite polytropic atmosphere and in Section 5 we consider finite domains. Finally, we summarize and present our conclusions in Section 6. ## 2 The model Although most stars are nearly spherical, we consider a local Cartesian coordinate system by defining a tangent plane at the star's equator and assuming that the rotation vector is uniform over the whole tangent plane. This f-plane approximation simplifies the study of waves that have short horizontal wavelengths. Therefore, we place the origin at the stellar surface with the unit vectors \(\mathbf{\hat{x}}\), \(\mathbf{\hat{y}}\) and \(\mathbf{\hat{z}}\) pointing in the longitudinal, latitudinal and radial directions, respectively. We adopt uniform rotation to avoid singularities in the equations resulting from critical layers where the local rotation rate equals a wave's phase speed (e.g., Gilman, 1987; Gizon et al., 2021). We investigate the linearized fluid equations in the absence of a background flow. We consider a steady-state background, denoted with subscript \(0\), with perturbations about that background indicated with subscript \(1\). The background atmosphere is assumed to be a plane-parallel atmosphere which is gravitationally stratified with a constant gravitational acceleration \(\mathbf{g}=-g\mathbf{\hat{z}}\). Thus, the background pressure and density vary as a function of \(z\) and are denoted by \(P_{0}(z)\) and \(\rho_{0}(z)\), respectively. These quantities are related by hydrostatic balance and the ideal gas law. The linearized equation of motion that governs a rotating, inviscid fluid on an f-plane is given by: \[\frac{\partial\mathbf{u}_{1}}{\partial t}=-2\left(\mathbf{\Omega}\times\mathbf{u}_{1} \right)-\frac{1}{\rho_{0}}\mathbf{\nabla}P_{1}-\frac{g\rho_{1}}{\rho_{0}}\mathbf{\hat {z}}\;, \tag{1}\] where \(\mathbf{u}_{1}=u\mathbf{\hat{x}}+v\mathbf{\hat{y}}+w\mathbf{\hat{z}}\) is the perturbed fluid velocity and \(P_{1}\) and \(\rho_{1}\) are the Eulerian fluctuations of the pressure and density. 
The rotation vector points purely in the latitudinal direction, \(\mathbf{\Omega}=\Omega\mathbf{\hat{y}}\). The linearized continuity equation for a compressible fluid is, \[\frac{\partial\rho_{1}}{\partial t}+\rho_{0}(\mathbf{\nabla}\cdot\mathbf{u}_{1})-\rho _{0}\frac{w}{H}=0\;, \tag{2}\] where \[H=-\left(\frac{d\ln\rho_{0}}{dz}\right)^{-1}\;,\] is the density scale height. We consider adiabatic motions such that \[\frac{\partial\delta P}{\partial t}=\frac{\partial P_{1}}{\partial t}-g\rho_{ 0}w=c_{s}^{2}\left(\frac{\partial\rho_{1}}{\partial t}+w\frac{d\rho_{0}}{dz}\right) \tag{3}\] where \(\delta P\) is the Lagrangian pressure fluctuation and \(c_{s}^{2}\) is the square of the sound speed defined by \[c_{s}^{2}(z)=\frac{\gamma P_{0}}{\rho_{0}}\] with \(\gamma\) as the ratio of specific heats. ## 3 The governing wave equation Since the background atmosphere is steady and invariant in the \(\hat{\mathbf{x}}\) and \(\hat{\mathbf{y}}\) directions, we seek horizontal plane-wave solutions of the form \[f(x,y,z,t)=\tilde{f}(z)\,e^{i(k_{x}x+k_{y}y)}\,e^{-i\omega t}\;, \tag{4}\] where \(f\) is any perturbed fluid variable. Furthermore, \(k_{x}\) and \(k_{y}\) are the wavenumbers in the \(x\) and \(y\) directions and \(\omega\) is the temporal frequency. Without loss of generality, we only consider positive longitudinal wavenumbers, \(k_{x}>0\). The waves propagate in the prograde direction if the frequency is positive, \(\omega>0\), and in the retrograde direction for negative frequencies, \(\omega<0\). Using the above plane-wave form along with equations (1)-(3), we obtain the following governing equation for \(\delta P\) \[\left\{\frac{d^{2}}{dz^{2}}+\frac{1}{H}\frac{d}{dz}+\Lambda^{2}( z)\right\}\delta P=0\;, \tag{5}\] \[\Lambda^{2}(z)\equiv\frac{\omega^{2}-4\Omega^{2}}{c_{s}^{2}}-k_ {h}^{2}\left(1-\frac{N^{2}}{\omega^{2}}\right)+\frac{2\Omega k_{x}}{\omega \mathcal{H}}+\frac{4k_{y}^{2}\Omega^{2}}{\omega^{2}}\;.\] In this Equation, \(k_{h}\) is the total horizontal wavenumber, i.e., \(k_{h}^{2}=k_{x}^{2}+k_{y}^{2}\). Further, the square of the buoyancy frequency, \(N^{2}\), is defined as follows \[N^{2}(z)=g\left(\frac{1}{H}-\frac{g}{c_{s}^{2}}\right), \tag{6}\] and \(\mathcal{H}\) is a scale height that depends on \(H\) and \(N^{2}\) \[\frac{1}{\mathcal{H}}=\left(\frac{1}{H}-\frac{2N^{2}}{g}\right). \tag{7}\] The governing wave equation (5) can be written as a Helmholtz equation through the substitution \(\delta P=\sqrt{\rho_{0}}\,\Psi(z)\), \[\frac{d^{2}\Psi}{dz^{2}}+k_{z}^{2}\,\Psi=0, \tag{8}\] with \[k_{z}^{2}=\frac{\omega^{2}-(\omega_{ac}^{2}+4\Omega^{2})}{c_{s}^{2}}-k_{h}^{2 }\left(1-\frac{N^{2}}{\omega^{2}}\right)+\frac{2\Omega k_{x}}{\omega\mathcal{ H}}+\frac{4k_{y}^{2}\Omega^{2}}{\omega^{2}}. \tag{9}\] Here, \[\omega_{ac}^{2}\equiv\frac{c_{s}^{2}}{4H^{2}}\left(1-2\frac{dH}{dz}\right) \tag{10}\] is the square of the acoustic cut-off frequency, \(\omega_{ac}\). Equation (9) provides a local dispersion relation for both acoustic waves and gravito-inertial waves. In the **low-frequency limit**, Equation (9) reduces to a local dispersion relation for just gravito-inertial waves, \[k_{z}^{2}=\left[\frac{2\Omega k_{x}}{\omega\mathcal{H}}+\frac{4k_{y}^{2} \Omega^{2}}{\omega^{2}}+k_{h}^{2}\frac{N^{2}}{\omega^{2}}\right]-\left(k_{h}^ {2}+\frac{\omega_{ac}^{2}}{c_{s}^{2}}\right)\;. \tag{11}\] The two terms in the parentheses provide a negative contribution and lead to vertical evanescence (i.e., \(k_{z}^{2}<0\)). 
Conversely, the terms inside the square brackets can be positive, thereby leading to vertical propagation (\(k_{z}^{2}>0\)). The first two terms in the brackets arise from the Coriolis force. As discussed in Hindman & Jain (2022), the first of these terms is positive for prograde waves and can produce propagating thermal Rossby waves. The second is a newly identified term that also leads to vertical propagation and is responsible for the axisymmetric inertial waves in a sphere previously studied by Guenther & Gilman (1985). As we will see in subsequent sections, this term can in fact lead to vertical detrapping for waves of very low frequency. Finally, the third term in the square brackets is the buoyancy term responsible for internal gravity waves. ### Neutrally Stable Atmosphere Previous studies of latitudinal propagation in a compressible atmosphere have been carried out by Thuburn et al. (2002) and Kasahara (2003); but these efforts considered a stably-stratified isothermal atmosphere and investigated the modifications to acoustic-gravity waves by the Coriolis force. Further, since the atmosphere was isothermal, waves cannot be naturally trapped by the stratification. Thus, it is important to explore the radial and latitudinal propagation in an atmosphere where the density scale height varies with height. Hindman and Jain (2022) demonstrated that inertial waves in a neutrally stable convection zone can be trapped in the radial direction. In that paper we considered waves that did **not** propagate latitudinally. Here we demonstrate that radial trapping is still possible when latitudinal propagation is allowed, but there is also a continuous spectrum of extremely low-frequency inertial waves that are free to propagate to any depth. We now consider an isentropic atmosphere such that \(N^{2}=0\), i.e., the buoyancy forces disappear and the Coriolis force is the only restoring force for the low-frequency waves. Such a neutrally stable atmosphere is polytropic and possesses a single height at which the pressure, density, and temperature all vanish. We place the origin \(z=0\) at this singular point and let the atmosphere exist within the region below, for \(z<0\). In an isentropic polytrope the atmospheric profiles have the following power-law forms: \[\rho_{0}(z)=A_{0}(-z)^{\alpha}\;, \tag{12}\] \[P_{0}(z)=\frac{gA_{0}}{\alpha+1}(-z)^{\alpha+1}\;, \tag{13}\] \[c_{s}^{2}(z)=\frac{\gamma g}{\alpha+1}(-z)\;, \tag{14}\] \[H(z)=\mathcal{H}(z)=\frac{(-z)}{\alpha}\;, \tag{15}\] \[\frac{\omega_{ac}^{2}(z)}{c_{s}^{2}(z)}=\frac{\alpha(\alpha+2)}{4z^{2}} \tag{16}\] where \(A_{0}\) is an arbitrary scale factor and the dimensionless parameter \(\alpha\) is the polytropic index given by \(\alpha=\left(\gamma-1\right)^{-1}\). ### Propagation Diagram and Eigenmodes For an isentropic stratification, where \(N^{2}=0\), the low-frequency form of the local dispersion relation (11) reduces to \[k_{z}^{2}=\left[\frac{2\alpha\Omega k_{x}}{\omega(-z)}+4k_{y}^{2}\frac{\Omega^{2}}{\omega^{2}}\right]-\left(k_{h}^{2}+\frac{\alpha(\alpha+2)}{4z^{2}}\right)\;. \tag{17}\] Near the upper boundary of the atmosphere (\(z\to 0\)), the second term in the parentheses (arising from the acoustic cut-off frequency) is large and leads to reflection and vertical evanescence.
Deep within the atmosphere (\(z\rightarrow-\infty\)), the dispersion relation reduces to \[k_{z}^{2}\approx k_{h}^{2}\left(\varpi^{2}-1\right)\;, \tag{18}\] where, for later convenience, we have defined \[\varpi^{2}\equiv\frac{k_{y}^{2}}{k_{h}^{2}}\frac{4\Omega^{2}}{\omega^{2}}= \frac{4\Omega^{2}}{\omega^{2}}\sin^{2}\chi\;. \tag{19}\] Here, \(\chi\) indicates the direction of horizontal propagation, with \(\chi=0\) corresponding to pure prograde propagation and \(\chi=\pi/2\) indicating pure northward propagation (\(k_{x}=k_{h}\cos\chi\) and \(k_{y}=k_{h}\sin\chi\)). From Equation (18), we can easily determine that the inertial waves can be either vertically evanescent (\(k_{z}^{2}<0\)) or vertically propagating (\(k_{z}^{2}>0\)) depending on the frequency and the horizontal direction of propagation. A wave cavity exists when the waves become evanescent, which requires \(\varpi^{2}<1\) or equivalently, \[\omega>2\frac{|k_{y}|}{k_{h}}\Omega=2\Omega|\sin\chi|\;. \tag{20}\] These waves are naturally trapped by the density stratification and form a discrete spectrum of inertial eigenmodes that in the limit of \(k_{y}=0\) become the thermal Rossby waves of Hindman & Jain (2022) and the fast branch of thermal Rossby waves as discussed by Hindman & Jain (2023). For frequencies lower than the critical value given in Equation (20), the waves remain vertically propagating to all depths and a downward propagating inertial wave is never reflected back upwards in a semi-infinite domain: no cavity exists. This family of solution corresponds to a continuous spectrum of untrapped inertial eigenmodes. This behavior is fully revealed in Figure 1 which provides a propagation diagram as a function of height within an isentropic atmosphere. The lightly shaded region indicates those frequencies that correspond to vertically propagating waves. The green horizontal lines indicate the critical value (and its negative) provided by Equation (20). The frequencies above the upper bound correspond to trapped inertial waves, while frequencies between the two bounds constitute the untrapped modes. As one can see, the trapped waves propagate between two turning points and therefore form a discrete spectrum of normal modes. Conversely, the untrapped inertial waves are screened from the origin by an upper turning point, but lack a lower turning point. The trapped waves all possess positive frequencies, and are hence prograde-propagating, whereas the untrapped continuum modes can be prograde thermal Rossby waves or retrograde inertial waves. ### Vertical Wave Equation for an Isentropic Stratification For the polytropic atmosphere, the Helmholtz equation with \(k_{z}^{2}\) given by Equation (17) can be transformed into the well-known Whittaker Equation by making a change of variable \(\zeta=-2\sqrt{1-\varpi^{2}}\,k_{h}z\), \[\frac{d^{2}\Psi}{d\zeta^{2}}+\left[\frac{\kappa}{\zeta}-\frac{1}{4}+\frac{1/4 -\mu^{2}}{\zeta^{2}}\right]\Psi=0 \tag{21}\] where \(\mu\equiv(\alpha+1)/2\) is a constant. The parameter \(\kappa\) is the eigenvalue of the second-order ordinary differential equation (21) and it depends on the frequency and direction of propagation of the wave, i.e. \[\kappa=\frac{\alpha\Omega}{\omega}\frac{\cos\chi}{\sqrt{1-\varpi^{2}}}\;. \tag{22}\] Notice, that for all of the trapped prograde waves (refer to Figure 2) we have \(\varpi^{2}<1\); hence, for these trapped waves, the parameter \(\kappa\) and the dimensionless depth \(\zeta\) have real values. 
Conversely, the low-frequency waves without a lower turning point have \(\varpi^{2}>1\) and both \(\kappa\) and the dimensionless depth \(\zeta\) are purely imaginary. Whittaker's Equation (21) has two solutions, the Whittaker functions \(M_{\kappa\mu}(\zeta)\) and \(W_{\kappa\mu}(\zeta)\) (see Abramowitz & Stegun, 1968). These can be expressed with Kummer's confluent hypergeometric functions \(M\) and \(U\) as follows: \[M_{\kappa\mu}(\zeta)=e^{-\zeta/2}\zeta^{\mu+\frac{1}{2}}M\left(- \eta,1+2\mu,\zeta\right)\] \[W_{\kappa\mu}(\zeta)=e^{-\zeta/2}\zeta^{\mu+\frac{1}{2}}U(-\eta, 1+2\mu,\zeta)\] with \[\eta\equiv\kappa-\left(\mu+\frac{1}{2}\right)\;. \tag{23}\] ## 4 Semi-infinite Domain In this section we consider an atmosphere of semi-infinite extent in radius. The atmosphere has a physical upper surface (at \(z=0\)), but is infinitely deep (\(z\to-\infty\)). Such an atmosphere is valid to use if the waves are naturally trapped and are confined over a region of finite radius. However, here we primarily explore the eigenmodes for such an atmosphere because their mathematics provides illumination for the behavior of waves in finite domains. ### Naturally Trapped Modes in Radius The radially trapped modes are acquired by demanding that the solutions vanish and remain regular at the two singular points of Whittaker's Equation (21), \(\zeta=0\) and \(\zeta\to\infty\), \[\delta P(z)=\rho_{0}^{1/2}\Psi(z)=C_{n}\,\zeta\,e^{-\zeta/2}\,M(-n,\alpha+2, \zeta)\;, \tag{24}\] where \(C_{n}\) is an arbitrary constant and the parameter \(\eta\) must be a non-negative integer, \(\eta=n\in[0,1,2,3,...]\), in order to avoid divergence of the eigenfunction in the limit \(z\to-\infty\). Therefore, the eigenvalue \(\kappa\) takes on discrete values that depend on the radial order \(n\) and the polytropic index \(\alpha\), \[\kappa_{n}=n+1+\frac{\alpha}{2}\;. \tag{25}\] Since the frequency depends on the eigenvalue through Equation (22), the discretization of the eigenvalue leads to discretization of the corresponding frequencies. These eigenfrequencies depend on the direction of propagation \(\chi\), the radial order \(n\), the rotation rate \(\Omega\), and the polytropic index \(\alpha\), \[\frac{\omega_{n}}{\Omega}=\left[\frac{\alpha^{2}\cos^{2}\chi}{(n+1+\alpha/2)^ {2}}+4\sin^{2}\chi\right]^{1/2}\;. \tag{26}\] The blue dotted horizontal lines in Figure 1 indicate these discrete eigenfrequencies. There are a countable infinity of such modes with an accumulation point at the critical frequency, i.e., \(\omega\to 2\Omega\left|\sin\chi\right|\) as \(n\to\infty\). The expression for the eigenfrequency, \(\omega_{n}\), in Equation (26) clearly shows that the eigenfrequencies are unaffected by the magnitude of the horizontal wavenumber, \(k_{h}\), even though they do depend on the direction of propagation \(\chi\). This is a consequence of the self-similarity of a polytropic atmosphere (for details, see Hindman and Jain 2022). For purely northward or southward propagation, i.e., \(\chi=\pm\pi/2\), the eigenfrequencies become independent of the radial order \(n\) and on the stratification \(\alpha\) taking on the value \(\omega_{n}=2\Omega\). We only illustrate positive values of \(\chi\) because the eigenfrequencies are symmetric in \(\chi\), i.e., \(\omega_{n}(-\chi)=\omega_{n}(\chi)\). The lower turning point can be calculated directly from Equation (17) by solving for the two heights in the atmosphere where \(k_{z}^{2}=0\). 
The deepest of these two solutions corresponds to the lower turning point, \[-k_{h}z_{\rm turn}=\frac{\kappa_{n}+\sqrt{\kappa_{n}^{2}-\alpha(\alpha+2)/4}}{ \sqrt{1-\varpi^{2}}}. \tag{27}\] Note that the lower turning point for all radial orders deepens as the propagation angle increases from zero. Hence, the cavity is shallowest for purely zonal propagation (\(\varpi^{2}=0\)) and becomes infinitely deep for purely latitudinal propagation (\(\varpi^{2}=1\)). Below the lower turning point, the eigenfunction becomes evanescent and decays exponentially with depth. This behavior is illustrated in Figure 2. The red curve illustrates the eigenfunction for a \(n=3\) mode with a lower turning point near \(k_{h}z=-5\). ### Untrapped Modes in Radius Waves with frequencies that lie with the band \[\left|\omega\right|<2\Omega|\sin\chi|\;, \tag{28}\] do not possess a lower turning point. Hence, these waves continue to propagate deep in the atmosphere. This behavior leads to a continuous spectrum of wave modes that are regular at the origin, \(z=0\), and oscillatory in the limit \(z\to-\infty\). The regularity condition at the origin is the only boundary condition that we can physically enforce. This leads to eigenfunctions of the form, \[\delta P=C(\omega)\,\zeta\,e^{-\zeta/2}M\left(\mu+1/2-\kappa,\alpha+2,\zeta \right)\;, \tag{29}\] where \(C(\omega)\) is an amplitude determined by initial conditions. The dimensionless depth \(\zeta\) and the eigenvalue \(\kappa\) are both imaginary and \(\kappa\) is no longer discrete, \[\kappa=-i\frac{\alpha\Omega}{\omega}\frac{\cos\chi}{\sqrt{\varpi ^{2}-1}}\;, \tag{30}\] \[\zeta=-2i\sqrt{\varpi^{2}-1}\,k_{h}z\;. \tag{31}\] The eigenfunction given by Equation (29) is a standing wave that is comprised of a wave launched from infinite depth that travels upwards, reflects off the acoustic cut-off frequency near the origin, and then travels downwards back to infinite depth. Such an eigenfunction is illustrated in Figure 2, where the blue curve shows a continuum mode that lacks a lower turning point. The wave remains oscillatory in the limit \(z\to-\infty\). For special integer values of the polytropic index, \(\alpha=2L\), where \(L\) is any nonnegative integer, solutions for the untrapped continuum modes can be written in terms of real functions with real parameters and arguments using the regular Coulomb wave function \(F_{L}\)(see Abramowitz & Stegun, 1968), \[\delta P=C(\omega)\,z^{\alpha/2}F_{L}\left(q,\mathcal{Z}\right)\;, \tag{32}\] with \[q \equiv -i\kappa=-\frac{\alpha\Omega}{\omega}\frac{\cos\chi}{\sqrt{ \varpi^{2}-1}}\;, \tag{33}\] \[\mathcal{Z} \equiv -\frac{i}{2}\zeta=-\sqrt{\varpi^{2}-1}\,k_{h}z\;. \tag{34}\] ## 5 Finite Domain ### Radial Trapping In a solar-like star, the convection zone is approximately 200 Mm deep with the stably-stratified radiative zone lying underneath. Within the transition between the two layers, the buoyancy frequency jumps dramatically and this large change should make the bottom of the convection zone an efficient reflector of gravito-inertial waves. Therefore, an appropriate model of inertial waves in a star's convection zone is to apply a regularity condition at the origin (as we did in the previous section) and a reflective lower boundary condition at a finite depth of \(D\). For this later condition we adopt \(\delta P(z=-D)=0\) for \(D=200\) Mm. The global dispersion relation satisfying these boundary conditions is \[M\left(\mu+1/2-\kappa,\,\alpha+2,\,2\sqrt{1-\varpi^{2}}k_{h}D\right)=0\;. 
\tag{35}\] We note that the imposition of a lower boundary condition at a finite depth converts the continuous spectrum of untrapped waves into a discrete spectrum. In Figure 3, the dimensionless frequency \(\omega/\Omega\) is plotted as a function of \(k_{h}R\) for four different angles of propagation. Note that \(k_{h}R\) corresponds to the harmonic degree of spherical harmonics in a spherical geometry. In each panel, the uppermost curve with the highest positive frequencies corresponds to modes that lack nodes in radius (i.e., radial order \(n=0\)). The second highest indicates modes with one radial node (\(n=1\)). Sequentially lower curves have one additional node and have an accumulation point at zero frequency for an infinite number of nodes. All of these modes have positive frequencies and are prograde propagating. The curve with the largest negative frequency also lacks radial nodes (\(n=0\)), but corresponds to retrograde-propagating inertial waves. Each subsequent curve with smaller negative frequencies has an additional node with an accumulation point at zero frequency as well. The horizontal green lines indicate the frequency bounds that separate the modes that are naturally trapped by the stratification (\(\omega>2\Omega\left|\sin\chi\right|\)) from those that would have been untrapped but are now trapped by the lower boundary of the finite domain. ### Latitudinal Trapping Since we have derived the inertial waves within an equatorial f-plane model instead of spherical geometry, we have implicitly assumed that the waves are confined near the equator and have short horizontal wavelengths, i.e., \(k_{h}R\gg 1\) where \(R\) is the star's photospheric radius. The first of these assumptions allows us to ignore the curvature terms in the fluid equations that arise from the spherical geometry. The second can be justified by examining the results of numerical simulations and eigenmode calculations in spherical geometry (i.e., Jones et al., 2009; Hindman et al., 2020; Bekki et al., 2022), which clearly indicate that thermal Rossby waves are indeed confined or trapped near the equator. The traditional way to capture latitudinal trapping in plane-parallel geometry is to adopt an equatorial \(\beta\)-plane approximation, where all atmospheric and geometric terms in the fluid equations are linearized with respect to the latitudinal coordinate (i.e., one assumes \(y/R\ll 1\)). We will not do so here to avoid the resulting complication of solving the wave equations in a truly 2D atmosphere. Instead, we will retain our f-plane geometry but make the simple assumption that the waves are confined within a latitudinal band that extends north and south of the equator by a fixed distance \(L/2\). To enforce reflection at \(y=\pm L/2\), we impose Neumann boundary conditions on the Lagrangian pressure fluctuation, \[\frac{\partial\delta P}{\partial y}\bigg{|}_{y=\pm L/2}=0\;, \tag{36}\] which is equivalent to an impenetrable boundary condition (\(v=0\)). Such boundary conditions are quite similar to those employed in the study of Rossby waves in astrophysical disks (Lin, 2012). To simplify comparison with waves in a spherical geometry, we will assume that the longitudinal direction is periodic which quantizes the longitudinal wavenumber, \(k_{x}=m/R\), with \(m\) being the azimuthal order of the concomitant spherical harmonic. Thus, our spatial domain is shaped like a millstone, with an outer annular radius of \(R\), an inner radius of \(R-D\), and a cylindrical height of \(L\) (see Figure 4). 
In numerical simulations, the latitudinal extent of the thermal Rossby wave eigenfunctions varies as a function of horizontal wavenumber (i.e., Hindman et al., 2020). But, in general, the waves often fill the region outside the cylinder that is tangent to the base of the convection zone at the equator. Hence, we choose the width of the latitudinal band \(L\) to be the length of the chord that is tangent to the bottom of the star's convection zone (see Figure 4), \[L=2\sqrt{R^{2}-(R-D)^{2}}\;. \tag{37}\] Using \(R=700\) Mm and \(D=200\) Mm, one obtains \(L\approx 980\) Mm. The latitudinal boundary conditions discretize the latitudinal wavenumber, \[k_{y}=\frac{\lambda\pi}{L}\;,\quad\lambda=[0,1,2,3,...]\;, \tag{38}\] leading to eigenfunctions of the form, \[\delta P(x,y,z,t)=C\cos\left[k_{y}(y-L/2)\right]\,e^{imx/R}\,\zeta e^{-\zeta/2}\,M(-\eta,\alpha+2,\zeta)\,e^{-i\omega t}\;. \tag{39}\] The quantum number \(\lambda\) can be any non-negative integer, with the value of \(\lambda\) indicating the number of latitudinal nodes that appear in the eigenfunction. Modes with \(\lambda=0\) correspond to \(k_{y}=0\) and we have explored these modes previously in Hindman & Jain (2022). We recognize that our choice of \(L\) results in large latitudinal wavelengths for low latitudinal orders (small \(\lambda\)); hence, they break the short wavelength approximation. Our goal, however, is not to derive accurate quantitative frequencies but to instead generate a general qualitative understanding of the wave behavior. Hence, we carry on nonetheless. The eigenfrequencies for this "millstone" model can be generated by solving Equation (35) numerically, for the discrete values of \(k_{x}\) and \(k_{y}\) that we just discussed. Figure 5 presents the results. The eigenfrequencies are plotted as a function of azimuthal order \(m\) for the five lowest latitudinal orders. For clarity of presentation, only the radial fundamental modes, lacking radial nodes (\(n=0\)), are illustrated. The solid curves show the prograde-propagating thermal Rossby waves and the dashed lines correspond to the retrograde-propagating inertial waves. The color of the curve indicates the latitudinal order. The reader should note that for the \(\lambda=0\) mode, which propagates purely zonally (\(k_{y}=0\)), the retrograde solution is missing because it becomes a zero-frequency geostrophic mode (Hindman & Jain, 2022). The dotted curves show the frequency for which the lower turning point passes through the bottom boundary of the convection zone. Every frequency, for the appropriate value of \(\lambda\), that lies above the dotted curve corresponds to a mode with two turning points in the radial domain; hence, such a mode has a cavity that only partially fills the domain. Those frequencies that lie below the dotted line correspond to modes with only one turning point in the radial domain and that are trapped through reflection off the lower boundary. Initially, the mode frequency increases as the azimuthal order increases until the lower turning point crosses into the domain. At larger azimuthal orders, the turning point continues to move upwards and the depth of the wave cavity shrinks commensurately. In response, the mode frequency asymptotes to a constant value that is independent of the lower boundary condition.
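For readers who wish to reproduce the qualitative behavior of these dispersion curves, the following is a minimal numerical sketch (in Python, with \(\alpha=3/2\), \(R=700\) Mm, \(D=200\) Mm, and \(L\) from Equation (37) assumed purely for illustration) that finds roots of the global dispersion relation, Equation (35), for the naturally trapped branch \(\omega>2\Omega|\sin\chi|\), where all arguments of the Kummer function are real; the untrapped branch would require the complex (Coulomb-function) treatment described above.

```python
import numpy as np
from scipy.special import hyp1f1
from scipy.optimize import brentq

alpha, R, D = 1.5, 700.0, 200.0          # polytropic index; radii in Mm (illustrative values)
L = 2.0 * np.sqrt(R**2 - (R - D)**2)     # latitudinal band width, Eq. (37)

def dispersion(nu, m, lam):
    """Left-hand side of Eq. (35) as a function of nu = omega/Omega (real, trapped branch only)."""
    kx, ky = m / R, lam * np.pi / L
    kh = np.hypot(kx, ky)
    chi = np.arctan2(ky, kx)
    varpi2 = 4.0 * np.sin(chi)**2 / nu**2
    if varpi2 >= 1.0:
        return np.nan                     # below the trapping threshold: outside this sketch
    kappa = alpha * np.cos(chi) / (nu * np.sqrt(1.0 - varpi2))
    a = (alpha + 2.0) / 2.0 - kappa       # = mu + 1/2 - kappa with mu = (alpha + 1)/2
    return hyp1f1(a, alpha + 2.0, 2.0 * np.sqrt(1.0 - varpi2) * kh * D)

def eigenfrequencies(m, lam, nu_max=3.0, n_grid=2000):
    """Scan nu above the threshold and bracket sign changes of Eq. (35)."""
    chi = np.arctan2(lam * np.pi / L, m / R)
    nu_min = max(2.0 * abs(np.sin(chi)) + 1e-3, 0.05)
    nus = np.linspace(nu_min, nu_max, n_grid)
    vals = np.array([dispersion(nu, m, lam) for nu in nus])
    roots = []
    for i in range(len(nus) - 1):
        if np.isfinite(vals[i]) and np.isfinite(vals[i + 1]) and vals[i] * vals[i + 1] < 0:
            roots.append(brentq(dispersion, nus[i], nus[i + 1], args=(m, lam)))
    return roots   # omega/Omega; the largest root is the radial fundamental (n = 0)

print(eigenfrequencies(m=10, lam=0))
```

At large azimuthal order the largest root should approach the semi-infinite result of Equation (26) in the limit \(\chi\to 0\), i.e., \(\omega/\Omega\to 2\alpha/(2n+\alpha+2)\) for the \(n=0\) mode.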
For the prograde thermal Rossby waves, the asymptotic value can be obtained from the dispersion relation that applies for the semi-infinite domain, Equation (26), \[\lim_{m\rightarrow\infty}\omega_{n}=\frac{2\alpha\Omega}{2n+\alpha+2} \tag{40}\] For the retrograde inertial waves, the asymptotic value is zero because the angle of propagation \(\chi\) approaches zero as the azimuthal order becomes large, with the result that the retrograde wave cavity shrinks to zero frequency (see Figure 1). ## 6 Conclusion We have carried out a linear wave analysis for a compressible and stratified atmosphere representing a stellar convection zone rotating at a constant rate. The rotation axis is assumed to be perpendicular to the direction of stratification. By adopting an f-plane approximation, we derive and solve dispersion relations for waves propagating through a neutrally-stable polytropic atmosphere in all three spatial directions: zonal, latitudinal and radial. The density stratification enables radial trapping of prograde-propagating waves with frequencies above a threshold frequency--see Equation (28). Low-frequency waves with frequencies below the threshold (both prograde and retrograde) cannot be trapped by an isentropic density stratification. However, the waves can reflect off of strong gradients in the buoyancy frequency (as occurs at the base of the convection zone) and thereby become radially trapped. If we consider the bottom of the convection zone to be perfectly reflective, we obtain the eigenfrequencies illustrated in Figure 3. If we further place impenetrable latitudinal boundaries, as we did for our millstone shaped domain (see Figure 4), we obtain the eigenfrequencies shown in Figure 5. As expected, the eigenfrequencies generally increase as the horizontal wavenumber increases and decrease as the radial wavenumber (or radial order) increases. In particular, we point to the shape of the dispersion curves that appear in Figure 5, as these suggest that all of the latitudinal overtones should have frequencies that initially rise as the azimuthal order increases and eventually asymptote to a common value. While observations have yet to directly detect thermal Rossby waves, numerical simulations have long evinced such waves. For many years now thermal Rossby waves--in their unstable, nonlinear form--have appeared as "banana cells" or "Busse columns." More recently, stable long-wavelength thermal Rossby waves have been identified as well (Bekki et al., 2022). However, only the radial and latitudinal fundamental have been reported. The first latitudinal overtone was actually the first thermal Rossby wave to be discussed in the literature. In a linear stability analysis, Roberts (1968) calculated the \(\lambda=1\) thermal Rossby wave and demonstrated that this sort of wave represents the convective modes in a rotating system at convective onset. However, this antisymmetric mode turns out to be less unstable than the sectoral mode with \(\lambda=0\) and, hence, the nonlinear convective cells that appear in numerical simulations of thermal convection in a spherical shell usually possess rough symmetry across the equator. Our calculation here may suggest the form of previously undetected tesseral modes that, due to being stable and hence low amplitude, have been skulking around in numerical simulations for many years. If observationally detected these modes can serve as seismic probes for specific entropy density. RJ would like to thank MSRC (University of Sheffield) for partial support. 
BWH is supported by NASA through grants 80NSSC18K1125, 80NSSC19K0267, 80NSSC20K0193 and acknowledges collaboration with the COFFIES DSC. ## Appendix A Nomenclature There is often confusion concerning the names that are applied to the different types of gravito-inertial waves, particularly thermal Rossby waves. Beyond the fact that thermal Rossby waves have been called by a myriad of names (e.g., low-frequency prograde waves, columnar convective modes, overstable convective modes), part of the confusion arises because all of these waves are in some sense related to each other and can transition from one type of wave to another as various parameters vanish or become large. Thermal Rossby waves are distinct from classical Rossby waves only through geometry. The restoring force is essentially the same, arising from the conservation of potential vorticity (or equivalently angular momentum). Classical Rossby waves concern vortical motions that are largely horizontal, either because the fluid layer is thin (such as the Earth's atmosphere) or the stratification is extremely stable, thus discouraging vertical motions. The conservation principle therefore operates on 2D spherical surfaces and this leads to retrograde propagation. Thermal Rossby waves usually reside in thick atmospheres where fluid elements are free to move vertically without inhibition. In fact, in an unstable stratification like that found in a convection zone, such motions are reinforced. Without vertical constraint, the vortex columns align instead with the rotation axis and prograde-propagating waves are produced by conservation of potential vorticity in this rotationally aligned geometry. Even if we restrict our attention to only zonally-propagating waves in the axially constrained geometry, there are two distinct wave modes (see Hindman & Jain, 2022, 2023). In a stable stratification, with only weak rotation influence, the two gravito-inertial wave solutions consist of the retrograde and prograde branches of the internal gravity waves. However, in an atmosphere of neutral stability, the two solutions correspond to pure inertial waves. The prograde branch now transitions to thermal Rossby waves and the retrograde branch has moved to zero frequency, becoming a stationary geostrophic mode. Finally, in a weakly unstable stratification, both branches become prograde. The branch with the faster zonal phase speed is easily identified as thermal Rossby waves while the slow branch has been inconsistently named. Busse (1986) called this branch the thermal mode while Hindman & Jain (2023) called them the slow thermal Rossby wave branch. Here we also consider propagation latitudinally and this complicates the naming scheme further. As we stated previously, for zonal propagation in an isentropic atmosphere, the two solution branches are prograde thermal Rossby waves and zero-frequency geostrophic modes. When the waves are allowed to propagate obliquely to the equator, the prograde branch remains a prograde inertial wave that is firmly a thermal Rossby wave. The zero-frequency branch becomes retrograde and we choose to call it a retrograde inertial wave for lack of better choice. Oblique propagation through a nonadiabatic stratification leads to an even further loss of clarity. 
The local dispersion relation reveals why, \[k_{z}^{2}=\left[\frac{2\Omega k_{x}}{\omega\mathcal{H}}+\frac{4k_{y}^{2} \Omega^{2}}{\omega^{2}}+k_{h}^{2}\frac{N^{2}}{\omega^{2}}\right]-\left(k_{h}^{ 2}+\frac{\omega_{ac}^{2}}{c_{s}^{2}}\right)\.\] (A1) There are three types of restoring forces that lead to propagation. Stratification coupled with zonal propagation leads to the first term in the square brackets. This term is positive only for waves with a prograde phase speed. This term provides a compressional \(\beta\)-effect and leads to thermal Rossby waves. The second term is always positive and leads to inertial waves that propagate latitudinally. The third term arises from buoyancy and leads to internal gravity waves. Generally, however, more than one of these terms will be in operation and the wave is a three-way hybrid of internal gravity waves and the two types of inertial waves. An obvious naming scheme becomes apparent only when one, or possibly two, of the terms dominate.
2308.16680
Branches of a Tree: Taking Derivatives of Programs with Discrete and Branching Randomness in High Energy Physics
We propose to apply several gradient estimation techniques to enable the differentiation of programs with discrete randomness in High Energy Physics. Such programs are common in High Energy Physics due to the presence of branching processes and clustering-based analysis. Thus differentiating such programs can open the way for gradient based optimization in the context of detector design optimization, simulator tuning, or data analysis and reconstruction optimization. We discuss several possible gradient estimation strategies, including the recent Stochastic AD method, and compare them in simplified detector design experiments. In doing so we develop, to the best of our knowledge, the first fully differentiable branching program.
Michael Kagan, Lukas Heinrich
2023-08-31T12:32:34Z
http://arxiv.org/abs/2308.16680v1
# Branches of a Tree: Taking Derivatives of Programs with Discrete and Branching Randomness in High Energy Physics ###### Abstract We propose to apply several gradient estimation techniques to enable the differentiation of programs with discrete randomness in High Energy Physics. Such programs are common in High Energy Physics due to the presence of branching processes and clustering-based analysis. Thus differentiating such programs can open the way for gradient based optimization in the context of detector design optimization, simulator tuning, or data analysis and reconstruction optimization. We discuss several possible gradient estimation strategies, including the recent Stochastic AD method, and compare them in simplified detector design experiments. In doing so we develop, to the best of our knowledge, the first fully differentiable branching program. ## I Introduction Gradient-based optimization methods are at the core of many modern successes in Machine Learning (ML) and Artificial Intelligence (AI), especially Deep Learning. The development and application of these ML methods in High Energy Physics (HEP) has similarly enabled large performance improvements in a wide array of tasks, such as particle reconstruction, fast simulation, and parameter inference (for recent topical reviews, see e.g. [1; 2; 3; 4; 5; 6; 7; 8]). Gradient-based optimization methods rely on automatic differentiation (AD) [9; 10], an algorithmic way to efficiently compute the derivatives of numerical software. AD is a general tool that can be applied to scientific software beyond ML, such as simulators and inference algorithms, and used for optimizing the parameters of this software. The broader application of AD to numerical software, potentially mixed with ML components, is often referred to as _Differentiable Programming_ (DP). For instance, combining AD-enabled HEP software with ML can lead to optimizable hybrid physics-AI models with built-in physics knowledge from the HEP software, such as AI-augmented / AI-guided simulation and reconstruction, or analysis-by-synthesis inference methods with simulators in the loop [11; 12; 13]. Using such AD-integrated HEP software in ML models can be considered a complementary approach to adding inductive bias into ML models through structure and architecture, such as symmetry equivariance and relational inductive bias. While interest is quickly growing to apply gradient-based optimization methods to a broader set of challenges in HEP, such as detector design or end-to-end reconstruction, a major limitation has been the fact that standard AD can only compute derivatives of deterministic continuous functions or stochastic functions with reparametrized continuous random variables [14; 15; 16]. Specifically, in HEP, many programs are both stochastic and rely on sampling _discrete_ random variables or decisions, such as branching points in parton showers, particle-material interactions, or clustering steps in jet building. Standard AD may not compute the desired derivative of such programs correctly, particularly when the discrete stochasticity depends on the parameter one aims to optimize (and thus differentiate with respect to). Instead, more careful consideration on how to compute the appropriate derivative is required. There are several methods for gradient-based optimization in programs with discrete randomness, which we explore within the context of HEP applications.
One method uses the _score_-based approach to estimating derivatives of expectation values [17], which has been examined sparsely within HEP and not for tasks such as detector design optimization. Recently, Arya _et al._[18] proposed a new AD method for handling and composing programs with discrete randomness. Using these tools, we develop simplified differentiable HEP simulators, which nonetheless exhibit the critical behaviors that have until now hampered progress, and case studies to examine the behavior and the variance of these gradient estimators. A review of methods for computing derivatives of stochastic programs is found in Sec. II. Related work is discussed in Sec. III. Sec. IV presents applications and comparisons of different gradient estimators in HEP case studies, with an emphasis on detector design optimization. **Contributions:** We introduce several methods to enable differentiation through the discrete randomness of HEP programs. We show how the score function based approach can be used for design optimization in HEP. While the score function is often used outside of HEP for design optimization, and has been used for other tasks within HEP, it has not yet been explored within the context of HEP detector design optimization. We also introduce stochastic AD [18] to HEP and its ability to enable differentiable programming even in programs with discrete randomness. We provide the first application of stochastic AD to branching processes and in doing so we develop the first (to the best of our knowledge) differentiable branching program. To test these methods, we provide comparisons of gradient methods on detector design optimization case studies. ## II Review of differentiation of stochastic programs In many HEP applications, a quantity of interest can be formulated as an expected value of a function \(f(x,\theta)\) over a parametrized density \(p_{\theta}(x)\): \(\bar{f}(\theta)=\mathbb{E}_{p_{\theta}(x)}[f(x,\theta)]\). For optimizing such quantities one requires the gradients of these expectation values of stochastic programs, e.g. \(\frac{d}{d\theta}\mathbb{E}_{p_{\theta}(x)}[f(x,\theta)]\). Importantly, the expected value of such stochastic programs may be continuous and differentiable, even when they depend on discrete randomness. For instance, the expected value of a Bernoulli random variable \(b\sim\)Bernoulli(\(\theta\)) has the derivative \(\frac{d}{d\theta}\mathbb{E}[b]=\frac{d}{d\theta}\theta=1\). However, standard AD tools applied to such expectations may not produce the desired result. For instance, Figure 1 shows two programs with discrete stochasticity. On the left, the discrete stochasticity does not depend on the parameter of differentiation \(\theta\), and standard AD will produce the correct derivative. On the right, the discrete stochasticity depends on \(\theta\); standard AD will ignore this dependence and the resulting derivative will be incorrect. Handling these challenges requires more dedicated consideration of how to compute the appropriate derivative. We briefly review several approaches to gradient estimation below (see Ref. [19] for a detailed review).
**Finite Differences (Numerical Differentiation)**: Finite difference methods estimate derivatives by computing the difference between forward evaluations of a program and a perturbed version of the program, for instance: \[\frac{d}{d\theta}\mathbb{E}_{p_{\theta}(x)}[f(x)]\approx\frac{\mathbb{E}_{p_{\theta+\epsilon}(x)}[f(x)]-\mathbb{E}_{p_{\theta}(x)}[f(x)]}{\epsilon} \tag{1}\] Finite difference methods are prone to high variance [10], and require large numbers of program evaluations when \(\theta\) is high dimensional. Central difference methods can reduce error. One large contributor to this large variance is that multiple independent evaluations of the program are used to estimate this gradient, introducing separate stochastic evaluation paths of the program. **Reparameterization Trick**: In many cases, when sampling \(x\sim p_{\theta}(x)\), we can _smoothly reparametrize_ this sampling as \(z\sim p(z)\) and \(x=g(z,\theta)\), where \(p(z)\) is often a simple "base" distribution and \(g(\cdot,\theta)\) provides a differentiable, \(\theta\)-dependent transformation of samples from the base to the desired distribution. For example, the normal distribution \(x\sim\mathcal{N}(\mu,\sigma)\) is location-scale reparameterizable through \(z\sim\mathcal{N}(0,1)\) and \(x=\sigma z+\mu\sim\mathcal{N}(\mu,\sigma)\). This is particularly convenient for computing derivatives of expectation values of differentiable functions \(f(\cdot)\), \[\frac{d}{d\theta}\mathbb{E}_{p_{\theta}(x)}[f(x)] = \frac{d}{d\theta}\int p(z)f(g(z,\theta))dz \tag{2}\] \[= \int p(z)\frac{df}{dg}\frac{dg(z,\theta)}{d\theta}dz\] Many HEP simulators, which implement a structural causal model, can be considered as a reparametrization, mapping from noise variables to random variables \(x\) with physical meaning. However if the random variables \(x\) are discrete, the map is not differentiable, which limits the applicability of the reparametrization trick within a HEP context. **Surrogate Methods**: When a reparameterization is not possible, either because \(f(\cdot)\) is non differentiable or \(p_{\theta}(x)\) does not admit a smooth reparameterization, surrogate methods can be used. In this case, an ML model \(S(z,\theta)\), with \(z\sim p(z)\) a chosen distribution, is trained to mimic the stochastic program. As such, surrogate methods try to enable reparameterization through an ML model and thus enable differentiation, for instance: \[\frac{d}{d\theta}\mathbb{E}_{p_{\theta}(x)}[f(x)] \approx \frac{d}{d\theta}\int p(z)S(z,\theta)dz \tag{3}\] \[= \int p(z)\frac{dS(z,\theta)}{d\theta}dz\] The quality of this derivative estimator will depend significantly on the quality of the surrogate model as an approximation of the original program. Moreover, the surrogate will learn a smooth approximation of non-differentiable elements of the program, but how this approximation is performed and if bias is introduced is difficult to assess.

Figure 1: Assuming differentiable \(g:\mathbb{R}\rightarrow\mathbb{R}\) and \(h:\mathbb{R}\rightarrow(0,1)\). **Left:** Toy program without \(\theta\)-dependence in the discrete stochastic behavior. As such, the derivative of the expected value of this program is \(\frac{dE[f(\theta)]}{d\theta}=\frac{dg(\theta)}{d\theta}\) and standard AD can correctly estimate this. **Right:** Toy program with \(\theta\)-dependence in the discrete stochastic behavior through the Bernoulli parameter \(p\). Standard AD would ignore this dependence, and the resulting derivative estimator would be the same as for the program on the left. However, the correct derivative is \(\frac{dE[f(\theta)]}{d\theta}=\frac{dg(\theta)}{d\theta}+\frac{dh(\theta)}{d\theta}\).
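To make the distinction in Figure 1 concrete, the following is a minimal sketch (in Python, with \(g(\theta)=\theta^{2}\) and \(h(\theta)\) a logistic sigmoid chosen purely for illustration) of the right-hand toy program, \(f(\theta)=g(\theta)+b\) with \(b\sim\mathrm{Bernoulli}(h(\theta))\). A Monte Carlo central finite difference of the expectation recovers \(g^{\prime}(\theta)+h^{\prime}(\theta)\), whereas the value a standard AD tool would report, \(g^{\prime}(\theta)\), misses the contribution of the discrete randomness.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(theta):  return theta**2                           # smooth part of the program
def h(theta):  return 1.0 / (1.0 + np.exp(-theta))       # Bernoulli parameter, in (0, 1)
def dg(theta): return 2.0 * theta
def dh(theta): return h(theta) * (1.0 - h(theta))

def f_samples(theta, n):
    """One Monte Carlo batch of the toy program: f = g(theta) + b, with b ~ Bernoulli(h(theta))."""
    b = rng.random(n) < h(theta)
    return g(theta) + b.astype(float)

theta, eps, n = 0.7, 1e-2, 2_000_000
fd = (f_samples(theta + eps, n).mean() - f_samples(theta - eps, n).mean()) / (2 * eps)

print("standard-AD value (g'):     ", dg(theta))
print("finite difference of E[f]:  ", fd)
print("exact derivative (g' + h'): ", dg(theta) + dh(theta))
```

The finite-difference value fluctuates at the level of the Monte Carlo noise divided by \(2\epsilon\), illustrating the variance issues of finite differences mentioned above.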
**Score function**: When the parameter dependence of a differentiable distribution \(p_{\theta}(x)\) is known and differentiable with respect to the parameters, one can compute: \[\frac{d}{d\theta}\mathbb{E}_{p_{\theta}(x)}[f(x)] = \int p_{\theta}(x)\frac{d\log p_{\theta}(x)}{d\theta}f(x)dx \tag{4}\] \[= \mathbb{E}_{p_{\theta}(x)}\Big{[}\frac{d\log p_{\theta}(x)}{d\theta}f(x)\Big{]}\] where \(\frac{d}{d\theta}\log p_{\theta}(x)\) is known as the score function. This gradient estimator is also known as REINFORCE [17], and is commonly used in reinforcement learning. The benefit of this approach is that \(f(x)\) does not need to be differentiable; only \(p_{\theta}(x)\) must be differentiable with respect to \(\theta\). As such, discrete random variables can be used in the computation of \(f(x)\). _Control Variates_: Score function based gradient estimates often have a large variance, which can make tasks like optimization with gradient descent slow and more difficult. A control variate, \(c(x,\theta)\), can be subtracted from \(f(x)\) in Eqn. 4 to reduce the variance of the estimator as long as it does not bias the estimator, i.e. as long as \(\mathbb{E}_{p_{\theta}(x)}\Big{[}\frac{d\log p_{\theta}(x)}{d\theta}c(x,\theta)\Big{]}=0\). Noting that \(\mathbb{E}_{p_{\theta}(x)}\Big{[}\frac{d\log p_{\theta}(x)}{d\theta}\Big{]}=0\), one way to find a control variate is to choose a \(c(\theta)\) which does not depend on \(x\). A common control variate, often also called a baseline, is the function mean \(\bar{f}_{\theta}=\int p_{\theta}(x)f(x)dx\); in practice \(\bar{f}_{\theta}\) is often estimated using the mean of a mini-batch. We will use this baseline for the experiments in Sec. IV. More details on variance reduction methods for score based gradient estimation can be found in Ref. [20]. _Proposal Distributions_: In some cases, we may not know or have access to \(p_{\theta}(x)\), for example when \(g(\theta)\) is a simulator with parameters as input and sampling is done internally within the program. One approach can be to treat the input to the program as a sample from a proposal distribution \(\theta\sim\pi_{\psi}(\theta)\), where \(\psi\) are the parameters of the proposal distribution. For instance one could choose a normal distribution \(N(\psi,1)\) for the proposal. One would then aim to optimize the mean of this proposal, \[\frac{d}{d\psi}\mathbb{E}_{\pi_{\psi}(\theta)}[g(\theta)]=\mathbb{E}_{\pi_{\psi}}\Big{[}\frac{d\log\pi_{\psi}(\theta)}{d\psi}g(\theta)\Big{]} \tag{5}\] **Stochastic AD**: The stochastic derivative of the expected value of a function \(f(\cdot)\) of a discrete random variable \(x\sim p_{\theta}(x)\) has the form [18]: \[\frac{d}{d\theta}\mathbb{E}_{p_{\theta}(x)}[f(x)]=\mathbb{E}_{p_{\theta}(x,y)}[\delta+\beta\big{(}f(y)-f(x)\big{)}] \tag{6}\] where \(\delta\) is the standard derivative \(\partial f/\partial\theta\) as computed with AD, and the second term corresponds to the effect of a change in \(\theta\) on the sampling of the discrete random variable \(x\). The weight \(\beta\) depends on the underlying sampling distribution and \(y\) is an alternative value of the discrete random variable. Conceptually, for discrete \(X\) with consecutive integer range one can understand this result through the lens of reparameterization.
One can reparameterize the discrete distribution via the inversion method, e.g.: \[\omega \sim U[0,1]\] \[x = \{X:\omega\in[\,\mathrm{CDF}_{p_{\theta}}(X),\,\mathrm{CDF}_{p_{\theta}}(X+1)\,)\}\] For example, if \(x\) is a Bernoulli random variable with parameter \(\theta\), then one can reparameterize \(x=H(\omega>1-\theta)\) where \(H(\cdot)\) is the Heaviside step function. The boundaries which define the set of values of \(\omega\) that result in a value of \(X\) are now dependent on the parameters \(\theta\). A change in parameters changes the boundaries, and thus changes the probabilities of different outcomes \(x\). The second term of the stochastic AD derivative accounts for the infinitesimal change in probabilities as the boundaries are changed, as well as the alternative value of the program \(y\) that would result from such a change. Importantly, one can define this derivative at each program evaluation and at each stochastic sampling within the program, allowing for the development of composition rules and of forward mode automatic differentiation. For a more detailed discussion of Stochastic AD, see Ref. [18]. The expectation on the right hand side of Equation 6 is taken with respect to the joint distribution \(p_{\theta}(x,y)\), which is often denoted the _coupling_. The marginal distributions of this coupling must be the same as the original sampling distribution, i.e. \(\int p_{\theta}(x,y)\,dy=p_{\theta}(x)\) and \(\int p_{\theta}(x,y)\,dx=p_{\theta}(y)\), to ensure both the primal evaluation of the program and the alternative proceed under the normal operation of the program. As such, there is a distribution over possible alternative programs. In practice, out of all possible alternatives from all of the discrete samplings within a program, only one alternative is randomly chosen. This _pruning_ process treats the set of alternatives as a categorical distribution with the probability of a given alternative being the weight of the alternative relative to the total event weight (computed using the composition rules). This alternative may occur in the middle of the program, and is then tracked in parallel to the primal until completion of the alternative program. One can then use this program alternative \(y\) for computing gradients using Equation 6. _Variance Reduction with Random Number Reuse_: While the marginals of the coupling are fixed, one has considerable flexibility to choose the correlation structure between \(x\) and \(y\). This is important because once an alternative is determined from a discrete sampling within the program, the alternative program will be run to completion and thus may require additional sampling of discrete random variables. The more correlated the evaluations of the alternative completion are with those of the primal evaluation, the lower the variance of the gradient estimator may be. As downstream discrete samplings also occur in the primal program, one can reuse the reparameterized sampling in the primal program, i.e. the \(\omega\) values sampled for the inversion method. By reusing \(\omega\) values, less additional variance is added to the alternative program than if the downstream discrete random variables were sampled independently. In this work, for the experiments in Sec. IV, we use a first-in first-out (FIFO) approach, where at each time step we store \(\omega\) values in the FIFO while iterating over branches (particles) in the primal, and then pull \(\omega\) values from the FIFO while iterating over branches in the alternative. If additional \(\omega\) values are needed by the alternative, they are sampled independently of the primal.
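As a concrete illustration of Equation (6) and of the inversion-method coupling just described, the following is a minimal sketch (in Python) of the single-Bernoulli case with \(p(\theta)=\theta\) and a function that depends only on the discrete outcome, so \(\delta=0\). With this coupling, increasing \(\theta\) can only flip a primal outcome \(x=0\) to the alternative \(y=1\), with weight \(\beta=1/(1-\theta)\); averaging \(\beta\,(f(y)-f(x))\) over samples recovers \(\frac{d}{d\theta}\mathbb{E}[f]=f(1)-f(0)\). This covers only a single sampling; the composition rules, pruning, and random-number reuse of the full method are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(b):
    # Function of the discrete outcome only (no explicit theta dependence, so delta = 0).
    return 3.0 if b == 1 else 1.0

def stochastic_ad_estimate(theta, n):
    """Single-Bernoulli version of Eq. (6) with the inversion-method coupling."""
    grads = np.empty(n)
    for i in range(n):
        x = int(rng.random() < theta)         # primal sample, p(theta) = theta
        if x == 0:
            beta, y = 1.0 / (1.0 - theta), 1  # boundary shift can flip 0 -> 1
        else:
            beta, y = 0.0, x                  # no alternative triggered for x = 1
        grads[i] = beta * (f(y) - f(x))       # delta = 0 for this toy function
    return grads.mean()

theta = 0.3
print("stochastic AD estimate:", stochastic_ad_estimate(theta, 200_000))
print("exact d/dtheta E[f]:   ", f(1) - f(0))   # = 2.0, independent of theta
```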
If additional \(\omega\) values are needed by the alternative, they are sampled independently of the primal.

## III Related work

Automatic differentiation [9; 10], and its use in gradient based optimization, is ubiquitous in ML, statistics, and applied math. AD uses the chain rule to evaluate derivatives of a function that is represented as a computer program. AD takes as input program code and produces new code for evaluating the program and derivatives. AD typically builds a computational graph, or a directed acyclic graph of mathematical operations applied to an input. Gradients are defined for each operation of the graph, and the total gradient can be evaluated from input to output, called forward mode, or from output to input, called reverse mode or backpropagation in ML. AD is the backbone of ML / DP frameworks like TensorFlow[21], JAX[22], and PyTorch[23]. Significant work has been performed on developing Monte Carlo estimators for gradients in machine learning, as discussed in the recent review [19], and gradients of stochastic computation graphs [24]. More recently, methods such as Stochastic AD [18; 25] have been developed to target derivatives of programs with discrete stochastic behavior in a compositional way, as well as to develop specific applications with dedicated variance reduction methods (e.g. developing couplings for these applications). Extensions to AD have recently been proposed for differentiating the expectations of stochastic programs [26; 27], and to account for parametric discontinuities [28].

Differentiable programming approaches have begun to be explored in HEP. Examples include histogram fitting in pyHF[29; 30], analysis optimization in Neos[12; 31], modeling parton distribution functions (pdf) used by matrix element generators [32; 33], and developing differentiable matrix element simulations with MadJax[11]. Related to our work, Ref. [34] studies differentiating a parton shower, which focuses on the derivative of the primal shower program but does not examine differentiation through discrete randomness in such branching programs. Within cosmology, the Differentiable Universe Initiative is developing a differentiable simulation and analysis toolchain for cosmological data analyses [35]. Within HEP, score function gradient estimators have been used within the context of jet grooming [36] and hierarchical jet clustering [37]. To the best of our knowledge, this work is the first application to detector design optimization in HEP.

On HEP detector design, surrogate based optimization methods have been developed and explored for particle detectors in Ref. [38]. Surrogate methods have also been applied to neutrino detector design [39]. Detector design optimization with standard AD tools and with surrogate methods is discussed in Ref. [40]. A branch-and-bound type algorithm is explored in Ref. [41].

## IV Applications

We present a series of applications in a simplified simulation of particles interacting with detector material. In these experiments, we examine how different gradient estimation methods can be applied and compare their performance in terms of the variance of the estimators.

### Particle Interaction Simulator

The simplified simulator in this work aims to capture the salient features of particle physics simulators that model the traversal of particles through matter, while being simple enough to allow reimplementation in a programming language of choice for a detailed study of various gradient estimation methods in a self-contained setting.
```
Algorithm 1: particle interaction simulator (pseudo-code)
Require: E_0 (energy threshold), epsilon (energy loss at interaction), m(x, theta) (material map)
```

The simulator evolves particles through two main processes: a binary _splitting process_ \(p_{0}\to p_{L},p_{R}\), which splits a parent particle's momentum evenly across two child particles, and an _energy loss process_ \(E\to E-\epsilon\). The probability of a particle interaction is modeled as a function of the material map \(m(x)\). The simulation is performed in fixed time steps, first propagating particles in their direction of travel, and then querying the material map to determine if an interaction occurs, and if so which type of interaction (i.e. splitting or energy loss). Pseudo-code for the simulator can be found in Alg. 1.

The detector is simulated as a continuous material map \(m_{\theta}(x_{0},x_{1})\) which takes as input a position \((x_{0},x_{1})\) and outputs an interaction probability. This interaction probability is dependent on detector parameters \(\theta\), and we will examine examples where derivatives with respect to \(\theta\) are sought. Only a single detector parameter is used in the following experiments, the detector inner radius, which will be denoted \(\theta_{R}\). With \(r=\sqrt{x_{0}^{2}+x_{1}^{2}}\) and \(\phi=\arctan\frac{x_{0}}{x_{1}}\), the material map is defined as: \[m_{\theta}(x_{0},x_{1})=\frac{1}{2}m_{start}(r,\theta_{R})\ m_{\Phi}(r,\phi)\ m_{R}(r) \ m_{end}(r,\theta_{R}) \tag{7}\] where \[m_{start}(r,\theta_{R}) = \frac{1}{1+e^{-\beta(r-\theta_{R})}}\] \[m_{\Phi}(r,\phi) = \frac{1}{1+e^{\beta\sin(\omega(\phi+2r))}}\] \[m_{R}(r) = \frac{1}{1+e^{\beta\cos(\omega(r-2))}}\] \[m_{end}(r,\theta_{R}) = \frac{1}{1+e^{\beta(r-\theta_{R}-R_{max})}}.\] The terms \(m_{start}(r,\theta_{R})\) and \(m_{end}(r,\theta_{R})\) determine the inner and outer radius of the detector, respectively. The terms \(m_{\Phi}(r,\phi)\) and \(m_{R}(r)\) determine the segmentation in \(\phi\) and \(r\), respectively. The constants \(\beta\), \(\omega\), and \(R_{max}\) control the sharpness of the smooth material map, the segmentation frequency in the azimuthal direction, and the maximum depth of the detector, respectively. The parameter that will be optimized is \(\theta_{R}\), the inner radius of the detector. Example material maps can be seen in grey in the event displays of Figure 2, where the darkness of the shade of grey indicates the strength of interaction.

### Single Particle Energy Loss

In the single particle energy loss setting, interactions which cause splitting are turned off, i.e. there are no showers. At each step, the particle interacts with the detector with a probability \(p_{\rm{Eloss}}=m(x,\theta)\) which is dependent on material map parameters \(\theta\). This probability is large in high density regions of detector material and small in low density regions. A Bernoulli distribution with parameter \(m(x,\theta)\) is sampled at each time step to determine if the interaction occurs and, if so, the particle deterministically loses energy \(\epsilon=1\) GeV.
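A minimal Python sketch of such a simulator, restricted to the single-particle energy-loss setting, is given below. The constants \(\beta\), \(\omega\), and \(R_{max}\), the time step, the launch direction, and the step cap are placeholder choices made here for illustration and are not taken from the paper.

```
import numpy as np

# Placeholder constants; the values used in the paper are not assumed here.
BETA, OMEGA, R_MAX = 10.0, 8.0, 2.0

def material_map(x0, x1, theta_R):
    # Smooth material map of Eq. (7): a product of sigmoids in r and phi.
    r = np.sqrt(x0**2 + x1**2)
    phi = np.arctan2(x0, x1)
    m_start = 1.0 / (1.0 + np.exp(-BETA * (r - theta_R)))
    m_phi = 1.0 / (1.0 + np.exp(BETA * np.sin(OMEGA * (phi + 2.0 * r))))
    m_r = 1.0 / (1.0 + np.exp(BETA * np.cos(OMEGA * (r - 2.0))))
    m_end = 1.0 / (1.0 + np.exp(BETA * (r - theta_R - R_MAX)))
    return 0.5 * m_start * m_phi * m_r * m_end

def simulate_energy_loss(theta_R, e0=25.0, e_min=0.5, eps=1.0, dt=0.05,
                         max_steps=20_000, seed=0):
    # Single-particle version of Alg. 1: propagate in fixed time steps, query the
    # material map, and sample a Bernoulli interaction at each step.
    rng = np.random.default_rng(seed)
    direction = np.array([0.3, 1.0])
    direction = direction / np.linalg.norm(direction)
    pos, energy = np.zeros(2), e0
    hits = []  # positions of energy-loss interactions
    for _ in range(max_steps):
        if energy < e_min:
            break
        pos = pos + dt * direction
        if rng.random() < material_map(pos[0], pos[1], theta_R):
            energy -= eps
            hits.append(pos.copy())
    return np.array(hits)
```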
All particles are set to have initial energy of 25 GeV and when the particle energy falls below \(E_{0}=0.5\) GeV, the particle is stopped. An example event display can be seen in Fig. 2a, where the primal particle trajectory is seen in purple, the alternative trajectory determined with Stochastic AD is seen in yellow, and the material map is seen in grey.

Figure 2: Event Displays of a single particle energy loss (left) and a particle shower with particle splitting (right). A primal event is shown in purple, while an alternative event, as determined using Stochastic AD, is shown in yellow. The material map is shown in grey.

In this example, there is only one detector parameter \(\theta\equiv R\), the inner radius of the detector, and derivatives are computed with respect to \(R\) using several methods of estimating gradients. The mean squared error between the radial position of points of particle interaction and a target radius \(\bar{R}_{T}\) is used as a loss function that one may minimize for the purposes of design optimization. In this example we set \(\bar{R}_{T}=2\)m.

The loss as a function of the detector radius parameter can be seen on the left in Fig. 3. The loss from individual primal simulation samples can be seen in grey, the median loss and interquartile range in black, and a polynomial interpolation of the average loss. The gradient estimators and their standard deviations, calculated over the 5000 simulation runs, can be seen as a function of the detector radius parameter in the middle in Fig. 3. The distributions of gradient estimators evaluated at the parameter value \(\theta_{R}=2.5\)m can be seen in the box plot on the right in Fig. 3. As expected, numerical derivatives have the largest standard deviation, though it should be noted that this can depend highly on the finite-difference step size \(\epsilon\) and the method for calculating the numerical derivative. Similarly, the score function gradient estimator without baseline shows a high standard deviation, especially at large radius parameter where the loss function has larger standard deviation over different simulation samples. The score with baseline has a much reduced standard deviation over the score function without baseline across all parameter values. Stochastic AD shows the smallest standard deviation of all estimators, likely owing to the ability to couple much of the alternative program evaluation to that of the primal up to the alternative branching point. Across all gradient estimators, the means of the gradient estimators, shown as solid lines, are close to the gradient of the polynomial fit of the loss (which serves as a rough guide to the gradient of the expected loss) within one standard deviation. A numerical comparison of gradient estimator mean and standard deviation can be found in Tab. 1.

### Branching Shower

In the branching shower example, the same material map, and thus interaction probability, as the single particle energy loss simulation is used, but if an interaction occurs, the particle is deterministically split into two daughter particles, each with half the energy of the parent particle and with an opening angle of 0.1 radians. Initial particles are set to have starting energy of 25 GeV and when any particle energy falls below 0.5 GeV, the particle is stopped.
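As a rough illustration of the loss just described, the following sketch computes the mean squared error of the radial interaction positions with respect to the target radius, together with a central finite-difference (numerical) gradient estimate. It reuses the hypothetical `simulate_energy_loss` helper from the sketch above, and the step size and seeding strategy are arbitrary choices.

```
import numpy as np

R_TARGET = 2.0   # target radius \bar{R}_T

def loss(hits):
    # Mean squared error between radial interaction positions and the target radius.
    if len(hits) == 0:
        return 0.0
    r = np.sqrt(hits[:, 0]**2 + hits[:, 1]**2)
    return float(np.mean((r - R_TARGET) ** 2))

def numerical_grad(theta_R, h=0.05, seed=0):
    # Central finite difference of the stochastic loss; its spread over repeated runs
    # depends strongly on the step size h and on how the two evaluations are seeded.
    l_plus = loss(simulate_energy_loss(theta_R + h, seed=seed))
    l_minus = loss(simulate_energy_loss(theta_R - h, seed=seed + 1))
    return (l_plus - l_minus) / (2.0 * h)
```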
The same loss function as in the single particle energy loss example is used here. An example event display can be seen in Fig. 2b, where the primal particle shower is seen in purple, the alternative shower determined with Stochastic AD is seen in yellow, and the material map is seen in grey. The loss function and the standard deviation of the gradients, as functions of the detector radius parameter, can be seen on the left and middle, respectively, in Fig. 4. As in the single particle energy loss example, the numerical gradients are found to have the largest standard deviation of gradients, with the score function without baseline estimator having the second largest standard deviation. Notably, the score function with baseline estimator is found to have the smallest standard deviation, slightly smaller than the Stochastic AD gradient estimator.

Figure 3: For simulations of single particle energy loss: (_Left_) The loss function and various gradient estimators of the loss are shown as a function of the detector radius parameter. Sample primal evaluations of the loss are shown as markers, and the interquartile interval is shown in black. The red dashed line shows the derivative of a polynomial interpolation of the mean loss. (_Middle_) The mean and standard deviation of the numeric, Stochastic AD (STAD), score function (SCORE), and score function with baseline (SCORB) gradient estimators as a function of the detector radius parameter. The gradient of the polynomial interpolation of the mean loss is shown in dashed black. (_Right_) Box plot of the four gradient estimators evaluated at parameter value \(\theta_{R}=2.5\)m, with the mean shown as a dashed line.

Unlike the single particle example, the splitting shower has many program branching points which can create alternative outputs that are significantly different from the primal shower. In turn, this leads to a reduction in the correlation between the primal and alternative showers and ultimately to an increase in the gradient estimator standard deviation. A comparison of the distribution of gradient estimators, at detector parameter value \(\theta_{R}=2.5\)m, can be seen on the right in Fig. 4. While the mean values (dotted lines) in each box agree well across estimators, the variance estimates as well as the tails are significantly better behaved for the Stochastic AD and score function with baseline estimators. Similarly, a comparison of the mean and standard deviation of the gradient estimators evaluated at the parameter value \(\theta_{R}=2.5\)m for both the single particle energy loss and the splitting shower can be found in Tab. 1.

It should be noted that there is considerable flexibility in Stochastic AD for how to couple the randomness in the primal and alternative programs after the point at which the alternative is produced, i.e. how to choose the joint distribution over random variables in the primal and alternative programs. This selection of coupling can have a considerable impact on the Stochastic AD gradient estimator variance. In this work, we have used a simple approach of re-using random variables sampled in the primal for the alternative, without regard for where those random variables are re-used in the alternative. We have seen that this re-use can have a large impact; we observed that removing the re-use of random variables in the alternative can increase the Stochastic AD gradient estimator standard deviation by factors of 1.5 or more.
More generally, a more careful strategy for re-using random variables may considerably reduce the Stochastic AD gradient estimator variance.

### Design Optimization with Splitting Shower

We test the ability to use the various gradient estimators to perform a gradient-based optimization of the detector radius parameter, using the aforementioned loss with a target radial shower depth of \(\bar{R}_{T}=2\)m. Each epoch consists of a single step of the optimization, with a mini-batch size of only 2 simulation runs used to estimate gradients in each epoch. The Adam optimizer [42] is used. A learning rate of 0.01 is used for the gradient descent parameter update. For all optimizations, the initial detector radius parameter value is set to \(\theta_{init}=3\)m and the optimization is run for 500 gradient steps.

\begin{table} \begin{tabular}{l l l} \hline \hline **Estimator** & \(E\)-loss & Shower \\ \hline StochAD & \(\mathbf{3.17\pm 4.47}\) & \(2.53\pm 6.37\) \\ Score w/ Baseline & \(3.01\pm 6.59\) & \(\mathbf{2.47\pm 4.42}\) \\ Score w/o Baseline & \(2.68\pm 17.18\) & \(2.76\pm 12.20\) \\ Numerical & \(3.83\pm 139.96\) & \(2.43\pm 74.85\) \\ \hline \hline \end{tabular} \end{table} Table 1: Gradient estimator mean and standard deviation, for both the single particle energy loss and splitting shower, evaluated at parameter value \(\theta_{R}=2.5\)m and determined from 5,000 samples. The estimator with lowest standard deviation is shown in bold.

Figure 4: For simulations of particle showers with splitting: (_Left_) The loss function and various gradient estimators of the loss are shown as a function of the detector radius parameter. Sample primal evaluations of the loss are shown as markers, and the interquartile interval is shown in black. The red dashed line shows the derivative of a polynomial interpolation of the mean loss. (_Middle_) The mean and standard deviation of the numeric, Stochastic AD (STAD), score function (SCORE), and score function with baseline (SCORB) gradient estimators as a function of the detector radius parameter. The gradient of the polynomial interpolation of the mean loss is shown in dashed black. (_Right_) Box plot of the four gradient estimators evaluated at parameter value \(\theta_{R}=2.5\)m, with the mean shown as a dashed line.

Each gradient method is used in 10 separate optimizations,
We develop the first application of Stochastic AD to branching processes and, more generally, the first differentiable branching program capable of estimating gradients through the discrete processes within a particle shower. We also introduce score function gradient estimators within this HEP detector design context. We find that Stochastic AD and score function gradient estimators, using control variates, provide the best gradient estimators in terms of smallest standard deviation among the gradient estimators examined within a case study of detector design. We show that both techniques can successfully be used for gradient-based HEP detector design on a toy detector simulator. More broadly, we believe that the careful study and application of techniques like Stochastic AD and score function estimation can open the way to a wide array of new differentiable programming applications in HEP and other sciences. ## Acknowledgements We thank Gaurav Arya, Frank Schafer, and Moritz Schauer for the helpful discussions regarding Stochastic AD, and thank Gaurav Arya for the helpful feedback on the manuscript. We thank Michael Brenner for the helpful discussions regarding score function gradient estimators at the Aspen Center for Physics, as this work was partially performed at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452. We also thank the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP) which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311, as this work was partially performed at the MIAPbP workshop on Differentiable and Probabilistic Programming for Fundamental Physics. MK is supported by the US Department of Energy (DOE) under grant DE-AC02-76SF00515. LH is supported by the Excellence Cluster ORIGINS, which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094-390783311.
2309.06962
Quantum non-locality: from denigration to the Nobel prize, via quantum cryptography
In the late 1960s, a young physicist was sailing along the coast of California towards Berkeley, where he got a post-doc position in astronomy. But his real goal was not astronomy, at least not immediately. First, John Clauser eagerly wanted to test some predictions of quantum theory that were at odds with a then recent and mostly ignored result by an Irish physicist John Stewart Bell, working at the celebrated CERN near Geneva.
Nicolas Gisin
2023-09-13T13:56:58Z
http://arxiv.org/abs/2309.06962v1
# Quantum non-locality: ###### Abstract In the late 1960s, a young physicist was sailing along the coast of California towards Berkeley, where he got a post-doc position in astronomy. But his real goal was not astronomy, at least not immediately. First, John Clauser eagerly wanted to test some predictions of quantum theory that were at odds with a then recent and mostly ignored result by an Irish physicist John Stewart Bell, working at the celebrated CERN near Geneva. Bell, inspired by David Bohm's hidden variable model of quantum theory, proved that all possible correlations that can be described by _local_ variables necessarily satisfy some inequalities, today known as Bell inequalities. These inequalities are mathematically quite trivial. However, quantum theory predicts that they can be violated even when the correlation is between outcomes of far distant measurements. Denote \(a\) and \(b\) the measurement outcomes and \(x\) and \(y\) the measurement settings (e.g. polarizers' orientations), and denote \(\lambda\) the hypothetical hidden local variables. Accordingly, the entire statistics of the experiment is captured by the so-called "correlation" - strictly speaking, conditional probability distribution - \(p(a,b|x,y,\lambda)\). The \(\lambda\)'s are hidden in the sense that they are not part of quantum theory, though the usual quantum state \(\psi\) could well be a part of \(\lambda\). Here, local - or Bell-local - refers to the assumption that the correlation factorizes in two parts, one for each of the distant sides of the experiment: \[p(a,b|x,y,\lambda)=p(a|x,\lambda)\cdot p(b|y,\lambda) \tag{1}\] That's the only assumption necessary to derive Bell inequalities. The \(\lambda\)'s denote the state of the system as described by any possible future physical theory (except that the settings \(x\) and \(y\) are assumed to be independent of \(\lambda\)). In this sense, Bell inequalities go way beyond quantum theory: a violation of a Bell inequality proves that no future theory can satisfy the locality condition (1). John Clauser, Abner Shimony, Michael Horne and Richard Holt were among the very few who understood this in the 1960s and all wanted to test Bell inequalities, Clauser to prove quantum theory wrong, Holt, a young student at Harvard, to prove the Bell-locality assumption (1) wrong. Clauser was in a good position thanks to existing equipment at Berkeley. Indeed, Carl Kocher had done a similar experiment in 1967, though for other purposes. Unfortunately, Kocher, and even earlier Chien-Shiung Wu, had only measured the correlations when the polarizers were either parallel or orthogonal, while a proper violation of Bell inequality requires intermediate orientations. Note that assuming that polarization is a 2-dimensional quantum system, a qubit as one says today, correlations at \(45^{o}\) can be derived from the parallel and orthogonal correlations assuming no-signaling [1]: \(E_{45}=(E_{\parallel}+E_{\perp})/\sqrt{2}\). That wasn't known at the time. But regardless, the visibilities measured by Kocher and Wu were below 50%, while a proper violation requires visibilities larger than 71%. Hence the race was on. Clauser got there first, confirming quantum predictions, against his expectation. But then Holt obtained his own result, confirming the inequality, against his expectation. Somehow, the score was one to one. At that time, these fascinating and intriguing results interested almost no one, except some hippies who could later claim to have saved physics [2]. 
Clauser had long discussions with them, though the last time I met him he had turned into a loud climate skeptic. In the 1970's, my friend Alain Aspect was doing his French civil service in Africa, reading physics, as we all do. When he hit on Bell inequalities, it was love at first sight: "I want to work on that". Back in Paris, he traveled to Geneva to meet John Bell and told him about his plans. Bell replied: "Do you have a permanent position?". Indeed, in those times, working on - or even just showing interest in - Bell inequalities was a kind of scientific suicide. Bohr had it all solved, went the dogma. Looking back, it is difficult to appreciate how deeply denigrated was all research around Bell inequalities and entanglement - the quantum resource necessary to violate them. At the time, French had no agreed-upon translation of entanglement, some used "enchevetrement", others "intrication" (the latter has by now been officially recognized by the French academy). Fortunately, the French system allowed young physicists such as Aspect to hold permanent positions, so he decided to score the winning goal. Crucially, he planned to add fast switches that would allow one to chose the measurement settings \(x\) and \(y\) while the photons were already too far away to possibly influence the other side. Aspect was able to achieve this using newly developed lasers to pump his entanglement source, while Clauser and Holt had to use flash lamps. In a series of three beautiful experiments in the early 1980's, Aspect settled the dispute in favor of quantum theory. Accordingly, no future theory will ever satisfy the locality condition (1). Today, this is often expressed by the short expression non-local, which really means not-Bell-local. Despite these beautiful experiments and the intellectually fascinating discoveries, Bell inequalities remained dismissed and poorly understood. Even to this day, the clear terminology non-local (equivalently, not-Bell-local) is too often blurred as not satisfying "local-realism", as if non-realism was a way out [3; 4]. The fact is that assumption (1) is no longer tenable. As an example, consider the scientific background provided by the Nobel Committee [5]. A few lines after correctly presenting Bohm's non-local hidden variable model, one reads that Bell inequality violation shows "that no hidden variable theory would be able to reproduce all the results of quantum mechanics", contradicting the just cited Bohm model (which does predict violation of Bell inequalities). The correct statement is that no _local_ variable theory is able to reproduce all results of quantum mechanics. And a few lines further, locality is defined as no-signaling - no communication without any physical object carrying the information, despite the fact that one of the main contribution of quantum information to the foundations of physics is a clear distinction between these two concepts. Next, realism is defined as determinism, even though Bell inequalities also hold in all stochastic theories satisfying (1). All this illustrates that Bell inequalities are still poorly understood by the general physics community. The 2022 Nobel Prize in physics allows one to hope that henceforth Bell inequalities will be part of all physics cursus. One major step towards a better appreciation of Bell inequalities came from a young Polish PhD student at Oxford University, Artur Ekert. In 1991, he realized that non-local quantum correlations are nothing but cryptographic keys! 
Indeed, in both cases, the correlation is private and, after some error correction, the bits on both sides are identical. This proposal to exploit non-local correlations for cryptographic applications changed everything (though it took several years to prove Ekert's intuition correct [6; 7]). Moreover, just a few years later, Peter Shor showed how one can exploit entanglement to break the commonly used public key cryptography system RSA. Thus, in the 1990's, non-locality and entanglement were in the spotlight, at last. But that would not have sufficed. The entanglement source used so far was too complex. Leonard Mandel, at Rochester University, realized that a humble non-linear crystal could provide highly entangled photons when pumped by a simple diode. Moreover, the entangled photons could easily be coupled into optical fibers, opening thus the road to quantum cryptography using existing infrastructure, e.g. our demonstrations of quantum non-locality over the Swisscom network and quantum cryptography under Lake Geneva [8], illustrated in Fig. 1. The focus thus changed from foundations of quantum physics to quantum information science and technologies. New ideas emerged, like quantum teleportation and quantum error correction, in addition to fast experimental progress. Anton Zeilinger had been interested in foundations since his early days as a physicist in neutron interferometry. He quickly joined the quantum information community and became a leading figure. His demonstration of quantum teleportation, immediately after the one in Rome, by De Martini and Popescu, attracted enormous attention, both within the scientific community and from the public at large. Soon thereafter, Zeilinger went further and demonstrated the teleportation of entanglement. Generally, quantum teleportation is the resource behind quantum computation and long-distance quantum communication. Zeilinger also improved on Aspect's experiments in fast choices of the measurement settings (though with low detection efficiencies, hence much simpler experiments were possible [10]). Next, the so-called detection loophole was closed in an optical experiment by Paul Kwiat's group at Ilinois University [11], before a series of "final" loophole-free experiments, one of which was carried out by Zeilinger's group. Zeilinger continued with a long series of remarkable experiments, including dense coding, and the demonstration of 3 and 4 photon entanglement, culminating with the long-distance free-space communication in the Canary Islands, making him a clear leader of the new field of experimental quantum information. The next step was the understanding that entanglement is actually not necessary for point-to-point quantum cryptography, though it remains essential for the security proofs. Indeed, while in tests of Bell inequalities one sets the source about half way between the measurement devices, in applications it is much more practical to put it on one side. Hence, a mere single photon travels to the receiver. This, and a few additional tricks - especially the move to wavelengths compatible with standard telecom optical fibers, which we initiated in Geneva with the development of specific single-photon detectors, made it possible to not only demonstrate quantum cryptography, but to industrialize and commercialize it. Today, there are many small companies selling quantum cryptography equipments, some quite advanced as illustrated in Fig. 2. 
Development efforts will continue, but no longer with the aim of excluding local variables satisfying (1); the goals now are to make the equipment cheaper, faster and able to cover longer distances, probably by exploiting quantum teleportation.

Figure 1: _Quantum cryptography under Lake Geneva was the first quantum experiment requiring a satellite photo to illustrate it. Nowadays in commercial use [9]._

On the conceptual side, the violation of Bell inequalities dramatically revolutionized our world-view. Interestingly, Newton's theory of gravity was also non-local, even signaling. But Einstein improved on it, making gravity local. It is thus not surprising that he strongly objected to quantum non-locality, not fully appreciating that it is of a very different sort: without any action at a distance, just non-local randomness without any possibility to use it for signaling [12; 13]. In contrast to Newton's non-locality, quantum non-locality is here to stay; the experimental evidence is clear on that point. Today, non-local quantum correlations are explored for device-independent quantum information processing [7], in particular device-independent quantum cryptography, a truly fascinating research field unthinkable before Bell's work. Another timely and exciting conceptual goal is to take non-locality beyond the simple Bell scenario and place it in the context of quantum networks with several independent sources of entanglement [14]; this already led to the remarkable result that some quantum networks can't be described using only real-number Hilbert spaces [15]. Sincere congratulations John, Alain and Anton, you made me so happy. Congratulations also to the Nobel Committee for recognizing, finally, the game-changing findings of the late John Stewart Bell, with whom, along with his wife Mary, I had the pleasure of sharing several cheese raclettes in downtown Geneva.

## Acknowledgement

Many thanks to Benjamin Feddersen for polishing my English.
2309.08797
On the Asymptotics of Graph Cut Objectives for Experimental Designs of Network A/B Testing
A/B testing is an effective way to assess the potential impacts of two treatments. For A/B tests conducted by IT companies, the test users of A/B testing are often connected and form a social network. The responses of A/B testing can be related to the network connection of test users. This paper discusses the relationship between the design criteria of network A/B testing and graph cut objectives. We develop asymptotic distributions of graph cut objectives to enable rerandomization algorithms for the design of network A/B testing under two scenarios.
Qiong Zhang
2023-09-15T22:43:20Z
http://arxiv.org/abs/2309.08797v1
# On the Asymptotics of Graph Cut Objectives for Experimental Designs of Network A/B Testing

###### Abstract

A/B testing is an effective way to assess the potential impacts of two treatments. For A/B tests conducted by IT companies, the test users of A/B testing are often connected and form a social network. The responses of A/B testing can be related to the network connection of test users. This paper discusses the relationship between the design criteria of network A/B testing and graph cut objectives. We develop asymptotic distributions of graph cut objectives to enable rerandomization algorithms for the design of network A/B testing under two scenarios.

Keywords: Network-correlated responses; Network interference; Design of controlled experiments; Optimal design.

## 1 Introduction

IT companies such as Facebook and LinkedIn frequently conduct controlled experiments to evaluate the performance of two versions (e.g., A and B) of products, features, services, etc. This task is an experimental design problem that requires assigning the users to one of the two versions and collecting their responses for evaluation (Larsen et al., 2023). Often, the users are connected through the apps and form a social network. We refer to network A/B testing as the case that the users participating in A/B testing experiments are connected in a social network, and the social connection may imply the potential dependence between connected users. Therefore, network A/B testing design requires assigning users to the A or B version according to their network connection with other users; some examples of recent works include Gui et al. (2015); Parker et al. (2017); Basse and Airoldi (2018); Pokhilko et al. (2019); Zhang and Kang (2022). In the literature of graph theory and optimization (Ben-Tal and Nemirovski, 2001; Gross and Yellen, 2005), this problem is related to cutting the graph into two partitions with respect to some objectives. In this section, we first provide an overview of the graph cut problem and then connect it with the experimental objectives of network A/B testing.

### Graph Cut Problems

Consider an undirected and unweighted graph with \(n\) vertexes, each representing a user in the social network given by the graph. We can express the connection between two vertexes by an \(n\times n\) adjacency matrix \(W=\{w_{ij}\}\) whose \((i,j)\)-th entry is \(w_{ij}\). The diagonal entries \(w_{ii}\)'s of this matrix are loaded with zeros, whereas the off-diagonal entries are \[w_{ij}=\begin{cases}1,&\text{if there is an edge between vertexes $i$ and $j$}\\ 0,&\text{otherwise.}\end{cases} \tag{1}\] Let \(\mathbf{x}=(x_{1},\ldots,x_{n})^{\top}\in\{-1,1\}^{n}\) be the assignments of two options to the \(n\) users. This is equivalent to cutting the graph into two disjoint subsets, each with the users assigned to one of the two options, respectively. If there is a cut between two connected users \(i\) and \(j\) (i.e., \(w_{ij}=1\)), then \(x_{i}\) and \(x_{j}\) take different values 1 and -1. A graph cut is minimum if the edges across two subsets resulting from the cut are minimized (Gross and Yellen, 2005). Equivalently, two connected users are more likely to receive the same treatment. This problem can be formulated by \[\max_{\mathbf{x}\in\{-1,1\}^{n}}\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}x_{i}x_{j},\ \ \text{s.t.}\ \ -n+1\leq\sum_{i=1}^{n}x_{i}\leq n-1. \tag{2}\] By maximizing the objective, the solution to this problem tends to assign the same value to \(x_{i}\) and \(x_{j}\) if \(i\) and \(j\) are connected.
The constraint \(-n+1\leq\sum_{i=1}^{n}x_{i}\leq n-1\) rules out the situation that all \(x_{i}\)'s are assigned with 1 or -1 as a trivial maximum of the objective. The value of \(\sum_{i=1}^{n}x_{i}\) can be constrained to be zero or in a small interval containing zero if the sizes of the two sub-graphs are required to be relatively the same. The minimum cut problem in (2) is polynomial-time solvable (Lawler, 2001). A graph cut is maximum if the edges across two subsets are maximized (Ben-Tal and Nemirovski, 2001), which is equivalent to \[\min_{\mathbf{x}\in\{-1,1\}^{n}}\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}x_{i}x_{j}. \tag{3}\] The solution of this problem tends to assign opposite signs to connected vertexes \(i\) and \(j\). The graph cut problem is related to experimental designs for network A/B testing. We describe the connections in the following section.

### Optimal Designs in Network A/B Testing

Assume that the users' responses are modeled by \[y_{i}=\alpha+x_{i}\beta+\delta_{i}, \tag{4}\] where \(\alpha\) is the intercept, \(\beta\) represents the treatment effect, and \(\delta_{i}\) represents the network effect. Next, we describe two common scenarios of the network effect models.

Scenario I: Network Correlated Responses. Under this scenario, two connected users are assumed to share common features. Thus, the responses of two connected users are correlated due to their common features; examples include Basse and Airoldi (2018); Pokhilko et al. (2019); Zhang and Kang (2022). Let \(\mathbf{\delta}=\{\delta_{1},\ldots,\delta_{n}\}^{\top}\), with \(\delta_{i}\) being the error term from (4). Following the assumption in (Zhang and Kang, 2022), we have that \[\mathbf{\delta}\sim\mathcal{MVN}_{n}(0,\sigma^{2}R(W,\rho)^{-1}), \tag{5}\] with \(\sigma^{2}\) being the variance parameter and \(\rho\) being the correlation parameter that characterizes the strength of the correlation between responses of connected users. An example of \(R(W,\rho)\) in (5) is the conditional auto-regressive model (Besag, 1974) with \[R(W,\rho)=\text{diag}\left(d_{1},\ldots,d_{n}\right)-\rho W \tag{6}\] where \(d_{i}=\sum_{j=1}^{n}w_{ij}\) is the degree of the \(i\)-th vertex for \(i=1,\ldots,n\). According to Pokhilko et al. (2019), given the value of \(\rho\), the variance of the weighted least squares estimator of the treatment effect \(\widehat{\beta}\) is \[\text{Var}\left(\widehat{\beta}\right)=\sigma^{2}\left(\sum_{i,j}w_{i,j}-\rho \sum_{i,j}w_{i,j}x_{i}x_{j}-\frac{\left(1-\rho\right)\left(\sum_{i=1}^{n}d_{i }x_{i}\right)^{2}}{\sum_{i,j}w_{i,j}}\right)^{-1}. \tag{7}\] For a given network \(W\), a lower bound of (7) can be expressed by \[\text{Var}\left(\widehat{\beta}\right)\geq\frac{\sigma^{2}}{\left(1+\rho \right)}\left\{\sum_{i,j}w_{i,j}\right\}^{-1}. \tag{8}\] As noted by Pokhilko et al. (2019), this lower bound is attained if \(\sum_{i,j}w_{ij}x_{i}x_{j}=-\sum_{i,j}w_{ij}\) and \(\sum_{i=1}^{n}d_{i}x_{i}=0\) hold exactly. The first condition \(\sum_{i,j}w_{ij}x_{i}x_{j}=-\sum_{i,j}w_{ij}\) requires that any pair of connected users are assigned with different treatments, whereas the second condition \(\sum_{i=1}^{n}d_{i}x_{i}=0\) requires that the treatment allocation is stratified with respect to the degrees of the vertexes. It is obvious that this lower bound cannot be exactly attained for all networks. To reduce the variance of the estimated treatment effect, it is desired to allocate design points to produce smaller values of \(\sum_{i,j}w_{ij}x_{i}x_{j}\) and \(\left(\sum_{i=1}^{n}d_{i}x_{i}\right)^{2}\).
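As a quick numerical illustration of how these quantities enter (7) and (8), the following sketch evaluates the variance of \(\widehat{\beta}\) for a random balanced design on a small simulated network. The network, the design, and the values of \(\rho\) and \(\sigma^{2}\) are arbitrary choices made here for illustration.

```
import numpy as np

rng = np.random.default_rng(1)

# A small simulated network (illustrative only): symmetric 0/1 adjacency, zero diagonal.
n, p = 20, 0.3
W = (rng.random((n, n)) < p).astype(float)
W = np.triu(W, 1)
W = W + W.T
d = W.sum(axis=1)

# A random balanced design: half of the users get +1 and half get -1.
x = np.array([1.0] * (n // 2) + [-1.0] * (n // 2))
rng.shuffle(x)

rho, sigma2 = 0.5, 1.0
sum_w = W.sum()                                   # sum_{i,j} w_ij
var_beta = sigma2 / (sum_w - rho * (x @ W @ x) - (1 - rho) * (d @ x) ** 2 / sum_w)  # Eq. (7)
lower_bound = sigma2 / ((1 + rho) * sum_w)                                           # Eq. (8)
print(var_beta, lower_bound)
```

Repeating this for many random designs gives an empirical sense of how far typical balanced designs sit from the bound in (8), which is the comparison formalized in Section 4.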
Also, \(|\sum_{i=1}^{n}x_{i}|\leq 1\) are required sometimes to ensure that the design is balanced over the two treatments. Scenario II: Network Interference.Under this scenario, the users' responses are affected by the design allocation of their connected users. A commonly used model (e.g., Gui et al. (2015); Parker et al. (2017)) for \(\delta_{i}\) is \[\delta_{i}=\sum_{j=1}^{n}w_{ij}\alpha+\left(\sum_{j=1}^{n}w_{ij}x_{j}\right) \gamma+\varepsilon_{i}, \tag{9}\] where \(\alpha\) and \(\gamma\) are unknown parameters, and \(\varepsilon_{i}\) is an independent random error with mean zero and variance \(\sigma^{2}\). Under this model assumption, the variance of the least squared estimator of \(\beta\) is \[\mathrm{Var}\left(\widehat{\beta}\right)=\sigma^{2}\left\{\mathbf{x}^{\top} \left(I-F_{n}(F_{n}^{\top}F_{n})^{-1}F_{n}^{\top}\right)\mathbf{x}\right\}^{-1} \tag{10}\] where \(F_{n}=[\mathbf{1}_{n}\ W\mathbf{1}_{n}\ W\mathbf{x}]\) is an \(n\times 3\) matrix and \(\mathbf{1}_{n}\) is an \(n\) dimensional vector loaded with ones. It is obvious that \[\mathrm{Var}\left(\widehat{\beta}\right)\geq\frac{\sigma^{2}}{n}. \tag{11}\] The objective \(\mathrm{Var}\left(\widehat{\beta}\right)\) is minimized if \(\mathbf{x}^{\top}F_{n}(F_{n}^{\top}F_{n})^{-1}F_{n}^{\top}\mathbf{x}=0\). Then, if \(n\) is even, the sufficient condition to minimize \(\mathrm{Var}\left(\widehat{\beta}\right)\) is that \[\sum_{i=1}^{n}x_{i}=0,\quad\sum_{i=1}^{n}d_{i}x_{i}=0\quad\text{and}\quad\sum_ {i=1}^{n}\sum_{j=1}^{n}w_{ij}x_{i}x_{j}=0\] Through the two examples under the two scenarios, we see that the design criteria for network A/B testing often contain three components: \[\mathbf{x}^{\top}W\mathbf{x} =\sum_{i=1}^{n}\sum_{j=1}^{n}w_{ij}x_{i}x_{j}, \tag{12}\] \[\mathbf{x}^{\top}W\mathbf{1}_{n} =\sum_{i=1}^{n}d_{i}x_{i},\] (13) \[\mathbf{x}^{\top}\mathbf{1}_{n} =\sum_{i=1}^{n}x_{i}. \tag{14}\] If \(|\mathbf{x}^{\top}\mathbf{1}_{n}|\leq 1\), the design \(\mathbf{x}\) is balanced over the two treatments. If \(\mathbf{x}^{\top}W\mathbf{1}_{n}=0\), the design \(\mathbf{x}\) is balanced with respect to the degrees of the network connection of all \(n\) users. Therefore, \(|\mathbf{x}^{\top}\mathbf{1}_{n}|\leq 1\) and \(\mathbf{x}^{\top}W\mathbf{1}_{n}=0\) give two balanced constraints of the designs. The objective \(\mathbf{x}^{\top}W\mathbf{x}\) is minimized under the scenario of network correlated responses, which is the objective of the max-cut problem in (3). As noted by Pokhilko et al. (2019), the ideal case of the optimal design is given by solving the max-cut problem with the two balanced constraints: \[\min \mathbf{x}^{\top}W\mathbf{x}\] s.t. \[-\delta_{1}\leq\mathbf{x}^{\top}W\mathbf{1}_{n}\leq\delta_{1}\] \[-\delta_{2}\leq\mathbf{x}^{\top}\mathbf{1}_{n}\leq\delta_{2}\] \[\mathbf{x}\in\{-1,1\}^{n},\] for \(\delta_{1}>0\) and \(\delta_{2}>0\), where \(\delta_{2}\) can be set to be one if an exact balance over the two treatments is required. The objective \(\mathbf{x}^{\top}W\mathbf{x}\) should be close to zero under Scenario II. This problem can not be directly solved as a min-cut or a max-cut problem. A visualization of the ideal cases of the optimal designs is given by Figure 1. The network contains 24 users, and the users form 12 pairs, with each pair of users connected. The two colors denote the allocation of two treatments. 
We observe that the left of Figure 1 shows an optimal design under Scenario I, which allocates different treatments for each pair of connected users to attain the minimized value of the objective in (12). The right of Figure 1 shows an optimal design under Scenario II. The resulting design contains three pairs of users allocated with treatment 1, three pairs allocated with treatment -1, and six allocated with different treatments. Therefore, the value of the objective in (12) attains zero exactly.

Figure 1: The optimal designs of A/B testing under Scenario I (Left) and Scenario II (Right) for a network containing 24 users.

For an arbitrary network, it is not guaranteed that the ideal optimal values in (8) and (11) can be attained. Also, obtaining the exact optimal design for large social networks without randomization can be inefficient in computation. Moreover, exact optimal design without sufficient randomization can cause robustness concerns in statistical inference (e.g., Morgan and Rubin (2012)). Next, we propose an algorithm to obtain random designs that reduce the variance of estimated treatment effects under each scenario.

## 2 A Random Design Algorithm for Network A/B Testing

Random designs with a certain amount of variance reduction can often be obtained by rerandomization until satisfying some stopping criteria (e.g., Morgan and Rubin (2012); Li and Ding (2017)). Let \[g(\mathbf{x};W,\delta_{1},\delta_{2})=\begin{cases}1&\text{if}\quad-\delta_{1} \leq\mathbf{x}^{\top}W\mathbf{1}_{n}\leq\delta_{1}\quad\text{and}\quad-\delta _{2}\leq\mathbf{x}^{\top}\mathbf{1}_{n}\leq\delta_{2}\\ 0&\text{o.w.}\end{cases}, \tag{15}\] for \(\delta_{1}>0\) and \(\delta_{2}>0\). The value of this function indicates whether or not the two balanced constraints are met. For Scenarios I & II, we denote the stopping rule associated with the graph cut objectives by \[\phi_{1}(\mathbf{x};W,c)=\begin{cases}1&\text{if}\quad\mathbf{x}^{\top}W \mathbf{x}\leq c\\ 0&\text{o.w.}\end{cases}\qquad\text{and}\quad\phi_{2}(\mathbf{x};W,c)= \begin{cases}1&\text{if}\quad|\mathbf{x}^{\top}W\mathbf{x}|\leq c\\ 0&\text{o.w.}\end{cases}, \tag{16}\] respectively. For given values of \(\delta_{1}\), \(\delta_{2}\), and \(c\), we can obtain the random design and check the values of \(g(\mathbf{x};W,\delta_{1},\delta_{2})\) and \(\phi_{1}(\mathbf{x};W,c)\) (or \(\phi_{2}(\mathbf{x};W,c)\)) in a sequence to form a rerandomization algorithm (Pokhilko, 2019). For a stopping rule containing two or more criteria, it can be challenging to investigate the efficiency of the algorithm and control the running time. Therefore, we first propose an algorithm that can generate random designs \(\mathbf{x}\) under the condition that \(g(\mathbf{x};W,\delta_{1},\delta_{2})=1\) for some \(\delta_{1}\) and \(\delta_{2}\).

**Algorithm 1**: _We obtain a random design \(\mathbf{x}\) of size \(n\) for a given network \(W\) with degrees \(d_{i}\)'s._

**Step 1:** _Define_ \[\tilde{d}_{i}=d_{i}+u_{i},\quad\mbox{with}\quad u_{i}\sim\mbox{i.i.d.}\quad U(0, 1). \tag{17}\] _Denote the rank of \(\tilde{d}_{i}\) by \(r_{i}\), which takes values from 1 to \(n\)._

**Step 2:** _Denote_ \[c_{i}=\begin{cases}\lfloor\frac{r_{i}}{2}\rfloor&\mbox{if}\quad n\quad\mbox{ is even}\\ \lfloor\frac{r_{i}-1}{2}\rfloor&\mbox{if}\quad n\quad\mbox{is odd}\end{cases},\] _where \(\lfloor r\rfloor\) denotes the largest integer smaller than or equal to \(r\)._
_Then the vertexes are divided into \(n/2\) or \((n+1)/2\) groups, each sharing a common value of \(c_{i}\) and with size one or two._

**Step 3:** _For the groups given by Step 2, we randomly shuffle \(\{1,-1\}\) within each group independently to assign the corresponding design values for the vertexes with \(c_{i}>0\). If \(n\) is odd, there will be one group containing a single vertex with \(c_{i}=0\). We randomly assign 1 or -1 to the user associated with this vertex._

The purpose of step 1 is to make sure that there are no ties in the rank of the vertexes, and in the meantime, the random variables \(u_{i}\)'s in step 1 provide extra randomness in the resulting design \(\mathbf{x}\). The aim of step 2 and step 3 is to balance over the degrees of different vertexes. As a result of utilizing this algorithm, \(|\mathbf{x}^{\top}\mathbf{1}_{n}|\) will be set as zero or one exactly and \(\mathbf{x}^{\top}W\mathbf{1}_{n}\) will be set close to zero by generating a vector \(\mathbf{x}\) that balances the degrees. We state Proposition 1 below to formally demonstrate that the random design \(\mathbf{x}\) given by Algorithm 1 satisfies the balanced constraints in (15) for some \(\delta_{1}\) and \(\delta_{2}\).

**Proposition 1**: _Algorithm 1 leads to a random design \(\mathbf{x}\) satisfying \(g(\mathbf{x};W,\delta_{1},\delta_{2})=1\) for any \(\delta_{1}\geq c(W)\) and any \(\delta_{2}\geq 1\), where_ \[c(W)=\begin{cases}\sum_{i=1}^{n/2}\left(d_{(2i)}-d_{(2i-1)}\right)&\text{if}\quad n \quad\text{is even}\\ d_{(1)}+\sum_{i=1}^{(n-1)/2}\left(d_{(2i+1)}-d_{(2i)}\right)&\text{if}\quad n \quad\text{is odd}\end{cases},\] _with \(d_{(i)}\)'s being the ordered degrees based on the rank \(r_{i}\)'s in (17). The value of \(c(W)\) is a constant given the network adjacency matrix \(W\)._

The random design \(\mathbf{x}\) given by Algorithm 1 meets the balanced criteria in \(g(\mathbf{x};W,\delta_{1},\delta_{2})\). By utilizing this algorithm, we can rerandomize designs \(\mathbf{x}\) to obtain a random design satisfying one of the stopping rules in (16) for a given \(c\). The proposed random design algorithm is described below.

**Algorithm 2**: _Let \(T\) be the maximum number of randomizations. For \(t\leq T\), we loop over the following two steps until the stopping rule in the second step is met._

**Step 1:** _Generate a random design \(\mathbf{x}_{t}\) using Algorithm 1._

**Step 2:** _Compute \(\mathbf{x}_{t}^{\top}W\mathbf{x}_{t}\). For Scenario I (or II), stop the loop if \(\phi_{1}(\mathbf{x}_{t};W,c)=1\) (or \(\phi_{2}(\mathbf{x}_{t};W,c)=1\)) is satisfied._

_For \(t<T\), return the design \(\mathbf{x}_{t}\). For \(t=T\), return the design \(\mathbf{x}_{t^{*}}\) with_ \[t^{*}=\begin{cases}\operatorname*{argmin}_{t=1,\ldots,T}\mathbf{x}_{t}^{\top}W \mathbf{x}_{t}&\text{for Scenario I}\\ \operatorname*{argmin}_{t=1,\ldots,T}|\mathbf{x}_{t}^{\top}W\mathbf{x}_{t}|& \text{for Scenario II}\end{cases}\]

This algorithm is provided for a given threshold value \(c\) for (16) and some specific values of \(\delta_{1}\) and \(\delta_{2}\) in Proposition 1. It is important to understand how small those values are compared with the distributions of the objectives in (14) under random designs. In the next section, we propose some asymptotic results to support the investigation of their distributions with random designs.

## 3 Asymptotic Results on the Graph Cut Objectives

Proposition 1 gives the sufficient lower bounds of \(\delta_{1}\) and \(\delta_{2}\) given by Algorithm 1.
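For concreteness, a minimal Python sketch of Algorithm 1, together with the rerandomization loop of Algorithm 2, might look as follows. This is an illustration of the steps described above rather than the authors' code, and the threshold \(c\) is simply passed in; the asymptotic results of this section give one principled way to choose it.

```
import numpy as np

def algorithm1(W, rng):
    # Algorithm 1: pair vertexes by jittered degree rank and randomly assign {1, -1}
    # within each pair, so that |x' 1| <= 1 and x' W 1 is close to zero.
    n = W.shape[0]
    d = W.sum(axis=1)
    order = np.argsort(d + rng.random(n))     # Step 1: break ties with U(0,1) jitter
    x = np.zeros(n)
    start = 0
    if n % 2 == 1:                            # odd n: lowest-degree vertex forms its own group
        x[order[0]] = rng.choice([1.0, -1.0])
        start = 1
    for i in range(start, n, 2):              # Steps 2-3: shuffle {1, -1} within each pair
        x[order[i]], x[order[i + 1]] = rng.permutation([1.0, -1.0])
    return x

def algorithm2(W, c, T, scenario=1, seed=0):
    # Algorithm 2: rerandomize designs from Algorithm 1 until phi_1 (or phi_2) is met.
    rng = np.random.default_rng(seed)
    best_x, best_val = None, np.inf
    for _ in range(T):
        x = algorithm1(W, rng)
        q = x @ W @ x
        val = q if scenario == 1 else abs(q)
        if val <= c:
            return x
        if val < best_val:
            best_x, best_val = x, val
    return best_x                             # t = T: return the best design encountered
```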
In practice, the value of \({\bf x}^{\top}W{\bf 1}_{n}\) given by this randomization algorithm can be smaller than \(c(W)\). Although Proposition 1 specifies some values of \(\delta_{1}\) and \(\delta_{2}\) that meet the balanced constraints in (15), it is also necessary to justify how small those values are compared with the results from complete random designs. First of all, the value of \(|{\bf x}^{\top}{\bf 1}_{n}|\) given by Algorithm 1 is taking the minimum possible value. Given that \(|{\bf x}^{\top}{\bf 1}_{n}|\leq 1\), we now develop the asymptotic distribution of \({\bf x}^{\top}W{\bf 1}_{n}\) to compare the lower bound of \(\delta_{1}\) for justification purpose. **Proposition 2**: _For a random allocation of \({\bf x}\) satisfying \(|{\bf x}^{\top}{\bf 1}_{n}|\leq 1\), and_ \[\frac{\max_{1\leq i\leq n}(d_{i}-\bar{d})^{2}}{\sum_{i=1}^{n}(d_{i}-\bar{d})^{ 2}}\to 0,\quad\mbox{\rm with}\quad\bar{d}=n^{-1}\sum_{i=1}^{n}d_{i}\] _as \(n\to\infty\), we have that_ \[\frac{\sum_{i=1}^{n}d_{i}x_{i}}{\sqrt{\sum_{i=1}^{n}(d_{i}-\bar{d})^{2}}}\to N (0,1)\] _in distribution as \(n\to\infty\)._ The asymptotic distribution is used to compute the probability of more extreme cases compared to a given threshold value \(c\) \[{\rm P}\left(\left|{\bf x}^{\top}W{\bf 1}_{n}\right|\leq c\left|{\bf x}^{\top}{ \bf 1}_{n}\right|\leq 1\right)\approx 2\Phi\left(\frac{c}{\sqrt{\sum_{i=1}^{n}(d_{i}- \bar{d})^{2}}}\right)-1 \tag{18}\] where \(\Phi(\cdot)\) is the CDF of the standard normal distribution. Once the network adjacency matrix \(W\) is given, we are able to compute this probability. For example, by specifying \(c=c(W)\), the above probability tells the possibility of a random design \({\bf x}\) satisfying \(|{\bf x}^{\top}W{\bf 1}_{n}|\leq c(W)\) given that \(|{\bf x}^{\top}{\bf 1}_{n}|\leq 1\). Also, given a specific design \({\bf x}_{0}\), we can specify \(c={\bf x}_{0}^{\top}W{\bf 1}_{n}\) to validate if this design can lead to a sufficiently small value. For both scenarios, the value of \(c\) can be chosen as a smaller quantile according to the distributions of the graph cut objective \({\bf x}^{\top}W{\bf x}\) or \(|{\bf x}^{\top}W{\bf x}|\) for a random design \({\bf x}\) generated from Algorithm 1. We first define some convenient notation and then provide the asymptotic distribution of \({\bf x}^{\top}W{\bf x}\). Let \(\tilde{W}\) be the adjacency matrix reorder by \(r_{i}\) in (17) from the smallest to the largest. If \(n\) is an odd number, \(r_{i}=1\) is removed before reordering. Then the size of \(\tilde{W}\) is even. We further define \[W_{0}=\left({\bf I}\otimes[1,-1]\right)\tilde{W}\left({\bf I}\otimes[1,-1]^{ \top}\right) \tag{19}\] with \({\bf I}\) be the identity matrix with size \(n/2\) or \((n-1)/2\) for \(n\) being even or odd. **Proposition 3**: _Given \(W_{0}\) in (19), we assume that_ \[\frac{\min_{i=1,\ldots,n}d_{i}}{\sum_{i\neq j}w_{0,ij}^{2}}\to 0\quad\mbox{ and}\quad\frac{\lambda_{\max}(\tilde{W}_{0})}{\sqrt{\sum_{i\neq j}w_{0,ij}^{2}}} \to 0\quad\mbox{as}\quad n\to\infty,\] _where \(\tilde{W}_{0}\) is a matrix with off-diagonal entries equal to the corresponding entries of \(W_{0}\) in (19), and diagonal entries equal to 0 and \(w_{0,ij}\) is the \(i,j\)-th entry of \(W_{0}\) in (19). The notation \(\lambda_{\max}(A)\) gives the maximum eigenvalue of a symmetric matrix \(A\). 
Then we have that_ \[\frac{{\bf x}^{\top}W{\bf x}-{\rm trace}(W_{0})}{2\sqrt{\sum_{i<j}w_{0,ij}^{2} }}\to N(0,1)\] _in distribution as \(n\to\infty\)._ Under this proposition, we can set \(c\) in Algorithm 2 based on the asymptotic normal distribution \[{\bf x}^{\top}W{\bf x}\sim AN\left({\rm trace}(W_{0}),4\sum_{i<j}w_{0,ij}^{2}\right)\] For \(\phi_{1}\), we set the stopping threshold as the \(\alpha\)-th quantile of \({\bf x}^{\top}W{\bf x}\), i.e. \[c={\rm trace}(W_{0})+2\sqrt{\sum_{i<j}w_{0,ij}^{2}}\Phi^{-1}(\alpha),\] where \(\Phi^{-1}\) is the inverse function of the standard normal cumulative distribution function. For \(\phi_{2}\), we set the stopping threshold as the \(\alpha\)-th quantile of \(|{\bf x}^{\top}W{\bf x}|\), which asymptotically follows the folded normal distribution with parameters \(\text{trace}(W_{0})\) and \(4\sum_{i<j}w_{0,ij}^{2}\). Therefore, the process of rerandomization in Algorithm 2 constructs a Geometric distribution with the success probability approximated by \(\alpha\). Then we can set the maximum number of randomization \(T\) according to this distribution. Throughout the numerical results of this paper, we set \(T=5000\), \(\alpha=0.005\) for Scenario I, and \(\alpha=0.1\) for Scenario II. ## 4 Numerical Study We evaluate the performances of the proposed design approach by computing two performance measures. We first generate 1000 random balanced designs satisfying that \(\sum_{i=1}^{n}x_{i}=0\), where \(n\) is set to be an even number in the numerical study for convenience. We compute the percentile of \(\text{Var}(\widehat{\beta})\) led by the proposed design approach among the variances from the 1000 random designs: \[\text{Percentile}=\frac{\sum_{i=1}^{1000}I(v_{i}\leq v_{opt})}{1000}, \tag{20}\] where \(v_{i}\)'s are the variances of \(\widehat{\beta}\) led by the 1000 random balanced designs, whereas \(v_{opt}\) is that led by the proposed design approach. We also compute the optimality gap of the proposed design with respect to the lower bounds (denoted by \(v_{lb}\)) given by (8) and (11): \[\text{Gap}=1-\frac{v_{lb}}{v_{opt}}. \tag{21}\] For comparison purposes, we compute the optimality gap of the median variance of the 1000 random balanced designs: \(\text{Gap}_{\text{median}}=1-v_{lb}/v_{median}\) with \(v_{median}\) be the median variance. ### Example I: Synthetic Networks In this section, we evaluate the performances of the proposed approach using synthetic networks. Given the total number of vertexes \(n\) and network density \(p\), for \(i<j\), \(w_{ij}\)'s are generated as iid Bernoulli random variables with the probability equal to one be \(p\). We remove the isolated vertexes, so the actual size of the generated network can be smaller than \(n\). For convenience of implementation, we force the resulting size of the network to be even. We first check the probability in (18) for the synthetic networks. The results are depicted in Figure 2. In the left of Figure 2, we set \(c\) in (18) be \(c(W)\). The results show that the probability of the upper bound is at most 0.1 for networks with a size above 1000, but for networks with a size 100 and smaller density, the probability might be as high as 0.5. For networks with size 1000 or above, the guaranteed upper bound \(c(W)\) for \(\sum_{i=1}^{n}d_{i}x_{i}\) is small with respect to its asymptotic distribution. In the right of Figure 2, we set \(c\) in (18) be \(\sum_{i=1}^{n}d_{i}x_{i}\) with 100 copies of the design \(\mathbf{x}\) generated by Algorithm 1. 
The average probability values are reported. The actual values of \(\sum_{i=1}^{n}d_{i}x_{i}\) can be significantly smaller than the upper bound \(c(W)\), with small probability values relative to their asymptotic distributions.

Figure 2: The resulting probability in (18) with \(c=c(W)\) (left, probability of the upper bound) and the average probability with \(c=|\sum_{i=1}^{n}d_{i}x_{i}|\) (right, average probability of the actual values) for 100 random designs generated by Algorithm 1 with synthetic networks.

Next, we compute the Percentile and Gap in (20) and (21) for the designs generated by the proposed method. The results of Scenarios I and II are given in Table 1. We vary the size of the network \(n\) and the network density \(p\), and generate ten networks under each setting. We report the averages of Percentile and Gap over the ten networks under each setting. For comparison purposes, we also give the average value of Gap\({}_{\text{median}}\) in the tables. For Scenario I, all the percentiles are below 0.004, which shows that the variance led by the proposed design is nearly optimal among the 1000 random designs. The Gap values of the proposed design are all smaller than Gap\({}_{\text{median}}\), although the advantage of the proposed design in terms of Gap becomes smaller for \(n\) larger than 1000. For Scenario II, the percentiles of the proposed design approach are around 0.01 for smaller \(n\) and drop to around 0.005 for larger \(n\). The Gap values are smaller than in Scenario I. For \(n=1000\) and 2000, the Gap values relative to the ideal designs are nearly zero.

### Example II: Real Networks from Facebook

We evaluate the performance of the proposed method using real networks from Facebook (Leskovec and Mcauley, 2012). This data contains ten sampled social networks collected from survey participants using a Facebook app. After removing completely isolated users, the sizes of the ten networks range from 52 to 1034 and the network densities range from 0.034 to 0.150. Similar to Figure 2, we first check the probability of more extreme situations given by the upper bound \(c(W)\) and by the actual values of \(|\sum_{i=1}^{n}d_{i}x_{i}|\) for 100 random designs from Algorithm 1 in Figure 3. The figure shows that the designs given by Algorithm 1 achieve small values of \(|\sum_{i=1}^{n}d_{i}x_{i}|\), balancing over the degrees of vertexes. For both Scenarios, the percentile values of the ten networks are given in Figure 4, whereas the gap values are given in Figure 5.
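For completeness, the two summary measures reported in this section can be reproduced with a few lines. In the sketch below, `design_variance` is a placeholder for a function implementing \(\text{Var}(\widehat{\beta})\) for a given design, and `v_lb` stands for the lower bound in (8) or (11); both are placeholders rather than definitions given here.

```python
import numpy as np

def random_balanced_design(n, rng):
    """One random balanced design: half +1 and half -1 (n assumed even)."""
    x = np.repeat([1.0, -1.0], n // 2)
    rng.shuffle(x)
    return x

def percentile_and_gap(design_variance, x_opt, v_lb, n, n_rand=1000, seed=0):
    """Percentile (20) and Gap (21) of a proposed design x_opt, plus Gap_median."""
    rng = np.random.default_rng(seed)
    v_opt = design_variance(x_opt)
    v_rand = np.array([design_variance(random_balanced_design(n, rng))
                       for _ in range(n_rand)])
    percentile = np.mean(v_rand <= v_opt)      # share of random designs with variance <= v_opt
    gap = 1.0 - v_lb / v_opt
    gap_median = 1.0 - v_lb / np.median(v_rand)
    return percentile, gap, gap_median
```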
The percentiles of Scenario I are all below 0.0025, whereas the percentiles of Scenario II are all below 0.015, which shows that th \begin{table} \begin{tabular}{|c|r r|r r r|} \hline Scenario & \(n\) & \(p\) & Percentile & Gap & Gap\({}_{\text{median}}\) \\ \hline & 50 & 0.1 & 0.0021 & 0.2471 & 0.3278 \\ & & 0.3 & 0.0032 & 0.2836 & 0.3271 \\ & 100 & 0.1 & 0.0019 & 0.2853 & 0.3302 \\ I & & 0.3 & 0.0027 & 0.3080 & 0.3299 \\ \cline{2-6} & 1000 & 0.01 & 0.0019 & 0.3191 & 0.3331 \\ & & 0.1 & 0.0024 & 0.3289 & 0.3330 \\ & 2000 & 0.01 & 0.0028 & 0.3267 & 0.3332 \\ & & 0.1 & 0.0033 & 0.3311 & 0.3332 \\ \hline \hline & 50 & 0.1 & 0.0120 & 0.0005 & 0.0448 \\ & & 0.3 & 0.0145 & 0.0011 & 0.0466 \\ & 100 & 0.1 & 0.0153 & 0.0004 & 0.0211 \\ II & & 0.3 & 0.0080 & 0.0003 & 0.0228 \\ \cline{2-6} & 1000 & 0.01 & 0.0046 & 0.0000 & 0.0020 \\ & & 0.1 & 0.0053 & 0.0000 & 0.0021 \\ & 2000 & 0.01 & 0.0037 & 0.0000 & 0.0010 \\ & & 0.1 & 0.0051 & 0.0000 & 0.0010 \\ \hline \end{tabular} \end{table} Table 1: Averages of Percentile, Gap, and Gap\({}_{\text{median}}\) over ten generated networks under each setting to smaller variance than the variances from complete random designs. The comparison of Gap and Gap\({}_{\text{median}}\) is similar to the synthetic networks. Figure 4: The percentile in (20) for the ten sampled networks from Facebook Figure 3: The resulting probability in (18) with \(c=c(W)\) (left, probability of upper bound) and the average probability of \(c=|\sum_{i=1}^{n}d_{i}x_{i}|\) (right, ave probability of the actual values) for 100 random designs generated by Algorithm 1 with the ten sampled networks from Facebook ## 5 Conclusion This paper discovered the relationship between the design criteria of network A/B testing and graph cut objectives. We developed asymptotic distributions of two graph cut objectives to enable rerandomization algorithms to design network A/B testing. The numerical results show that the proposed algorithm effectively generates random designs under certain constraints to reduce the variance of parameter estimation from complete random designs. The proposed asymptotic results can also serve as stopping rules of other random algorithms for graph cut-related problems. ## Appendix: Proofs and Additional Numerical Validation ### Proof of Proposition 1 Note that \(d_{(i)}\)'s are the ordered degrees based on the rank \(r_{i}\)'s in (17). Therefore, for \(i<i^{\prime}\), \(d_{(i)}\leq d_{(i^{\prime})}\). If \(n\) is an even number, after applying this algorithm, we have that \[\sum_{i=1}^{n}x_{i}d_{i}=\sum_{i=1}^{n}x_{i}d_{(i)}=\sum_{i=1}^{n/2}\left[d_{(2 i)}x_{2i}-d_{(2i-1)}x_{2i-1}\right],\] Figure 5: The Gap (opt, blue triangles) and Gap\({}_{\text{median}}\) (median, red dots) in (21) for the ten sampled networks from Facebook where \(x_{2i}\) and \(x_{2i-1}\) are assigned to different treatments, 1 or -1. We can express \[\sum_{i=1}^{n}x_{i}d_{i}=\sum_{i=1}^{n/2}\left[d_{(2i)}-d_{(2i-1)}\right]z_{i},\] with \(z_{i}\in\{-1,1\}\). 
Then, we have that \[-\sum_{i=1}^{n/2}\left[d_{(2i)}-d_{(2i-1)}\right]\leq\mathbf{x}^{\top}W \mathbf{1}_{n}\leq\sum_{i=1}^{n/2}\left[d_{(2i)}-d_{(2i-1)}\right]\] and \[\mathbf{x}^{\top}\mathbf{1}_{n}=0.\] If \(n\) is an odd number, we have that \[-d_{(1)}-\sum_{i=1}^{(n-1)/2}\left(d_{(2i+1)}-d_{(2i)}\right)\leq\mathbf{x}^{ \top}W\mathbf{1}_{n}\leq d_{(1)}+\sum_{i=1}^{(n-1)/2}\left(d_{(2i+1)}-d_{(2i)}\right)\] and \[-1\leq\mathbf{x}^{\top}\mathbf{1}_{n}\leq 1.\] We denote \[c(W)=\begin{cases}\sum_{i=1}^{n/2}\left(d_{(2i+1)}-d_{(2i)}\right)&\text{if} \quad n\quad\text{is even}\\ d_{(1)}+\sum_{i=1}^{(n-1)/2}\left(d_{(2i+1)}-d_{(2i)}\right)&\text{if}\quad n \quad\text{is odd}\end{cases},\] then the conclusion holds. ### Proof of Proposition 2 Note that the asymptotic results for \(\sum_{i=1}^{n}d_{i}x_{i}\) is described in Pokhilko (2019) under random allocations without \(|\mathbf{x}^{\top}\mathbf{1}_{n}|\leq 1\) constraint. Under \(|\mathbf{x}^{\top}\mathbf{1}_{n}|\leq 1\), we derive the asymptotic distribution based on finite population asymptotics (Li and Ding, 2017). We first state a Lemma to support the proof of Proposition 2. **Lemma 1**: _Let \(z_{n}\) be a sequence of random variables that converge in distribution to the standard normal distribution. Let \(s\) be a binary random variable with_ \[\mathrm{P}(s=1)=\mathrm{P}(s=-1)=\frac{1}{2},\] _and \(s\) is independent with \(z_{n}\). Then \(g_{n}=sz_{n}\) converges in distribution to the standard._ _Proof:_ Let \(\Phi(t)\) be the cumulative distribution function (CDF) of \(\mathcal{N}(0,1)\). The CDF of \(g_{n}\) is \[\mathrm{P}(sz_{n}\leq t) =\mathrm{P}(sz_{n}\leq t|s=1)\mathrm{P}(s=1)+\mathrm{P}(sz_{n} \leq t|s=-1)\mathrm{P}(s=-1)\] \[=\frac{1}{2}\left[\mathrm{P}(z_{n}\leq t)+\mathrm{P}(z_{n}\geq-t )\right]\] Since \(z_{n}\) converges in distribution to \(N(0,1)\), we have that \[\mathrm{P}(z_{n}\leq t)\to\Phi(t)\quad\text{and}\quad\mathrm{P}(z_{n}\geq-t) \to 1-\Phi(-t)=\Phi(t)\] as \(n\to\infty\). Accordingly, \(\mathrm{P}(z_{n}\leq t)\to\Phi(t)\) as \(n\to\infty\). The conclusion holds. We now state proof of Proposition 2. We first assume that the \(n\) vertexes have been randomly split into two balanced groups with sizes \(n_{1}\) and \(n_{2}\). Without loss of generality, we assume that \(n_{1}=n_{2}=n/2\) if \(n\) is even, whereas \(n_{1}=n_{2}+1=(n+1)/2\) if \(n\) is odd. Let \(\bar{d}_{1}\) and \(\bar{d}_{2}\) be the average degrees of the vertexes in the first and second groups, respectively. Then, according to Theorem 1 in Li and Ding (2017), we have that \[\frac{\bar{d}_{1}-\bar{d}}{\sqrt{\mathrm{Var}(\bar{d}_{1})}}\to N(0,1)\] in distribution as \(n\to\infty\) under the assumptions given in this proposition. Note that \(\mathrm{Var}(\bar{d}_{1})=(n_{1}^{-1}-n^{-1})(n-1)^{-1}\sum_{i=1}^{n}(d_{i}- \bar{d})^{2}\). Therefore, we have that \[\frac{n_{1}\bar{d}_{1}-n_{2}\bar{d}_{2}}{\sqrt{\sum_{i=1}^{n}(d_{i}-\bar{d})^ {2}}}=\frac{2n_{1}\bar{d}_{1}-n\bar{d}}{\sqrt{\sum_{i=1}^{n}(d_{i}-\bar{d})^ {2}}}=\frac{2n_{1}}{\sqrt{(n-1)n_{1}n/n_{2}}}\frac{\bar{d}_{1}-\bar{d}}{\sqrt{ \mathrm{Var}(\bar{d}_{1})}}+\frac{2n_{1}-n}{\sqrt{(n-1)n_{1}n/n_{2}}}\frac{ \bar{d}}{\sqrt{\mathrm{Var}(\bar{d}_{1})}}\] As \(n\to\infty\), \[\frac{2n_{1}}{\sqrt{(n-1)n_{1}n/n_{2}}}\to 1\quad\text{and}\quad\frac{2n_{1}-n}{ \sqrt{(n-1)n_{1}n/n_{2}}}\to 0\] Therefore, \[\frac{n_{1}\bar{d}_{1}-n_{2}\bar{d}_{2}}{\sqrt{\sum_{i=1}^{n}(d_{i}-\bar{d})^{2 }}}\to N(0,1)\] in distribution as \(n\to\infty\). 
A random allocation of the design vector \(\mathbf{x}\) with the balanced constraint can be equivalently implemented by a random split with fixed size \(n_{1}\) and \(n_{2}\) and then randomize 1 and -1 over the two groups. Therefore, the conclusion holds according to Lemma 1. ### Proof of Proposition 3 Notice that the entries in \(\mathbf{x}\) from Algorithm 1 are dependent. Then, it is not straightforward to develop the distribution of \(\mathbf{x}^{\top}W\mathbf{x}\) directly. To investigate this, we first provide an alternative expression of the objective \(\mathbf{x}^{\top}W\mathbf{x}\) to simplify the derivation of the asymptotic distribution of \(\mathbf{x}^{\top}W\mathbf{x}\) in the following proposition for the case with an even value of \(n\). **Lemma 2**: _Assume that \(n\) is even. Let \(\mathbf{z}=(z_{1},\ldots,z_{n/2})^{\top}\) with \(z_{i}\in\{-1,1\}\) for \(i=1,\ldots,n/2\). Denote \(\mathbf{x}=\mathbf{z}\otimes[1,-1]^{\top}\). We have that_ \[\mathbf{x}^{\top}W\mathbf{x}=\mathbf{z}^{\top}\left(\mathbf{I}_{n/2}\otimes[ 1,-1]\right)W\left(\mathbf{I}_{n/2}\otimes[1,-1]^{\top}\right)\mathbf{z}.\] The conclusion of this Lemma is obvious, since \(\left(\mathbf{I}\otimes[1,-1]^{\top}\right)\mathbf{z}=\mathbf{x}\). For a random design \(\mathbf{x}\) given by Algorithm 1, we reordered by \(r_{i}\)s in (17) from the smallest to the largest. Denote \(\tilde{\mathbf{x}}\) be the reordered random design. According to the definition of \(\tilde{W}\) and \(W_{0}\) in (19) and Algorithm 1, we have that \[\mathbf{x}^{\top}W\mathbf{x}=\tilde{\mathbf{x}}^{\top}\tilde{W}\tilde{ \mathbf{x}}=\mathbf{z}^{\top}W_{0}\mathbf{z},\] where \(\mathbf{z}\in\{-1,1\}^{n/2}\) is a random vector representing the independent random shuffle of \(\{-1,1\}\) within each group in Step 3 of Algorithm 1. Therefore, we can alternatively evaluate the asymptotic distribution of \(\mathbf{z}^{\top}W_{0}\mathbf{z}\) with the entries of \(\mathbf{z}\) being iid random variable. We consider \(\mathbf{z}=(z_{1},\ldots,z_{n/2})^{\top}\) be independent and identically distributed from the distribution with \(\mathbb{P}(z_{i}=1)=\mathbb{P}(z_{i}=-1)=0.5\). Then \[\mathbb{E}(z_{i})=0,\quad\text{Var}(z_{i})=1,\quad\mathbb{E}|z_{i}|^{3}=1.\] Let \(A\) be a matrix with \(ij\)-th element \(a_{ij}=\frac{w_{0,ij}}{\sqrt{\sum_{i\neq j}w_{0,ij}^{2}}}\) if \(i\neq j\), and \(a_{ii}=0\) for \(i=1,\ldots,n/2\). We have that \[\sum_{ij}a_{ij}^{2}=\frac{\sum_{i\neq j}w_{0,ij}^{2}}{\sum_{i\neq j}w_{0,ij}^{ 2}}=1\] According to Theorem 1 in Gotze and Tikhomirov (1999), we have that \[\sup_{u}\left|\mathbb{P}\left(\frac{\mathbf{z}^{\top}A\mathbf{z}}{\sqrt{\text {Var}(\mathbf{z}^{\top}A\mathbf{z})}}\leq u\right)-\Phi(u)\right|\leq C|\lambda _{\max}(A)|,\] for some constant \(C\). Since \[\lambda_{\max}(A)=\frac{\lambda_{\max}(\tilde{W}_{0})}{\sqrt{\sum_{i\neq j}w_{ 0,ij}^{2}}},\] we have that \[\frac{\mathbf{x}^{\top}W\mathbf{x}-\text{E}\left(\mathbf{z}^{\top}W_{0} \mathbf{z}\right)}{\sqrt{\text{Var}(\mathbf{z}^{\top}W_{0}\mathbf{z})}}=\frac {\mathbf{z}^{\top}W_{0}\mathbf{z}-\text{E}\left(\mathbf{z}^{\top}W_{0} \mathbf{z}\right)}{\sqrt{\text{Var}(\mathbf{z}^{\top}W_{0}\mathbf{z})}}=\frac {\mathbf{z}^{\top}\tilde{W}_{0}\mathbf{z}}{\sqrt{\text{Var}(\mathbf{z}^{\top }\tilde{W}_{0}\mathbf{z})}}=\frac{\mathbf{z}^{\top}A\mathbf{z}}{\sqrt{\text{ Var}(\mathbf{z}^{\top}A\mathbf{z})}}\to N(0,1)\] in distribution as \(n\rightarrow\infty\). 
We see that \[\text{E}\left(\mathbf{z}^{\top}W_{0}\mathbf{z}\right)=\text{trace}(W_{0})\] and \[\text{Var}(\mathbf{z}^{\top}W_{0}\mathbf{z})=4\sum_{i<j}w_{0,ij}^{2}\] which concludes the case for \(n\) being even. If \(n\) is an odd number, the difference between the original objective and the alternative expression comes from the vertex with the minimum degree. We have that \[\mathbf{x}^{\top}W\mathbf{x}=\tilde{\mathbf{x}}^{\top}\tilde{W}\tilde{\mathbf{x}} +2x_{i_{1}}\sum_{j\neq i_{1}}w_{i_{1},j}x_{j}=\mathbf{z}^{\top}W_{0}\mathbf{z}+ 2x_{i_{1}}\sum_{j=1}^{n}w_{i_{1},j}x_{j},\] where \(i_{1}\) is the index of the vertex with the smallest \(\tilde{d}_{i}\) and \(x_{i_{1}}\) is randomly allocated with 1 or -1 independent with \(x_{j}\) for \(j\neq i_{1}\). Let \(d_{(1)}=\min_{i=1,\ldots,n}d_{i}\). Note that \[d_{(1)}\left(1-\frac{d_{(1)}}{n-1}\right)\leq\text{Var}\left(x_{i_{1}}\sum_{j= 1}^{n}w_{i_{1},j}x_{j}\right)\leq\sum_{j=1}^{n}w_{i_{1},j}=d_{(1)}.\] Then if \(d_{(1)}/{\sum_{i<j}w_{0,ij}^{2}}\to 0\) holds \[\frac{x_{i_{1}}\sum_{j=1}^{n}w_{i_{1},j}x_{j}}{\sqrt{\sum_{i<j}w_{0,ij}^{2}}} \to 0\quad\text{as}\quad n\rightarrow\infty\] in probability. We also have the conclusion holds for \(n\) being odd. This concludes the proof. ### Numerical Validation of the Assumptions in Proposition 3 In this section, we provide numerical validation for the assumptions in Proposition 3. We generate random networks as in Section 4.1. We vary the values of \(n\) (size of network) from 100 to 5000, and the values of network density as 0.01, 0.1, and 0.3. Left of Figure 6 shows how \(\min_{i=1,\ldots,n}d_{i}/\sum_{i\neq j}w_{0,ij}^{2}\) change with the size of network and network density, whereas right of Figure 6 shows how \(\lambda_{max}(\tilde{W}_{0})/\sqrt{\sum_{i\neq j}w_{0,ij}^{2}}\) change with the size of network and network density. The trends in the Figures show evidences of the convergences of those values for randomly generated networks as in Section 4.1.
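These two ratios are easy to reproduce. The sketch below (reusing `pairing_matrix` from the earlier sketch; the generator and the other names are ours) evaluates them on Erdős-Rényi networks generated as in Section 4.1, so their decay with \(n\) can be checked numerically.

```python
import numpy as np

def erdos_renyi(n, p, rng):
    """Adjacency matrix with iid Bernoulli(p) edges, isolated vertexes removed."""
    upper = np.triu((rng.random((n, n)) < p).astype(float), 1)
    W = upper + upper.T
    keep = W.sum(axis=1) > 0
    return W[np.ix_(keep, keep)]

def assumption_ratios(W):
    """The two quantities assumed to vanish in Proposition 3."""
    W0 = pairing_matrix(W)                     # from the sketch after (18)
    off = W0 - np.diag(np.diag(W0))            # this is W~_0 (zero diagonal)
    s = (off ** 2).sum()                       # sum_{i != j} w_{0,ij}^2
    r1 = W.sum(axis=1).min() / s
    r2 = np.linalg.eigvalsh(off)[-1] / np.sqrt(s)
    return r1, r2

rng = np.random.default_rng(1)
for n in (100, 500, 1000, 5000):
    print(n, assumption_ratios(erdos_renyi(n, 0.1, rng)))
```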
2308.16858
Majorization-Minimization for sparse SVMs
Several decades ago, Support Vector Machines (SVMs) were introduced for performing binary classification tasks, under a supervised framework. Nowadays, they often outperform other supervised methods and remain one of the most popular approaches in the machine learning arena. In this work, we investigate the training of SVMs through a smooth sparse-promoting-regularized squared hinge loss minimization. This choice paves the way to the application of quick training methods built on majorization-minimization approaches, benefiting from the Lipschitz differentiabililty of the loss function. Moreover, the proposed approach allows us to handle sparsity-preserving regularizers promoting the selection of the most significant features, so enhancing the performance. Numerical tests and comparisons conducted on three different datasets demonstrate the good performance of the proposed methodology in terms of qualitative metrics (accuracy, precision, recall, and F 1 score) as well as computational cost.
Alessandro Benfenati, Emilie Chouzenoux, Giorgia Franchini, Salla Latva-Aijo, Dominik Narnhofer, Jean-Christophe Pesquet, Sebastian J. Scott, Mahsa Yousefi
2023-08-31T17:03:16Z
http://arxiv.org/abs/2308.16858v1
# Majorization-Minimization for sparse SVMs ###### Abstract Several decades ago, Support Vector Machines (SVMs) were introduced for performing binary classification tasks, under a supervised framework. Nowadays, they often outperform other supervised methods and remain one of the most popular approaches in the machine learning arena. In this work, we investigate the training of SVMs through a smooth sparse-promoting-regularized squared hinge loss minimization. This choice paves the way to the application of quick training methods built on majorization-minimization approaches, benefiting from the Lipschitz differentiabililty of the loss function. Moreover, the proposed approach allows us to handle sparsity-preserving regularizers promoting the selection of the most significant features, so enhancing the performance. Numerical tests and comparisons conducted on three different datasets demonstrate the good performance of the proposed methodology in terms of qualitative metrics (accuracy, precision, recall, and F\({}_{1}\) score) as well as computational cost. ## 1 Introduction Support Vector Machines (SVMs) are well-tailored for regression and classification applications. They were introduced in the seminal work [15] for supervised learning. In addition to being grounded on sound optimization techniques [28, 34], various extensions of them can be performed. They remain one of the most widely used methods in classification tasks despite the increasing role played by neural networks. As linear classifiers, SVMs have been shown to outperform many supervised methods [23, 9, 26]. Real-world applications include image classification [25], face detection [33, 37], hand-written character recognition [21], melanoma classification [1, 2], text categorization [27, 12]. The interested reader can find a complete review in [10]. The supervised learning problem in the SVM framework consists in minimizing a suitable function measuring the distance between the predicted and the true labels corresponding to a dataset sample. This minimization is carried out with respect to the SVM model parameters. The SVM training problem may be formulated as a quadratic programming one [36]. It may involve least squares loss (hard-margin SVM) under a suitable constraint [32], or the hinge loss [12]. For SVM training, the optimization problem can be solved via Lagrangian duality approaches [15, 5], naturally leading to some clever strategies such as kernel tricks [29], splitting the problem into simpler subproblems [5], cutting plane procedures [14]. In several applications, it is common to promote sparsity on the SVM parameters. This is equivalent to implicitly enforcing _feature selection_, meaning that only the features that are essential to a task are kept, and those that are useless or even result in noisy solutions are dropped. A classification task with a limited or severely unbalanced dataset, facing the so-called overfitting problem, is a standard scenario in which this sparsity condition is required. In such a case, standard (nonsparse) SVMs-based models, which are particularly tailored for specific datasets, might not be able in generalization to adapt properly to unseen data. Introducing regularization to achieve this property in the training loss is the most popular method for inducing sparsity on SVM parameters. The most specifically designed functionals that impose sparsity on a solution are the so-called \(\ell_{0}\)-norm1[35], and its well-known convex relaxation, i.e., \(\ell_{1}\)-norm [8]. 
Other functionals, including \(\ell_{1,p}\) norms, or elastic-net functionals, have also been shown to effectively promote sparse regularization, see e.g.[31, 30]. To effectively accommodate the additional penalty term, the modification of a training algorithm must be considered. For example, the SVM-based objective function involving the hinge loss and a \(\ell_{1,p}\)-norm can be efficiently minimized by primal-dual methods [12]. Footnote 1: Recall that actually this function is not a proper norm. The presented work focuses on training SVMs when we employ the squared hinge loss as a data fidelity function, coupled with a smooth sparsity-promoting regularization functional. In this way, as we will show hereafter, the loss function is Lipschitz differentiable. This paves the way to the application of fast training techniques. This work investigates first-order methods such as the gradient descent algorithm and discusses its acceleration via Majorization-Minimization (MM) techniques. _Contribution._ The focus of this work is on training an SVMs-based model using a smooth regularization functional that promotes sparsity and the squared hinge loss as a data fidelity function. This makes the loss function Lipschitz differentiable, as we demonstrate, and allows fast training techniques. Moreover, this work explores and analyzes how Majorization-Minimization (MM) strategies can speed up first-order methods. _Outline._ This paper is organized as follows. Together with the regularization functionals taken into consideration in this work, the problem is formulated in Section 2. Section 3 presents the theoretical foundations of MM methods and Section 4 depicts the MM-based algorithms as well as their practical implementation in our context. The goal of Section 5 is to assess the performance of the suggested procedures. Extensive comparisons are conducted with respect to state-of-the-art training algorithms. Finally, Section 6 draws conclusions. _Notation._ Bold letters denote vectors, while bold capital letters denote matrices. Greek and italic Latin letters refer to scalars. \(\mathbb{R}^{n}\) is the real Euclidean space of dimension \(n\), \(\mathbb{R}^{m\times n}\) denotes the real space of matrices with \(m\) rows and \(n\) columns. For \(\mathbf{x}\in\mathbb{R}^{n}\)\(\|\mathbf{x}\|\) denotes the classical Euclidean (or \(\ell_{2}\)) norm of the vector. For a matrix \(\mathbf{A}\), \(\|\mathbf{A}\|\) is the spectral norm of \(\mathbf{A}\). The function \(\mathbf{1}_{\Omega}\) denotes the binary indicator of the set \(\Omega\), \(\mathbf{1}_{\Omega}(x)=1\) if \(x\in\Omega\), \(0\) otherwise. ## 2 Problem Formulation The problem addressed in this work is binary classification. Given a new observation \(\mathbf{x}\in\mathbb{R}^{n}\), which contains the describing \(n\) scalar features, the aim is to categorize \(\mathbf{x}\) into one of two classes. In this section, we present the two main components of the mathematical model we propose to solve this task: the SVM data fidelity term, namely here, the squared hinge loss, and the regularization functionals that promote sparsity. ### SVM loss function The mathematical model for the categorization of the observation \(\mathbf{x}\) into two classes encompasses a linear classifier \[M:\mathbb{R}^{n} \rightarrow\{-1,1\} \tag{1}\] \[\mathbf{x} \rightarrow\text{sign}(\mathbf{w}^{\top}\mathbf{x}+\beta)\] where \(\mathbf{w}\in\mathbb{R}^{n}\) and \(\beta\in\mathbb{R}\). 
The classifier in (1) defines a separating hyperplane whose purpose is to distinguish between items belonging to different classes, see e.g. Fig. 1 for a 2D example. The output of \(M\) in Eq. (1) will be then \(1\) or \(-1\), and these two labels correspond to the categorization in one of the classes. The parameters \(\mathbf{w}\) and \(\beta\) in (1) must be estimated during a training phase. Given a dataset \(\{(\mathbf{x}_{k},y_{k})\}_{k=1,\dots,K}\), \(\mathbf{x}_{k}\in\mathbb{R}^{n}\), \(y_{k}\in\{-1,1\}\), and where \(\mathbf{x}_{k,i}\) denotes the \(i\)-th feature of the \(k\)-th sample, the ideal training loss would consist in the misclassification count \[\ell\left(M(\mathbf{x}_{k}),y_{k}\right)=\frac{1-y_{k}M(\mathbf{x}_{k})}{2}= \rho\left(y_{k}(\mathbf{w}^{\top}\mathbf{x}_{k}+\beta)\right) \tag{2}\] where \[(\forall\nu\in\mathbb{R})\qquad\rho(\nu)=\frac{1-\text{sign}(\nu)}{2}.\] The training would then be carried on by solving the following optimization problem: \[\underset{\mathbf{w}\in\mathbb{R}^{n},\,\beta\in\mathbb{R}}{\text{minimize}} \ \sum_{k=1}^{K}\rho\left(y_{k}(\mathbf{w}^{\top}\mathbf{x}_{k}+\beta)\right). \tag{3}\] Unfortunately, (3) reveals to be a difficult nonconvex problem. To overcome this issue, a popular choice is to subsitute the hinge loss \(\rho_{\text{hinge}}\) for \(\rho\) in (2), where \[(\forall\nu\in\mathbb{R})\qquad\rho_{\text{hinge}}(\nu)=\max\{1-\nu,0\}. \tag{4}\] Function (4) provides the minimal convex upper bound of the misclassification count function, see Fig. 2 for a visual inspection. **Remark 1**.: _When the two classes are nonempty and separable, the goal of the training is to find a separating hyperplane such that_ \[\begin{cases}\mathbf{w}^{\top}\mathbf{x}_{k}+\beta>0&\text{if }y_{k}=1\\ \mathbf{w}^{\top}\mathbf{x}_{k}+\beta<0&\text{if }y_{k}=-1\end{cases}\] Figure 1: Toy example of a linear classifier. The line easily separates the two classes. Figure 2: Hinge loss function (cyan) versus misclassification count function (orange) for the case in which the true label is 1. The hinge loss strongly penalizes uncorrected labels less than 0, while it assumes low values for outputs in \([0,1]\). Obviously, when the classifier provides the correct label, the loss is zero. _In the case of nonseparable classes, one should employ a slack variable:_ \[\begin{cases}\mathbf{w}^{\top}\mathbf{x}_{k}+\beta\geq 1-\xi_{k}&\text{ if }y_{k}=1\\ \mathbf{w}^{\top}\mathbf{x}_{k}+\beta\leq-1+\xi_{k}&\text{ if }y_{k}=-1\end{cases}\] _which has the following interpretation:_ \[\ell(M(\mathbf{x}_{k}),y_{k})=\min_{\xi_{k}\in[0,+\infty]}\xi_{k}\quad\text{ s.t.}\quad y_{k}(\mathbf{w}^{\top}\mathbf{x}_{k}+\beta)\geq 1-\xi_{k}.\] The hinge loss in (4) is convex but not differentiable and it may cause numerical issues around \(\upsilon=1\). The training can be performed by using primal-dual methods for solving the related optimization problem [12], which are usually costly and might lack flexibility. To overcome this issue, we focus instead on the squared hinge loss \[(\forall\upsilon\in\mathbb{R})\qquad\rho^{2}_{\text{hinge}}(\upsilon)=\max\{ (1-\upsilon)^{2},0\}. \tag{5}\] Function (5) is convex and differentiable on the entire domain. Moreover, it has a 2-Lipschitz gradient, which is a useful property when dealing with optimization problems. A possible drawback is that the squared hinge loss might be more sensitive than the hinge function with respect to larger errors (see Figure 2 for comparison). 
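A minimal NumPy sketch of the linear classifier (1) and of the squared hinge loss, written in the standard form \(\max(1-\upsilon,0)^{2}\), together with its derivative, which is 2-Lipschitz; function names are ours and the snippet is purely illustrative.

```python
import numpy as np

def predict(X, w, beta):
    """Linear classifier (1): sign(w^T x + beta) for every row of X."""
    return np.sign(X @ w + beta)

def squared_hinge(v):
    """Squared hinge loss in its standard form max(1 - v, 0)^2, elementwise."""
    return np.maximum(1.0 - v, 0.0) ** 2

def squared_hinge_deriv(v):
    """Derivative -2 max(1 - v, 0): zero for v >= 1 and 2-Lipschitz everywhere."""
    return -2.0 * np.maximum(1.0 - v, 0.0)

# toy margins v_k = y_k (w^T x_k + beta)
v = np.array([-0.5, 0.3, 1.2])
print(squared_hinge(v), squared_hinge_deriv(v))
```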
### Regularization Our work focuses on the regularized version of the squared-hinge loss problem: \[\underset{\mathbf{w}\in\mathbb{R}^{N},\,\beta\in\mathbb{R}}{\text{minimize}} \quad\sum_{k=1}^{K}\rho^{2}_{\text{hinge}}\big{(}y_{k}(\mathbf{w}^{\top} \mathbf{x}_{k}+\beta)\big{)}+f(\mathbf{w}). \tag{6}\] Various choices can be made for function \(f\), to favor the sparsity of the solution. Below, we list some examples covered by our approach. Namely, we consider \[\Big{(}\forall\,\mathbf{w}=(w_{i})_{1\leq i\leq N}\in\mathbb{R}^{N}\Big{)} \quad f(\mathbf{w})=\sum_{i=1}^{N}\varphi(w_{i})+\frac{\eta}{2}\|\mathbf{w}\| ^{2}, \tag{7}\] where \(\varphi:\mathbb{R}\to\mathbb{R}\) is a potential function and \(\eta\geq 0\), for which we introduce the following requirements: 1. \(\varphi\) is even ; 2. \(\varphi\) is differentiable on \(\mathbb{R}\) ; 3. \(\varphi\left(\sqrt[\gamma]{\cdot}\right)\) is concave on \([0,+\infty[\). This framework is rather versatile, as it allows us to consider several interesting choices, such as smooth approximations for the \(\ell_{1}\) norm or for the \(\ell_{0}\) pseudo-norm. For \(\varphi\equiv 0\), we retrieve the standard quadratic penalty often used in SVMs. When \(\eta\neq 0\) and \(\varphi\) is a sparse promoting term, \(f\) can be viewed as an elastic-net penalty. See Fig. 3 for a visual inspection. Typically, we can use the so-called hyperbolic potential defined, for \(\lambda\geq 0\), by \[(\forall w\in\mathbb{R})\quad\varphi(w)=\lambda\sqrt{w^{2}+\delta^{2}},\, \delta>0. \tag{8}\] Function (8) is a convex function approximating \(w\mapsto\lambda|w|\). Another choice is the Welsh potential \[(\forall w\in\mathbb{R})\quad\varphi(w)=\lambda\left(1-\exp\left(-\frac{w^{2} }{2\delta^{2}}\right)\right),\,\delta>0, \tag{9}\] Function (9) is nonconvex and approximates the binary indicator function \[w\mapsto\lambda 1_{w\neq 0}.\] ### General formulation Problem (6) can be reformulated as in the following way: \[\underset{\mathbf{\theta}\in\mathbb{R}^{N+1}}{\text{minimize}}\ \ (g(\mathbf{L}\mathbf{ \theta})+\tilde{f}(\mathbf{\theta})\equiv\Phi(\mathbf{\theta})) \tag{10}\] where * \(\mathbf{\theta}=[\mathbf{w}^{\top}\ \ \beta]^{\top}\in\mathbb{R}^{N+1}\) * \(\mathbf{L}=\text{Diag}(y_{1},\cdots,y_{K})\begin{bmatrix}\mathbf{x}_{1}^{\top} &1\\ \vdots&\vdots\\ \mathbf{x}_{K}^{\top}&1\end{bmatrix}\in\mathbb{R}^{K\times(N+1)}\) * \(\left(\forall\,\mathbf{v}=(v_{k})_{1\leq k\leq K}\right)\,g(\mathbf{v})=\sum_{ k=1}^{K}\rho_{\text{hinge}}^{2}(v_{k})\) * \(\tilde{f}(\mathbf{\theta})=f(\mathbf{w})\). Note that the regularization term only affects the variable \(\mathbf{w}\) and not the bias \(\beta\). Function \(\Phi\) involved in (6) is differentiable on \(\mathbb{R}^{N+1}\). Its gradient reads \[\left(\forall\,\mathbf{\theta}\in\mathbb{R}^{N+1}\right)\quad\nabla\Phi(\mathbf{ \theta})=\mathbf{L}^{\top}\nabla g(\mathbf{L}\mathbf{\theta})+\nabla\tilde{f}(\bm {\theta}). \tag{11}\] The derivative of the squared hinge loss is \[(\forall\mathbf{v}\in\mathbb{R}^{K})\quad\nabla g(\mathbf{v})=\left(\max(2(v_ {k}-1),0)\right)_{1\leq k\leq K}. \tag{12}\] Moreover, \[(\forall\mathbf{\theta}\in\mathbb{R}^{N+1})\quad\nabla\tilde{f}(\mathbf{\theta})= \left(\begin{array}{c}\varphi^{\prime}(w_{1})+\eta w_{1}\\ \vdots\\ \varphi^{\prime}(w_{N})+\eta w_{N}\\ 0\end{array}\right), \tag{13}\] with \(\varphi^{\prime}\) as the derivative of the potential function \(\varphi\) involved in the construction of the regularization term \(f\). 
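Both potentials and the resulting regularizer \(f\) in (7) can be written compactly; a minimal NumPy sketch (function names ours) follows, and the corresponding derivatives are spelled out next.

```python
import numpy as np

def hyperbolic(w, lam, delta):
    """Hyperbolic potential (8): a smooth convex surrogate of lam * |w|."""
    return lam * np.sqrt(w ** 2 + delta ** 2)

def welsh(w, lam, delta):
    """Welsh potential (9): a smooth nonconvex surrogate of lam * 1_{w != 0}."""
    return lam * (1.0 - np.exp(-w ** 2 / (2.0 * delta ** 2)))

def regularizer(w, lam, delta, eta, potential=hyperbolic):
    """Separable penalty f(w) of (7): sum of potentials plus an optional ridge term."""
    return np.sum(potential(w, lam, delta)) + 0.5 * eta * np.sum(w ** 2)
```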
In particular, for the hyperbolic potential function (8), \[(\forall w\in\mathbb{R})\quad\varphi^{\prime}(w)=\lambda\frac{w}{\sqrt{w^{2}+ \delta^{2}}}. \tag{14}\] Figure 3: (a) Absolute value and its smooth approximation with hyperbolic potential. (b) Binary indicator function and its smooth approximation with Welsh potential. while, for the Welsh potential (9), \[(\forall w\in\mathbb{R})\quad\varphi^{\prime}(w)=\lambda\frac{w}{ \delta^{2}}\exp\left(-\frac{w^{2}}{2\delta^{2}}\right) \tag{15}\] In Section 3, we give some important additional properties of function \(\Phi\). Then, in Section 4, we provide a family of algorithms based on the MM principle to solve Problem (10). ## 3 Majorization Properties This section is devoted to presenting a key tool as the core of the training algorithms proposed in this work, namely the MM technique and the underlying concept of majorizing approximation. The MM technique consists of alternating between two steps to solve an initial complex optimization problem. The first step involves computing the tangent majorant of the objective function, and the second is to minimize that majorant in order to progressively converge to a reliable minimizer of the original function. The definition of a majorant function for the function \(\Phi\) in (10) is given in the following. **Definition 1**.: _A tangent majorant \(h(\cdot;\mathbf{\theta}^{\prime}):\mathbb{R}^{N+1}\to\mathbb{R}\) of \(\Phi\) at \(\mathbf{\theta}^{\prime}\in\mathbb{R}^{N+1}\) is a function such that_ \[h(\mathbf{\theta};\mathbf{\theta}^{\prime}) \geq\Phi(\mathbf{\theta})\quad(\forall\,\mathbf{\theta}\in\mathbb{R}^{N+ 1})\] \[h(\mathbf{\theta}^{\prime};\mathbf{\theta}^{\prime}) =\Phi(\mathbf{\theta}^{\prime})\] The general MM iterative scheme then reads: \[(\forall n\in\mathbb{N})\quad\mathbf{\theta}^{(n+1)}=\operatorname{ argmin}_{\mathbf{\theta}\in\mathbb{R}^{N+1}}h(\mathbf{\theta};\mathbf{\theta}^{(n)}), \tag{16}\] with some initialization \(\mathbf{\theta}^{(0)}\in\mathbb{R}^{N+1}\). Under suitable hypotheses on the loss function in (10) and its majorizing approximations, the iterative scheme leads to a sequence converging to a solution [20]. Let us now discuss the construction of reliable majorizing approximations for the considered function \(\Phi\). ### Descent lemma majorant **Proposition 1**.: _Assume that \(\varphi\) is a-Lipschitz differentiable on \(\mathbb{R}\), with \(a>0\). Then, function \(\Phi\) involved in (10) is \(\mu\)-Lipschitz differentiable on \(\mathbb{R}^{N+1}\) with_ \[\mu=2\|\mathbf{L}\|^{2}+a+\eta. \tag{17}\] _As a consequence, for every \(\mathbf{\theta}^{\prime}\in\mathbb{R}^{N+1}\), the following function is a tangent majorant of \(\Phi\) at \(\mathbf{\theta}^{\prime}\),_ \[(\forall\mathbf{\theta}\in\mathbb{R}^{N+1})\quad h(\mathbf{\theta},\mathbf{ \theta}^{\prime})=\Phi(\mathbf{\theta}^{\prime})+\nabla\Phi(\mathbf{\theta}^{\prime}) ^{\top}(\mathbf{\theta}-\mathbf{\theta}^{\prime})+\frac{\mu}{2}\|\mathbf{\theta}-\mathbf{ \theta}^{\prime}\|^{2}. \tag{18}\] Note that functions (8) and (9) are \(a\)-Lipschitz differentiable with \(a=\frac{\lambda}{\delta}\) and \(a=\frac{\lambda}{\delta^{2}}\), respectively. ### Half-quadratic majorant The previous majorizing approximation is interesting but might lack accuracy, as its curvature does not depend on the tangency point \(\mathbf{\theta}^{\prime}\). Hereafter, we propose a more sophisticated approximation, reminiscent of the constructions in half-quadratic algorithms for image processing [3]. 
**Proposition 2**.: _For every \(\mathbf{\theta}^{\prime}\in\mathbb{R}^{N+1}\), the following function is a tangent majorant of function \(\Phi\) involved in Problem (10):_ \[(\forall\mathbf{\theta}\in\mathbb{R}^{N+1})\quad h(\mathbf{\theta};\mathbf{ \theta}^{\prime})=\Phi(\mathbf{\theta}^{\prime})+\nabla\Phi(\mathbf{\theta}^{\prime}) ^{\top}(\mathbf{\theta}-\mathbf{\theta}^{\prime})+\frac{1}{2}(\mathbf{\theta}-\mathbf{\theta }^{\prime})^{\top}\mathbf{A}(\mathbf{\theta}^{\prime})(\mathbf{\theta}-\mathbf{\theta}^{ \prime}) \tag{19}\] _with,_ \[(\forall\mathbf{\theta}\in\mathbb{R}^{N+1})\quad\mathbf{A}(\mathbf{\theta })=2\mathbf{L}^{\top}\mathbf{L}+\operatorname{Diag}\left(\left[\begin{array}[] {c}\psi(\theta_{1})+\eta\\ \vdots\\ \psi(\theta_{N})+\eta\\ \varepsilon\end{array}\right]\right), \tag{20}\] _with: \(\psi:w\mapsto\varphi^{\prime}(w)/w\) and \(\varepsilon>0\)._ For the potential (8), we have \[(\forall w\in\mathbb{R})\quad\psi(w)=\lambda\frac{1}{\sqrt{w^{2}+\delta^{2}}}, \tag{21}\] while, for (9), \[(\forall w\in\mathbb{R})\quad\psi(w)=\frac{\lambda}{\delta^{2}}\exp\left(-\frac {w^{2}}{2\delta^{2}}\right). \tag{22}\] ## 4 Training SVMs In this section, we present a set of MM-based strategies to solve optimization (10). First, using the descent lemma majorant, we describe a basic gradient descent algorithm with constant stepsize. Then, using a more sophisticated majorant construction, we derive an MM quadratic approach and provide a skillful strategy for the inversion of the majorant curvature. We also present a subspace acceleration of the aforementioned MM method. Finally, we discuss the stochastic implementation of the training methods and propose a set of hybrid methods with fast convergence in both warm-up and asymptotic regimes. ### Gradient Descent Approach The iterative procedure reads as \[(\forall n\in\mathbb{N})\quad\boldsymbol{\theta}^{(n+1)}=\boldsymbol{\theta}^ {(n)}-\alpha\nabla\Phi(\boldsymbol{\theta}^{n}), \tag{23}\] with \(\boldsymbol{\theta}^{(0)}\in\mathbb{R}^{N}\). The iterates produced by (23) are guaranteed to converge to a stationary point of (10) for \(\alpha\in]0,2/\mu[\), where \(\mu\) is defined in Proposition 1. ### MM Quadratic Approach The gradient descent method is often characterized by a slow convergence. Improved performance can be obtained by the MM quadratic scheme based on Proposition 2: \[(\forall n\in\mathbb{N})\quad\boldsymbol{\theta}^{(n+1)} =\operatorname*{argmin}_{\boldsymbol{\theta}}\left(\nabla\Phi( \boldsymbol{\theta}^{(n)})^{\top}(\boldsymbol{\theta}-\boldsymbol{\theta}^{(n )})+\frac{1}{2}(\boldsymbol{\theta}-\boldsymbol{\theta}^{(n)})^{\top}\mathbf{ A}(\boldsymbol{\theta}^{(n)})(\boldsymbol{\theta}-\boldsymbol{\theta}^{(n)})\right)\] \[=\boldsymbol{\theta}^{(n)}-(\mathbf{A}(\boldsymbol{\theta}^{(n) }))^{-1}\nabla\Phi(\boldsymbol{\theta}^{(n)})\,. \tag{24}\] The iterative scheme (24), related to half-quadratic techniques popular in imaging [3], can be viewed as a preconditioned gradient algorithm. The practical implementation and acceleration of this scheme are discussed below. #### 4.2.1 Numerically inverting the majorant curvature The computation of the inverse of \(\mathbf{A}(\boldsymbol{\theta}^{(n)})\) at each iteration, in (24), might be time-consuming. We propose an approach for computing the product \((\mathbf{A}(\boldsymbol{\theta}^{(n)}))^{-1}\nabla\Phi(\boldsymbol{\theta}^{ (n)})\), without explicitly constructing the inverse of the matrix. 
Referring to (20), we majorize the curvature matrix as follows \[(\forall\boldsymbol{\theta}\in\mathbb{R}^{N+1})\quad\mathbf{A}(\boldsymbol{ \theta})\leq\overline{\mathbf{A}}(\boldsymbol{\theta})=2\mathbf{L}^{\top} \mathbf{L}+\sigma_{\text{max}}(\boldsymbol{\theta})\mathbf{I}_{\text{d}}, \tag{25}\] where \[\sigma_{\text{max}}(\boldsymbol{\theta})=\text{max}\,\left\{\psi(\theta_{1}) +\eta,\ldots,\psi(\theta_{N})+\eta,\varepsilon\right\}. \tag{26}\] Suppose \(\mathbf{L}^{\top}=\mathbf{QR}\) is the QR factorization of \(\mathbf{L}^{\top}\in\mathbb{R}^{(N+1)\times K}\), where \(\mathbf{Q}\) is an orthogonal matrix of order \(N+1\) and \(\mathbf{R}\) is a \((N+1)\times K\) trapezoidal matrix (and hence \(\mathbf{RR}^{\top}\) is a symmetric matrix of order \(N+1\)). Then \[(\forall\boldsymbol{\theta}\in\mathbb{R}^{N+1})\quad 2\mathbf{L}^{\top} \mathbf{L}+\sigma_{\text{max}}(\boldsymbol{\theta})\mathbf{I}_{\text{d}}=2 \mathbf{QR}\mathbf{R}^{\top}\mathbf{Q}^{\top}+\sigma_{\text{max}}(\boldsymbol {\theta})\mathbf{I}_{\text{d}}. \tag{27}\] Let \(\mathbf{U}\mathbf{A}\mathbf{U}^{\top}\) be the spectral decomposition of \(\mathbf{RR}^{\top}\) where \(\mathbf{U}\) is a matrix whose columns are the eigenvectors of \(\mathbf{RR}^{\top}\), and \(\mathbf{\Lambda}=\text{Diag}(\lambda_{1},\ldots,\lambda_{N+1})\) is diagonal whose elements are the associated eigenvalues. Substituting into (27), we obtain \[(\forall\boldsymbol{\theta}\in\mathbb{R}^{N+1})\quad 2\mathbf{L}^{\top} \mathbf{L}+\sigma_{\text{max}}(\boldsymbol{\theta})\mathbf{I}_{\text{d}}=2 \mathbf{QU}\mathbf{A}\mathbf{U}^{\top}\mathbf{Q}^{\top}+\sigma_{\text{max}}( \boldsymbol{\theta})\mathbf{I}_{\text{d}}. \tag{28}\] Since \(\mathbf{Q}\) and \(\mathbf{U}\) have orthogonal columns, considering orthogonal matrix \(\mathbf{P}=\mathbf{QU}\) yields \[(\forall\boldsymbol{\theta}\in\mathbb{R}^{N+1})\quad\overline{\mathbf{A}}( \boldsymbol{\theta})=\mathbf{P}\hat{\boldsymbol{\Lambda}}(\boldsymbol{\theta}) \mathbf{P}^{\top}, \tag{29}\] where \(\hat{\boldsymbol{\Lambda}}(\theta)=2\boldsymbol{\Lambda}+\sigma_{\text{max}}( \boldsymbol{\theta})\mathbf{I}_{\text{d}}\). Consequently, constructing \(\overline{\mathbf{A}}(\boldsymbol{\theta})\) as defined in (29) allows us to compute its inversion efficiently as follows \[(\forall\boldsymbol{\theta}\in\mathbb{R}^{N+1})\quad(\overline{\mathbf{A}}( \boldsymbol{\theta}))^{-1}=\mathbf{P}\hat{\boldsymbol{\Lambda}}(\boldsymbol{ \theta})^{-1}\mathbf{P}^{\top}. \tag{30}\] **Remark 2**.: _The proposed approach for computing the inverse of curvature matrix approximation might be even more efficient if \(K<N\). Indeed, the "thin" QR factorization leads to quickly computing the spectral decomposition \(\mathbf{U}\mathbf{A}\mathbf{U}^{\top}\) of the smaller \(K\times K\) matrix \(\mathbf{RR}^{\top}\)._ #### 4.2.2 Subspace acceleration Another approach for reducing the complexity of (24), without jeopardizing its convergence properties, is to resort to a subspace acceleration technique. The method then reads \[(\forall n\in\mathbb{N})\quad\boldsymbol{\theta}^{(n+1)}=\boldsymbol{\theta}^ {(n)}-\mathbf{D}^{(n)}((\mathbf{D}^{(n)})^{\top}\mathbf{A}(\boldsymbol{\theta} ^{(n)})\mathbf{D}^{(n)})^{\dagger}(\mathbf{D}^{(n)})^{\top}\nabla\Phi( \boldsymbol{\theta}^{n}). \tag{31}\] Hereabove, \(\dagger\) stands for the pseudo-inversion, and \(\mathbf{D}^{(n)}\in\mathbb{R}^{(N+1)\times M_{n}}\) with \(M_{n}\geq 1\) (typically small), is the so-called subspace matrix. 
A standard choice is \[(\forall n\in\mathbb{N})\quad\mathbf{D}^{(n)}=\left[-\nabla\Phi(\boldsymbol{ \theta}^{(n)})\mid\boldsymbol{\theta}^{(n)}-\boldsymbol{\theta}^{(n-1)}\right], \tag{32}\] (i.e., \(M_{n}=2\)), with the convention \(\boldsymbol{\theta}^{(0)}=\boldsymbol{0}\), which leads to the 3MG (MM Memory Gradient) algorithm. Another simplest possibility is \[(\forall n\in\mathbb{N})\quad\mathbf{D}^{(n)}=-\nabla\Phi(\boldsymbol{\theta} ^{(n)}), \tag{33}\] (i.e., \(M_{n}=1\)) which results in a gradient descent technique with varying stepsize \[(\forall n\in\mathbb{N})\quad\boldsymbol{\theta}^{(n+1)}=\boldsymbol{\theta}^ {(n)}-\frac{\nabla\Phi(\boldsymbol{\theta}^{(n)})^{\top}\nabla\Phi( \boldsymbol{\theta}^{(n)})}{\nabla\Phi(\boldsymbol{\theta}^{(n)})^{\top} \mathbf{A}(\boldsymbol{\theta}^{(n)})\nabla\Phi(\boldsymbol{\theta}^{(n)})} \nabla\Phi(\boldsymbol{\theta}^{(n)}). \tag{34}\] Convergence of the iterates produced by (31) to a stationary point of \(\boldsymbol{\Phi}\) is shown in [13] under mild assumptions. Convergence to the (unique) solution to (10) is obtained when we additionally assume that the potential function \(\varphi\) is convex and \(\eta>0\). Interestingly, the previously introduced schemes (23) and (24) can both be viewed as special cases of (31), and thus inherit the same convergence properties. #### 4.2.3 Convergence result Let us now state the theoretical convergence guaranties for the MM quadratic method (24) and its variants. **Theorem 1**.: _Let \(\varphi\) given by (8) or (9). Let \((\boldsymbol{\theta}^{(n)})_{n\in\mathbb{N}}\) be generated either by (24), or (31)-(32), or (34). Then, \((\boldsymbol{\theta}^{(n)})_{n\in\mathbb{N}}\) converges to a stationary point of \(\Phi\) in (10). Moreover, if \(\eta>0\) and \(\varphi\) is given by (8), \(\Phi\) is strongly convex, and \((\boldsymbol{\theta}^{(n)})_{n\in\mathbb{N}}\) converges to its unique minimizer._ Proof.: Function \(\Phi\) in (10) is Lipschitz differentiable, by Proposition 1. Moreover, it satisfies Kurdika-Lojasewicz inequality [4] for \(\varphi\) given by (8) or (9). The proof results directly from [13, Theo.3], using Proposition 2, and noticing than (24), (31)-(32), and (34), are all particular cases of an MM quadratic subspace algorithm with one inner iteration. ### Stochastic minimization approaches When we face minimization with a particularly large dataset, it is often necessary to use a stochastic technique based on minibatches [7]. The same issue arises in the context of online learning [6] when the entire dataset is not completely available at the beginning of the learning process. Employing a stochastic method may also be convenient for the speed of convergence, especially in a warm-up phase (i.e., first iterations). The stochastic gradient descent updates the current iterate by a gradient calculated on a single sample \((\mathbf{x}_{k},y_{k})\) with randomly chosen \(k\in\{1,\ldots,K\}\) in order to lighten the computational cost. For every \(k\in\{1,\ldots,K\}\), let us denote \[(\forall\boldsymbol{\theta}=[\mathbf{w}^{\top}\,\beta]^{\top}\in \mathbb{R}^{N+1})\quad\Phi_{k}(\boldsymbol{\theta}) =\rho_{\text{hinge}}^{2}(y_{k}(\mathbf{w}^{\top}\mathbf{x}_{k}+ \beta))+f(\mathbf{w}), \tag{35}\] \[=\rho_{\text{hinge}}^{2}(\mathbf{L}_{k}^{\top}\boldsymbol{\theta}) +\widetilde{f}(\boldsymbol{\theta}), \tag{36}\] with \(\mathbf{L}_{k}\in\mathbb{R}^{N+1}\) as the \(k\)-th row of \(\mathbf{L}\). 
We deduce the gradient for the \(k\)-th sample \[(\forall\boldsymbol{\theta}\in\mathbb{R}^{N+1})\quad\nabla\Phi_{k}( \boldsymbol{\theta})=\mathbf{L}_{k}\max(2(\mathbf{L}_{k}^{\top}\boldsymbol{ \theta}-1),0)+\nabla\widetilde{f}(\boldsymbol{\theta}).\] We present in Algorithm 1 a basic constant stepsize implementation of the stochastic gradient descent method. ``` 1:Choose an initial iterate \(\boldsymbol{\theta}_{0}\), the stepsize \(\alpha>0\) and the maximum number of iterates \(maxit\). 2:for\(n\in\{0,\ldots,maxit\}\)do 3: Draw at random an index \(\kappa^{(n)}\in\{1,\ldots,K\}\). 4: Compute the stochastic descent direction \(\nabla\Phi_{\kappa^{(n)}}(\boldsymbol{\theta}^{(n)})\). 5: Set the new iterate as \(\theta^{(n+1)}\leftarrow\boldsymbol{\theta}^{(n)}-\alpha\nabla\Phi_{\kappa^{(n) }}(\boldsymbol{\theta}^{(n)})\) 6:endfor ``` **Algorithm 1** Stochastic Gradient (SG) Method In the same fashion, we can also adopt Momentum [24] and AdaM [22] methods, described in Algorithm 2 and Algorithm 3, respectively. ``` 1:Choose an initial iterate \(\boldsymbol{\theta}^{(0)}\), the stepsize \(\alpha>0\), the maximum number of iterates \(maxit\) and \(\beta\in[0,1)\) 2:Initialize \(\mathbf{m}_{0}\gets 0\) 3:for\(n\in\{1,\ldots,maxit\}\)do 4: Draw at random an index \(\kappa^{(n)}\in\{1,\ldots,K\}\). 5: Compute the stochastic descent direction \(\nabla\Phi_{\kappa^{(n)}}(\boldsymbol{\theta}^{(n)})\). 6:\(\mathbf{m}^{(n+1)}\leftarrow\beta\mathbf{m}^{(n)}+\nabla\Phi_{\kappa^{(n)}}( \boldsymbol{\theta}^{(n)})\) 7:\(\boldsymbol{\theta}^{(n)}\leftarrow\boldsymbol{\theta}^{(n-1)}-\alpha\mathbf{ m}^{(n+1)}\) 8:endfor ``` **Algorithm 2** Momentum In Algorithm 3, \(\otimes\) in step 7 denotes the element-wise product, and \(\oslash\) in step 9 is the element-wise division. **Remark 3**.: _For simplicity, only one index \(\kappa^{(n)}\) is selected at each iteration of the above schemes. One may employ the idea of minibatch, where a set \(\mathcal{B}\) of \(B\) indexes is randomly chosen, and the descent direction is given by the weighted sum of the gradients. For example, step 5 in Algorithm 1 becomes_ \[\boldsymbol{\theta}^{(n+1)}\leftarrow\boldsymbol{\theta}^{(n)}-\alpha\frac{1} {B}\sum_{i\in\mathcal{B}}\nabla\Phi_{i}(\boldsymbol{\theta}^{(n)})\] Algorithms 1, 2 and 3 are effective when the hyperparameters are fine-tuned. Several papers in the literature investigate how to develop reliable stepsize selection strategies in stochastic methods, mainly in an adaptive manner [19, 18, 17]. Details on the hyperparameter choice will be given in Section 5. #### 4.3.1 Hybrid approach Considering the practical benefits of stochastic methods, and while keeping in mind their convergence-related constraints, we propose to introduce a hybrid strategy. It consists in using stochastic methods to minimize the objective function for a preset \(\iota\in\mathbb{N}^{*}\) number of iterations, and, thus, taking the advantage of their initial learning speed characteristic. After this phase (the _warm-up_), the iterate \(\boldsymbol{\theta}^{(\iota)}\) obtained from the stochastic methodology is used as an initial point of the deterministic method, benefiting from more stable convergence. Special attention should be paid to the choice of \(\iota\), which must result from a trade-off between the benefit offered by the initial speedup of stochastic methods and the convergence properties of deterministic methods. The choice of \(\iota\) will also be discussed in Section 5. 
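As an illustration of the hybrid strategy, the sketch below (all names ours) performs a stochastic warm-up in the spirit of Algorithm 1, sweeping the samples in a shuffled order rather than drawing one index at a time, and then hands the warm-up iterate to a deterministic update; `grad_k` is a placeholder for the per-sample gradient \(\nabla\Phi_{k}\) and `det_step` for one iteration of any of the deterministic schemes of Section 4.

```python
import numpy as np

def sgd_warmup(theta, grad_k, K, epochs=10, alpha=1e-3, seed=0):
    """Stochastic warm-up: constant-stepsize SG steps over shuffled sample indices."""
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for k in rng.permutation(K):
            theta = theta - alpha * grad_k(theta, k)
    return theta

def hybrid_train(theta0, grad_k, det_step, K, warmup_epochs=10, det_iters=90):
    """Hybrid scheme of Section 4.3.1: warm-up epochs, then deterministic iterations."""
    theta = sgd_warmup(theta0, grad_k, K, epochs=warmup_epochs)
    for _ in range(det_iters):
        theta = det_step(theta)
    return theta
```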
**Remark 4**.: _From the perspective of convergence guarantees, this warm-up phase has no impact, since it is equivalent to choosing a specific initial point \(\boldsymbol{\theta}^{(0)}\)._ ## 5 Numerical Experiments This section is devoted to numerically assessing the performance of the proposed methods. In particular, we consider three different datasets summarized in Table 1. We split each dataset into training and testing sets, and we use 80% of the elements for the training set and the remaining 20% for the testing set. The datasets a1a and w8a can be found at [11], while cina@ is available at [16]. We minimize the loss in Eq. (10) with three different choices for the regularizer term \(\tilde{f}(\boldsymbol{\theta})\), namely \(\varphi=0\) (i.e., squared \(l_{2}\) norm regularization), or \(\varphi\) equal to the potential either (8) or (9). We emphasize that the case when a squared \(l_{2}\) norm is adopted as the regularizer in addition to the fidelity term in the loss is entirely comparable to the formulation of SVM's primary problem; thus ensuring an experimental comparison with standard SVM as well. **Remark 5** (Hyperparameters setting).: _As detailed in Section 4, the choice of proper hyperparameters is crucial for the speed and convergence of stochastic methods. Starting with stochastic methods (SG, Momentum, AdaM) special attention was paid to the choice of the learning rate, which was manually set after an exhaustive search for the optimal one in each method. All other hyperparameters were chosen as default ones found in the literature. In general, \(100\) iterations in the deterministic case (or epochs in the stochastic case) appeared enough for all methods. In the case of hybrid method we consider a total of \(100\) epochs\(+\)iterates. In the hybrid case, we set the warm-up parameter to \(\iota=10\), considering it a good trade-off between the speed of stochastic methods and the stability of deterministic ones. In a nutshell, we perform \(10\) stochastic epochs for the warm-up phase, and then \(90\) deterministic steps (or epochs, which is the same in the deterministic case). Moreover, we empirically set \(\eta=10^{-4}\) ans \(\lambda=\delta=0\) to experiment only \(\ell_{2}\) regularizer, while \(\lambda=\delta=10^{-4}\) and \(\eta=0\) is used for other regularizers, which lead to fair performance on all datasets. A discussion on the influence of \(\lambda\) on the results is provided at the end of the section._ \begin{table} \begin{tabular}{l|c|c|c} dataset & \(N+1\) & \(K_{\text{training}}\) & \(K_{\text{testing}}\) \\ \hline a1a & 120 & 1284 & 321 \\ \hline cina@ & 133 & 12827 & 3206 \\ \hline w8a & 301 & 39800 & 9949 \\ \hline \end{tabular} \end{table} Table 1: Data set characteristics. ### Results To test the effectiveness of the methods on the datasets in the Table 1, we will compare the results of various methods associated with different regularizers. In particular, we start by comparing stochastic methods to choose which one is most suitable for the warm-up phase. We define _optimality gap_ as the difference between the value of the loss function at the point and the function calculated at an estimate of its minimum. This estimate is derived by letting a deterministic method run for thousands of iterates. In Figure 4 we compare the optimality gap of a1a dataset with smooth \(l_{1}\) norm regularizer. As we can see, the best method at the beginning is AdaM: the behaviour on the other datasets is similar, hence we employ AdaM for the warm-up phase. 
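The deterministic updates compared below share the same building blocks: the objective (10) with the hyperbolic potential (8), its gradient, and the half-quadratic curvature (20). The sketch below (NumPy; all function names ours) is illustrative only: it uses the standard squared hinge \(\max(1-\upsilon,0)^{2}\) and its derivative for the data-fit part, and a plain linear solve in place of the factorization of Section 4.2.1.

```python
import numpy as np

def make_problem(X, y, lam=1e-4, delta=1e-4, eta=0.0, eps=1e-8):
    """Closures for Phi in (10), its gradient, and the curvature A(theta) of (20),
    with the hyperbolic potential (8). X is K x N, y has entries in {-1, +1}."""
    K, N = X.shape
    L = y[:, None] * np.hstack([X, np.ones((K, 1))])      # L = Diag(y) [X  1]

    def phi(theta):
        v = L @ theta
        w = theta[:-1]
        return (np.sum(np.maximum(1.0 - v, 0.0) ** 2)
                + lam * np.sum(np.sqrt(w ** 2 + delta ** 2))
                + 0.5 * eta * np.sum(w ** 2))

    def grad(theta):
        v = L @ theta
        w = theta[:-1]
        g_data = L.T @ (-2.0 * np.maximum(1.0 - v, 0.0))   # standard squared-hinge derivative
        g_reg = np.append(lam * w / np.sqrt(w ** 2 + delta ** 2) + eta * w, 0.0)
        return g_data + g_reg

    def curvature(theta):
        w = theta[:-1]
        psi = lam / np.sqrt(w ** 2 + delta ** 2)           # psi(w) = phi'(w) / w
        return 2.0 * L.T @ L + np.diag(np.append(psi + eta, eps))

    return phi, grad, curvature

def mm_quadratic_step(theta, grad, curvature):
    """One MM quadratic update of (24): theta - A(theta)^{-1} grad Phi(theta)."""
    return theta - np.linalg.solve(curvature(theta), grad(theta))
```

A full-gradient step (23) corresponds to replacing \(\mathbf{A}(\boldsymbol{\theta})\) by the constant curvature \(\mu\,\mathbf{I}_{\text{d}}\) from Proposition 1.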
#### 5.1.1 Optimality gap The methods we are going to compare are as follows: * Gradient descent approach (23) called FULL GRADIENT (FG) * MM quadratic approach (24) with approximated inverse curvature (30) called MM INVERSION (MM I) * MM quadratic approach (24) with approximated inverse curvature (30) and an initial warm up of 10 AdaM iterates called HYBRID MM INVERSION (H MM I) * MM quadratic approach (24) with exact curvature * MM quadratic approach (24) with exact curvature and an initial warm-up of 10 AdaM iterates called HYBRID MM (H MM) * MM quadratic with subspace acceleration method (31) called SUBSPACE (SUB) * MM quadratic with subspace acceleration method (31) and an initial warm-up of 10 AdaM iterates called HYBRID SUBSPACE (H SUB). In the plots of Figure 5 we consider on the \(x\)-axis the number of epochs, where an epoch in the deterministic framework means an iterate, whilst in the stochastic framework is a full vision of the dataset. In the \(y\)-axis we consider the optimality gap. As we can observe from Figure 5, hybrid methods outperform all the deterministic methods: the warm up strategy pays off even when a low number of warm up iteration \(\iota\) is set. Moreover, MM methods seems to exploit the second order information of the functional, allowing an evident boost towards the solution. Among Figure 4: Stochastic methods with a1a dataset and smooth \(l_{1}\) norm like regularizer. Figure 5: Optimality gap for different dataset-regularizer combinations. In panel (d) the method MM INVER-SION has been removed since it appeared instable. these mixed strategies (MM plus warm up), HYBRID MM INVERSION overcomes all the other, reaching the best optimality gap among all the coupling dataset-regularizer. #### 5.1.2 Performance measure In the previous section, we presented results for the optimality gap, calculated on the training set. Here we present performance measures calculated on the test set, to emphasize the generalization ability on unseen data of the proposed method. In this regard, we considered four performance measures well known in the literature: accuracy, precision, recall, and F1-score. In our framework of binary classification, we denote with "positive" and "negative" the two classes, corresponding to the labels 1 and -1 in Eq. (1), to be consistent with standard definitions of the previous measures. True Positive (TP) denotes the number of elements of the datasets that are correctly classified as "positive", while True Negative (TN) is the number of elements that are correctly classified as "negative". These two numbers denote the correct classifications. On the other hand, False Positive (FP) and False Negative (FN) denote the elements that are actually negative and are classified as "positive", and vice versa. This terminology stems from classification tasks in medicine and biology. The performance measures considered can be expressed as \[\text{Accuracy} =\frac{TP+TN}{K_{\text{testing}}},\qquad\text{Precision} =\frac{TP}{TP+FN},\] Recall \[=\frac{TP}{TP+FN},\qquad\text{F}_{1}\text{-score} =\frac{TP}{TP+\frac{FN+FP}{2}}.\] In Tables 2-4 we report the results of the performance measures, each table referring to a different dataset. The second row identifies the type of regularizer. 
These tables clearly show how hybrid methods provide higher performances when a fixed number of iterations is selected: this confirms that this approach allows to start the deterministic method from a suitable initial point, reaching sooner (with respect to a deterministic method) a reliable estimation of the solution. This behaviour is clear from the plots in Figure 5. This numerical results hence show that they are able to generalize and that they are particularly effective even on data not seen in the training phase. The choice of a suitable regularization functional induces more reliable results, considering all the performance indices: sparse-preserving functions are providing with higher scores among a1a and w8a datasets, while in cina@ the \(\ell_{2}\) penalty seems enough to get good results. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} & Reg. & FG & MMI & H MMI & MM & H MM & SUB & H SUB \\ \hline \multirow{3}{*}{Accuracy} & (8) & 0.7944 & 0.8100 & 0.8100 & 0.8193 & **0.8255** & 0.8193 & **0.8255** \\ & (9) & 0.7788 & 0.8037 & 0.8100 & 0.8224 & 0.8224 & 0.8193 & 0.8126 \\ & \(\times\) & 0.7944 & 0.8100 & 0.8100 & 0.8224 & 0.8224 & 0.8193 & 0.8193 \\ \hline \multirow{3}{*}{Recall} & (8) & 0.6279 & 0.6753 & 0.6753 & 0.6974 & 0.7013 & 0.6974 & **0.7105** \\ & (9) & 0.6000 & 0.6623 & 0.6753 & 0.7013 & 0.7013 & 0.6974 & 0.6883 \\ & \(\times\) & 0.6279 & 0.6753 & 0.6753 & 0.6962 & 0.7067 & 0.6974 & 0.6976 \\ \hline \multirow{3}{*}{Precision} & (8) & 0.6136 & 0.5909 & 0.5909 & 0.6023 & 0.6136 & 0.6023 & 0.6136 \\ & (9) & 0.5795 & 0.5795 & 0.5909 & 0.6136 & 0.6136 & 0.6023 & 0.6023 \\ & \(\times\) & 0.6136 & 0.5909 & 0.5909 & **0.6250** & 0.6023 & 0.6023 & 0.6023 \\ \hline \multirow{3}{*}{F\({}_{1}\)} & (8) & 0.6207 & 0.6303 & 0.6303 & 0.6463 & **0.6585** & 0.6463 & 0.6585 \\ & (9) & 0.5896 & 0.6182 & 0.6303 & 0.6545 & 0.6545 & 0.6463 & 0.6424 \\ \cline{1-1} & \(\times\) & 0.6207 & 0.6303 & 0.6303 & 0.6587 & 0.6503 & 0.6463 & 0.6463 \\ \hline \end{tabular} \end{table} Table 2: Performance measures for a1a dataset. (8) and (9) refers to the choice of the regularization functional, while ‘\(\times\)’ denotes the case where \(\varphi=0\) (i.e., only \(\ell_{2}\)-norm regularization is used). The names of the methods refer to the list presented at the beginning of Section 5.1.1. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} & Reg. & FG & MMI & H MMI & MM & H MM & SUB & H SUB \\ \hline \multirow{3}{*}{Accuracy} & (8) & 0.7748 & 0.9195 & 0.9245 & 0.8531 & 0.9148 & 0.8525 & 0.8668 \\ & (9) & 0.7748 & 0.9195 & 0.9245 & 0.8534 & 0.9148 & 0.8531 & 0.8696 \\ & \(\times\) & 0.7748 & 0.9195 & **0.9248** & 0.8534 & 0.9145 & 0.8531 & 0.8677 \\ \hline \multirow{3}{*}{Recall} & (8) & 0.5615 & 0.8000 & 0.8608 & 0.7138 & 0.8261 & 0.7121 & 0.7378 \\ & (9) & 0.5615 & 0.8501 & 0.8600 & 0.7178 & 0.8253 & 0.7138 & 0.7500 \\ & \(\times\) & 0.5615 & 0.8501 & **0.8610** & 0.7121 & 0.8236 & 0.7133 & 0.7416 \\ \hline \multirow{3}{*}{Precision} & (8) & 0.5845 & 0.6154 & 0.8442 & 0.7198 & 0.8490 & 0.7198 & 0.7512 \\ & (9) & 0.5845 & 0.8357 & 0.8454 & 0.7126 & 0.8502 & 0.7198 & 0.7428 \\ & \(\times\) & 0.5845 & 0.8357 & 0.8454 & 0.7258 & **0.8514** & 0.7210 & 0.7488 \\ \hline \multirow{3}{*}{F\({}_{1}\)} & (8) & 0.5728 & 0.6957 & 0.8524 & 0.7168 & 0.8374 & 0.7159 & 0.7445 \\ & (9) & 0.5728 & 0.8429 & 0.8526 & 0.7152 & 0.8376 & 0.7168 & 0.7464 \\ & \(\times\) & 0.5828 & 0.8429 & **0.8531** & 0.7189 & 0.8373 & 0.7171 & 0.7452 \\ \hline \end{tabular} \end{table} Table 4: Performance measures for cina@ dataset. 
(8) and (9) refers to the choice of the regularization functional, while ‘\(\times\)’ denotes \(\varphi=0\) (i.e. only \(\ell_{2}\)-norm regularization). The names of the methods refer to the list presented at the beginning of Section 5.1.1. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} & Reg. & FG & MMI & H MMI & MM & H MM & SUB & H SUB \\ \hline \multirow{3}{*}{Accuracy} & (8) & 0.9607 & 0.9916 & **0.9919** & 0.9876 & 0.9912 & 0.9914 & 0.9916 \\ & (9) & 0.9602 & 0.9837 & 0.9918 & 0.9876 & 0.9916 & 0.9911 & 0.9913 \\ & \(\times\) & 0.9607 & 0.6471 & 0.9917 & 0.9876 & 0.9912 & 0.9910 & 0.9918 \\ \hline \multirow{3}{*}{Recall} & (8) & 0.1362 & 0.8000 & **0.8205** & 0.6299 & 0.7931 & 0.8182 & 0.8051 \\ & (9) & 0.1341 & 0.4837 & 0.8033 & 0.6299 & 0.7813 & 0.8073 & 0.7717 \\ & \(\times\) & 0.1362 & 0.0187 & 0.8067 & 0.6299 & 0.7982 & 0.8173 & 0.8136 \\ \hline \multirow{3}{*}{Precision} & (8) & 0.2821 & 0.6154 & 0.6154 & 0.5128 & 0.5897 & 0.5769 & 0.6090 \\ & (9) & 0.2821 & 0.5705 & 0.6282 & 0.5128 & **0.6410** & 0.5641 & 0.6282 \\ & \(\times\) & 0.2821 & 0.4167 & 0.6154 & 0.5128 & 0.5833 & 0.5449 & 0.6154 \\ \hline \multirow{3}{*}{F\({}_{1}\)} & (8) & 0.1837 & 0.6957 & 0.7033 & 0.5654 & 0.6765 & 0.6767 & 0.6934 \\ & (9) & 0.1818 & 0.5235 & **0.7050** & 0.5654 & 0.7042 & 0.6642 & 0.6926 \\ \cline{1-1} & \(\times\) & 0.1837 & 0.0357 & 0.6982 & 0.5654 & 0.6741 & 0.6538 & 0.7007 \\ \hline \end{tabular} \end{table} Table 3: Performance measures for w8a dataset. (8) and (9) refers to the choice of the regularization functional, while ‘\(\times\)’ denotes \(\varphi=0\) (i.e., only \(\ell_{2}\)-norm regularization). The names of the methods refer to the list presented at the beginning of Section 5.1.1. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \(\lambda\) & sparsity & accuracy & precision & recall & F1-score \\ \hline \(10^{-1}\) & 55/120 & **0.8162** & **0.6986** & 0.5795 & 0.6335 \\ \hline \(10^{-2}\) & 35/120 & **0.8162** & **0.68838** & **0.6023** & **0.6424** \\ \hline \(10^{-3}\) & 21/120 & 0.8100 & 0.6753 & 0.5909 & 0.6303 \\ \hline \(10^{-4}\) & 18/120 & 0.8100 & 0.6753 & 0.5909 & 0.6303 \\ \hline \(10^{-5}\) & 16/120 & 0.8100 & 0.6753 & 0.5909 & 0.6303 \\ \hline \hline \end{tabular} \end{table} Table 6: Sparsity ratio and classification metrics for the a1a dataset with (8) regularization functional, for different \(\lambda\) choices. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \(\lambda\) & sparsity & accuracy & precision & recall & F1-score \\ \hline \(10^{-1}\) & 249/300 & 0.9921 & 0.8469 & 0.6026 & 0.7041 \\ \hline \(10^{-2}\) & 159/300 & **0.9922** & **0.8482** & 0.609 & **0.709** \\ \hline \(10^{-3}\) & 44/300 & 0.9921 & 0.8407 & 0.609 & 0.7063 \\ \hline \(10^{-4}\) & 33/300 & 0.9919 & 0.8205 & **0.6154** & 0.7033 \\ \hline \(10^{-5}\) & 5/300 & 0.9919 & 0.8205 & **0.6154** & 0.7033 \\ \hline \hline \end{tabular} \end{table} Table 8: Sparsity ratio and classification metrics for the w8a dataset with (8) regularization functional, for different \(\lambda\) choices. 
\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \(\lambda\) & sparsity & accuracy & precision & recall & F1-score \\ \hline \(10^{-1}\) & 249/300 & 0.9921 & 0.8469 & 0.6026 & 0.7041 \\ \hline \(10^{-2}\) & 159/300 & **0.9922** & **0.8482** & 0.609 & **0.709** \\ \hline \(10^{-3}\) & 44/300 & 0.9921 & 0.8407 & 0.609 & 0.7063 \\ \hline \(10^{-4}\) & 33/300 & 0.9919 & 0.8205 & **0.6154** & 0.7033 \\ \hline \(10^{-5}\) & 5/300 & 0.9919 & 0.8205 & **0.6154** & 0.7033 \\ \hline \hline \end{tabular} \end{table} Table 7: Sparsity ratio and classification metrics for the cina0 dataset with (8) regularization functional, for different \(\lambda\) choices. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline & Reg. & FG & MMI & H MMI & MM & H MM & SUB & H SUB \\ \hline \multirow{4}{*}{a1a} & (8) & 0.010434 & 0.161774 & 0.031648 & 0.161625 & 0.025913 & 0.036542 & 0.164651 \\ & (9) & 0.043637 & 0.186570 & 0.031504 & 0.160993 & 0.034640 & 0.043813 & 0.180505 \\ & \(\times\) & 0.173047 & 1.769611 & 0.223276 & 1.680880 & 0.198365 & 0.211096 & 1.738608 \\ \hline \multirow{4}{*}{cina0} & (8) & 0.181315 & 1.857992 & 0.225515 & 1.808992 & 0.231394 & 0.224193 & 1.888193 \\ & (9) & 0.191336 & 1.986022 & 0.230983 & 2.019854 & 0.255824 & 0.297928 & 1.975059 \\ & \(\times\) & 0.173007 & 1.698734 & 0.220615 & 1.702079 & 0.214507 & 0.249245 & 2.063324 \\ \hline \multirow{4}{*}{w8a} & (8) & 1.098484 & 8.006783 & 1.843458 & 8.389577 & 1.112776 & 1.288889 & 10.018341 \\ & (9) & 1.074437 & 11.936826 & 20.957777 & 13.887202 & 1.180779 & 1.133178 & 13.653358 \\ \cline{1-1} & \(\times\) & 1.060243 & 7.085468 & 1.596996 & 8.402317 & 1.279365 & 1.217616 & 8.584693 \\ \hline \hline \end{tabular} \end{table} Table 5: Time in second for all the datasets. (8) and (9) refers to the choice of the regularization functional, while ‘\(\times\)’ denotes \(\varphi=0\) (i.e., only \(\ell_{2}\)-norm regularization). The names of the methods refer to the list presented at the beginning of Section 5.1.1. In Table 5, we report the computational time in seconds for training the various methods. When we refer to computational time, we mean the entire training phase on all the elements of the dataset, thus referring to the 100 epochs/iterations. All experiments were run on an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz 2.81 GHz. For completeness, all combinations of datasets and regularizers have been reported. The faster training is performed by FG, H MMI, H MM, and SUB approaches. The best compromise between performance and complexity is achieved by hybrid MM methods. We evaluate in Tables 6-8 the influence of setting parameter \(\lambda\), when choosing the convex regularizer (8). As expected, the sparsity (i.e., number of zero coefficients) in the retrieved coefficients increases with \(\lambda\). We can also notice that the best classification metrics are obtained for an intermediary value of \(\lambda\). This is in particular the case for a1a and w8a datasets. This emphasizes the important role of the introduced sparsifying penalty. ## 6 Conclusions This paper revisits existing approaches for training Support Vector Machines by considering modern developments around MM strategies. The novel family of proposed optimization methods address formulations combining a square hinge loss data fidelity function with a smooth sparsity-promoting regularization functional. This combination results in a differentiable objective function, enabling the use of efficient optimization methods for training. 
The numerical tests performed on three datasets show that the proposed approaches provide reliable results in terms of accuracy, precision, recall, and F1-score, and that a hybrid approach integrating some stochastic gradient iterations as a warm-up provides an initial boost that leads to better performance. The results demonstrate that this new approach for training SVMs can be used effectively in a variety of real-world applications, including big-data contexts through the joint use of stochastic gradient methods. A natural extension of this work would be to investigate multi-class formulations of SVMs.
2310.00243
Age-Optimal Multi-Flow Status Updating with Errors: A Sample-Path Approach
In this paper, we study an age of information minimization problem in continuous-time and discrete-time status updating systems that involve multiple packet flows, multiple servers, and transmission errors. Four scheduling policies are proposed. We develop a unifying sample-path approach and use it to show that, when the packet generation and arrival times are synchronized across the flows, the proposed policies are (near) optimal for minimizing any time-dependent, symmetric, and non-decreasing penalty function of the ages of the flows over time in a stochastic ordering sense.
Yin Sun, Sastry Kompella
2023-09-30T03:54:39Z
http://arxiv.org/abs/2310.00243v2
# Age-Optimal Multi-Flow Status Updating with Errors: A Sample-Path Approach ###### Abstract In this paper, we study an age of information minimization problem in _continuous-time_ and _discrete-time_ status updating systems that involve _multiple packet flows_, _multiple servers_, and _transmission errors_. Four scheduling policies are proposed. We develop a unifying sample-path approach and use it to show that, when the packet generation and arrival times are synchronized across the flows, the proposed policies are (near) optimal for minimizing any _time-dependent_, _symmetric_, and _non-decreasing_ penalty function of the ages of the flows over time in a stochastic ordering sense. Age of information, Status Updating, Errors, Multiple Channels, Multiple Flows, Sample-path Approach. ## I Introduction In many information-update and networked control systems, such as news updates, stock trading, autonomous driving, remote surgery, robotics control, and real-time surveillance, information usually has the greatest value when it is fresh. A metric for information freshness, called _age of information_ or simply _age_, was introduced in [2, 3]. Consider a flow of status update packets that are sent from a source to a destination through a channel. Let \(U(t)\) be the time stamp (i.e., generation time) of the newest update that the destination has received by time \(t\). Age of information, as a function of time \(t\), is defined as \(\Delta(t)=t-U(t)\), which is the time elapsed since the newest update was generated. In recent years, there have been a lot of research efforts on (i) analyzing the distributional quantities of age \(\Delta(t)\) for various network models and (ii) controlling \(\Delta(t)\) to keep the destination's information as fresh as possible, e.g., [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44]. If there is a single flow of status update packets, the Last Generated, First Served (LGFS) update transmission policy, in which the last generated packet is served the first, has been shown to be (nearly) optimal for minimizing the age process \(\{\Delta(t),t\geq 0\}\) in a stochastic ordering sense for queueing networks with multiple servers or multiple hops [14, 15, 16, 17, 18]. These results hold for arbitrary packet generation times at the information source (e.g., a sensor) and arbitrary packet arrival times at the transmitter's queueing buffer; they also hold for minimizing any non-decreasing functional \(\phi(\{\Delta(t),t\geq 0\})\) of the age process \(\{\Delta(t),t\geq 0\}\). If packets arrive at the queue in the order of their generation times, then the LGFS policy reduces to the Last Come, First Served (LCFS) policy, thus demonstrating the (near) age-optimality of the LCFS policy. These studies motivated us to delve deeper into the design of scheduling policies to minimize age of information in more complex networks involving _multiple flows of status update packets_ and _transmission errors_, where each flow is from one source node to a destination node. In this scenario, the transmission scheduler must compare not only packets from the same flow, but also packets from different flows. Additionally, the presence of transmission errors adds an additional layer of complexity to the scheduling problem. As a result, addressing these challenges becomes crucial in achieving efficient age minimization in such systems. 
In this paper, we investigate age-optimal scheduling in _continuous-time_ and _discrete-time_ status updating systems that involve _multiple flows_, _multiple servers_, and _transmission errors_, as illustrated in Figure 1. Each server can transmit packets to their respective destinations, one packet at a time. Different servers are not allowed to simultaneously transmit packets from the same flow. We assume that the packet generation and arrival times are _synchronized_ across the flows. In other words, when a packet from flow \(n\) arrives at the queue at time \(A_{i}\), with its generation time denoted as \(S_{i}\) (where \(S_{i}\leq A_{i}\)), one corresponding packet from each flow simultaneously received at time \(A_{i}\), and all of these packets were generated at the same time \(S_{i}\). In practice, synchronized packet generations and arrivals occur when there is a single source and multiple destinations (e.g., [22]), or in periodic sampling where multiple sources are synchronized by the same clock, which is common in monitoring and control systems (e.g., [45, 46]). We develop a unifying sample-path approach and use it to show that the proposed scheduling policies can achieve optimal or near-optimal age performance in a quite strong sense (i.e., in terms of stochastic ordering of age-penalty stochastic processes). The contributions of this paper are summarized as follows: * Let \(\mathbf{\Delta}(t)\) denote the age vector of multiple flows. We introduce an age penalty function \(p_{t}(\mathbf{\Delta}(t))\) to represent Fig. 1: System model. the level of dissatisfaction for having aged information at the destinations at time \(t\), where \(p_{t}\) can be any _time-dependent_, _symmetric_, and _non-decreasing_ function of the age vector \(\mathbf{\Delta}(t)\). * For continuous-time status updating systems with one or multiple flows, one or multiple servers, and _i.i.d._ exponential transmission times, we propose a _Preemptive, Maximum Age First, Last Generated First Served (P-MAF-LGFS) scheduling policy_.1 If the packet generation and arrival times are synchronized across the flows, then for any age penalty function \(p_{t}\) defined above, any number of flows, any number of servers, any synchronized packet generation and arrival times, and regardless the presence of transmission errors or not, the P-MAF-LGFS is proven to minimize the continuous-time age penalty process \(\{p_{t}(\mathbf{\Delta}(t)),t\geq 0\}\) among all causal policies in a stochastic ordering sense (see Theorem 1 and Corollary 1). Theorem 1 is more general than [1, Theorem 1], as the latter was established for the special case of single-server status updating systems without transmission errors. In addition, if packet replication is allowed, we show that a _Preemptive, Maximum Age First, Last Generated First Served scheduling policy with packet Replications (P-MAF-LGFS-R)_ is age-optimal for minimizing the age penalty process \(\{p_{t}(\mathbf{\Delta}(t)),t\geq 0\}\) in terms of stochastic ordering (see Corollary 2). Footnote 1: This new P-MAF-LGFS policy is suitable for both single-server and multi-server systems, whereas the original P-MAF-LGFS policy, as presented in [1], was specifically tailored for single-server scenarios. * For continuous-time status updating systems with one or multiple flows, one or multiple servers, and _i.i.d._ exponential transmission times (which in-age-optimal multi-flow scheduling is quite difficult to achieve. 
In this case, we consider an age lower bound called the _Age of Served Information_ and propose a _Non-Preemptive, Maximum Age of Served Information First, Last Generated First Served (NP-MASIF-LGFS) scheduling policy_. The NP-MASIF-LGFS policy is shown to be near age-optimal. Specifically, it is within an additive gap from the optimum for minimizing the expected time-average of the average age of the flows, where the gap is equal to the mean transmission time of one packet (see Theorem 2 and Corollary 3). This additive sub-optimality gap is quite small. * For discrete-time status updating systems with one or multiple flows and one or multiple servers, we propose a _Discrete Time, Maximum Age First, Last Generated First Served (DT-MASIF-LGFS) scheduling policy_. If the packet generation and arrival times are synchronized across the flows, then for any age penalty function \(p_{t}\), any number of flows, any number of servers, any synchronized packet generation and arrival times, and regardless the presence of transmission errors or not, the DT-MAF-LGFS policy is proven to minimize the discrete-time age penalty process \(\{p_{t}(\mathbf{\Delta}(t)),t=0,T_{s},2T_{s},\ldots\}\) among all causal policies in a stochastic ordering sense, where \(T_{s}\) is the fundamental time unit of the discrete-time systems (see Theorem 3). Our results can be potentially applied to: (i) cloud-hosted Web services where the servers in Figure 1 represent a pool of threads (each for a TCP connection) connecting a front-end proxy node to clients [47], (ii) industrial robotics and factory automation systems where multiple sensor-output flows are sent to a wireless AP and then forwarded to a system monitor and/or controller [48], and (iii) Multi-access Edge Computing (MEC) that can process fresh data (e.g., data for video analytics, location services, and IoT) locally at the very edge of the mobile network. ## II Related Work The age of information concept has attracted a significant surge of research interest; see, e.g., [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43] and a recent survey [44]. Initially, research efforts were centered on analyzing and comparing the age performance of different queueing disciplines, such as First-Come, First-Served (FCFS) [3, 5, 9, 11], preemptive and non-preemptive Last-Come, First-Served (LCFS) [4, 20], and packet management [8, 10]. In [14, 15, 16, 17, 18], a sample-path approach was developed to prove that Last-Generated, First-Served (LGFS)-type policies are optimal or near-optimal for minimizing a broad class of age metrics in multi-server and multi-hop queueing networks with a single packet flow. When packets arrive in the order of their generation times, the LGFS policy becomes the well-known Last Come, First Served (LCFS) policy. Hence, the LCFS policy is (near) age-optimal in these queueing networks. In recent years, researchers have expanded the aforementioned studies to consider age minimization in multi-flow discrete-time status updating systems [22, 23, 24, 25]. In [22], the authors utilized a sample-path method to establish the optimality of the Maximum Age First (MAF) policy in minimizing the time-averaged sum age of multiple flows. This investigation focused on discrete-time systems with periodic arrivals and a single broadcast channel, which is susceptible to _i.i.d._ transmission errors. 
Moreover, in [23], a Markov decision process (MDP) approach was adopted to prove that the MAF policy minimizes the time-averaged sum age of multiple flows in discrete-time systems with Bernoulli arrivals, a single broadcast channel, and no buffer. In this bufferless setup, arriving packets are discarded if they cannot be transmitted immediately in the arriving time slot. In [24], the authors studied discrete-time systems with multiple flows and multiple ON/OFF channels, where the state of each channel (ON/OFF) is known for making scheduling decisions. It was demonstrated that a Max-Age Matching policy is asymptotically optimal for minimizing non-decreasing symmetric functions of the age of the flows as the numbers of flows and channels increase. In [25], it was shown that the MAF policy minimizes the Maximum Age of multiple flows in discrete-time systems with periodic arrivals and a single broadcast channel susceptible to _i.i.d._ transmission errors, where the transmission error probability may vary across the flows. In [49], a sample-path method was employed to demonstrate that the round-robin policy minimizes a service regularity metric called _time-since-last-service_ in discrete-time systems with multiple flows and transmission errors. In the definition of time-since-last-service, a user can receive service even if its queue is empty. Consequently, time-since-last-service bears similarities to the age of information concept, albeit these two metrics are different. The present paper, alongside its conference version [1], complements the aforementioned studies in several essential ways: (i) It considers general time-dependent, symmetric, and non-decreasing age penalty functions \(p_{t}\). (ii) Both continuous-time and discrete-time systems with multiple flows, multiple channels (a.k.a. servers), and transmission errors are investigated. (iii) The paper establishes near age-optimal scheduling results in scenarios where achieving age-optimality is inherently challenging. ## III System Model ### _Notations and Definitions_ We use lower case letters such as \(x\) and \(\mathbf{x}\), respectively, to represent deterministic scalars and vectors. In the vector case, a subscript will index the components of a vector, such as \(x_{i}\). We use \(x_{[i]}\) to denote the \(i\)-th largest component of vector \(\mathbf{x}\). Let \(\mathbf{0}\) denote a vector with all 0 components. A function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is termed _symmetric_ if \(f(\mathbf{x})=f(x_{[1]},\ldots,x_{[n]})\) for all \(\mathbf{x}\in\mathbb{R}^{n}\). A function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is termed _separable_ if there exists functions \(f_{1},\ldots,f_{n}\) of one variable such that \(f(\mathbf{x})=\sum_{i=1}^{n}f_{i}(x_{i})\) for all \(\mathbf{x}\in\mathbb{R}^{n}\). The composition of functions \(f\) and \(g\) is denoted by \(f\circ g(x)=f(g(x))\). For any \(n\)-dimensional vectors \(\mathbf{x}\) and \(\mathbf{y}\), the elementwise vector ordering \(x_{i}\leq y_{i}\), \(i=1,\ldots,n\), is denoted by \(\mathbf{x}\leq\mathbf{y}\). Let \(\mathcal{A}\) and \(\mathcal{U}\) denote sets and events. For all random variable \(X\) and event \(\mathcal{A}\), let \([X|\mathcal{A}]\) denote a random variable with the conditional distribution of \(X\) for given \(\mathcal{A}\). 
We will need the following definitions: **Definition 1**.: _Stochastic Ordering of Random Variables [50]:_ A random variable \(X\) is said to be _stochastically smaller_ than another random variable \(Y\), denoted by \(X\leq_{\text{st}}Y\), if \[\Pr(X>t)\leq\Pr(Y>t),\ \forall\ t\in\mathbb{R}. \tag{1}\] **Definition 2**.: _Stochastic Ordering of Random Vectors [50]:_ A set \(\mathcal{U}\subseteq\mathbb{R}^{n}\) is called _upper_, if \(\mathbf{y}\in\mathcal{U}\) whenever \(\mathbf{y}\geq\mathbf{x}\) and \(\mathbf{x}\in\mathcal{U}\). Let \(\mathbf{X}\) and \(\mathbf{Y}\) be two \(n\)-dimensional random vectors, \(\mathbf{X}\) is said to be _stochastically smaller_ than \(\mathbf{Y}\), denoted by \(\mathbf{X}\leq_{\text{st}}\mathbf{Y}\), if \[\Pr(\mathbf{X}\in\mathcal{U})\leq\Pr(\mathbf{Y}\in\mathcal{U})\text{ for all upper sets }\mathcal{U}\subseteq\mathbb{R}^{n}. \tag{2}\] **Definition 3**.: _Stochastic Ordering of Stochastic Processes [50]:_ Let \(\{X(t),t\in[0,\infty)\}\) and \(\{Y(t),t\in[0,\infty)\}\) be two stochastic processes, \(\{X(t),t\in[0,\infty)\}\) is said to be _stochastically smaller_ than \(\{Y(t),t\in[0,\infty)\}\), denoted by \(\{X(t),t\in[0,\infty)\}\leq_{\text{st}}\{Y(t),t\in[0,\infty)\}\), if for all integer \(n\) and \(0\leq t_{1}<t_{2}<\ldots<t_{n}\), it holds that \[(X(t_{1}),X(t_{2}),\ldots,X(t_{n}))\leq_{\text{st}}(Y(t_{1}),Y(t_{2}),\ldots, Y(t_{n})). \tag{3}\] A functional is a mapping from functions to real numbers. A functional \(\phi\) is termed _non-decreasing_ if \(\phi(\{X(t),t\in[0,\infty)\})\leq\phi(\{Y(t),t\in[0,\infty)\})\) whenever \(X(t)\leq Y(t)\) for \(t\in[0,\infty)\). We remark that \(\{X(t),t\in[0,\infty)\}\leq_{\text{st}}\{Y(t),t\in[0,\infty)\}\) if, and only if, [50] \[\mathbb{E}[\phi(\{X(t),t\in[0,\infty)\})]\leq\mathbb{E}[\phi(\{Y(t),t\in[0, \infty)\})] \tag{4}\] holds for all non-decreasing functional \(\phi\), provided that the expectations in (4) exist. ### _Queueing System Model_ Consider the status updating system illustrated in Fig. 1, where \(N\) flows of status update packets are sent through a queue with an infinite buffer and \(M\) servers. Let \(s_{n}\) and \(d_{n}\) denote the source and destination nodes of flow \(n\), respectively. It is possible for multiple flows to share either the same source node or the same destination node. A scheduler assigns packets from the transmitter's queue to servers over time. The queue contains packets from different flows, and each packet can be assigned to any available server. Each server is capable of transmitting only one packet at a time. Different servers are not allowed to simultaneously transmit packets from the same flow. The packet transmission times are independent and identically distributed (_i.i.d._) across both servers and packets, with a finite mean \(1/\mu\). The packet transmissions are susceptible to _i.i.d._ errors with an error probability \(q\in[0,1)\), occurring at the end of the packet transmission time intervals. The scheduler is made aware of transmission errors once they occur. In the event of such a error, the packet is promptly returned to the queue, where it awaits the next transmission opportunity. if \(q=0\), then there is no transmission errors. The system starts to operate at time \(t=0\). 
The \(i\)-th packet of flow \(n\) is generated at the source node \(s_{n}\) at time \(S_{n,i}\), arrives at the queue at time \(A_{n,i}\), and is delivered to the destination \(d_{n}\) at time \(D_{n,i}\) such that \(0\leq S_{n,1}\leq S_{n,2}\leq\ldots\) and \(S_{n,i}\leq A_{n,i}\leq D_{n,i}\).2 We consider the following class of _synchronized_ packet generation and arrival processes: Footnote 2: This paper allows \(S_{n,i}\leq A_{n,i}\), which is more general than the conventional assumption \(S_{n,i}=A_{n,i}\) adopted in related literature. **Definition 4**.: _Synchronized Packet Generations and Arrivals:_ The packet generation and arrival processes are said to be _synchronized_ across the \(N\) flows, if there exist two sequences \(\{S_{1},S_{2},\ldots\}\) and \(\{A_{1},A_{2},\ldots\}\) such that for all \(i=1,2,\ldots\), and \(n=1,\ldots,N\) \[S_{n,i}=S_{i},\ A_{n,i}=A_{i}. \tag{5}\] We note that the sequences \(\{S_{1},S_{2},\ldots\}\) and \(\{A_{1},A_{2},\ldots\}\) in (5) are _arbitrary_. Hence, _out-of-order arrivals_, e.g., \(S_{i}<S_{i+1}\) but \(A_{i}>A_{i+1}\), are allowed. In the special case that the system has a single flow (\(N=1\)), the packet generation times \(S_{n,1}\) and arrival times \(A_{n,1}\) of this flow are arbitrarily given without any constraint. Age-optimal scheduling in this special case has been previously studied in [14, 15, 16, 17]. Let \(\pi\) represent a scheduling policy that determines how to assign packets from the queue to servers over time. Let \(\Pi\) denote the set of all _causal_ scheduling policies in which the scheduling decisions are made based on the history and current states of the system. A scheduling policy is said to be _preemptive_ if a busy server can stop the transmission of the current packet and start sending another packet at any time; the preempted packet is stored back to the queue, waiting to be sent at a later time. A scheduling policy is said to be _non-preemptive_ if each server must complete the transmission of the current packet before initiating the service of another packet. A scheduling policy is said to be _work-conserving_ if all servers remain busy whenever the queue contains packets waiting to be processed. We use \(\Pi_{np}\) to denote the set of non-preemptive and causal scheduling policies, where \(\Pi_{np}\subset\Pi\). Let \[\mathcal{I}=\{S_{i},A_{i},\ i=1,2,\ldots\} \tag{6}\] denote the synchronized packet generation and arrival times of the flows. We assume that the packet generation/arrival times \(\mathcal{I}\), the packet transmission times, and the transmission errors are governed by three _mutually independent_ stochastic processes, none of which are influenced by the scheduling policy. ### _Age Metrics_ Among the packets that have been delivered to the destination \(d_{n}\) of flow \(n\) by time \(t\), the freshest packet was generated at time \[U_{n}(t)=\max_{i}\{S_{n,i}:D_{n,i}\leq t\}. \tag{7}\] _Age of information_, or simply _age_, for flow \(n\) is defined as [2, 3] \[\Delta_{n}(t)=t-U_{n}(t)=t-\max_{i}\{S_{n,i}:D_{n,i}\leq t\}, \tag{8}\] which is the time difference between the current time \(t\) and the generation time \(U_{n}(t)\) of the freshest packet currently available at destination \(d_{n}\). Because \(S_{n,i}\leq D_{n,i}\), one can get \(\Delta_{n}(t)\geq 0\) for all flow \(n\) and time \(t\). Let \(\boldsymbol{\Delta}(t)=(\Delta_{1}(t),\ldots,\Delta_{N}(t))\in[0,\infty)^{N}\) be the age vector of the \(N\) flows at time \(t\). 
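As a concrete illustration of definition (8), the sketch below evaluates the age of each flow at a query time \(t\) from recorded generation and delivery times. It is our own minimal example (the function and variable names are hypothetical, not part of the model) and assumes every flow has delivered at least one packet by time \(t\):

```python
def age_of_information(t, generations, deliveries):
    """Delta_n(t) = t - max{S_{n,i} : D_{n,i} <= t}, cf. Eq. (8)."""
    delivered = [S for S, D in zip(generations, deliveries) if D <= t]
    return t - max(delivered)   # assumes at least one delivery by time t

def age_vector(t, flows):
    """Age vector of N flows; `flows` is a list of (generations, deliveries) pairs."""
    return [age_of_information(t, S, D) for S, D in flows]
```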
We introduce an _age penalty function_\(p(\boldsymbol{\Delta})=p\circ\boldsymbol{\Delta}\) to represent the level of dissatisfaction for having aged information at the \(N\) destinations, where \(p:[0,\infty)^{N}\to\mathbb{R}\) can be any _non-decreasing_ function of the \(N\)-dimensional age vector \(\boldsymbol{\Delta}\). Some examples of the age penalty function are: 1. The _average age_ of the \(N\) flows is \[p_{\text{avg}}(\boldsymbol{\Delta})=\frac{1}{N}\sum_{n=1}^{N}\Delta_{n}.\] (9) 2. The _maximum age_ of the \(N\) flows is \[p_{\text{max}}(\boldsymbol{\Delta})=\max_{n=1,\ldots,N}\Delta_{n}.\] (10) 3. The _mean square age_ of the \(N\) flows is \[p_{\text{ms}}(\boldsymbol{\Delta})=\frac{1}{N}\sum_{n=1}^{N}(\Delta_{n})^{2}.\] (11) 4. The _l-norm of the age vector_ of the \(N\) flows is \[p_{l\text{-norm}}(\boldsymbol{\Delta})=\left[\sum_{n=1}^{N}(\Delta_{n})^{l} \right]^{\frac{1}{l}},\ l\geq 1.\] (12) 5. The _sum of per-flow age penalty functions_ is \[p_{\text{sum-penalty}}(\boldsymbol{\Delta})=\sum_{n=1}^{N}g(\Delta_{n}),\] (13) where \(g:[0,\infty)\to\mathbb{R}\) is a _non-decreasing_ function. Practical applications of non-decreasing age functions can be found in [32, 33, 34, 36, 35]. In this paper, we consider a class of _symmetric_ and _non-decreasing_ age penalty functions, i.e., \[\mathcal{P}_{\text{sym}}\!=\!\{p:[0,\infty)^{N}\to\mathbb{R}\text{ is symmetric and non-decreasing}\}.\] This is a fairly large class of age penalty functions, where the function \(p\) can be discontinuous, non-convex, or non-separable. It is easy to see \[\{p_{\text{avg}},p_{\text{max}},p_{\text{ms}},p_{l\text{-norm}},p_{\text{ sum-penalty}}\}\subset\mathcal{P}_{\text{sym}}. \tag{14}\] In this paper, we consider both continuous-time and discrete-time status updating systems. In the continuous-time setting, time \(t\in[0,\infty)\) can take any positive value and the packet transmission times are _i.i.d._ continuous random variables. On the other hand, in the discrete-time setting, time is quantized into multiples of a fundamental time unit \(T_{s}\), i.e., \(t\in\{0,T_{s},2T_{s},\ldots\}\), and each packet's transmission time is fixed and equal to \(T_{s}\). Consequently, the variables \(S_{n,i},A_{n,i},D_{n,i},t,U_{n}(t),\Delta_{n}(t)\) are all multiples of \(T_{s}\). In realistic discrete-time systems, service preemption is not allowed. Let \(\Delta_{n,\pi}(t)\) denote the age of flow \(n\) achieved by scheduling policy \(\pi\) and \(\boldsymbol{\Delta}_{\pi}(t)=(\Delta_{1,\pi}(t),\ldots,\Delta_{N,\pi}(t))\). In the continuous-time case, we assume that the initial age \(\boldsymbol{\Delta}_{\pi}(0^{-})\) at time \(t=0^{-}\) remains the same for all scheduling policies \(\pi\in\Pi\), where \(t=0^{-}\) is the moment right before \(t=0\). In the discrete-time case, we assume that the initial age \(\boldsymbol{\Delta}_{\pi}(0)\) at time \(t=0\) remains the same for all scheduling policies \(\pi\in\Pi\). The results in this paper remain true even if the age penalty function \(p_{t}\) varies over time \(t\). For example, it is allowed that \(p_{t}=p_{\text{avg}}\) for \(0\leq t\leq 100\) and \(p_{t}=p_{\text{max}}\) for \(100<t\leq 200\). In the continuous-time case, we use \(\{p_{t}\circ\boldsymbol{\Delta}_{\pi}(t),t\in[0,\infty)\}\) to represent the age-penalty stochastic process formed by the _time-dependent_ penalty function \(p_{t}\) of the age vector \(\boldsymbol{\Delta}_{\pi}(t)\) under scheduling policy \(\pi\). 
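The example penalty functions (9)-(12) are straightforward to evaluate on a given age vector; a small sketch (our own illustration) is:

```python
import numpy as np

def p_avg(delta):          # average age, Eq. (9)
    return np.mean(delta)

def p_max(delta):          # maximum age, Eq. (10)
    return np.max(delta)

def p_ms(delta):           # mean square age, Eq. (11)
    return np.mean(np.square(delta))

def p_lnorm(delta, l=2):   # l-norm of the age vector, Eq. (12), with l >= 1
    return np.sum(np.power(delta, l)) ** (1.0 / l)
```

Each of these is symmetric and non-decreasing in every component of a non-negative age vector, consistent with membership in \(\mathcal{P}_{\text{sym}}\).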
In the discrete-time case, the age-penalty stochastic process is denoted by \(\{p_{t}\circ\boldsymbol{\Delta}_{\pi}(t),t=0,T_{s},2T_{s},\ldots\}\). ## IV Multi-flow Status Update Scheduling: The Continuous-time Case In this section, we investigate multi-flow scheduling in continuous-time status updating systems. We first consider a system setting with multiple servers and exponential transmission times, where an age-optimal scheduling result is established. Next, we study a more general system setting with multiple servers and NBU transmission times. In the second setting, age optimality is inherently difficult to achieve and we present a near age-optimal scheduling result. ### _Multiple Flows, Multiple Servers, Exponential Service Times_ To address the multi-flow scheduling problem, we consider a flow selection discipline called _Maximum Age First (MAF)_[6, 22, 23], in which _the flow with the maximum age is served first, with ties broken arbitrarily_. For multi-flow single-server systems, a scheduling policy is defined by combining the Preemptive, MAF, and LGFS service disciplines as follows: **Definition 5**.: _Preemptive, Maximum Age First, Last Generated First Served (P-MAF-LGFS) policy:_ This is a work-conserving scheduling policy for multiple-server, continuous-time systems with synchronized packet generations and arrivals. It operates as follows: 1. If the queue is not empty, a server is assigned to process the most recently generated packet from the flow with the maximum age, with ties broken arbitrarily. 2. The next server is assigned to process the most recently generated packet from the flow with the second maximum age, with ties broken arbitrarily. 3. This process continues until either (i) the most recently generated packet of every flow is under service or has been delivered, or (ii) all servers are busy. 4. If the most recently generated packet of every flow is under service or has been delivered, the remaining servers can be arbitrarily assigned to send the remaining packets in the queue, until the queue becomes empty. 5. When fresher packets arrive, the scheduler can preempt the packets that are currently under service and assign the new packets to servers following Steps 1-4 above. The preempted packets are then returned to the queue, where they await their turn to be transmitted at a later time. The following observation provides useful insights into the operations of the P-MAF-LGFS policy: Due to synchronized packet generations and arrivals, when the most recently generated packet of flow \(n\) is successfully delivered in the P-MAF-LGFS policy, flow \(n\) must have the _minimum_ age among the \(N\) flows. Conversely, if flow \(n\) does not have the _minimum_ age among all the flows, its most recently generated packet must be undelivered. Hence, in the P-MAF-LGFS policy, the most recently generated packet from a flow that does not have the _minimum_ age is always available to be scheduled. The above P-MAF-LGFS policy is suitable for use in both single-server and multiple-server systems. It extends the original single-server P-MAF-LGFS policy introduced in [1] to encompass the more general multi-server scenario. The age optimality of the P-MAF-LGFS policy is established in Theorem 1 and Corollary 1 below. 
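Before stating the optimality result, a minimal sketch of the server-assignment rule of Definition 5 for a single scheduling instant may help fix ideas. It is our own simplified illustration (it covers Steps 1-3 only, omitting Step 4 and preemption, and all names are hypothetical):

```python
def pmaf_lgfs_assign(ages, freshest_undelivered, idle_servers):
    """Maximum Age First over flows, Last Generated First Served within a flow.
    `ages[n]` is the current age of flow n; `freshest_undelivered[n]` is the
    generation time of flow n's most recently generated undelivered packet,
    or None if that packet is already under service or delivered."""
    servers, assignment = list(idle_servers), {}
    # visit flows in decreasing order of age (ties broken by the sort, i.e. arbitrarily)
    for flow in sorted(range(len(ages)), key=lambda n: ages[n], reverse=True):
        if not servers:
            break                                  # all servers are busy
        if freshest_undelivered[flow] is None:
            continue                               # nothing fresh left to send for this flow
        assignment[servers.pop(0)] = (flow, freshest_undelivered[flow])
    return assignment
```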
**Theorem 1**.: (Continuous-time, multiple flows, multiple servers, exponential transmission times with transmission errors) In continuous-time status updating systems, if (i) the transmission errors are _i.i.d._ with an error probability \(q\in[0,1)\), (ii) the packet generation and arrival times are synchronized across the \(N\) flows, and (iii) the packet transmission times are exponentially distributed and _i.i.d._ across packets, then it holds that for all \(\mathcal{I}\), all \(p_{t}\in\mathcal{P}_{\text{sym}}\), and all \(\pi\in\Pi\) \[[\{p_{t}\circ\mathbf{\Delta}_{\text{P-MAF-LGFS}}(t),t\in[0, \infty)\}|\mathcal{I}]\] \[\leq_{\text{st}}[\{p_{t}\circ\mathbf{\Delta}_{\pi}(t),t\in[0, \infty)\}|\mathcal{I}], \tag{15}\] or equivalently, for all \(\mathcal{I}\), all \(p_{t}\in\mathcal{P}_{\text{sym}}\), and all non-decreasing functional \(\phi\) \[\mathbb{E}\left[\phi(\{p_{t}\circ\mathbf{\Delta}_{\text{P-MAF-LGFS }}(t),t\in[0,\infty)\})|\mathcal{I}\right]\] \[= \min_{\pi\in\Pi}\mathbb{E}\left[\phi(\{p_{t}\circ\mathbf{\Delta }_{\pi}(t),t\in[0,\infty)\})|\mathcal{I}\right], \tag{16}\] provided that the expectations in (16) exist. Proof.: See Appendix A. According to Theorem 1, for any age penalty function in \(\mathcal{P}_{\text{sym}}\), any number of flows \(N\), any number of servers \(M\), any synchronized packet generation and arrival times in \(\mathcal{I}\), and regardless the presence of _i.i.d._ transmission errors or not, the P-MAF-LGFS policy minimizes the stochastic process \([\{p_{t}\circ\mathbf{\Delta}_{\pi}(t),t\in[0,\infty)\}|\mathcal{I}]\) among all causal policies in terms of stochastic ordering. Theorem 1 is more general than [1, Theorem 1], as the latter was established for the special case of single-server systems without transmission errors. By considering a mixture over the different realizations of \(\mathcal{I}\), it can be readily deduced from Theorem 1 that **Corollary 1**.: Under the conditions of Theorem 1, it holds that for all \(p_{t}\in\mathcal{P}_{\text{sym}}\) and all \(\pi\in\Pi\) \[\{p_{t}\circ\mathbf{\Delta}_{\text{P-MAF-LGFS}}(t),t\in[0, \infty)\}\leq_{\text{st}}\{p_{t}\circ\mathbf{\Delta}_{\pi}(t),t\in[0,\infty)\}, \tag{17}\] or equivalently, for all \(p_{t}\in\mathcal{P}_{\text{sym}}\) and all non-decreasing functional \(\phi\) \[\mathbb{E}\left[\phi(\{p_{t}\circ\mathbf{\Delta}_{\text{P-MAF- LGFS}}(t),t\in[0,\infty)\})\right]\] \[= \min_{\pi\in\Pi}\mathbb{E}\left[\phi(\{p_{t}\circ\mathbf{\Delta} _{\pi}(t),t\in[0,\infty)\})\right], \tag{18}\] provided that the expectations in (18) exist. Corollary 1 states that the P-MAF-LGFS policy minimizes the stochastic process \(\{p_{t}\circ\mathbf{\Delta}_{\pi}(t),t\in[0,\infty)\}\) in a stochastic ordering sense, outperforming all other causal policies. #### Iii-A1 Status Update Scheduling with Packet Replications As discussed in Section III-B, our study has been centered on a scenario where different servers are not allowed to simultaneously transmit packets from the same flow. In this context, we have demonstrated the age-optimality of the P-MAF-LGFS policy in Theorem 1. However, in situations where multiple servers can transmit packets from the same flow, and packet replication is permitted, it becomes possible to create multiple copies of the same packet and transmit these copies concurrently across multiple servers. The packet is considered delivered once any one of these copies is successfully delivered; at that point, the other copies are canceled to release the servers. 
If the packet service times follow an _i.i.d._ exponential distribution with a service rate of \(\mu\), the \(N\) servers can be equivalently viewed as a single, faster server with exponential service times and a higher service rate of \(N\mu\). Additionally, this fast server exhibits _i.i.d._ transmission errors with an error probability \(q\). Our study also addresses this scenario. **Definition 6**.: _Preemptive, Maximum Age First, Last Generated First Served policy with packet Replications (P-MAF-LGFS-R): In this policy, the last generated packet from the flow with the maximum age is served the first among all packets of all flows, with ties broken arbitrarily. This packet is replicated into \(N\) copies, which are transmitted concurrently over the \(N\) servers. The packet is considered delivered once any one of these \(N\) copies is successfully delivered; at that point, the other copies are canceled to release the servers._ By applying Theorem 1 to this particular scenario with a single, faster server, we derive the following corollary. **Corollary 2**.: Under the conditions of Theorem 1, if packet replication is allowed, then it holds that for all \(\mathcal{I}\), all \(p_{t}\in\mathcal{P}_{\text{sym}}\), and all \(\pi\in\Pi\) \[[\{p_{t}\circ\boldsymbol{\Delta}_{\text{P-MAF-LGFS-R}}(t),t\in[0,\infty)\}|\mathcal{I}]\] \[\leq_{\text{st}}[\{p_{t}\circ\boldsymbol{\Delta}_{\pi}(t),t\in[0,\infty)\}|\mathcal{I}], \tag{19}\] or equivalently, for all \(\mathcal{I}\), all \(p_{t}\in\mathcal{P}_{\text{sym}}\), and all non-decreasing functional \(\phi\) \[\mathbb{E}\left[\phi(\{p_{t}\circ\boldsymbol{\Delta}_{\text{P- MAF-LGFS-R}}(t),t\in[0,\infty)\})|\mathcal{I}\right]\] \[= \min_{\pi\in\Pi}\mathbb{E}\left[\phi(\{p_{t}\circ\boldsymbol{ \Delta}_{\pi}(t),t\in[0,\infty)\})|\mathcal{I}\right], \tag{20}\] provided that the expectations in (20) exist. ### _Multiple Flows, Multiple Servers, NBU Service Times_ Next, we consider a more general system setting with multiple servers and a class of New-Better-than-Used (NBU) transmission time distributions that include exponential distribution as a special case. **Definition 7**.: _New-Better-than-Used Distributions: Consider a non-negative random variable \(X\) with complementary cumulative distribution function (CCDF) \(\bar{F}(x)=\Pr[X>x]\). Then, \(X\) is said to be New-Better-than-Used (NBU) if for all \(t,\tau\geq 0\)_ \[\bar{F}(\tau+t)\leq\bar{F}(\tau)\bar{F}(t). \tag{21}\] Examples of NBU distributions include deterministic distribution, exponential distribution, shifted exponential distribution, geometric distribution, gamma distribution, and negative binomial distribution. In the scheduling literature, optimal scheduling results were successfully established for delay minimization in single-server queueing systems, e.g., [51, 52], but it can become inherently difficult in the multi-server cases. In particular, minimizing the average delay in deterministic scheduling problems with more than one servers is NP-hard [53]. Similarly, delay-optimal stochastic scheduling in multi-class, multi-server queueing systems is deemed to be quite difficult [54, 55, 56]. The key challenge in multi-class, multi-server scheduling is that one cannot combine the capacities of all the servers to jointly process the most important packet. Due to the same reason, age-optimal scheduling in multi-flow, multi-server systems is quite challenging. 
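As a quick worked check of Definition 7 (our own example, not taken from the paper): for an exponential transmission time with rate \(\mu\), the CCDF is \(\bar{F}(x)=e^{-\mu x}\), so \[\bar{F}(\tau+t)=e^{-\mu(\tau+t)}=e^{-\mu\tau}\,e^{-\mu t}=\bar{F}(\tau)\,\bar{F}(t),\] and (21) holds with equality. For a shifted exponential with shift \(c>0\), i.e. \(\bar{F}(x)=1\) for \(x<c\) and \(\bar{F}(x)=e^{-\mu(x-c)}\) for \(x\geq c\), a case-by-case check gives \(\bar{F}(\tau+t)\leq\bar{F}(\tau)\bar{F}(t)\) for all \(\tau,t\geq 0\), with strict inequality when \(\tau,t\geq c\); both distributions are therefore NBU.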
In the sequel, we consider a relaxed goal to seek for near age-optimal scheduling of multiple information flows, where our proposed scheduling policy is shown to be within a small additive gap from the optimum age performance. To establish near age optimality, we introduce another age metric named _age of served information_, denoted as \(\Xi_{n}(t)\), which is a lower bound for age of information \(\Delta_{n}(t)\): Let \(V_{n,i}\) be the time that the \(i\)-th packet of flow \(n\) starts its service by a server, i.e., the service starting time of the \(i\)-th packet of flow \(n\). It holds that \(S_{n,i}\leq A_{n,i}\leq V_{n,i}\leq D_{n,i}\), as illustrated in Fig. 2. _Age of served information_ for flow \(n\) is defined as \[\Xi_{n}(t)=t-\max_{i}\{S_{n,i}:V_{n,i}\leq t\}, \tag{22}\] which is the time difference between the current time \(t\) and the generation time of the freshest packet that has started service by time \(t\). Let \(\boldsymbol{\Xi}(t)=(\Xi_{1}(t),\ldots,\Xi_{N}(t))\) be the age of served information vector at time \(t\). Age of served information \(\Xi_{n}(t)\) reflects the staleness of the packets that has begun service, whereas \(\Delta_{n}(t)\) represents the staleness of the packets that has been successfully delivered to their destination. As depicted in Fig. 3, it is evident that \(\Xi_{n}(t)\leq\Delta_{n}(t)\). In non-preemptive policies, the discrepancy between \(\Xi_{n}(t)\) and \(\Delta_{n}(t)\) solely arises from the _i.i.d._ packet transmission times. Consequently, the age of served information \(\Xi_{n}(t)\) closely approximates the age \(\Delta_{n}(t)\). We propose a new flow selection discipline called Maximum Age of Served Information First (MASIF), in which the flow with the maximum Age of Served Information is served first, with ties broken arbitrarily. Using this discipline, we define another scheduling policy: **Definition 8**.: _Non-Preemptive, Maximum Age of Served Information first, Last Generated First Served (NP-MASIF-LGFS) policy: This is a non-preemptive, work-conserving scheduling policy for multi-server systems. It operates as follows:_ 1. _When the queue is not empty and there are idle servers, an idle server is assigned to process the most recently generated packet from the flow with the maximum age of served information, with ties broken arbitrarily._ 2. _After a packet from flow_ \(n\) _is assigned to an idle server, the server transitions into a busy state and will complete the transmission of the current packet from flow_ \(n\) _before serving any other packet. The age of served information_ \(\Xi_{n}(t)\) _of flow_ \(n\) _decreases. As a result, flow_ \(n\) _may no longer retain the maximum age of served information, allowing the remaining idle servers to be allocated to process other flows. The next idle server is assigned to process the most recently generated packet from the flow with the maximum age of served information, with ties broken arbitrarily. 3. This procedure continues until either all servers are busy or the queue becomes empty. Next, we will establish the near-age optimality of the NP-MASIF-LGFS policy. The following theorem shows that the age of served information obtained by the NP-MASIF-LGFS policy serves as a lower bound (in terms of stochastic ordering) for the age of all other non-preemptive and causal policies. 
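Computationally, \(\Xi_{n}(t)\) mirrors the age in (8) with service starting times in place of delivery times, and the MASIF rule simply picks the flow maximizing it; a minimal sketch (our own, with hypothetical names) is:

```python
def age_of_served_information(t, generations, service_starts):
    """Xi_n(t) = t - max{S_{n,i} : V_{n,i} <= t}, cf. Eq. (22); since V_{n,i} <= D_{n,i},
    this never exceeds the age Delta_n(t)."""
    started = [S for S, V in zip(generations, service_starts) if V <= t]
    return t - max(started)   # assumes some packet has started service by time t

def masif_pick(xi):
    """Maximum Age of Served Information First: serve the flow with the largest Xi."""
    return max(range(len(xi)), key=lambda n: xi[n])
```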
**Theorem 2**.: (Continuous-time, multiple flows, multiple servers, NBU transmission times with no errors) In continuous-time status updating systems, if (i) there is no transmission errors (i.e., \(q=0\)), (ii) the packet generation and arrival times are synchronized across the \(N\) flows, and (iii) the packet transmission times are NBU and _i.i.d._ across both servers and packets, then it holds that for all \(\mathcal{I}\), all \(p_{t}\in\mathcal{P}_{\text{sym}}\), and all \(\pi\in\Pi_{np}\)3 Footnote 3: Recall that \(\Pi_{np}\) is the set of non-preemptive and causal scheduling policies. \[[\{p_{t}\circ\boldsymbol{\Xi}_{\text{NP-MASIF-LGFS}}(t),t\in[0, \infty)\}|\mathcal{I}]\] \[\leq_{\text{st}}[\{p_{t}\circ\boldsymbol{\Delta}_{\pi}(t),t\in[0, \infty)\}|\mathcal{I}], \tag{23}\] or equivalently, for all \(\mathcal{I}\), all \(p_{t}\in\mathcal{P}_{\text{sym}}\), and all non-decreasing functional \(\phi\) \[\mathbb{E}\left[\phi(\{p_{t}\circ\boldsymbol{\Xi}_{\text{NP- MASIF-LGFS}}(t),t\in[0,\infty)\})|\mathcal{I}\right]\] \[\leq\min_{\pi\in\Pi_{np}}\mathbb{E}\left[\phi(\{p_{t}\circ \boldsymbol{\Delta}_{\pi}(t),t\in[0,\infty)\})|\mathcal{I}\right]\] \[\leq \tag{24}\] provided that the expectations in (24) exist. Proof idea.: In the NP-MASIF-LGFS policy, if a packet from flow \(n^{*}\) begins service, it implies that flow \(n^{*}\) possesses the _maximum_ age of served information before the service starts. If the packet generation and arrival times are synchronized across the flows, flow \(n^{*}\) also exhibits the _minimum_ age of served information after the service starts. The proof of Theorem 2 relies on this property and a sample-path argument that is developed for NBU service time distributions. See Appendix B for the details. Considering the close approximation between the age of served information \(\boldsymbol{\Xi}_{\text{NP-MASIF-LGFS}}(t)\) and the age of information \(\boldsymbol{\Delta}_{\text{NP-MASIF-LGFS}}(t)\) in (24), we can conclude that the NP-MASIF-LGFS policy is near age-optimal. Furthermore, in the case of the average age metric as defined in (9) (i.e., \(p_{t}=p_{\text{avg}}\) for all \(t\)), we can derive the following corollary: **Corollary 3**.: Under the conditions of Theorem 2, it holds that for all \(\mathcal{I}\) \[\min_{\pi\in\Pi_{np}}[\bar{\Delta}_{\pi}|\mathcal{I}]\!\leq\![\bar{\Delta}_{ \text{NP-MASIF-LGFS}}|\mathcal{I}]\!\leq\!\min_{\pi\in\Pi_{np}}[\bar{\Delta}_ {\pi}|\mathcal{I}]\!+\!\frac{1}{\mu}, \tag{25}\] where \[[\bar{\Delta}_{\pi}|\mathcal{I}]=\lim_{T\to\infty}\frac{1}{T}\mathbb{E}\left[ \int_{0}^{T}p_{\text{avg}}\circ\boldsymbol{\Delta}_{\pi}(t)dt\Bigg{|}\mathcal{ I}\right] \tag{26}\] is the expected time-average of the average age of the \(N\) flows, and \(1/\mu\) is the mean packet transmission time. Proof.: The proof of Corollary 3 is the same as that of Theorem 12 in [15] and hence is omitted here. By Corollary 3, the average age of the NP-MASIF-LGFS policy is within an additive gap from the optimum, where the gap \(1/\mu\) is invariant of the packet arrival and generation times \(\mathcal{I}\), the number of flows \(N\), and the number of servers \(M\). Similar to Corollary 1, by taking a mixture over the different realizations of \(\mathcal{I}\), one can remove the condition \(\mathcal{I}\) from (23), (24), (25), and (26). The sampling-path argument utilized in the proof of Theorem 2 can effectively handle multiple flows, multiple servers, and _i.i.d._ NBU transmission time distributions. 
This is achieved by establishing a coupling between the start time of packet transmissions in the NP-MASIF-LGFS policy and the completion time of packet transmissions in any other work-conserving policy from \(\Pi_{np}\). However, extending this sampling-path argument to encompass the scenario of _i.i.d._ transmission errors poses additional challenges that are currently difficult to overcome. ## V Multi-flow Status Update Scheduling: The Discrete-time Case In this section, we investigate age-optimal scheduling in discrete-time status updating systems, where the variables \(S_{n,i},A_{n,i},D_{n,i},t,U_{n}(t),\Delta_{n}(t)\) are all multiples of the period \(T_{s}\), the transmission time of each packet is fixed as \(T_{s}\), and the packet submissions are subject to _i.i.d._ errors with an error probability \(q\in[0,1)\). Service preemption is not allowed in discrete-time systems. For multiple-server, discrete-time systems, a scheduling policy is defined by combining the MAF and LGFS service disciplines as follows: **Definition 9**.: _Discrete Time, Maximum Age First, Last Generated First Served (DT-MAF-LGFS) policy:_ This is a work-conserving scheduling policy for multiple-server, discrete-time systems with synchronized packet generations and arrivals. It operates as follows: 1. When time \(t\) is a multiple of period \(T_{s}\), if the queue is not empty, an idle server is assigned to process the Fig. 3: The age of served information \(\Xi_{n}(t)\) as a lower bound of age \(\Delta_{n}(t)\). most recently generated packet from the flow with the maximum age, with ties broken arbitrarily. 2. The next idle server is assigned to process the most recently generated packet from the flow with the second maximum age, with ties broken arbitrarily. 3. This process continues until either (i) the most recently generated packet of each flow is under service or has been delivered, or (ii) all servers are busy. 4. If the most recently generated packet of each flow is under service or has been delivered, and there are additional idle servers, then these servers can be arbitrarily assigned to send the remaining packets in the queue, until the queue becomes empty. One can observe that the DT-MAF-LGFS policy for discrete-time systems is similar to the P-MAF-LGFS policy designed for continuous-time systems. The age optimality of the DT-MAF-LGFS policy is obtained in the following theorem. 
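Definition 9 together with the error model induces a simple per-slot age recursion, sketched below for illustration (our own code with hypothetical names; the set `served_flows` is assumed to have been chosen by the maximum-age-first rule of Steps 1-3, with at most one packet per flow and at most \(M\) flows served in a slot):

```python
import random

def dt_slot_update(ages, chosen_gen_time, t, Ts, q, served_flows):
    """Advance the discrete-time age vector by one slot of length Ts.
    A transmission started at time t is delivered at t + Ts with probability 1 - q;
    `chosen_gen_time[n]` is the generation time of the packet sent for flow n."""
    new_ages = []
    for n, age in enumerate(ages):
        if n in served_flows and random.random() > q:              # successful delivery
            # age becomes t + Ts - S if the delivered packet is fresher than
            # what the destination already has
            new_ages.append(min(age + Ts, t + Ts - chosen_gen_time[n]))
        else:
            new_ages.append(age + Ts)                              # no delivery: age grows by Ts
    return new_ages
```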
**Theorem 3**.: (Discrete-time, multiple flows, multiple servers, constant transmission times with transmission errors) In discrete-time status updating systems, if (i) the transmission errors are _i.i.d._ with an error probability \(q\in[0,1)\), (ii) the packet generation and arrival times are synchronized across the \(N\) flows, and (iii) the packet transmission times are fixed as \(T_{s}\), then it holds that for all \(\mathcal{I}\), all \(p_{t}\in\mathcal{P}_{\text{sym}}\), and all \(\pi\in\Pi_{np}\) \[[\{p_{t}\circ\boldsymbol{\Delta}_{\text{DT-MAF-LGFS}}(t),t=0,T_{s},2T_{s},\ldots\}|\mathcal{I}]\] \[\leq_{\text{st}}[\{p_{t}\circ\boldsymbol{\Delta}_{\pi}(t),t=0,T_{ s},2T_{s},\ldots\}|\mathcal{I}], \tag{27}\] or equivalently, for all \(\mathcal{I}\), all \(p_{t}\in\mathcal{P}_{\text{sym}}\), and all non-decreasing functional \(\phi\) \[\mathbb{E}\left[\phi(\{p_{t}\circ\boldsymbol{\Delta}_{\text{DT- MAF-LGFS}}(t),t=0,T_{s},2T_{s},\ldots\})|\mathcal{I}\right]\] \[=\min_{\pi\in\Pi_{np}}\mathbb{E}\left[\phi(\{p_{t}\circ \boldsymbol{\Delta}_{\pi}(t),t=0,T_{s},2T_{s},\ldots\})|\mathcal{I}\right], \tag{28}\] provided that the expectations in (28) exist. Proof.: See Appendix C. According to Theorem 3, the DT-MAF-LGFS policy minimizes the stochastic process \([\{p_{t}\circ\boldsymbol{\Delta}_{\pi}(t),t=0,T_{s},2T_{s},\ldots\})|\mathcal{I}]\) in terms of stochastic ordering within discrete-time status updating systems. This optimality result holds for any age penalty function in \(\mathcal{P}_{\text{sym}}\), any number of flows \(N\), any number of servers \(M\), any synchronized packet generation and arrival times in \(\mathcal{I}\), and regardless the existence of _i.i.d._ transmission errors. Theorem 3 generalizes [22, Theorem 1], by allowing for multiple servers and a broader range of age penalty functions. Similar to Corollary 1, one can remove the condition \(\mathcal{I}\) from (27) and (28). ## VI Numerical Results In this section, we evaluate the age performance of several multi-flow scheduling policies. These scheduling policies are defined by combining the flow selection disciplines {MAF, MASIF, RAND} and the packet selection disciplines {FCFS, LGFS}, where RAND represents randomly choosing a flow among the flows with un-served packets. The packet generation times \(S_{i}\) follow a Poisson process with rate \(\lambda\), and the time difference \((A_{i}-S_{i})\) between packet generation and arrival is equal to either 0 or \(4/\lambda\) with equal probability. The mean transmission time of each server is set as \(\mathbb{E}[X]=1/\mu=1\). Hence, the traffic intensity is \(\rho=\lambda N/M\), where \(N\) is the number of flows and \(M\) is the number of servers. Figure 4 illustrates the expected time-average of the maximum age \(p_{\text{max}}(\boldsymbol{\Delta}(t))\) of 3 flows in a system with a single server and _i.i.d._ exponential transmission times. One can see that the P-MAF-LGFS policy has the best age performance and its age is quite small even for \(\rho>1\), in which case the queue is actually unstable. On the other hand, both the RAND and FCFS disciplines have much higher age. Note that there is no need for preemptions under the FCFS discipline. Figure 5 plots the expected time-average of the average age \(p_{\text{avg}}(\boldsymbol{\Delta}(t))\) of 50 flows in a system with 3 servers and _i.i.d._ NBU transmission times. 
In particular, the transmission time \(X\) follows the following shifted exponential distribution: \[\Pr[X>x]=\left\{\begin{array}{ll}1,&\text{if }x<\frac{1}{3};\\ \exp[-\frac{3}{2}(x-\frac{1}{3})],&\text{if }x\geq\frac{1}{3}.\end{array}\right. \tag{29}\] Fig. 4: Expected time-average of the maximum age of 3 flows in a system with a single server and _i.i.d._ exponential transmission times. Fig. 5: Expected time-average of the average age of 50 flows in a system with 3 servers and _i.i.d._ NBU service times. One can observe that the NP-MASIF-LGFS policy is better than the other policies, and is quite close to the age lower bound where the gap from the lower bound is no more than the mean transmission time \(\mathbb{E}[X]=1\). One interesting observation is that the NP-MASIF-LGFS policy is better than the NP-MAF-LGFS policy for NBU transmission times. The reason behind this is as follows: When multiple servers are idle, the NP-MAF-LGFS policy will assign these servers to process multiple packets from the flow with the maximum age (say flow \(n\)). This reduces the age of flow \(n\), but at a cost of postponing the service of the flows with the second and third maximum ages. On the other hand, in the NP-MASIF-LGFS policy, once a packet from the flow with the maximum age of served information (say flow \(m\)) is assigned to a server, the age of served information of flow \(m\) drops greatly, and the next server will be assigned to process the flow with the second maximum age of served information. As shown in [57, 58], using multiple parallel servers to process different flows is beneficial for NBU service times. ## VII Conclusion We have proposed causal scheduling policies and developed a unifying sample-path approach to prove that these scheduling policies are (near) optimal for minimizing age of information in continuous-time and discrete-time status updating systems with multiple flows, multiple servers, and transmission errors. ## Acknowledgement We appreciate Elif Uysal's support throughout this endeavor. Additionally, we thank the anonymous reviewers for their valuable comments.
2310.20193
FedRec+: Enhancing Privacy and Addressing Heterogeneity in Federated Recommendation Systems
Preserving privacy and reducing communication costs for edge users pose significant challenges in recommendation systems. Although federated learning has proven effective in protecting privacy by avoiding data exchange between clients and servers, it has been shown that the server can infer user ratings based on updated non-zero gradients obtained from two consecutive rounds of user-uploaded gradients. Moreover, federated recommendation systems (FRS) face the challenge of heterogeneity, leading to decreased recommendation performance. In this paper, we propose FedRec+, an ensemble framework for FRS that enhances privacy while addressing the heterogeneity challenge. FedRec+ employs optimal subset selection based on feature similarity to generate near-optimal virtual ratings for pseudo items, utilizing only the user's local information. This approach reduces noise without incurring additional communication costs. Furthermore, we utilize the Wasserstein distance to estimate the heterogeneity and contribution of each client, and derive optimal aggregation weights by solving a defined optimization problem. Experimental results demonstrate the state-of-the-art performance of FedRec+ across various reference datasets.
Lin Wang, Zhichao Wang, Xi Leng, Xiaoying Tang
2023-10-31T05:36:53Z
http://arxiv.org/abs/2310.20193v1
# FedRec+: Enhancing Privacy and Addressing Heterogeneity in Federated Recommendation Systems ###### Abstract Preserving privacy and reducing communication costs for edge users pose significant challenges in recommendation systems. Although federated learning has proven effective in protecting privacy by avoiding data exchange between clients and servers, it has been shown that the server can infer user ratings based on updated non-zero gradients obtained from two consecutive rounds of user-uploaded gradients. Moreover, federated recommendation systems (FRS) face the challenge of heterogeneity, leading to decreased recommendation performance. In this paper, we propose FedRec+, an ensemble framework for FRS that enhances privacy while addressing the heterogeneity challenge. FedRec+ employs optimal subset selection based on feature similarity to generate near-optimal virtual ratings for pseudo items, utilizing only the user's local information. This approach reduces noise without incurring additional communication costs. Furthermore, we utilize the Wasserstein distance to estimate the heterogeneity and contribution of each client, and derive optimal aggregation weights by solving a defined optimization problem. Experimental results demonstrate the state-of-the-art performance of FedRec+ across various reference datasets. ## I Introduction Recommender systems have experienced significant advancements in recent years, enabling personalized recommendations for users [28]. However, traditional centralized recommender systems raise concerns about privacy leakage and data integration limitations, as they rely on a central server to store user data [21, 17]. On the other hand, federated learning (FL) is a distributed learning scheme that ensures privacy preservation by allowing participants to collaboratively train a machine learning model without sharing data [14]. The combination of federated learning and recommendation systems gives rise to federated recommendation systems (FRS), offering a promising solution for privacy-preserving recommendations [22]. FRS addresses privacy and data security concerns by decentralizing the recommendation process. User data remains localized on individual devices or servers, and models are trained locally without sharing data. This decentralized approach enhances user privacy and fosters trust. Various approaches, such as federated matrix factorization [1, 12], federated collaborative filtering [4, 5], and federated deep learning [15], distribute the training process across each local parity and aggregate gradients on a central server. However, privacy preservation remains a major challenge in FRS. Although data decentralization reduces privacy risks compared to conventional data-center training, transmitted gradients between parties can still leak user privacy [26]. To address this, various privacy protection mechanisms, including pseudo items [10], homomorphic encryption [2, 11], secret sharing [11], and differential privacy [4, 26], have been incorporated into FRS. Pseudo-item method, in particular, has gained attention due to its low computation and communication costs. By uploading gradients of both interacted and randomly sampled unrated items, Pseudo items prevent the server from inferring user interactions, as shown in Figure 2. However, existing pseudo-item methods suffer from limitations such as introducing significant noise or imposing high communication burdens [10, 9]. 
Another challenge in FRS is the heterogeneity across local datasets and models, which complicates the aggregation of local recommendations into a coherent global recommendation [6]. Therefore, in this work, we are primarily interested in addressing two challenges in FRS: _(1) Design an effective pseudo items method that is low noise as well as low communication cost. (2) Design an aggregation algorithm to address the heterogeneity challenge in FRS._ To effectively address these challenges, we propose an innovative framework called **FedRec+**, which includes an improved pseudo items method that uses feature similarity to select a subset for virtual rate assignment and an optimal aggregation strategy based on the Wasserstein Distance, as illustrated in Figure 1. FedRec+ effectively preserves client privacy with low computation and communication costs and alleviates the heterogeneity problem in FRS. FedRec+ guarantees convergence with a controllable noise term. The contributions of this paper are summarized as follows: * We propose FedRec+, a privacy-enhancing FRS algorithm with explicit feedback. FedRec+ utilizes feature similarity to generate low-noise pseudo items and incorporates an optimal aggregation strategy derived from the Wasserstein distance between the global and local models to address the statistical heterogeneity problem. * We provide a convergence analysis of FedRec+, demonstrating a convergence rate of \(\mathcal{O}(\frac{1}{\sqrt{T}}+\frac{1}{T})\). This analysis explicitly highlights the impact of the pseudo-item method and the Wasserstein Distance based aggregation method on the convergence results. * We evaluate FedRec+'s performance using public datasets and find that it excels in recommendation performance. Additionally, our ablation study explores the impact of the number of pseudo items. ### _Related Work_ Several works have explored the use of federated learning in the context of recommendation systems. [1] propose a federated collaborative filtering method for recommendation systems. Other works that follow this line of research include [4; 15; 5]. Additionally, deep learning-based FedRS models have been proposed to leverage user data while ensuring privacy compliance [26]. FRS with Pseudo ItemsTo address privacy concerns in FRS, the use of pseudo items has been proposed. [10] Introduce the concept of pseudo items to protect users' interacted information. However, the vanilla approach of randomly selecting unrated items as pseudo items introduces significant noise. [9] Divide clients into different groups, where one group records the gradients of unrated items uploaded by another group, effectively reducing the noise caused by unrated items. However, this approach requires additional communication and storage costs between users, which can lead to privacy leakage issues [13]. [11] Combine secret sharing and pseudo items mechanisms to provide stronger privacy guarantees, while [26] combine pseudo items and Local Differential Privacy (LDP) mechanisms to protect user interaction behaviors and ratings in FRS. However, none of these methods effectively address the challenge of large noise from pseudo items while maintaining a low communication cost. In this paper, we propose FedRec+ that leverages each client's own data information to select optimal unrated items, minimizing noise without requiring communication between users. 
FRS with AggregationWhile aggregation algorithms for federated learning (FL) have been extensively studied for various purposes such as convergence acceleration [24; 3], fairness enhancement [25], and robustness improvement [19], limited research has been conducted on aggregation algorithms specifically tailored for FRS. [18] Propose FedFast, a federated recommendation model with improved aggregation and update policies. However, there has been no dedicated work addressing the heterogeneity problem in FRS from an aggregation perspective. In this paper, we propose an aggregation algorithm for FRS that utilizes Wasserstein Distance to constrain the objective, effectively tackling the heterogeneity challenge. ## II System Model and Algorithm In this section, we first state the problem setup (Sec II-A), and after explaining the FedRec+ algorithm 1 (Sec II-B and Sec II-C), we present our theoretical result along with the underlying assumptions (Sec III). **Notations:** Following the commonly used notations in probabilistic matrix factorization [7], the rating of a user \(u\) to an item \(i\) is calculated as the inner product of their latent feature vectors, i.e., \(\hat{r}_{ui}=\mathbf{U}_{u}\mathbf{V}_{i}^{\top}\), where \(\mathbf{U}_{u}\in\mathbb{R}^{1\times d}\) and \(\mathbf{V}_{i}\in\mathbb{R}^{1\times d}\) are the latent feature vectors of user \(u\) and item \(i\), respectively. The ground-truth rating of item \(i\) by user \(u\) is denoted as \(r_{ui}\). The sets of rated and unrated items for user \(u\) are represented as \(\mathcal{I}_{u}\) and \(\mathcal{I}_{u}^{\prime}\), respectively. The local and global learning rates are denoted as \(\eta_{L}\) and \(\eta\), respectively. \(b\in[0,B]\) and \(k\in[0,K]\) are local batch and local epoch respectively. Boldface characters are used to represent vectors. ### _Problem Setup_ Before presenting our approach, we provide an overview of the federated matrix factorization (FedMF) algorithm. In a recommender system, the goal is to fill in missing values of a rating matrix \(\boldsymbol{R}\in\mathbb{R}^{n\times m}\). Matrix factorization (MF) is a widely used approach that decomposes the matrix into two low-rank matrices. The rating \(r_{ui}\) that user \(u\) gives to item \(i\) can be approximated as: \[\hat{r}_{ui}=\boldsymbol{U}_{u}\boldsymbol{V}_{i}^{\top}, \tag{1}\] where \(\boldsymbol{V}_{i}\) represents the latent factors of item \(i\), and \(\boldsymbol{U}_{u}\) represents the latent factors of user \(u\). The latent factors are learned by minimizing a loss function that incorporates the known ratings and regularization terms: \[\min_{\boldsymbol{V}_{i},\boldsymbol{U}_{u}}\frac{1}{2}\sum_{(u,i)\in \mathcal{I}_{u,i}}\left(r_{ui}-\boldsymbol{U}_{u}\boldsymbol{V}_{i}^{\top} \right)^{2}+\lambda\left(\left\|\boldsymbol{V}_{i}\right\|_{2}^{2}+\left\| \boldsymbol{U}_{u}\right\|_{2}^{2}\right)\,, \tag{2}\] Fig. 1: **Framework of FedRec+.** FedRec+ consists of a privacy-preserving component and a dynamic aggregation component. Specifically, (1) it incorporates an enhanced pseudo-items method to safeguard the privacy of interacted items, and (2) it employs an optimal aggregation strategy to address the heterogeneity challenge. Fig. 2: **Illustration of pseudo-item method.** To maintain privacy during rating data gradient uploads, the gradients of rated and unrated items are mixed to prevent privacy leaks, safeguarding sensitive information and ensuring privacy protection. 
where \(\mathcal{I}_{u,i}\) represents the set of user-item pairs with known ratings, and \(\lambda\) is the regularization coefficient. Stochastic gradient descent is utilized to update each parameter: \[\boldsymbol{V}_{i}\leftarrow\boldsymbol{V}_{i}-\eta_{L}\cdot\left(\lambda \cdot\boldsymbol{V}_{i}-e_{ui}\cdot\boldsymbol{U}_{u}\right)\,, \tag{3}\] \[\boldsymbol{U}_{u}\leftarrow\boldsymbol{U}_{u}-\eta_{L}\cdot\left(\lambda \cdot\boldsymbol{U}_{u}-e_{ui}\cdot\boldsymbol{V}_{i}\right)\,, \tag{4}\] where \(e_{ui}=r_{ui}-\boldsymbol{U}_{u}\boldsymbol{V}_{i}^{\top}\) is the prediction error, and \(\eta_{L}\) is the local learning rate. The vanilla FedMF algorithm [1] extends MF to a federated setting. In FedMF, the item latent factors \(\left\{\boldsymbol{V}_{i}\right\}_{i\in\mathcal{I}}\) are stored on the central server, while each user's latent factors \(\boldsymbol{U}_{u}\) are kept on the local party. The training process consists of the following steps, which are repeated until the convergence of model parameters: (1) The local party downloads item \(i\)'s latent factors \(\boldsymbol{V}_{i}\) from the server. (2) The local party updates the user's latent factors \(\boldsymbol{U}_{u}\) using its private local data \(\boldsymbol{r}_{u}\). (3) The local party computes the gradients of each item's latent factors \(\boldsymbol{g}_{ui}=\lambda\cdot\boldsymbol{V}_{i}-e_{ui}\cdot\boldsymbol{U} _{u}\) with \(\boldsymbol{r}_{u}\) and the updated \(\boldsymbol{U}_{u}\). (4) The local party sends \(\boldsymbol{g}_{ui}\) to the server. (5) The server aggregates the gradients \(\sum_{u\in\mathcal{U}}\boldsymbol{g}_{ui}\) and updates \(\boldsymbol{V}_{i}\). However, the vanilla FedMF algorithm suffers from privacy leakage due to the transmitted gradients. The server continuously receives the gradients of the item \(i\)'s latent vector from user \(u\) at step \(t-1\) and step \(t\): \[\boldsymbol{g}_{u,i}^{t-1}=\lambda\cdot\boldsymbol{V}_{i}^{t-1}-e_{ui}^{t-1} \cdot\boldsymbol{U}_{u}^{t-1}\,, \tag{5}\] \[\boldsymbol{g}_{u,i}^{t}=\lambda\cdot\boldsymbol{V}_{i}^{t}-e_{ui}^{t}\cdot \boldsymbol{U}_{u}^{t}\,, \tag{6}\] where \(\boldsymbol{V}_{i}^{t-1}\) and \(\boldsymbol{V}_{i}^{t}\) represent the item \(i\)'s latent factors at step \(t-1\) and step \(t\) respectively, and \(\boldsymbol{U}_{u}^{t-1}\) and \(\boldsymbol{U}_{u}^{t}\) represent the user \(u\)'s latent factors at step \(t-1\) and step \(t\) respectively. The server also knows the update rule for the user's latent factors: \[\boldsymbol{U}_{u}^{t}=\boldsymbol{U}_{u}^{t-1}+\gamma\cdot\sum_{i\in \mathcal{I}_{u}}\left(\lambda\cdot\boldsymbol{U}_{u}^{t-1}-e_{ui}^{t}\cdot \boldsymbol{V}_{i}^{t}\right)\,, \tag{7}\] where \(\mathcal{I}_{u}\) represents the set of items that user \(u\) has rated. Combining these equations, the server can solve for the unknown variables, revealing private raw ratings of each user [8]. To address the gradient leakage problem of vanilla FedMF, several secure FedMF algorithms have been proposed. One such algorithm is FedRec [10], which introduces a hybrid filling (HF) strategy to randomly sample unrated items and mix them with rated items. The stochastic gradient descent of FedRec is as follows: \[\nabla V^{\mathrm{HF}}(u,i)=\left\{\begin{array}{l}\left(U_{u}\cdot V_{i} ^{T}-r_{ui}\right)U_{u}+\lambda V_{i},y_{ui}=1,\\ \left(U_{u}\cdot V_{i}^{T}-r_{ui}^{\prime}\right)U_{u}+\lambda V_{i},y_{ui}=0.\end{array}\right. 
\tag{8}\] where \(r_{ui}\) and \(r_{ui}^{\prime}\) are the true observed rating and the virtual rating of user \(u\) to item \(i\), respectively. While FedRec ensures privacy protection in rating prediction, the random sampling of items in the hybrid filling strategy introduces noise to the recommendation model, leading to potential performance impacts. This serves as the motivation to develop a lossless version of FedRec, which is crucial for practical deployment in real-world applications. ### _Feature similarity for pseudo items_ While FedRec ensures privacy protection in rating prediction, the hybrid filling strategy, which involves randomly sampling items, introduces noise that impacts performance. To address this, we aim to design a low-noise scheme for assigning rates to unrated items. Inspired by feature selection techniques utilizing feature similarity [16], we aim to select items with characteristics most similar to the rated items for assigning virtual rates. Feature similarity enables the exploration of hidden relationships in the feature space among recommended items [20]. For instance, assuming that some unrated items in the dataset share similar features with rated items having a specific score, the virtual scores of these similar unrated items would be close to that specific score. To reduce noise while maintaining privacy protection, we selectively choose pseudo items that closely align with a user's existing ratings for hybrid filling. To learn user and item features, we employ an encoder. For instance, let \(E_{r}=Encoder(V_{i}^{r})\) represent the feature of a rated item and \(E_{un}=Encoder(V_{i}^{un})\) represent the feature of an unrated item. We calculate the cosine similarity between these features. Considering \(E_{r}=[x_{1},x_{2},\ldots,x_{n}]\) and \(E_{un}=[y_{1},y_{2},\ldots,y_{n}]\), the cosine similarity \(\theta\) measures the angle between the two vectors and is defined as follows: \[\text{Sim}(E_{r},E_{un})=\cos\theta=\frac{\sum_{i=1}^{n}(x_{i}\cdot y_{i})}{ \sqrt{\sum_{i=1}^{m}x_{i}^{2}\cdot\sum_{i=1}^{n}y_{i}^{2}}}. \tag{9}\] By selecting the top-k unrated items with the most similar features to the scored items, we obtain low-noise pseudo items. These virtual rates, based on latent relationships in the item feature space, introduce less noise compared to randomly assigned scores or randomly averaged virtual scores. ### _Wasserstein Distance for Aggregation_ In this section, we present the derivation of aggregation weights based on Wasserstein distance to address the challenge of statistical heterogeneity in FRS, as depicted in Figure 3. Wasserstein distance [23] is a metric on probability distributions inspired by the problem of optimal transport. It is particularly suitable for measuring high-dimensional distributions, even in the absence of overlap. It quantifies the dissimilarity between local models and the global model. Wasserstein distance of two distribution \(\boldsymbol{\mu}\) and \(\boldsymbol{\nu}\) is defined as: \[\boldsymbol{W}_{p}(\boldsymbol{\mu},\boldsymbol{\nu})=\inf_{\boldsymbol{\gamma} \in\boldsymbol{\Gamma}(\boldsymbol{\mu},\boldsymbol{\nu})}\mathbb{E}_{( \boldsymbol{x},\boldsymbol{y})\sim\gamma}\left[\|\boldsymbol{x}-\boldsymbol{y }\|_{p}\right]\,, \tag{10}\] which generally lacks a closed-form solution. 
However, if we consider the L2-norm as the geometric metric and simplify the problem to a Gaussian distribution, an analytic solution for the distance can be obtained: \[d^{2}=\left\|\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}\right\|_{2}^{2}+\mathrm{tr }\left(\left(\boldsymbol{\Sigma}_{1}^{\frac{1}{2}}-\boldsymbol{\Sigma}_{2}^{ \frac{1}{2}}\right)^{2}\right)\,. \tag{11}\] Here, \(\boldsymbol{\mu}_{1}\), \(\boldsymbol{\Sigma}_{1}\) represent the mean and variance of the first distribution, while \(\boldsymbol{\mu}_{2}\), \(\boldsymbol{\Sigma}_{2}\) represent the mean and variance of the second distribution. Assuming that each client's aggregation weight is denoted by \(p_{u}\), the server aggregates local models \(\boldsymbol{w}_{t+1}^{1},\cdots,\boldsymbol{w}_{t+1}^{|U|}\) to compute the global model using the following equation: \[\overline{\boldsymbol{w}}=\sum_{u=1}^{m}p_{u}\boldsymbol{w}_{t+1}^{u}\,. \tag{12}\] where \(|U|=m\) represents the number of users in FRS. The distance between the aggregated model and the true global model can be formalized as: \[D=\boldsymbol{W}_{2}\left(p_{\overline{\boldsymbol{w}}}(\boldsymbol{w}),p_{ \boldsymbol{w}_{g}}(\boldsymbol{w})\right)\,, \tag{13}\] where \(\boldsymbol{w}_{g}\) represents the optimal model parameters based on the global distribution, i.e., the distribution of data gathered from all participants. We derive an upper bound for equation (13): \[\begin{split}& D=\boldsymbol{W}_{2}\left(p_{\overline{ \boldsymbol{w}}}(\boldsymbol{w}),p_{\boldsymbol{w}_{g}}(\boldsymbol{w}) \right)\\ &=\inf_{\boldsymbol{\gamma}\in\boldsymbol{\Gamma}(\boldsymbol{ p}_{\boldsymbol{w}_{g}},\boldsymbol{w}_{g})}\mathbb{E}_{(\boldsymbol{x}, \boldsymbol{y})\sim\gamma}\left[\|\boldsymbol{x}-\boldsymbol{y}\|_{2}\right] \\ &\leq m\sum_{u=1}^{m}p_{u}^{2}\inf_{\gamma_{u}\in\boldsymbol{ \Gamma}\left(p_{\boldsymbol{w}_{g}^{*}},p_{\boldsymbol{w}_{g}}\right)}\mathbb{ E}_{(\boldsymbol{x}_{u},\boldsymbol{y}_{u})\sim\gamma_{u}}\left[\|\boldsymbol{x}_{u}- \boldsymbol{y}_{u}\|_{2}\right]\,.\end{split} \tag{14}\] In the above equation, we split \(\|\boldsymbol{x}-\boldsymbol{y}\|_{2}\) as \(\|\sum_{u=1}^{m}p_{k}\left(\boldsymbol{x}_{k}-\boldsymbol{y}_{k}\right)\|_{2}\), where each pair \((\boldsymbol{x}_{k},\boldsymbol{y}_{k})\) is supported on \(\boldsymbol{\Gamma}\left(p_{\boldsymbol{w}_{K}},p_{\boldsymbol{w}_{g}}\right)\). The inequality relies on the Cauchy-Schwarz inequality and the independence of \(\boldsymbol{w}_{K}^{u}\). By bounding the distance, we establish an upper bound for the gap between the aggregated model and the true global model. Consequently, minimizing the above upper bound effectively approximates the objective of minimizing equation (13). **Lemma II.1**.: _Let \(\xi_{u,k}\) be a sample from a local dataset uniformly at random. For a sufficiently large batch size \(B\), the finite-dimensional vector \(g=g_{1},g_{2},\cdots,g_{K}\) converges to a joint distribution approximately according to the Central Limit Theorem, where \(g_{k}=\frac{1}{B}\sum_{b=1}^{B}\nabla_{w}F(w_{k},\xi_{k,b})\) for \(k\in\{1,2,\cdots,K\}\). 
This implies that, with mini-batch stochastic gradient descent, the sum of all local updates converges to a Gaussian distribution._ Proof.: For any constant of the local epoch \(K\), we can rewrite gradient vector \(\boldsymbol{g}\) as \[\boldsymbol{g}=\frac{1}{B}\sum_{b=1}^{B}\left(\nabla_{\boldsymbol{w}}F\left( \boldsymbol{w}_{1},\boldsymbol{\xi}_{1,b}\right),\cdots,\nabla_{\boldsymbol{w} }F\left(\boldsymbol{w}_{K},\boldsymbol{\xi}_{K,b}\right)\right)\,, \tag{15}\] let \(\tilde{\boldsymbol{g}}_{b}=\left(\nabla_{\boldsymbol{w}}F\left(\boldsymbol{w} _{1},\boldsymbol{\xi}_{1,b}\right),\cdots,\nabla_{\boldsymbol{w}}F\left( \boldsymbol{w}_{K},\boldsymbol{\xi}_{K,b}\right)\right)\), then we have: \[\boldsymbol{g}=\frac{1}{B}\sum_{k=1}^{B}\tilde{\boldsymbol{g}}_{b}\,. \tag{16}\] As long as the gradient norm is upper bounded and \(K\) is finite, \(\tilde{\boldsymbol{g}}_{b}\) follows some complex distribution with bounded covariance matrix. Since \(\boldsymbol{\xi}_{k,b}\) is sampled independently from the same distribution, \(\boldsymbol{g}\) is the mean vector of \(\tilde{\boldsymbol{g}}_{1},\cdots,\tilde{\boldsymbol{g}}_{B}\), which are independent and identically distributed (i.i.d.) random vectors. Therefore, according to the Central Limit Theorem, \(\boldsymbol{g}\) converges to \(\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\) in distribution. Lemma II.1 implies that, with mini-batch stochastic gradient descent, the sum of all local updates converges to a Gaussian distribution, i.e.: \[\overline{\boldsymbol{g}}=\boldsymbol{w}_{K}-\boldsymbol{w}_{1}=\sum_{k=1}^{K} \boldsymbol{g}_{k}=\boldsymbol{1}^{T}\boldsymbol{g}\,, \tag{17}\] where \(\overline{\boldsymbol{g}}\) is a linear transformation of a joint Gaussian vector, thus it conforms to a Gaussian distribution. As the Gaussian distribution is determined by the mean vector and the covariance matrix, our next goal is to estimate these variables. Fig. 3: **Illustration of aggregation weights under heterogeneous data. The aggregation weight is determined inversely proportional to the Wasserstein distance. As the distributions of the local model and global model become closer, the aggregation weight increases accordingly.** Based on (17), we know that the total gradient distribution is approximated by the sum of independent random vectors, i.e. \(\overline{\mathbf{g}}=\mathbf{w}_{K}-\mathbf{w}_{1}=\sum_{k=1}^{K}\eta\mathbf{g}_{k}=\eta_{L} \mathbf{1}^{T}\mathbf{g}\), where \(\eta_{L}\) is the local learning rate. Therefore, the corresponding parameters can be estimated as \(\mathbf{\mu}=\eta_{L}\sum_{k=1}^{K}\mathbb{E}\left[\mathbf{g}_{k}\right]\in K\eta_{L} \mathbb{E}\left[\mathbf{g}_{1}\right]\) and \(\mathbf{\Sigma}=\eta_{L}^{2}\sum_{k=1}^{K}\mathbf{\Sigma}_{k}\doteq K\eta_{L}^{2}\mathbf{ \Sigma}_{1}\). Here, index 1 represents a random user, and \(E[\mathbf{g}_{1}]\) and \(\mathbf{\Sigma}_{1}\) represent the average gradients and average variance of client 1. In particular, based on the relationship between the covariance matrix, correlation matrix, and the mean vector, we can obtain \(\Sigma_{k}=\mathbb{E}[g_{k}g_{k}^{T}]-\mathbb{E}[g_{k}]\mathbb{E}[g_{k}^{T}]\), where \(g_{k}=\frac{1}{B}\sum_{b=1}^{B}\nabla F(x,\xi_{k,b})\). Furthermore, implied by Lemma II.1, the global gradient distribution converges to \(\mathcal{N}(\mathbf{\mu}_{g},\frac{\mathbf{\Sigma}_{g}}{B})\). 
Therefore, to minimize the distance between the aggregated model and the global model, we can formulate an optimization problem as follows: \[\min_{p_{k}} D=\sum_{u=1}^{m}p_{u}^{2}\left(\left\|\eta\mathbf{\mu}_{g}-\eta_{L}K \mathbf{\mu}_{u}\right\|^{2}+\mathrm{tr}\left(\mathbf{M}^{2}\right)\right)\] \[\text{s.t. }\mathbf{M}=\left(\frac{\eta^{2}\mathbf{\Sigma}_{g}}{B} \right)^{\frac{1}{2}}-\left(\frac{K^{2}\eta_{L}^{2}\mathbf{\Sigma}_{u}}{B}\right) ^{\frac{1}{2}},\sum_{u=1}^{m}p_{u}=1,p_{u}\geq 0\,. \tag{18}\] **Proposition II.2**.: _The optimal server aggregation weights that minimize the distribution distance between the aggregated model and the ideal global model, using Wasserstein Distance, are given by:_ \[p_{u}^{*}=\frac{\frac{1}{\left\|K\eta_{L}\mathbf{\mu}_{u}-\eta\mathbf{\mu}_{g}\right\| ^{2}+\mathrm{tr}\left(\mathbf{M}^{2}\right)}}{\sum_{u=1}^{m}\frac{1}{\left\|K\eta _{L}\mathbf{\mu}_{u}-\eta\mathbf{\mu}_{g}\right\|^{2}+\mathrm{tr}\left(\mathbf{M}^{2} \right)}}. \tag{19}\] Proof.: It can be seen that (18) is a convex optimization problem, which we use the Karush-Kuhn-Tucker (KKT) conditions to solve. Introducing Lagrange multipliers \(\lambda_{u}\in\mathbb{R}\) for the inequality constraints \(p_{u}\geq 0\), and a multiplier \(\nu\in\mathbb{R}\) for the equality constraint \(\sum_{u}p_{u}=1\), we have \[\lambda_{u}\geq 0,\lambda_{u}p_{u}=0,\sum_{u\in[m]}p_{u}=1,\quad p _{u}\geq 0,\] \[2\left(\left\|\eta\mathbf{\mu}_{g}-\eta_{L}K\mathbf{\mu}_{u}\right\|^{2}+ \mathrm{tr}\left(\mathbf{M}^{2}\right)\right)p_{u}-\lambda_{u}+\nu=0. \tag{20}\] Since \(\sum_{u\in[m]}p_{u}=1\), there exists \(u_{0}\) such that \(p_{u_{0}}>0\). Thus we have \(\lambda_{u_{0}}=0\), which yields \(\nu=-2\left(\left\|\eta\mathbf{\mu}_{g}-\eta_{L}K\mathbf{\mu}_{u}\right\|^{2}+\mathrm{ tr}\left(\mathbf{M}^{2}\right)\right)p_{u_{0}}<0\). Therefore, \(p_{u}>0\) always holds because if \(p_{u}=0\), it leads to \(2\left(\left\|\eta\mathbf{\mu}_{g}-\eta_{L}K\mathbf{\mu}_{u}\right\|^{2}+\mathrm{tr} \left(\mathbf{M}^{2}\right)\right)p_{u}-\lambda_{u}+\nu<0\) which violates the condition in (20). As a result, we have \(\lambda_{u}=0,\forall u\in[m]\). Furthermore, \[p_{u}=-\frac{\nu}{2\left(\left\|\eta\mathbf{\mu}_{g}-\eta_{L}K\mathbf{\mu}_{u}\right\|^ {2}+\mathrm{tr}\left(\mathbf{M}^{2}\right)\right)},\quad\forall u\in[m]. \tag{21}\] By plugging (21) into \(\sum_{u\in[m]}p_{u}=1\), we have \[\nu=-\frac{2}{\sum_{u}1/\left(\left\|\eta\mathbf{\mu}_{g}-\eta K\mathbf{\mu}_{u} \right\|^{2}+\mathrm{tr}\left(\mathbf{M}^{2}\right)\right)}. \tag{22}\] Plugging (22) back into (21) completes the proof. However, considering the increased communication traffic and computational complexity introduced by the covariance matrix, we need to simplify the procedure. Note that \[\lim_{B\rightarrow+\infty}p_{u}^{*}=\frac{\frac{1}{\left\|K\eta_{L}\mathbf{\mu}_{u }-\eta\mathbf{\mu}_{g}\right\|^{2}}}{\sum_{u}\frac{1}{\left\|K\eta_{L}\mathbf{\mu}_{u }-\eta\mathbf{\mu}_{g}\right\|^{2}}}\,. \tag{23}\] Hence, with a sufficiently large batch size, we can use (23) to estimate the optimal aggregation probability. Thus, we have derived an optimal aggregation strategy using the Wasserstein distance to address the challenge of heterogeneity in FRS. ## III Theoretical analysis In this section, to ease the theoretical analysis, we redefine some notations: the parameter of the model is \(\mathbf{x}\) instead of \(\mathbf{U}\) and \(\mathbf{V}\), and use index \(i\in[1,\cdots,m],k\in[1,K],t\in[0,T]\) to indicate user, local epoch, and communication round. 
The optimization objective of FRS is formulated as follows: \[\min_{\mathbf{x}\in\mathbb{R}^{d}}f(\mathbf{x})=\mathbb{E}_{i\sim\mathcal{P}}\left[F_{i} (\mathbf{x})\right]\,, \tag{24}\] where \(F_{i}(\mathbf{x})\triangleq\mathbb{E}_{\xi\sim P_{i}}[F_{i}(\mathbf{x},\xi)]\). Here, \(\mathcal{P}\) represents the overall data distribution of entire client distribution, \(\mathbf{x}\in\mathbb{R}^{d}\) is the model parameter, \(F_{i}(\mathbf{x})\) represents the local loss function at client \(i\) and \(P_{i}\) is the underlying distribution of local dataset at client \(i\). In general, \(P_{i}\neq P_{j}\) if \(i\neq j\) due to data heterogeneity. However, the loss function \(F(\mathbf{x})\) or full gradient \(\nabla F(\mathbf{x})\) can not be directly computed as the exact distribution of data is unknown in general. Hence, one often consider the following empirical risk minimization (ERM) problem in the form of finite-sum instead: \[\min_{\mathbf{x}\in\mathbb{R}^{d}}f(\mathbf{x})=\sum_{i\in S_{t}}p_{i}F_{i}(\mathbf{x})\,, \tag{25}\] where \(F_{i}(\mathbf{x})=\frac{1}{\left|D_{i}\right|}\sum_{\xi\in D_{i}}F_{i}(\mathbf{x},\xi)\). Here, \(S_{t}\) is the selected client set in each round and \(p_{i}\) is the aggregation weights of clients. To ease the theoretical analysis of our work, we use the following widely used assumptions: **Assumption 1** (L-Smooth).: _There exists a constant \(L>0\), such that \(\left\|\nabla F_{i}(x)-\nabla F_{i}(y)\right\|\leq L\left\|x-y\right\|, \forall x,y\in\mathbb{R}^{d}\), and \(i=1,2,\ldots,m\)._ **Assumption 2** (Unbiased Local Gradient Estimator and Local Variance).: _Let \(\xi_{t}^{i}\) be a random local data sample in the round \(t\) at client \(\mathbb{E}\left[\nabla F_{i}(\mathbf{x}_{t},\xi_{t}^{i})\right]=\nabla F_{i}(\mathbf{x}_ {t}),\forall i\in[m]\). There exists a constant bound \(\sigma_{L}>0\), satisfying \(\mathbb{E}\left[\left\|\nabla F_{i}(\mathbf{x}_{t},\xi_{t}^{i})-\nabla F_{i}(\mathbf{x}_ {t})\right\|^{2}\right]\leq\sigma_{L}^{2}\)._ **Assumption 3** (Bound Gradient Dissimilarity).: _For any set of weights \(\{w_{i}\geq 0\}_{i=1}^{m}\) with \(\sum_{i=1}^{m}w_{i}=1\), there exist constants \(\sigma_{G}^{2}\geq 0\) and \(A\geq 0\) such that \(\sum_{i=1}^{m}w_{i}\left\|\nabla F_{i}(\mathbf{x})\right\|^{2}\leq(A^{2}+1)\left\| \sum_{i=1}^{m}w_{i}\nabla F_{i}(\mathbf{x})\right\|^{2}+\sigma_{G}^{2}\)._ The above three assumptions are commonly used in both non-convex optimization and FL literature, see e.g. [6, 27]. For Assumption 3, if all local loss functions are identical, then we have \(A=0\) and \(\sigma_{G}=0\). Since there are both rated items and pseudo items, \(\nabla F_{i}(\mathbf{x}_{t,k}^{i})=\frac{1}{B}\sum_{b\in B}\nabla F(\mathbf{x}_{t,k}^{i}, \xi_{b})=\alpha\frac{1}{B^{r}}\sum_{b^{\prime}\in B^{r}}\nabla F(\mathbf{x}_{t,k}^{i },\xi_{b^{\prime}})+(1-\alpha)\frac{1}{B^{r}}\sum_{b^{\prime}\in B^{u}}\nabla F (\mathbf{x}_{t,k}^{i},\xi_{b^{\prime}})=\alpha\nabla\overline{F}(\mathbf{x}_{t,k}^{i} )+(1-\alpha)\tilde{F}(\mathbf{x}_{t,k}^{i})\), where \(B^{r}\) and \(B^{u}\) represent the rated items and unrated items, respectively. \(B=B^{r}\cup B^{u}\) represents total items. \(\alpha=\frac{B^{r}}{B}\) is the relative ratio of rated items in all user's items. **Assumption 4** (Gradient Difference Bound).: _In each round, we assume that the gradient of the pseudo item is denoted as \(\nabla\tilde{F}(\mathbf{x}_{t,k}^{i})\), while its true gradient is denoted as \(\nabla\overline{F}(\mathbf{x}_{t,k}^{i})\). 
The gap of the approximation satisfies the following conditions: \(\mathbb{E}\|\nabla\overline{F}(\mathbf{x}_{t,k}^{i})-\nabla\tilde{F}(\mathbf{x}_{t,k}^ {i})\|^{2}\leq\rho^{2}\), \(\forall i,t,k\)._ **Theorem III.1** (Convergence rate).: _Under Assumption 1- 4, and let constant local and global learning rate \(\eta_{L}\) and \(\eta\) be chosen such that \(\eta_{L}<min\left(1/(8LK),C\right)\), where \(C\) is obtained from the condition that \(\frac{1}{2}-10L^{2}\frac{1}{m}\sum_{i=1}^{m}K^{2}\eta_{L}^{2}(A^{2}+1)(\chi_{ p\parallel w}^{2}A^{2}+1)>C>0\), and \(\eta\leq 1/(\eta_{L}L)\). The expected gradient norm of FedRec+ is bounded as follows:_ \[\underset{t\in[T]}{\min}\mathbb{E}\|\nabla f(\mathbf{x}_{t})\|^{2} \leq 2\left[\chi_{\mathbf{w}\parallel\mathbf{p}}^{2}A^{2}+1\right](\frac{f_{0}-f_{*}}{c \eta_{L}KT}+\Phi)\] \[+2\chi_{\mathbf{w}\parallel\mathbf{p}}^{2}\sigma_{G}^{2}\,, \tag{26}\] _where \(f_{0}=f(x_{0})\), \(f_{*}=f(x_{*})\), and_ \[\Phi =\frac{1}{c}\left[\frac{5\eta_{L}^{2}KL^{2}}{2}(\sigma_{L}^{2}+6 K\sigma_{G}^{2}+6K(1-\alpha)^{2}\rho^{2})\right.\] \[\left.+\frac{\eta_{\eta_{L}L}}{2}(\sigma_{L}^{2}+(1-\alpha)^{2} \rho^{2})+20L^{2}K^{2}(A^{2}+1)\eta_{L}^{2}\chi_{\mathbf{w}\parallel\mathbf{p}}^{2} \sigma_{G}^{2}\right]\,.\] _where \(\chi_{\mathbf{w}\parallel\mathbf{p}}^{2}=\sum_{i=1}^{m}\left(w_{i}-p_{i}\right)^{2}/p_{i}\) represents the chi-square divergence between vectors \(\mathbf{w}=\left[\frac{1}{m},\ldots,\frac{1}{m}\right]\) and \(\mathbf{p}=[p_{1},\ldots,p_{m}]\). Observe that when all clients have uniform data distribution, we have \(\mathbf{p}=\mathbf{w}\) such that \(\chi^{2}=0\)._ **Corollary III.2**.: _Suppose \(\eta_{L}\) and \(\eta\) are \(\eta_{L}=\mathcal{O}\left(\frac{1}{\sqrt{KL}}\right)\) and \(\eta=\mathcal{O}\left(\sqrt{Km}\right)\) such that the conditions mentioned above are satisfied. Then for sufficiently large T, the iterates of FedRec+ satisfy:_ \[\min_{t\in[T]}\|\nabla f\left(\mathbf{x}_{t}\right)\|^{2}\leq\mathcal{O} \left(\frac{(f^{0}-f^{*})}{\sqrt{mKT}}\right)\] \[+\mathcal{O}\left(\frac{\sqrt{m}(\sigma_{L}^{2}+(1-\alpha)^{2} \rho^{2})}{2\sqrt{KT}}\right)+\mathcal{O}\left(\frac{20(A^{2}+1)\chi_{\mathbf{w} \parallel\mathbf{p}}^{2}\sigma_{G}^{2}}{T}\right)\] \[+\mathcal{O}\left(\frac{5(\sigma_{L}^{2}+6K\sigma_{G}^{2}+6K(1- \alpha)^{2}\rho^{2})}{2KT}\right)+2\chi_{\mathbf{w}\parallel\mathbf{p}}^{2}\sigma_{G}^{ 2}\,. \tag{27}\] **Remark III.3** (Effects of pseudo items and reweight aggregation).: _The noise error introduced by pseudo items is denoted by \(\rho\). It is observed that a larger value of \(1-\alpha\) corresponds to a larger noise, implying that using more pseudo items leads to increased noise. The non-vanishing term \(\chi_{\mathbf{w}\parallel\mathbf{p}}^{2}\sigma_{G}^{2}\) represents the aggregation error arising from an unbiased aggregation distribution. In other words, there is always an error term present in the convergence rate as long as the aggregation algorithm exhibits bias._ _Proof Sketch:_ Using the following two lemmas, we can finish the proof. In particular, based on the Lemma III.4, we bound the gradient norm of \(\nabla f(\mathbf{x}_{t})\) by the norm of \(\nabla\tilde{f}(\mathbf{x}_{t})\), and then utilize Lemma III.5 we derive the upper bound for \(\nabla\tilde{f}(\mathbf{x}_{t})\). 
**Lemma III.4** (Gradient distance between pseudo items and rated items).: _For any model parameter \(\mathbf{x}\), the difference between the gradients of \(f(\mathbf{x})\) and \(\tilde{f}(\mathbf{x})\) can be bounded as follows:_ \[\|\nabla f(\mathbf{x})-\nabla\tilde{f}(\mathbf{x})\|^{2}\leq\chi_{\mathbf{w}\parallel\mathbf{p} }^{2}\left[A^{2}\|\nabla\tilde{f}(\mathbf{x})\|^{2}+\kappa^{2}\right]\,, \tag{28}\] \(f(x)\) _is the true objective with \(f(x)=\sum_{i=1}^{m}w_{i}f_{i}(x)\) where \(\mathbf{w}\) is usually average of all clients, i.e., \(\mathbf{w}=\frac{1}{m}\). \(\tilde{f}(x)=\sum_{i=1}^{m}p_{i}f_{i}(x)\) is the surrogate objective with the reweight aggregation probability \(\mathbf{p}\)._ _Proof._ \[\nabla f(\mathbf{x})-\nabla\tilde{f}(\mathbf{x})=\sum_{i=1}^{m}\frac{w_{i}-p_{i}}{\sqrt{ \tilde{p}}_{i}}\cdot\sqrt{\tilde{p}}_{i}\left(\nabla f_{i}(\mathbf{x})-\nabla \tilde{f}(\mathbf{x})\right)\,. \tag{29}\] Applying Cauchy-Schwarz inequality, it follows that \[\|\nabla f(\mathbf{x})-\nabla\tilde{f}(\mathbf{x})\|^{2} \tag{30}\] \[\leq\left[\sum_{i=1}^{m}\frac{(w_{i}-p_{i})^{2}}{p_{i}}\right] \left[\sum_{i=1}^{m}p_{i}\left\|\nabla F_{i}(x)-\nabla\tilde{f}(\mathbf{x})\right\|^{2}\right]\] \[\leq\chi_{\mathbf{w}\parallel\mathbf{p}}^{2}\left[A^{2}\|\nabla\tilde{f}( \mathbf{x})\|^{2}+\sigma_{G}^{2}\right]\,,\] where the last inequality uses Assumption 3. Note that \[\|\nabla f(\mathbf{x})\|^{2}\leq 2\|\nabla f(\mathbf{x})-\nabla\tilde{f}(\mathbf{x})\|^{2 }+2\|\nabla\tilde{f}(\mathbf{x})\|^{2} \tag{31}\] \[\leq 2\left[\chi_{\mathbf{w}\parallel\mathbf{p}}^{2}A^{2}+1\right]\| \nabla\tilde{f}(\mathbf{x})\|^{2}+2\chi_{\mathbf{p}\parallel\mathbf{w}^{2}}^{2}\sigma_{G}^{2}\,.\] As a result, we obtain: \[\min_{t\in[T]}\|\nabla f\left(\mathbf{x}_{t}\right)\|^{2}\leq\frac{1}{ \tau}\sum_{t=0}^{T-1}\|\nabla f\left(\mathbf{x}_{t}\right)\|^{2} \tag{32}\] \[\leq 2\left[\chi_{\mathbf{w}\parallel\mathbf{p}}^{2}A^{2}+1\right]\frac{1}{ \tau}\sum_{t=0}^{T-1}\left\|\nabla\tilde{f}\left(\mathbf{x}_{t}\right)\right\|^{2}+2 \chi_{\mathbf{w}\parallel\mathbf{p}}^{2}\sigma_{G}^{2}\] \[\leq 2\left[\chi_{\mathbf{w}\parallel\mathbf{p}}^{2}A^{2}+1\right]\epsilon_{ \rm opt}+2\chi_{\mathbf{w}\parallel\mathbf{p}}^{2}\sigma_{G}^{2}\,,\] where \(\epsilon_{\rm opt}\) denotes the optimization error. 
**Lemma III.5** (Local updates bound.).: _For any step-size satisfying \(\eta_{L}\leq\frac{1}{8LK}\), we can have the following results:_ \[\mathbb{E}\|x_{t,k}^{i}-x_{t}\|^{2}\leq 5K\left(\eta_{L}^{2} \sigma_{L}^{2}+6K\eta_{L}^{2}(1-\alpha)^{2}\rho^{2}+6K\eta_{L}^{2} Proof.: \[\mathbb{E}_{t}\|x_{t,k}^{i}-x_{t}\|^{2}\] \[\mathbb{E}_{t}\|x_{t,k-1}^{i}-x_{t}-\eta_{L}((1-\alpha)\nabla F(x_{ t,k-1}^{i};\xi_{b^{a}})\] \[+\alpha\nabla F(x_{t,k-1}^{i};\xi_{b^{r}})-(1-\alpha)\nabla\overline {F}_{i}(x_{t,k-1}^{i})-\alpha\nabla\overline{F}_{i}(x_{t,k-1}^{i})\] \[+(1-\alpha)\nabla\tilde{F}_{i}(x_{t,k-1}^{i})-(1-\alpha)\nabla \tilde{F}_{i}(x_{t,k-1}^{i})+\nabla F_{i}(x_{t,k-1}^{i})\] \[-\nabla F_{i}(x_{t})+\nabla F_{i}(x_{t}))\|^{2}\] \[\leq(1+\frac{1}{2K-1})\mathbb{E}_{t}\|x_{t,k-1}^{i}-x_{t}\|^{2}\] \[+(1-\alpha)^{2}\eta_{L}^{2}\mathbb{E}\|\nabla F(x_{t,k-1}^{i}; \xi_{b^{a}})-\nabla\tilde{F}_{i}(x_{t,k-1}^{i})\|^{2}\] \[+\alpha^{2}\eta_{L}^{2}\mathbb{E}\|\nabla F(x_{t,k-1}^{i};\xi_{b^ {r}})-\nabla\overline{F}_{i}(x_{t,k-1}^{i})\|^{2}\] \[+6K(1-\alpha)^{2}\eta_{L}^{2}\mathbb{E}\|\nabla\tilde{F}_{i}(x_{t,k-1}^{i})-\overline{F}_{i}(x_{t,k-1}^{i})\|^{2}\] \[+6K\eta_{L}^{2}\mathbb{E}\|\nabla F_{i}(x_{t,k-1}^{i})-\nabla F_{ i}(x_{t})\|^{2}+6K\eta_{L}^{2}\mathbb{E}\|\nabla F_{i}(x_{t})\|^{2}\] \[\leq(1+\frac{1}{K-1})\mathbb{E}_{t}\|x_{t,k-1}^{i}-x_{t}\|^{2}+ \eta_{L}^{2}\sigma_{L}^{2}+6K\eta_{L}^{2}(1-\alpha)^{2}\rho^{2}\] \[+6K\eta_{L}^{2}\sigma_{G}^{2}+6K\eta_{L}^{2}(A^{2}+1)\|\nabla f(x_ {t})\|^{2}\,.\] (33) Unrolling the recursion, we obtain: \[\mathbb{E}_{t}\|x_{t,k}^{i}-x_{t}\|^{2}\leq\sum_{p=0}^{k-1}(1+ \frac{1}{K-1})^{p}\left[\eta_{L}^{2}\sigma_{L}^{2}+6K\eta_{L}^{2}(1-\alpha)^{ 2}\rho^{2}\right.\] \[\left.+6K\eta_{L}^{2}\sigma_{G}^{2}+6K(A^{2}+1)\|\eta_{L}\nabla f (x_{t})\|^{2}\right]\] \[\leq(K-1)\left[(1+\frac{1}{K-1})^{K}-1\right]\left[\eta_{L}^{2} \sigma_{L}^{2}\right.\] \[\left.+6K\eta_{L}^{2}(1-\alpha)^{2}\rho^{2}+6K\eta_{L}^{2}\sigma_ {G}^{2}+6K(A^{2}+1)\|\eta_{L}\nabla f(x_{t})\|^{2}\right]\] \[\leq 5K(\eta_{L}^{2}\sigma_{L}^{2}+6K\eta_{L}^{2}(1-\alpha)^{2} \rho^{2}+6K\eta_{L}^{2}\sigma_{G}^{2})\] \[+30K^{2}(A^{2}+1)\eta_{L}^{2}\|\nabla f(x_{t})\|^{2}\,. \tag{34}\] ## IV Numerical results In this section, we present simulation results to validate the performance of the FedRec+ and compare it with the vanilla pseudo method (FedRec). We use two widely used benchmark datasets for recommendation including ML-100K and ML-1M. ML100K contains 100, 000 ratings of 1, 682 movies from 943 users; ML1M contains 1, 000, 209 ratings of 3, 952 movies from 6, 040 users, while both have rating levels of \([1,2,\cdots,5]\). We process each dataset follows the setting in [9]: (i) randomly dividing the dataset into five equal parts, (ii) using four parts for training and one part for testing, (iii) repeating this process four times to obtain five distinct sets of training and test data. Our experimental analysis is based on these five datasets, and we report the average performance across all five. In our experiment, we use the PMF [17] as the backbone model and we use three commonly used evaluation metrics, i.e., MAE, RMSE, and NMSE, for performance evaluation. Table I demonstrates the superior performance of FedRec+ over FedRec across all three recommendation metrics. This indicates our proposed algorithm is effective. Figure 4 illustrates the impact of varying numbers of pseudo items on ML1M. 
It indicates that higher numbers of pseudo items result in increased noise and consequently worse performance, aligning with our theoretical analysis. ## V Conclusion and future works This paper focuses on the problem of privacy-aware federated recommendation with explicit feedback. We propose FedRec+, a privacy-preserving framework that addresses this issue. FedRec+ utilizes feature similarity to generate low-noise pseudo items without client communication. Furthermore, we employ the Wasserstein Distance to optimize the aggregation probability, which helps handle the heterogeneity of the federated recommendation system. Convergence analysis is conducted to demonstrate the impact of pseudo items and aggregation probability. FedRec+ is a versatile solution that can be combined with other privacy-aware recommendation methods, such as differential privacy [4]. Our experimental results, based on public datasets, validate the effectiveness of FedRec+. ## VI Acknowledgements This work is supported in part by the National Natural Science Foundation of China under Grant No. 62001412, in part by the funding from Shenzhen Institute of Artificial Intelligence and Robotics for Society, in part by the Shenzhen Key Lab of Crowd Intelligence Empowered Low-Carbon Energy Network (Grant No. ZDSYS20220606100601002), and in part by the Guangdong Provincial Key Laboratory of Future Networks of Intelligence (Grant No. 2022B1212010001).
2309.04146
NESTLE: a No-Code Tool for Statistical Analysis of Legal Corpus
The statistical analysis of large scale legal corpus can provide valuable legal insights. For such analysis one needs to (1) select a subset of the corpus using document retrieval tools, (2) structure text using information extraction (IE) systems, and (3) visualize the data for the statistical analysis. Each process demands either specialized tools or programming skills whereas no comprehensive unified "no-code" tools have been available. Here we provide NESTLE, a no-code tool for large-scale statistical analysis of legal corpus. Powered by a Large Language Model (LLM) and the internal custom end-to-end IE system, NESTLE can extract any type of information that has not been predefined in the IE system opening up the possibility of unlimited customizable statistical analysis of the corpus without writing a single line of code. We validate our system on 15 Korean precedent IE tasks and 3 legal text classification tasks from LexGLUE. The comprehensive experiments reveal NESTLE can achieve GPT-4 comparable performance by training the internal IE module with 4 human-labeled, and 192 LLM-labeled examples.
Kyoungyeon Cho, Seungkum Han, Young Rok Choi, Wonseok Hwang
2023-09-08T06:23:25Z
http://arxiv.org/abs/2309.04146v2
# Nestle: a No-Code Tool for Statistical Analysis of Legal Corpus ###### Abstract The statistical analysis of large scale legal corpus can provide valuable legal insights. For such analysis one needs to (1) select a subset of the corpus using document retrieval tools, (2) structuralize text using information extraction (IE) systems, and (3) visualize the data for the statistical analysis. Each process demands either specialized tools or programming skills whereas no comprehensive unified "no-code" tools have been available. Especially for IE, if the target information is not predefined in the ontology of the IE system, one needs to build their own system. Here we provide Nestle, a no code tool for large-scale statistical analysis of legal corpus. With Nestle, users can search target documents, extract information, and visualize the structured data all via the chat interface with accompanying auxiliary GUI for the fine-level control. Nestle consists of three main components: a search engine, an end-to-end IE system, and a Large Language Model (LLM) that glues the whole components together and provides the chat interface. Powered by LLM and the end-to-end IE system, Nestle can extract any type of information that has not been predefined in the IE system opening up the possibility of unlimited customizable statistical analysis of the corpus without writing a single line of code. The use of the custom end-to-end IE system also enables faster and low-cost IE on large scale corpus. We validate our system on 15 Korean precedent IE tasks and 3 legal text classification tasks from LexGLUE. The comprehensive experiments reveal Nestle can achieve GPT-4 comparable performance by training the internal IE module with 4 human-labeled, and 192 LLM-labeled examples. The detailed analysis provides the insight on the trade-off between accuracy, time, and cost in building such system. ## 1 Introduction Legal documents include a variety of semi-structured information stemming from diverse social disputes. For instance, precedents include a factual information (such as blood alcohol level in a driving under the influence (DUI) case or loss in an indemnification case) as well as a decision from court (fine, imprisonment period, claimed money from plaintiff, approved money by court, etc). While each document contains detailed information about specific legal events among a few individuals, community-level insights can be derived only by analyzing a substantial collection of these documents. For instance, the consequence of the subtle modification to the statute might only become evident through a comprehensive statistical analysis of the related legal corpus. Indeed a recent study shows that how the revision of the Road Traffic Act has changed the average imprisonment period in drunk driving cases by analyzing 24k Korean precedents [11]. Conducting a comprehensive statistical analysis on legal corpus in large scale may entail following three key steps: Figure 1: Illustration of Nestle. (1) choosing a subset of the corpus using retrieval tools, (2) structuralizing the documents using information extraction (IE) systems, and (3) visualizing the data for the statistical analysis. Each step requires either specialized tools or programming knowledge impeding analysis for the majority of legal practitioners. Especially during text text texturalization, if the target information is not predefined in the ontology of the IE system, one needs to build their own system. 
To overcome such limitation, we develop Nestle1, a no-code tool for statistical analysis of legal corpus. With Nestle, users can search target documents, extract information, and visualize statistical information of the structured data all within the chat interface, accompanied by an auxiliary GUI for the fine-level control (hyper parameter selection, ontology modification, data labeling etc). Nestle consists of three key components: a search engine to select the relevant sub-corpus via keyword matching, an end-to-end IE system to structuralize legal texts, and Large Language Model (LLM) to enable all the operations via chat interface. The LLM module is also used for augmenting the training data of the IE system. Powered by LLM and the end-to-end IE system, Nestle can extract any type of information that has not been predefined in the IE system. This opens up the possibility of unlimited user-driven custom statistical analysis of the corpus without writing a single line of code. The use of the custom end-to-end IE system also enables faster and low-cost IE on large scale corpus. Footnote 1: No code tool for Statistical analysis of Legal corpus We validate Nestle on three legal AI tasks: (1) 4 Korean Legal IE tasks (Hwang et al., 2022), (2) 11 new Korean Legal IE tasks derived from LBoxOpen dataset (Hwang et al., 2022), and (3) 3 English legal text classification tasks from LexGLUE (Chalkidis et al., 2022; Chalkidis, 2023; Tuggener et al., 2020; Lippi et al., 2018). The comprehensive experiments reveal Nestle can achieves GPT-4 comparable performance with just 4 human-labeled, and 192 LLM-labeled examples. The accompanying analysis focusing on three real-world metrics-accuracy, time, and cost-reveals that Nestle becomes an order of magnitude cheaper and faster compared to commercial LLMs when applied to industrial scale corpus. In summary, our contributions are as below. * We develop Nestle, a no-code tool for statistical analysis of legal corpus that can assist users to perform large scale statistical analysis of legal corpus. * We extensively validate our systems on 15 Korean precedent IE tasks and 3 English legal text classification from LexGLUE(Chalkidis et al., 2022) while focusing on three real world metrics: accuracy, speed, and cost. The demo and the part of the datasets will be available from [https://github.com/lbox-kr/nestle](https://github.com/lbox-kr/nestle) ## 2 Related Works ### Large Language Model as an Agent With rapid popularization of LLM (OpenAI, 2023; Touvron et al., 2023; Anil et al., 2023; Anthropic, 2023; Taori et al., 2023; Zheng et al., 2023), many recent studies examine the capability of LLM as an agent that can utilize external tools (Liang et al., 2023; Li et al., 2023; Liu et al., 2023; Wang et al., 2023; Song et al., 2023; Zhuang et al., 2023; Tang et al., 2023; Patil et al., 2023; Qin et al., 2023). There are few studies focusing on the capability of LLM as a data analysis agent. Zhang et al. develop Data-Copilot that can help users to interact with various data sources via chat interface. Ma et al. examines the capability of GPT-3 (CODEX, code-davinci-002) as few-shot information extractor on eight NER and relation extraction tasks and propose to use LLM as a reranker of the outputs from small language models. Ding et al. evaluate the capability of GPT-3 as a data annotators on SST2 text classification task and Cross-NER tasks reporting that GPT-3 shows good performance on SST2. He et al. 
propose 'explain-then-annotate' framework to enhance LLM's annotation capability. Under their approach, GPT-3.5 achieves either super-human or human-comparable scores on three binary classification tasks. Our work is different from previous works in that we focus on building a no-code tool for "statistical analysis" of "corpus" where efficient, accurate, yet customizable methods of structuralization of large-scale documents are necessary. Our work is also different in that we focus on information extraction tasks from legal texts. Finally, rather than performing all IE via LLM, we focus on hybridization between commercial LLM and open-sourced small language model (SLM) by distilling knowledge of LLM to SLM. In this way, the API cost of using LLM does not increase linearly with the size of corpus enabling Nestle to be applied to industrial scale corpus. ### Information Extraction from Legal Texts Previous studies build IE systems for legal texts using tagging-based methods (Cardellino et al., 2017; Mistica et al., 2020; Hendrycks et al., 2021; Habernal et al., 2022; Chen et al., 2020; Pham, Pham, and Meyers, 2021; Hong, Chong, and Manning, 2021; Yao et al., 2022) or generative methods (Pires et al., 2022; Hwang et al., 2022). Our system is similar to (Hwang et al., 2022) in that we use an end-to-end IE system and focus on statistical analysis of legal information. However our work is unique in that we present a no-code tool and explore hybridization of commercial LLM and open-sourced SLM to expand the scope of analysis to a large-scale corpus while focusing on three real world metrics: accuracy, time, and cost. ## 3 System Nestle consists of three key components, a search engine for document retrieval, an end-to-end IE systems for text structuralization, and LLM to provide chat interface and label data. The interactive interface is constructed using the Gradio library (Abid et al., 2019). Through conversations with the LLM, users can search, retrieve, and label data from the uploaded dataset. After labeling few examples, users can structuralize entire corpus through Nestle's IE module. Finally, users can ask about statistical information such as averages, maximums, and minimums over the structured data as well as their visualization. The overall workflow is depicted in Fig. 2. Additional technical details can be found in Appendix. ### Search Engine The search engine selects a portion of the corpus for statistical analysis from given user queries. Utilizing LLM like ChatGPT, we first extract potential keywords or sentences from user queries than forward them to Elasticsearch for further refinement and selection. Elasticsearch is used for handling large volumes of data efficiently. ### IE Module After selecting a subset of corpus, users generate a small set of seed examples based on their custom ontology using either a chat interface or GUI for fine-level control. The LLM employs these seed examples and label other documents via few-shot learning. The IE module receives these examples that consists of pairs of input text and parses. Each parse consists of parse name and value. We use open-sourced language model multilingual T5 (mt5) [13] as backbone for the IE module. mt5 is selected as (1) it provides checkpoints of various scale up to to 13B, and (2) previous studies shows Transformers with encoder-decoder architecture show better performance compared to decoder-only models [10, 12]. 
After the end of the training, the model is used to parse remaining documents selected from the search engine at the beginning. The structured data is visualized via custom python functions that can be called via "function calling" capability of ChatGPT. ## 4 Experiments All experiments are performed on Nvidia A6000 GPU except the experiments with mt5-xxl where eight A100 GPUs are used. The IE module of Nestle are fine-tuned with batch size \(12\) with learning rate 0.0004 using AdamW optimzer. Nestle-L sometimes shows unstable loss curve with 0.0004 learning rate. In this case, we decreases the learning rate to 0.0003. High learning rate is purposely chosen for the fast training. The training are stopped after 60 epochs (Nestle-S), or after 80 epochs (Nestle-L, Nestle-L+). In case of Nestle-XL, the learning rate is set to 0.0002 and the model is trained for 20 epochs with batchsize 8 using deepspeed stage 3 offload [14]. For efficient training, LoRA is employed in all experiments [15] using PEFT library from Hugging face [12]. In all evaluation, the checkpoint from the last epoch is used. For the data labeling, we use ChatGPT version gpt-3.5-turbo-16k-0613 and GPT-4 version gpt-4-0613. In all other operation with LLM, we use the same version of ChatGPT except during normalization of numeric strings such as imprisonment period and fines where gpt-3.5-turbo-0613 are used. We set temperature 0 to minimize the randomness as IE tasks do not Figure 2: The workflow of Nestle require versatile outputs. The default values are used for the other hyperparameters. During few shot learning, we feed LLM with the examples whose the half of them include all fields defined in the ontology while the remaining half are selected randomly. The detailed prompt strategy is explained in Appendix. ## 5 Results We validate Nestle on 15 Korean precedent IE tasks and 3 English legal text classification tasks. 15 Korean precedent IE tasks are further divided into KorPrec-IE that consists of 4 tasks from criminal cases that is studied previously [10]and LBoxOpen-IE that is generated from LBoxOpen using factual description from 7 criminal cases and 4 civil cases. In all tasks, a model needs to extract a legally important information from factual description or ruling of cases. The representative examples are blood alcohol level, fraud loss, fine, and imprisonment period in rulings, the duration of required hospital treatment for injuries, etc. Three classification tasks are EURLEX, LEDGAR, and UNFAIR-ToS from LexGLUE [11, 12, 13]. EURLEX dataset consists of a pair of European Union legislation (input) and corresponding legal concepts (output) from the EuroVoc Theseurus. In LEDGAR task, a model needs to classify the paragraphs from contracts originated from US Securities and Exchange Commission fillings. Similarly, UNFAIR-ToS is a task of predicting 8 types of unfair contractual terms for given individual sentences from 50 Terms of Service. These 3 classification tasks are considered to demonstrate Nestle on common (English) legal AI benchmark and also to show Nestle can be applied to general AI tasks that can be represented in text-to-text format [14]. ### Nestle shows competent performance with only four examples We first validate Nestle on KorPrec-IE. 
KorPrec-IE consists of four tasks; Drunk driving with three target fields: blood alcohol level (BAC), driving distance (Dist), and the existence of previous criminal record); Embezzlement with loss; Fraud with loss and loss from aiding and abetting; Ruling-criminal with fine, imprisonment, suspension, education period, and community service period. We first prepare four seed examples that are manually labeled and feed them to ChatGPT. Using these seed examples 92 documents are labeled by LLM via few-shot learning. The resulting 96 examples are used to train mt5-small [20]. The result shows that our method already achieves + 4.2 \(F_{1}\) on average with only four examples compared to the case trained with 50 manually labeled examples (Table 1, 1st vs 3rd rows, 5th column). This demonstrates that with Nestle, users can structure corpus following their own requirements with just four examples. \begin{table} \begin{tabular}{l c c c c|c c c|c c c|c c c c} \hline \hline \multirow{2}{*}{Name} & \multirow{2}{*}{LLM} & IE module & \# of & \# of & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} \\ & module & backbone & training & LLM-labeled & AVG & Drunk driving & Embz & Fraud & \multicolumn{3}{c}{Ruling-criminal} \\ & & size & examples & examples & & & & & & & & & & \\ \hline \hline & & & (per task) & (per task) & \(F_{1}\) & BAC & Dist & Rec & Loss & Loss-A & Fine & Imp & Susp & Educ & Comm \\ \hline \hline mt5-small1a & - & 0.3B & 50 & - & 58.0 & 95.8 & 93.0 & 90.1 & 72.2 & 42.9 & 0 & 79.4 & 89.4 & 85.7 & 60.4 & 34.1 \\ mt5-largea & - & 1.2B & 50 & - & 63.9 & 98.0 & 96.4 & 93.6 & 87.5 & 64.8 & 0 & 84.7 & 82.1 & 96.7 & 68.1 & 27.0 \\ \hline Nestle-S\({}_{0}\) & ChatGPT & 0.3B & 4b & 92 & 62.2 & 98.0 & 95.3 & 93.0 & 70.1 & 52.2 & 0.0 & 71.2 & 96.5 & 93.6 & 76.7 & 37.5 \\ Nestle-S & ChatGPT & 0.3B & 4 & 92 & 64.7 & 98.0 & 95.3 & 89.8 & 77.3 & 56.5 & 0.0 & 77.4 & 96.5 & 98.9 & 57.1 & 54.2 \\ \hline Nestle-L\({}_{0}\) & ChatGPT & 1.2B & 4 & 92 & 71.8 & 97.4 & 94.7 & 93.0 & 84.9 & 65.3 & 0.0 & 86.7 & 97.9 & 98.9 & 82.4 & 57.9 \\ Nestle-L & ChatGPT & 1.2B & 4 & 192 & 77.3 & 98.0 & 95.3 & 91.7 & 87.0 & 68.0 & 11.8 & 88.9 & 97.9 & 97.8 & 94.5 & 72.7 \\ Nestle-L+ & GPT-4 & 1.2B & 4 & 192 & 83.6 & - & - & - & 90.5 & 71.2 & 38.1 & 89.2 & 95.8 & 98.9 & 96.4 & 88.9 \\ Nestle-XXL+ & GPT-4 & 12.9B & 4 & 192 & 80.4 & - & - & - & 92.5 & 72.6 & 28.6 & 92.3 & 96.6 & 96.8 & 88.9 & 75.0 \\ \hline \hline ChatGPT & - & - & 4 & - & 79.6 & 99.0 & 95.3 & 95.2 & 87.5 & 75.2 & 34.8 & 87.1 & 97.8 & 96.5 & 94.7 & 63.4 \\ ChatGPT + aux. inst. & - & - & 4 & - & - & - & - & - & 75.6 & 41.7 & 88.5 & 98.6 & 98.8 & 96.4 & 72.7 \\ GPT-4 & - & - & 4 & - & 88.7 & 98.5 & 97.8 & 92.1 & 93.5 & 82.3 & 59.3 & 93.9 & 97.1 & 98.9 & 92.6 & 92.3 \\ \hline Isla & - & 1.2B & -1,000 & - & 90.3 & 99.5 & 97.4 & 99.0 & 91.7 & 80.3 & 69.6 & 95.5 & 95.7 & 98.9 & 98.2 & 92.3 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of various models on KorPrec-IE task. The \(F_{1}\) scores of individual fields are shown; BAC (blood alcohol level), Dist (travel distance), Vehicle (type of the vehicle), Rec (previous criminal record on drunk driving), Loss, Loss-A (losses from aiding and abetting), Fine (amount of fine), Imp (imprisonment type and period), Susp (suspension of execution period), Educ (education period), Comm (community service period). 
The average scores (AVG) are calculated from the three tasks Embezzlement, Fraud, and Ruling-criminal. The Drunk driving task was excluded as all models achieve high scores on it. All scores are computed using test sets that consist of 100 examples per task. ### Nestle can achieve GPT-4 comparable performance To improve the accuracy of Nestle, we scale both the quantity of LLM-labeled examples and the size of the backbone of Nestle's end-to-end IE module. With a greater quantity of LLM-labeled examples (from 92 to 192), Nestle achieves +2.5 \(F_{1}\) on average (3rd vs 4th rows), while the labeling time increases (for example, from 2.4 minutes to 10.6 minutes in the Fraud task). With a larger backbone (from mt5-small (0.3B) to mt5-large (1.2B)), Nestle shows +9.6 \(F_{1}\) (3rd vs 5th rows). With both, Nestle shows +15.1 \(F_{1}\) (3rd vs 6th rows). However, both the labeling time and the training time increase (for example, from 15 minutes to 170 minutes in the Fraud task). If the accuracy of the teacher model (ChatGPT) is low, the performance of the student (mt5) may be bounded by it. To check the upper bound of the achievable accuracies, we measure the few-shot performance of ChatGPT. Nestle-L and ChatGPT show only a 2.3 \(F_{1}\) difference on average (6th vs 9th rows, 5th column), indicating that the student models may approach this upper bound. To improve Nestle further, we replace ChatGPT with GPT-4. Although the labeling time and cost increase roughly by 10 times, the average score increases by +6.3 \(F_{1}\) (Table 1, 6th vs 7th rows). Notably, this score is higher than that of ChatGPT by +4.0 \(F_{1}\) (7th vs 9th rows). \begin{table} \begin{tabular}{l c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Name} & \# of training & \# of LLM-labeled & \multicolumn{2}{c|}{EURLEX} & \multicolumn{2}{c|}{LEDGAR} & \multicolumn{2}{c}{UNFAIR-ToS} \\ & examples (per task) & examples (per task) & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) \\ \hline \hline ChatGPT\({}^{a}\) & 8 & - & 24.8 & 13.2 & 62.1 & 51.1 & 64.7 & 32.5 \\ ChatGPT\({}^{b}\) & 32 & - & 33.0 & 18.3 & 68.3 & 55.6 & 88.3 & 57.2 \\ Nestle-L & 32 & 192 & 34.1 & 16.7 & 58.8 & 41.5 & 91.5 & 51.4 \\ \hline \hline \multicolumn{9}{l}{\(a\): gpt-3.5-turbo-0301. From (Chalkidis 2023).} \\ \multicolumn{9}{l}{\(b\): gpt-3.5-turbo-16k-0613.} \\ \end{tabular} \end{table} Table 4: \(F_{1}\) scores of ChatGPT and Nestle-L on EURLEX, LEDGAR, and UNFAIR-ToS from LexGLUE. 1,000 random samples from the original test sets were used for the evaluation following (Chalkidis 2023).
\begin{table} \begin{tabular}{c|c|c c|c c|c c|c c|c c|c c|c c|c c|c c|c c} \hline \hline Name & AVG & \multicolumn{4}{c}{IndementAc.\({}^{1}\)} & \multicolumn{4}{c}{Obstruction\({}^{2}\)} & \multicolumn{4}{c}{Traffic injuries \({}^{3}\)} & \multicolumn{4}{c}{Drunk driving \({}^{4}\)} & \multicolumn{4}{c}{Fraud \({}^{5}\)} & Injuries \({}^{6}\) & \multicolumn{4}{c}{Velence \({}^{7}\)} \\ & \(F_{1}\) & nRec & nRec-A & Waiver & nRec & nRec-A & nRec & nRec-A & Waiver & Injury & nRec & nRec-A & BAC & Dist & Loss & Injury & Gender & nRec & nRec-A & Gender \\ \hline \hline GPT-4 & 81.1 & 88.2 & 85.7 & 83.1 & 78.7 & 82.6 & 55.6 & 66.7 & 68.4 & 96.0 & 88.2 & 88.2 & 100 & 99.0 & 94.9 & 94.1 & 81.6 & 47.1 & 61.3 & 81.6 \\ \hline Nestle-L & 78.1 & 88.9 & 76.5 & 52.9 & 71.8 & 57.1 & 73.4 & 78.0 & 71.9 & 95.8 & 71.8 & 64.9 & 100 & 96.9 & 81.0 & 96.9 & 75.0 & 64.9 & 71.8 & 93.6 \\ \hline \multicolumn{10}{l}{1: Indement act by compulsion (\(\mathcal{Y}^{a}\))), 2: Obstruction of performance of official duties (\(\mathcal{Y}^{a}\))), 3: Bodily injuries from traffic accident (\(\mathcal{Y}^{a}\)), \(\mathcal{Y}^{a}\)), \(\mathcal{Y}^{a}\)), \(\mathcal{Y}^{a}\)), \(\mathcal{Y}^{a}\)), 4: Drunk driving (\(\mathcal{Y}^{a}\)), 5: Fraud (\(\mathcal{Y}^{a}\))), 6: Drink driving (\(\mathcal{Y}^{a}\))), 5: Fraud (\(\mathcal{Y}^{a}\))), 6: Drink driving (\(\mathcal{Y}^{a}\)), 7: Violence (\(\mathcal{Y}^{a}\)))} \\ \end{tabular} \end{table} Table 2: Performance of GPT-4 and Nestle-L on the seven criminal IE tasks from LBoxOpen-IE. The numbers indicate \(F_{1}\) scores of individual fields; nRec (the number of criminal records on identical crimes), nRec-A (the number of criminal records on all crimes), Waiver (the victims expression of intention to waive punishment), Injury (the extent of injuries), and Gender (the victims gender). Figure 3: Comparison of various models on Fraud task with three real world metrics: (a) accuracy, (b) cost, and (c) time \begin{table} \begin{tabular}{c|c|c c c|c c|c c|c c c|c c|c c c} \hline \hline Name & \multicolumn{2}{c}{AVG} & \multicolumn{4}{c}{Indementification\({}^{1}\)} & \multicolumn{4}{c}{Lon\({}^{2}\)} & \multicolumn{4}{c}{Unfair profits\({}^{3}\)} & \multicolumn{4}{c}{Lawsuit for damages\({}^{4}\)} \\ & \(F_{1}\) & Domain & Contract & Expense & Loan & Relat. & Domain & Contract & Relat. & Domain & Contrat & Relat. \\ \hline \hline GPT-4 & 83.1 & 97.0 & 90.4 & 95.8 & 73.2 & 93.3 & 93.9 & 64.9 & 59.4 & 92.8 & 73.9 & 79.1 \\ \hline Nestle-L & 71.5 & 73.4 & 63.9 & 82.9 & 59.2 & 30.5 & 82.4 & 78.0 & 83.7 & 87.4 & 64.4 & 81.0 \\ \hline \multicolumn{10}{l}{1: Price of indemification (\(\mathcal{Y}^{a}\)), 2: Loan (\(\mathcal{Y}^{a}\)), 3: Unfair profits (\(\mathcal{Y}^{a}\)), 4: Lawsuit for damages (\(\mathcal{Y}^{a}\))} \\ \end{tabular} \end{table} Table 3: Performance of GPT-4 and Nestle-L on the four civil IE tasks from LBoxOpen-IE. The numbers indicate \(F_{1}\) scores of individual fields; Domain (the domain of events such as real estate, fire incident, etc), Contract (the type of contract), Expense (the amount of money that plaintiffs spent), Loan (the sum of money borrowed by defendant), and Relat. (the relation between plaintiff and defendant) To improve Nestle even further without exacerbating the waiting time constraints imposed by the rate limit of the GPT-4 API, we try scaling the backbone of the IE module from mt5-large to mt5-xxl (12.9B). 
Unlike commercial LLMs, the IE module can be trained on multiple GPUs for efficient training, and indeed the total training time decreases by 70 minutes when changing the GPU setup from a single NVIDIA A6000 to eight NVIDIA A100s. However, we could not observe a noticeable improvement in \(F_{1}\). ### Nestle can be generalized to other datasets Although we have validated Nestle on KorPrec-IE, the dataset mainly consists of numeric fields from criminal cases. To further validate Nestle, we build LBoxOpen-IE from LBoxOpen (Hwang et al., 2022). LBoxOpen-IE consists of 11 IE tasks, 7 from criminal cases and 4 from civil cases; the individual tasks and their target fields are listed in Tables 2 and 3. [...] GPU inference time is converted to dollars based on Lambdalabs GPU cloud pricing. Note that the API cost increases linearly with the size of the corpus when using a commercial LLM. On the other hand, in Nestle, only the inference cost increases linearly with the size of the corpus. The results show that, for 10,000 documents, the overall cost of Nestle-L is only 4% of that of ChatGPT and 0.4% of that of GPT-4 (Fig. 4(b)). For 1 million documents, the overall cost of Nestle-L is 0.5% of that of ChatGPT and 0.05% of that of GPT-4 (Fig. 4(b)). This highlights the efficiency of Nestle. A more detailed analysis is shown in Table 5. Similarly, the estimation of the overall inference time for 1 million documents reveals that Nestle-L takes 83% or 99% less time compared to ChatGPT or GPT-4, respectively. ## 7 Conclusion We develop Nestle, a no-code tool for statistical analysis of a legal corpus. To find the target corpus, structuralize it, and visualize the structured data, we combine a search engine, a custom end-to-end IE module, and an LLM. Powered by the LLM and the end-to-end IE module, Nestle enables unrestricted, personalized statistical analysis of the corpus entirely through the chat interface and the auxiliary GUI. We extensively validate Nestle on 15 Korean precedent IE tasks and 3 English legal text classification tasks while focusing on three real-world metrics: accuracy, time, and cost. Finally, we want to emphasize that although Nestle is specialized for legal IE tasks, the tool can be easily generalized to other domains and other NLP tasks that can be represented in a text-to-text format. ## Acknowledgments We thank Gene Lee for his critical reading of the manuscript, Minjoon Seo for his insightful comments, and Paul Koo for his assistance in preparing the figures.
2309.03629
Power variations and limit theorems for stochastic processes controlled by fractional Brownian motions
In this paper we establish limit theorems for power variations of stochastic processes controlled by fractional Brownian motions with Hurst parameter $H\leq 1/2$. We show that the power variations of such processes can be decomposed into the mix of several weighted random sums plus some remainder terms, and the convergences of power variations are dominated by different combinations of those weighted sums depending on whether $H<1/4$, $H=1/4$, or $H>1/4$. We show that when $H\geq 1/4$ the centered power variation converges stably at the rate $n^{-1/2}$, and when $H<1/4$ it converges in probability at the rate $n^{-2H}$. We determine the limit of the mixed weighted sum based on a rough path approach developed in \cite{LT20}.
Yanghui Liu, Xiaohua Wang
2023-09-07T10:52:48Z
http://arxiv.org/abs/2309.03629v1
Power variations and limit theorems for stochastic processes controlled by fractional Brownian motions ###### Abstract. In this paper we establish limit theorems for power variations of stochastic processes controlled by fractional Brownian motions with Hurst parameter \(H\leq 1/2\). We show that the power variations of such processes can be decomposed into the mix of several weighted random sums plus some remainder terms, and the convergences of power variations are dominated by different combinations of those weighted sums depending on whether \(H<1/4\), \(H=1/4\), or \(H>1/4\). We show that when \(H\geq 1/4\) the centered power variation converges stably at the rates \(n^{-1/2}\), and when \(H<1/4\) it converges in probability at the rate \(n^{-2H}\). We determine the limit of the mixed weighted sum based on a rough path approach developed in [32]. Key words and phrases:Power variation, discrete rough integral, fractional Brownian motion, controlled rough path, limit theorems, estimation of volatility ## 1. Introduction In this paper we establish limit theorems for power variations of low-regularity processes in a general rough path framework. Recall that for a stochastic process \((y_{t},t\in[0,1])\) the power variation of order \(p>0\) (\(p\)-variation for short) is defined as \[\sum_{k=0}^{n-1}\big{|}y_{t_{k+1}}-y_{t_{k}}\big{|}^{p}, \tag{1.1}\] where \(0=t_{0}<t_{1}<\cdots<t_{n}=1\) is a partition of the time interval \([0,1]\). The power variation has been widely used in quantitative finance for the estimation of volatility and related parameters; see [1, 5, 6, 7, 8, 14] and references therein. When \(y\) is a semimartingale the power variation has been discussed in [4, 6, 9, 25, 28, 29, 41, 42]. The case of stationary Gaussian was treated in [24, 30]. When \(y\) is a Young integral (see [43]) driven by fractional Gaussian processes the power variation has been investigated in [3, 16, 34]. The study of power variations (1.1) in the non-semimartingale case is closely related to the limits of weighted random sums. For example, a key step in [3, 16, 34] is to observe that when \(y\) is a Young integral of the form: \(y_{t}=\int_{0}^{t}z_{s}dx_{s}\) and the integrator \(x\) is Holder continuous of order greater than \(1/2\), the increment \(y_{t_{k+1}}-y_{t_{k}}\) in (1.1) can be replaced by its first-order approximation \(z_{t_{k}}(x_{t_{k+1}}-x_{t_{k}})\). We refer the reader to [12, 13, 15, 32, 31, 35, 36, 37, 38, 40] for discussions about limit theorems of weighted random sums. Recently, empirical evidence was found that security volatility actually has much lower regularity than semimartingales (see [21]). The statement is further supported by other empirical work based on both return data (see [11, 20, 22]) and option data (see [10, 19, 33]). Motivated by these advances in quantitative finance, it is then natural to ask the following question: Is there a limit theorem for power variations when the process is "rougher" than semimartingale, and if so, under what conditions does the limit theorem hold? A main difficulty in the low-regularity case is that the aforementioned relation between \(y\) and its first-order approximation is no longer true. In fact, we will see that the difference between the power variations of a low-regularity process \(y\) and that of its first-order approximation has nonzero contribution to the limit of power variation. 
A second difficulty is that the weighted sums corresponding to (1.1) involve functionals of the forms \(|x|^{p}\) and \(|x|^{p}\cdot\operatorname{sign}(x)\), where \(x\) is the underlying Gaussian process for \(y\) (see Definition 2.1 for our definition of the processes of \(x\) and \(y\)). Both functionals are in the infinite sum of chaos when \(p\) is non-integer and so a direct approach of Malliavin integration by parts for the weighted sums is not possible (Recall that the integration by parts is a crucial step in the study of weighted sums in [12, 35, 36, 37, 38, 40]). In this paper we show that a limit theorem of power variation does hold under the assumption that \(y\) is a process "controlled" by a fractional Brownian motion (fBm for short). The controlled process is a main concept in the theory of rough path, and it is broad enough to contain two important models of stochastic processes we have in mind: the rough integrals and the rough differential equations (see Example 2.3). Our result generalizes [16] to fBms with any Hurst parameter \(H\in(0,1)\). Our main result can be informally stated as follows. The reader is referred to Theorem 4.2 for a more precise statement. **Theorem 1.1**.: _Let \(x\) be a fBm with Hurst parameter \(H\leq 1/2\) and \((y,y^{\prime},\ldots,y^{(\ell-1)})\) be a process controlled by \(x\) almost surely (see Definition 2.1) for some \(\ell\in\mathbb{N}\). Define the function \(\phi(x)=|x|^{p}\), \(x\in\mathbb{R}\) for some constant \(p>0\), and denote \(\phi^{\prime}\) and \(\phi^{\prime\prime}\) the derivatives of \(\phi\), and \(c_{p}\) and \(\sigma\) are constants given in (3.56) and (3.61), respectively. Let_ \[U_{t}^{n}=n^{pH-1}\sum_{0\leq t_{k}<t}\phi(y_{t_{k+1}}-y_{t_{k}})-c_{p}\int_{0 }^{t}\phi(y^{\prime}_{u})du\qquad t\in[0,1].\] _Let \(W\) be a standard Brownian motion independent of \(x\). Then (i) For \(1/4<H\leq 1/2\), \(p\in[3,\infty)\cup\{2\}\) and \(\ell\geq 4\) we have the convergence in law:_ \[n^{1/2}U_{1}^{n}\to\sigma\int_{0}^{1}\phi(y^{\prime}_{u})dW_{u},\qquad\text{ as }n\to\infty.\] _(ii) For \(H=1/4\), \(p\in[5,\infty)\cup\{2,4\}\) and \(\ell\geq 6\) the following convergence in law holds:_ \[n^{1/2}U_{1}^{n}\to\sigma\int_{0}^{1}\phi(y^{\prime}_{u})dW_{u}-\frac{c_{p}}{ 8}\int_{0}^{1}\phi^{\prime\prime}(y^{\prime}_{u})(y^{\prime\prime}_{u})^{2}du +\frac{(p-2)c_{p}}{24}\int_{0}^{1}\phi^{\prime}(y^{\prime}_{u})y^{\prime \prime\prime}_{u}du\] _as \(n\to\infty\). (iii) For \(H<1/4\), \(p\in[5,\infty)\cup\{2,4\}\) and \(\ell\geq 6\) we have the convergence in probability:_ \[n^{2H}U_{1}^{n}\to-\frac{c_{p}}{8}\int_{0}^{1}\phi^{\prime\prime}(y^{\prime}_ {u})(y^{\prime\prime}_{u})^{2}du+\frac{(p-2)c_{p}}{24}\int_{0}^{1}\phi^{\prime }(y^{\prime}_{u})y^{\prime\prime\prime}_{u}du\qquad\text{as }n\to\infty.\] As mentioned previously, the limit of power variation in the low-regularity case is not solely determined by the first-order approximation of \(y\). A first step of our proof is thus to consider the higher-order approximation of \(y\) and to estimate the corresponding weighted random sums and remainder terms. The convergences of mixed weighted sums and power variation are based on a rough path approach developed in [32]. In particular, we will see that the rough path approach allows us to avoid the application of Malliavin integration by parts for functionals of infinite chaos. 
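As an informal illustration of the first-order content of Theorem 1.1 (not part of the arguments below), the following Python snippet simulates an fBm by a Cholesky factorization of the fractional-Gaussian-noise covariance and compares the rescaled \(p\)-variation \(n^{pH-1}\sum_{k}|y_{t_{k+1}}-y_{t_{k}}|^{p}\) of the controlled path \(y_{t}=\sin(x_{t})\) (for which \(y^{\prime}_{t}=\cos(x_{t})\)) with \(c_{p}\int_{0}^{1}|y^{\prime}_{u}|^{p}du\). The test path and all numerical parameters are illustrative choices, not taken from the paper.

```python
# Illustrative numerical check (not from the paper) of the first-order statement
#   n^{pH-1} * sum_k |y_{t_{k+1}} - y_{t_k}|^p  ~  c_p * int_0^1 |y'_u|^p du
# for a path controlled by an fBm.  The choice y_t = sin(x_t), the Cholesky
# simulation of the fBm, and the parameters below are assumptions for exposition.
import numpy as np
from math import gamma, pi, sqrt


def fbm_path(n: int, H: float, rng: np.random.Generator) -> np.ndarray:
    """fBm sampled on t_k = k/n, built from a Cholesky factor of the fGn covariance."""
    k = np.arange(n)
    g = 0.5 * ((k + 1.0) ** (2 * H) + np.abs(k - 1.0) ** (2 * H) - 2.0 * k ** (2 * H))
    cov = g[np.abs(k[:, None] - k[None, :])] * n ** (-2.0 * H)
    chol = np.linalg.cholesky(cov + 1e-10 * cov[0, 0] * np.eye(n))  # small jitter
    return np.concatenate(([0.0], np.cumsum(chol @ rng.standard_normal(n))))


H, p, n = 0.4, 2.0, 2000                    # H > 1/4 and p = 2 are admissible in Theorem 1.1 (i)
rng = np.random.default_rng(0)
x = fbm_path(n, H, rng)
y = np.sin(x)                               # controlled path with y'_t = cos(x_t)
c_p = 2.0 ** (p / 2) / sqrt(pi) * gamma((p + 1) / 2)

scaled_p_variation = n ** (p * H - 1) * np.sum(np.abs(np.diff(y)) ** p)
predicted_limit = c_p * np.mean(np.abs(np.cos(x[:-1])) ** p)   # Riemann sum for the integral
print(f"scaled p-variation: {scaled_p_variation:.3f}  vs  c_p * int |y'|^p du: {predicted_limit:.3f}")
```

For \(H>1/4\) the two printed quantities should differ only by a fluctuation of order \(n^{-1/2}\), which is precisely the regime quantified by the stable central limit theorem in part (i) of Theorem 1.1.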
The paper is structured as follows: In Section 2 we introduce the concept of discrete rough paths and discrete rough integrals and recall some basic results of the rough paths theory. In Section 3 we derive some useful estimates and limit theorem results for weighted random sums related to fBm. In Section 4 we prove the limit theorem of power variation for processes controlled by fBm. ### Notation Let \(\pi:0=t_{0}<t_{1}<\dots<t_{n}=1\) be a partition of \([0,1]\). For \(s,t\in[0,1]\) such that \(s<t\), we write \(\llbracket s,t\rrbracket\) for the discrete interval that consists of the \(t_{k}\)'s such that \(t_{k}\in[s,t]\) and the two endpoints \(s\) and \(t\). Namely, \(\llbracket s,t\rrbracket=\{t_{k}:s\leq t_{k}\leq t\}\cup\{s,t\}\). For \(N\in\mathbb{N}=\{1,2,\dots\}\) we denote the discrete simplex \(\mathcal{S}_{N}(\llbracket s,t\rrbracket)=\{(u_{1},\dots,u_{N})\in\llbracket s,t\rrbracket^{N}:u_{1}<\dots<u_{N}\}\). Similarly, we denote the continuous simplex: \(\mathcal{S}_{N}([s,t])=\{(u_{1},\dots,u_{N})\in[s,t]^{N}:u_{1}<\dots<u_{N}\}\). Throughout the paper we work on a probability space \((\Omega,\mathcal{F},P)\). If \(X\) is a random variable, we denote by \(|X|_{L_{p}}\) the \(L_{p}\)-norm of \(X\). The letter \(K\) stands for a constant independent of any important parameters, which can change from line to line. We write \(A\lesssim B\) if there is a constant \(K>0\) such that \(A\leq KB\). We denote by \([a]\) the integer part of \(a\). ## 2. Preliminary results In this section, we introduce the concept of discrete rough paths and discrete rough integrals, and recall some basic results of the rough paths theory. In the second part of the section we recall the elements of Wiener chaos expansion and fractional Brownian motion. ### Controlled rough paths and algebraic properties This subsection is devoted to introducing the main rough path notations which will be used in the sequel. The reader is referred to [17, 18] for an introduction to the rough path theory. Recall that the continuous simplex \(\mathcal{S}_{k}([0,1])\) is defined in Section 1.1. We denote by \(\mathcal{C}_{k}\) the set of functions \(g:\mathcal{S}_{k}([0,1])\to\mathbb{R}\) such that \(g_{u_{1}\cdots u_{k}}=0\) whenever \(u_{i}=u_{i+1}\) for some \(i\leq k-1\). Such a function will be called a \((k-1)\)-_increment_. We define the operator \(\delta\) as follows: \[\delta:\mathcal{C}_{k}\to\mathcal{C}_{k+1},\qquad(\delta g)_{u_{1}\cdots u_{k+1}}=\sum_{i=1}^{k+1}(-1)^{i}g_{u_{1}\cdots\hat{u}_{i}\cdots u_{k+1}}\,,\] where \(\hat{u}_{i}\) means that this particular argument is omitted. For example, for \(f\in\mathcal{C}_{1}\) and \(g\in\mathcal{C}_{2}\) we have \[\delta f_{st}=f_{t}-f_{s}\quad\text{ and }\quad\delta g_{sut}=g_{st}-g_{su}-g_{ut}. \tag{2.1}\] Let us now introduce a general notion of controlled rough process which will be used throughout the paper. **Definition 2.1**.: _Let \(x\) and \(y,y^{\prime},y^{\prime\prime},\ldots,y^{(\ell-1)}\) be real-valued continuous processes on \([0,1]\) and assume that the initial values of \(y^{(i)}\) are equal to zero, namely, \(y^{(i)}_{0}=0\), \(i=0,\ldots,\ell-1\). Denote the 2-increments: \(x^{i}_{st}=(\delta x_{st})^{i}/i!\), \((s,t)\in\mathcal{S}_{2}([0,1])\), \(i=0,1,\ldots,\ell-1\). For convenience, we also write \(y^{(0)}=y\), \(y^{(1)}=y^{\prime}\), \(y^{(2)}=y^{\prime\prime}\),..., and \(\mathbf{y}=(y^{(0)},\ldots,y^{(\ell-1)})\).
We define the remainder processes_ \[r^{(\ell-1)}_{st} = \delta y^{(\ell-1)}_{st}\] \[r^{(k)}_{st} = \delta y^{(k)}_{st}-y^{(k+1)}_{s}x^{1}_{st}-\cdots-y^{(\ell-1)}_{s }x^{\ell-k-1}_{st},\qquad k=0,1,\ldots,\ell-2, \tag{2.2}\] _for \((s,t)\in\mathcal{S}_{2}([0,1])\). We call \(\mathbf{y}\) a rough path controlled by \((x,\ell,\alpha)\) almost surely for some constant \(\alpha\in(0,1)\) if for any \(\varepsilon>0\) there is a finite random variable \(G_{\mathbf{y}}\equiv G_{\mathbf{y},\varepsilon}\) (that is, \(G_{\mathbf{y}}<\infty\) almost surely) such that \(|r^{(k)}_{st}|\leq G_{\mathbf{y}}(t-s)^{(\ell-k)\alpha-\varepsilon}\) for all \((s,t)\in\mathcal{S}_{2}([0,1])\) and \(k=0,1,\ldots,\ell-1\). We call \(\mathbf{y}\) a rough path controlled by \((x,\ell,\alpha)\) in \(L_{p}\) for some \(p>0\) if there exist constants \(K>0\), \(\alpha\in(0,1)\) such that \(|r^{(k)}_{st}|_{L_{p}}\leq K(t-s)^{(\ell-k)\alpha}\) for all \((s,t)\in\mathcal{S}_{2}([0,1])\) and \(k=0,\ldots,\ell-1\)._ _Remark 2.2_.: In some of our computations below we will rephrase (2.2) for \(k=0\) as the following identity for \((s,t)\in\mathcal{S}_{2}([0,1])\): \[y_{t} = \sum_{i=0}^{\ell-1}y^{(i)}_{s}x^{i}_{st}+r^{(0)}_{st}, \tag{2.3}\] where we take \(x^{0}\equiv 1\) by convention. In the following we give some examples of controlled rough paths defined in Definition 2.1. **Example 2.3**.: _Take \(\alpha\in(0,1)\). Let \(x_{t}\), \(t\in[0,1]\) be a real-valued continuous process whose sample paths are \((\alpha-\varepsilon)\)-Holder continuous almost surely for any \(\varepsilon>0\). For a continuous function \(V\) defined on \(\mathbb{R}\), we define the differential operator \(\mathcal{L}_{V}\) such that for any differentiable function \(f\) we have \(\mathcal{L}_{V}f=Vf^{\prime}\). Denote \(\mathcal{L}_{V}^{i}=\mathcal{L}_{V}\circ\cdots\circ\mathcal{L}_{V}\) the \(i\)th iteration of \(\mathcal{L}_{V}\)._ _(i) Let_ \(V\) _be a sufficiently smooth function on_ \(\mathbb{R}\)_. Set_ \(z^{(i)}_{t}=\mathcal{L}_{V}^{i}V(x_{t})\) _for_ \(i=0,\ldots,\ell-1\)_. Then_ \((z,z^{\prime},\ldots,z^{(\ell-1)})\) _is a rough path controlled by_ \((x,\ell,\alpha)\) _almost surely._ _(ii) Let_ \(y\) _be the solution of the differential equation:_ \(dy_{t}=b(y_{t})dt+V(y_{t})dx_{t}\) _in the sense of_ _[_18_, Theorem 12.10]__, and assume that the coefficient functions_ \(b\) _and_ \(V\) _are sufficiently smooth. Note that when_ \(x\) _is a Brownian motion the differential equation coincides the classical Stratonovich-type SDE. Let_ \(y^{\prime}_{t}=V(y_{t})\) _and_ \(y^{(i)}_{t}=\mathcal{L}_{V}^{i-1}V(y_{t})\)_,_ \(i=2,\ldots,\ell-1\)_. Then_ \((y,y^{\prime},\ldots,y^{(\ell-1)})\) _is a rough path controlled by_ \((x,\ell,\alpha)\) _almost surely._ _(iii) Let_ \(\ell=[1/\alpha]\) _and let_ \((z,z^{\prime},\ldots,z^{(\ell-1)})\) _be a rough path controlled by_ \((x,\ell,\alpha)\) _almost surely. Let_ \(y\) _be the rough integral_ \(y_{t}:=\int_{0}^{t}z_{s}dx_{s}\)_,_ \(t\in[0,1]\) _in the sense of_ _[_23_]__. An explicit example of rough integral is_ \(y_{t}=\int_{0}^{t}V(x_{s})dx_{s}\)_. Denoting_ \(y^{\prime}=z\)_,...,_ \(y^{(\ell)}=z^{(\ell-1)}\)_, then_ \((y,y^{\prime},\ldots,y^{(\ell)})\) _is a rough path controlled by_ \((x,\ell+1,\alpha)\) _almost surely._ By Definition 2.1 it is easy to show that the partial sequence of \(\mathbf{y}\) and the functions of \(y\) are both controlled rough paths. 
We state this fact and omit the proof for sake of conciseness: **Lemma 2.4**.: _Let \(\mathbf{y}\) be a rough path controlled by \((x,\ell,\alpha)\) almost surely (resp., in \(L_{p}\) for some \(p>0\)). Then_ _(i) For any_ \(i=0,\ldots,\ell-1\)_,_ \((y^{(i)},\ldots,y^{(\ell-1)})\) _is a rough path controlled by_ \((x,\ell-i,\alpha)\) _almost surely (resp., in_ \(L_{p}\)_)._ _(ii) Let_ \(f:\mathbb{R}\to\mathbb{R}\) _be a continuous function which has derivatives up to order_ \((L-1)\) _and the_ \((L-1)\)_th derivative_ \(f^{(L-1)}\) _is Lipschitz. Let_ \(z_{s}^{(0)}=f(y_{s})\) _and_ \[z_{s}^{(r)}=\sum_{i=1}^{r}\frac{f^{(i)}(y_{s})}{i!}\sum_{\begin{subarray}{c}1 \leq j_{1},\ldots,j_{i}\leq(L\wedge\ell)-1\\ j_{1}+\cdots+j_{i}=r\end{subarray}}\frac{r!}{j_{1}!\cdots j_{i}!}y_{s}^{(j_{1}) }\cdots y_{s}^{(j_{i})}\] _for_ \(s\in[0,1]\) _and_ \(r=0,\ldots,(L\wedge\ell)-1\)_. For example, we have_ \(z^{(1)}=f^{\prime}(y_{s})y_{s}^{\prime}\) _and_ \(z^{(2)}=f^{\prime\prime}(y_{s})(y_{s}^{\prime})^{2}+f^{\prime}(y_{s})y_{s}^{ \prime\prime}\)_. Then_ \((z^{(0)},\ldots,z^{((L\wedge\ell)-1)})\) _is a rough path controlled by_ \((x,L\wedge\ell,H)\) _almost surely (resp., in_ \(L_{p}\)_)._ Let us also recall an algebraic result from [32, Lemma 2.5]. **Lemma 2.5**.: _Let \(x\), \(\mathbf{y}\) and \(r^{(i)}\), \(i=0,\ldots,\ell-1\) be continuous processes satisfying (2.2). Then we have the following relation:_ \[\delta r_{\text{sut}}^{(0)} = \sum_{i=1}^{\ell-1}r_{\text{sut}}^{(i)}x_{\text{ut}}^{i}\,.\] ### Discrete rough integrals We introduce some "discrete" integrals defined as Riemann type sums. Namely, let \(f\) and \(g\) be functions on \(\mathcal{S}_{2}([0,1])\). Let \(\mathcal{D}_{n}=\{0=t_{0}<\cdots<t_{n}=1\}\) be a generic partition of \([0,1]\). We define the discrete integral of \(f\) with respect to \(g\) as: \[\mathcal{J}_{s}^{t}(f,g) := \sum_{s\leq t_{k}<t}f_{st_{k}}\otimes g_{t_{k}t_{k+1}},\qquad(s,t) \in\mathcal{S}_{2}([0,1]), \tag{2.4}\] where we use the convention that \(\mathcal{J}_{s}^{t}(f,g)=0\) whenever \(\{t_{k}:s\leq t_{k}<t\}=\emptyset\). We highlight that \(f\) in (2.4) is a function of two variables. Similarly, if \(f\) is a path on \([0,1]\), then we define the discrete integral of \(f\) with respect to \(g\) as: \[\mathcal{J}_{s}^{t}(f,g) := \sum_{s\leq t_{k}<t}f_{t_{k}}\otimes g_{t_{k}t_{k+1}},\qquad(s,t) \in\mathcal{S}_{2}([0,1]). \tag{2.5}\] ### Chaos expansion and fractional Brownian motions Let \(d\gamma(x)=(2\pi)^{-1/2}e^{-x^{2}/2}dx\) be the standard Gaussian measure on the real line, and let \(f\in L_{2}(\gamma)\) be such that \(\int_{\mathbb{R}}f(x)d\gamma(x)=0\). It is well-known that the function \(f\) can be expanded into a series of Hermite polynomials as follows: \[f(x) = \sum_{q=d}^{\infty}a_{q}H_{q}(x),\] where \(d\geq 1\) is some integer and \(H_{q}(x)=(-1)^{q}e^{\frac{x^{2}}{2}}\frac{d^{q}}{dx^{q}}e^{-\frac{x^{2}}{2}}\) is the Hermite polynomial of order \(q\). Recall that we have the iteration formula: \(H_{q+1}(t)=xH_{q}(x)-H_{q}^{\prime}(x)\). If \(a_{d}\neq 0\), then \(d\) is called the _Hermite rank_ of the function \(f\). Since \(f\in L_{2}(\gamma)\), we have \(\|f\|_{L_{2}(\gamma)}^{2}=\sum_{q=d}^{\infty}|a_{q}|^{2}q!<\infty\). Let \(x\) be a standard fractional Brownian motion (fBm for short) with Hurst parameter \(H\in(0,1)\), that is \(x\) is a continuous Gaussian process such that \[\mathbb{E}[x_{s}x_{t}]=\frac{1}{2}(|s|^{2H}+|t|^{2H}-|s-t|^{2H}).\] The fBm \(x\) is almost surely \(\gamma\)-Holder continuous for all \(\gamma<H\). 
Define the covariance function \(\rho\) by \[\rho(k)=\mathbb{E}(\delta x_{01}\delta x_{k,k+1}). \tag{2.6}\] Then, whenever \(H<\frac{1}{2}\), we have \(\sum_{k\in\mathbb{Z}}\rho(k)=0\). Let \(\mathcal{H}\) be the completion of the space of indicator functions with respect to the inner product \(\langle\mathbf{1}_{[u,v]},\mathbf{1}_{[s,t]}\rangle_{\mathcal{H}}=\mathbb{E} (\delta x_{uv}\delta x_{st})\). The following result shows that given two sequences of stable convergence and convergence in probability, respectively, their joint sequence is also of stable convergence. Recall that \(X_{n}\) is called convergent to \(X\) stably if \((X_{n},Z)\to(X,Z)\) in distribution as \(n\to\infty\) for any \(Z\in\mathcal{F}\). The reader is referred to [2, 27] for an introduction to stable convergence. **Lemma 2.6**.: _Let \(Y_{n}^{(1)}\), \(Y_{n}^{(2)}\), \(n\in\mathbb{N}\) be two sequences of random variables and denote the \(\sigma\)-field: \(\mathcal{F}^{Y}=\sigma\{Y_{n}^{(i)},i=1,2,n\in\mathbb{N}\}\). Let \(Y^{(1)}\) be a random variable such that the stable convergence \((Y_{n}^{(1)},Z)\to(Y^{(1)},Z)\) as \(n\to\infty\) holds for any \(Z\in\mathcal{F}^{Y}\). Suppose that we have the convergence in probability \(Y_{n}^{(2)}\to Y^{(2)}\) as \(n\to\infty\) for some random variable \(Y^{(2)}\). Then the stable convergence \((Y_{n}^{(1)},Y_{n}^{(2)},Z)\to(Y^{(1)},Y^{(2)},Z)\) as \(n\to\infty\) holds for any \(Z\in\mathcal{F}^{Y}\). In particular, we have the stable convergence \((Y_{n}^{(1)}+Y_{n}^{(2)},Z)\to(Y^{(1)}+Y^{(2)},Z)\) as \(n\to\infty\) for any \(Z\in\mathcal{F}^{Y}\)._ Proof.: Since \(Y_{n}^{(2)}-Y^{(2)}\to 0\) in probability it follows that the two sequences \((Y_{n}^{(1)},Y^{(2)}+(Y_{n}^{(2)}-Y^{(2)}),Z)\) and \((Y_{n}^{(1)},Y^{(2)},Z)\) have the same limit. On the other hand, by the stable convergence of \(Y_{n}^{(1)}\) and the fact that \(Y^{(2)}\in\mathcal{F}^{Y}\) we have the convergence \((Y_{n}^{(1)},Y^{(2)},Z)\to(Y^{(1)},Y^{(2)},Z)\). We conclude that the convergence \((Y_{n}^{(1)},Y_{n}^{(2)},Z)\to(Y^{(1)},Y^{(2)},Z)\) as \(n\to\infty\) holds. This completes the proof. ## 3. Upper-bound estimate and limit theorem for weighted random sums In this section we derive some useful estimates and limit theorem results for weighted random sums related to fBm. ### Upper-bound estimate of weighted random sums We prove a general upper-bound estimate result for weighted random sums. In the second part of the subsection we apply this estimate result to weighted sums involving fBms. Recall that for a continuous process \(x_{t}\), \(t\in[0,1]\) and an integer \(i\in\mathbb{N}\) we denote the \(2\)-increment: \(x_{st}^{i}=(\delta x_{st})^{i}/i!\), \((s,t)\in\mathcal{S}_{2}([0,1])\). **Proposition 3.1**.: _Let \(x\) be a continuous process on \([0,1]\). Let \(\mathbf{y}=(y^{(0)},\ldots,y^{(\ell-1)})\) be a rough path on \([0,1]\) controlled by \((x,\ell,\alpha)\) in \(L_{2}\) for some \(\alpha>0\) and \(\ell\in\mathbb{N}\), and let \((r^{(i)},i=0,\ldots,\ell-1)\) be the remainder processes of \(\mathbf{y}\) defined in Definition 2.1. Let \(h\) be a \(1\)-increment defined on \(\mathcal{S}_{2}(\llbracket 0,1\rrbracket)\). Let \(\beta_{i}\in[0,1]\), \(i=0,1,\ldots,\ell-1\) be some constants such that_ \[\beta:=\min_{i=0,\ldots,\ell-1}\{(\ell-i)\alpha+\beta_{i}\}>1. \tag{3.1}\] _Suppose that there exists a constant \(K>0\) such that_ \[|\mathcal{J}_{s}^{t}(x^{i},h)|_{L_{2}}\leq K(t-s)^{\beta_{i}} \tag{3.2}\] _for any \((s,t)\in\mathcal{S}_{2}([0,1])\) satisfying \(t-s\geq 1/n\). 
Then we can find a constant \(K>0\) independent of \(n\) such that the following estimates hold:_ \[|\mathcal{J}_{s}^{t}(r^{(0)},h)|_{L_{1}}\leq K(t-s)^{\beta}\qquad\text{and} \qquad|\mathcal{J}_{s}^{t}(y,h)|_{L_{1}}\leq K(t-s)^{\beta_{0}} \tag{3.3}\] _for \((s,t)\in\mathcal{S}_{2}([0,1])\) such that \(t-s\geq 1/n\)._ Proof.: Denote \(R_{st}:=\mathcal{J}_{s}^{t}(r^{(0)},h)\) for \((s,t)\in\mathcal{S}_{2}([0,1])\). Recall that the operator \(\delta\) for \(2\)-increment is defined in (2.1). So, for \((s,u,t)\in\mathcal{S}_{3}([0,1])\), we have \[\delta R_{sut} = \mathcal{J}_{s}^{t}(r^{(0)},h)-\mathcal{J}_{s}^{u}(r^{(0)},h)- \mathcal{J}_{u}^{t}(r^{(0)},h) \tag{3.4}\] \[= \sum_{u\leq t_{k}<t}(r^{(0)}_{st_{k}}-r^{(0)}_{ut_{k}})h_{t_{k}t _{k+1}}.\] Note that by definition of \(\delta r^{(0)}\) we have the relation \(r^{(0)}_{st_{k}}-r^{(0)}_{ut_{k}}=\delta r^{(0)}_{sut_{k}}+r^{(0)}_{su}\). Substituting this into (3.4) and then invoking Lemma 2.5 we obtain \[\delta R_{sut} = r^{(0)}_{su}\mathcal{J}_{u}^{t}(1,h)+\sum_{i=1}^{\ell-1}r^{(i)}_ {su}\mathcal{J}_{u}^{t}(x^{i},h). \tag{3.5}\] We can now bound \(\delta R\) as follows: Taking the \(L_{1}\)-norm on both sides of (3.5) gives \[|\delta R_{sut}|_{L_{1}}\leq\sum_{i=0}^{\ell-1}|r^{(i)}_{su}|_{L_{2}}\cdot| \mathcal{J}_{u}^{t}(x^{i},h)|_{L_{2}}. \tag{3.6}\] Applying condition (3.2) to \(|\mathcal{J}_{u}^{t}(x^{i},h)|_{L_{2}}\) in (3.6) and invoking the relation \(|r^{(i)}_{st}|_{L_{p}}\leq K(t-s)^{(\ell-i)\alpha}\) given in Definition 2.1 we get \[|\delta R_{sut}|_{L_{1}}\lesssim\sum_{i=0}^{\ell-1}(u-s)^{(\ell-i)\alpha}(t-u) ^{\beta_{i}}\lesssim(t-s)^{\beta} \tag{3.7}\] for \((s,u,t)\in\mathcal{S}_{3}([0,1])\) such that \(t-u\geq 1/n\), where \(\beta\) is defined in (3.1). Take \((s,t)\in\mathcal{S}_{2}([0,1])\) such that \(t-s\geq 1/n\). Consider the partition \(\llbracket s,t\rrbracket\) of the interval \([s,t]\): \(s<t_{k}<\cdots<t_{k^{\prime}}<t\), where \(k\) and \(k^{\prime}\) are such that \(t_{k-1}\leq s<t_{k}\) and \(t_{k^{\prime}}<t\leq t_{k^{\prime}+1}\). In the following we show that (3.7) holds for all \((u_{1},u_{2},u_{3})\in\mathcal{S}_{3}(\llbracket s,t\rrbracket)\). In view of (3.7) it remains to show that the estimate (3.7) holds for \(|\delta R_{ut_{k^{\prime}}t}|_{L_{1}}\), \(u\in\llbracket s,t\rrbracket:u\leq t_{k^{\prime}}\). Indeed, by definition (2.4) we have \(\mathcal{J}_{t_{k^{\prime}}}^{t}(x^{i},h)=0\) and \[|\mathcal{J}_{t_{k^{\prime}}}^{t}(1,h)|_{L_{2}}=|h_{t_{k^{\prime}}t_{k^{\prime }+1}}|_{L_{2}}\leq(1/n)^{\beta_{0}}\leq(t-s)^{\beta_{0}}.\] Applying these estimates to the right-hand side of (3.6) we obtain the estimate (3.7) for \(|\delta R_{ut_{k^{\prime}}t}|_{L_{1}}\). By (2.4) it is clear that for any two consecutive partition points \(u,v\) in \([\![s,t]\!]\) and \(u<v\) we have \(R_{uv}=0\). Applying the discrete sewing lemma [32, Lemma 2.5] to \(R\) on the partition \([\![s,t]\!]\) and then invoking the estimate (3.7) of \(\delta R\) on \(\mathcal{S}_{3}([\![s,t]\!])\) we obtain \[|R_{st}|_{L_{1}}\lesssim(t-s)^{\beta}.\] This proves the first estimate in (3.3). Note that by substituting the expression (2.3) into \(\mathcal{J}_{s}^{t}(y,h)\) we get the relation \[\mathcal{J}_{s}^{t}(y,h)=\sum_{i=0}^{\ell-1}y_{s}^{(i)}\mathcal{J}_{s}^{t}(x^{ i},h)+\mathcal{J}_{s}^{t}(r^{(0)},h). \tag{3.8}\] Applying (3.2) and the first estimate in (3.3) to the right-hand side of (3.8) we obtain the desired estimate of \(\mathcal{J}_{s}^{t}(y,h)\) in (3.3). 
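To see the mechanism of Proposition 3.1 in the simplest nontrivial case, here is the special case \(\ell=2\), written out with the same notation (this is only a specialization of the proof above, not an additional assumption): the decomposition (3.8) reads \[\mathcal{J}_{s}^{t}(y,h)=y_{s}\,\mathcal{J}_{s}^{t}(1,h)+y^{\prime}_{s}\,\mathcal{J}_{s}^{t}(x^{1},h)+\mathcal{J}_{s}^{t}(r^{(0)},h),\] and for \(R_{st}=\mathcal{J}_{s}^{t}(r^{(0)},h)\) relation (3.5) reduces to \(\delta R_{sut}=r^{(0)}_{su}\mathcal{J}_{u}^{t}(1,h)+r^{(1)}_{su}\mathcal{J}_{u}^{t}(x^{1},h)\), so that \[|\delta R_{sut}|_{L_{1}}\lesssim(u-s)^{2\alpha}(t-u)^{\beta_{0}}+(u-s)^{\alpha}(t-u)^{\beta_{1}}\lesssim(t-s)^{\min(2\alpha+\beta_{0},\,\alpha+\beta_{1})}.\] The discrete sewing lemma therefore applies precisely when \(\min(2\alpha+\beta_{0},\alpha+\beta_{1})>1\), which is condition (3.1) with \(\ell=2\).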
In the next result we apply Proposition 3.1 to weighted sums which involve fBms. **Proposition 3.2**.: _Let \(x\) be a one-dimensional fBm with Hurst parameter \(H\leq 1/2\). Suppose that \((y,y^{\prime},\ldots,y^{(\ell-1)})\), \(\ell\in\mathbb{N}\) is a process controlled by \((x,\ell,H-\varepsilon)\) in \(L_{2}\) for some sufficiently small \(\varepsilon>0\). Let \(f=\sum_{q=d}^{\infty}a_{q}H_{q}\in L_{2}(\mathbb{R},\gamma)\) with Hermite rank \(d>0\) and \(f\) belongs to the Soblev space \(W^{2(\ell-1),2}(\mathbb{R},\gamma)\), where \(\gamma\) denotes the standard Gaussian measure on the real line; see e.g. Page 28 in [39]. We define a family of increments \(\{h^{n};n\geq 1\}\) by:_ \[h^{n}_{st}:=\sum_{s\leq t_{k}<t}f(n^{H}\delta x_{t_{k}t_{k+1}}),\qquad(s,t) \in\mathcal{S}_{2}([\![0,1]\!]). \tag{3.9}\] _(i) Suppose that \(d>\frac{1}{2H}\) and that \(\ell\) is the least integer such that \(\ell H+\frac{1}{2}>1\), that is \(\ell=[\frac{1}{2H}]+1\). Then there is a constant \(K\) independent of \(n\) such that_ \[|\mathcal{J}_{s}^{t}(y,h^{n})|_{L_{1}} \leq Kn^{1/2}(t-s)^{1/2} \tag{3.10}\] _for all \((s,t)\in\mathcal{S}_{2}([0,1])\) satisfying \(t-s\geq 1/n\). (ii) Suppose that \(d\leq\frac{1}{2H}\) and that \(\ell=d+1\). Then there is a constant \(K\) independent of \(n\) such that_ \[|\mathcal{J}_{s}^{t}(y,h^{n})|_{L_{1}} \leq Kn^{1-dH}(t-s)^{1-dH} \tag{3.11}\] _for all \((s,t)\in\mathcal{S}_{2}([0,1])\) satisfying \(t-s\geq 1/n\)._ Proof.: We assume that \(d>\frac{1}{2H}\). In the following we prove (i) by applying Proposition 3.1. We first recall the estimate in [32, equation (4.24)]: \[|\mathcal{J}_{s}^{t}(x^{i},h^{n})|_{L_{2}}\leq Kn^{1/2}(t-s)^{iH+1/2},\qquad(s,t)\in\mathcal{S}_{2}([0,1]):t-s\geq 1/n, \tag{3.12}\] for all \(i=0,\ldots,[\frac{1}{2H}]\). The estimate (3.12) implies that relation (3.2) holds for \(h:=h^{n}/\sqrt{n}\) and \(\beta_{i}:=iH+1/2\). Take \(\alpha=H-\varepsilon\) and recall that \(\ell\) is the least integer such that \(\ell H+1/2>1\), or \(\ell H>1/2\). It is thus readily checked that condition (3.1) is satisfied. We conclude that (3.10) holds. We turn to the case when \(d\leq\frac{1}{2H}\). As before, our estimate will be an application of Proposition 3.1. We first derive an estimate of \(|\mathcal{J}_{s}^{t}(x^{i},h^{n})|_{L_{2}}\). For this purpose we consider the following decomposition \[h^{n}_{st}=h^{n,(1)}_{st}+h^{n,(2)}_{st}, \tag{3.13}\] where \[h^{n,(1)}_{st}=\sum_{s\leq t_{k}<t}f_{1}(n^{H}\delta x_{t_{k}t_{k+1}}),\qquad h^{ n,(2)}_{st}=\sum_{s\leq t_{k}<t}f_{2}(n^{H}\delta x_{t_{k}t_{k+1}})\] and \[f_{1}(x)=\sum_{q=[\frac{1}{2H}]+1}^{\infty}a_{q}H_{q}(x),\qquad \qquad f_{2}(x)=\sum_{q=d}^{[\frac{1}{2H}]}a_{q}H_{q}(x).\] Note that the Hermite rank of \(f_{1}\) is greater than \(\frac{1}{2H}\). So we can apply (3.12) to get the estimate \[|\mathcal{J}^{t}_{s}(x^{i},h^{n,(1)})|_{L_{2}}\leq Kn^{1/2}(t-s)^{iH+1/2} \tag{3.14}\] for \((s,t)\in\mathcal{S}_{2}([0,1]):t-s\geq 1/n\) and \(i=0,\ldots,d\). By assumption we have \(1/2-dH>0\). It follows that \[1\leq n^{1/2-dH}(t-s)^{1/2-dH}, \tag{3.15}\] and therefore we can enlarge the bound in (3.14) to be: \[|\mathcal{J}^{t}_{s}(x^{i},h^{n,(1)})|_{L_{2}}\leq Kn^{1-dH}(t-s)^{1+iH-dH}. \tag{3.16}\] Let us turn to the estimate of \(|\mathcal{J}^{t}_{s}(x^{i},h^{n,(2)})|_{L_{2}}\). We first have the bound \[|\mathcal{J}^{t}_{s}(x^{i},h^{n,2})|_{L_{2}}\leq\sum_{q=d}^{[\frac{1}{2H}]}|a_ {q}|\cdot|\mathcal{J}^{t}_{s}(x^{i},h^{n,q})|_{L_{2}},\qquad(s,t)\in\mathcal{ S}_{2}([0,1]). 
\tag{3.17}\] Recall the estimate in [32, Lemma 4.11 (ii)]: \[|\mathcal{J}^{t}_{s}(x^{i},h^{n,q})|_{L_{2}}\lesssim\begin{cases}n^{1-qH}(t-s )^{1+iH-qH}&\text{when $q\leq i$}\\ n^{1/2}(t-s)^{iH+1/2}&\text{when $q>i$}\end{cases} \tag{3.18}\] for \((s,t)\in\mathcal{S}_{2}([0,1]):t-s\geq 1/n\), \(i=0,\ldots,d\) and \(q<\frac{1}{2H}\). Substituting (3.18) into the right-hand side of (3.17) and then applying the relation \(1\leq n(t-s)\) we obtain that \[|\mathcal{J}^{t}_{s}(x^{i},h^{n,2})|_{L_{2}}\leq Kn^{1-dH}(t-s)^{1+iH-dH}, \qquad(s,t)\in\mathcal{S}_{2}([0,1]) \tag{3.19}\] for \(i=0,\ldots,d\). Combining the two estimates (3.16) and (3.19) and taking into account the decomposition (3.13), we obtain that (3.2) holds for \(\beta_{i}:=1+iH-dH\) and \(h:=h^{n}/n^{1-dH}\). It is readily checked that condition (3.1) is satisfied for \(\ell=d+1\). Applying Proposition 3.1 we thus conclude the desired estimate (3.11). ### Convergence of Riemann sum Let \(y\) be a continuous process controlled by the fBm \(x\). This subsection is devoted to the convergence of Riemann sum for the regular integral \(\int_{0}^{t}y_{u}du\). For convenience we will consider the uniform partition of \([0,1]\): \(t_{i}=i/n\), \(i=0,1,\ldots,n\). We start by proving the following weighted limit theorem result: **Lemma 3.3**.: _Let \(x\) be a one-dimensional fBm with Hurst parameter \(H<1/2\). Let \((y,y^{\prime})\) be a rough path controlled by \((x,2,H)\) almost surely. Define the increment_ \[h_{st}^{n}=\sum_{s\leq t_{k}<t}\int_{t_{k}}^{t_{k+1}}x_{t_{k}u}^{1}du\qquad \text{for }(s,t)\in\mathcal{S}_{2}(\llbracket 0,1\rrbracket). \tag{3.20}\] _Then for each \((s,t)\in\mathcal{S}_{2}([0,1])\) we have the convergence in probability:_ \[n^{2H}\mathcal{J}_{s}^{t}(y,h^{n})\to-\frac{1}{4H+2}\int_{s}^{t}y^{\prime}_{u }du\qquad\text{as }n\to\infty. \tag{3.21}\] Proof.: The proof is divided into several steps. By localization (cf. [26, Lemma 3.4.5]) we can and will assume that \((y,y^{\prime})\) is controlled by \((x,2,H-\varepsilon)\) in \(L_{2}\) for any \(\varepsilon>0\). _Step 1: Estimate of \(h^{n}\)._ By the self-similarity of the fBm we have \(\mathbb{E}[x_{t_{k}u}^{1}x_{t_{k^{\prime}}u^{\prime}}^{1}]=n^{-2H}\mathbb{E}[ x_{k,nu}^{1}x_{k^{\prime},nu^{\prime}}^{1}]\). Applying this relation and then the change of variable \(nu^{\prime}\to u^{\prime}\) and \(nu\to u\) we get \[\mathbb{E}[|h_{st}^{n}|^{2}] = n^{-2H-2}\sum_{ns\leq k,k^{\prime}<nt}\int_{k}^{k+1}\int_{k^{ \prime}}^{k^{\prime}+1}\mathbb{E}[x_{ku}^{1}x_{k^{\prime}u^{\prime}}^{1}]du^{ \prime}du,\qquad(s,t)\in\mathcal{S}_{2}(\llbracket 0,1\rrbracket).\] Applying the estimate \(|\mathbb{E}[x_{ku}^{1}x_{k^{\prime}u^{\prime}}^{1}]|\lesssim|k-k^{\prime}|^{2H -2}\) for \(k\neq k^{\prime}\) we obtain \[\mathbb{E}[|h_{st}^{n}|^{2}]\lesssim n^{-2H-2}\sum_{\begin{subarray}{c}ns\leq k,k^{\prime}<nt\\ k\neq k^{\prime}\end{subarray}}|k-k^{\prime}|^{2H-2}\lesssim n^{-2H-1}(t-s), \qquad(s,t)\in\mathcal{S}_{2}(\llbracket 0,1\rrbracket). \tag{3.22}\] _Step 2: A decomposition of \(|\mathcal{J}_{s}^{t}(x^{1},h^{n})|_{L_{2}}^{2}\)._ Let \((s,t)\in\mathcal{S}_{2}([0,1])\) such that \(t-s>1/n\). By definition (2.5) we can express \(|\mathcal{J}_{s}^{t}(x^{1},h^{n})|_{L_{2}}^{2}\) as \[|\mathcal{J}_{s}^{t}(x^{1},h^{n})|_{L_{2}}^{2} = \sum_{s\leq t_{k},t_{k^{\prime}}<t}\mathbb{E}\int_{t_{k}}^{t_{k+1 }}\int_{t_{k^{\prime}}}^{t_{k^{\prime}+1}}x_{st_{k}}^{1}x_{st_{k^{\prime}}}^{1 }x_{t_{k}u}^{1}x_{t_{k^{\prime}}u^{\prime}}^{1}du^{\prime}du. 
\tag{3.23}\] Applying the integration by part to the integrand in (3.23) we obtain \[\mathbb{E}\left(x_{st_{k}}^{1}x_{st_{k^{\prime}}}^{1}x_{t_{k}u}^{1}x_{t_{k^{ \prime}}u^{\prime}}^{1}\right)=A_{1}+A_{2}, \tag{3.24}\] where \[A_{1} = \mathbb{E}\left(x_{st_{k}}^{1}x_{st_{k^{\prime}}}^{1}\right) \langle\mathbf{1}_{[t_{k},u]},\mathbf{1}_{[t_{k^{\prime}},u^{\prime}]} \rangle_{\mathcal{H}} \tag{3.25}\] \[A_{2} = \langle D^{2}(x_{st_{k}}^{1}x_{st_{k^{\prime}}}^{1}),\mathbf{1}_{ [t_{k},u]}\otimes\mathbf{1}_{[t_{k^{\prime}},u^{\prime}]}\rangle_{\mathcal{H} ^{\otimes 2}}. \tag{3.26}\] _Step 3: Estimate of \(A_{1}\)._ It is clear that \(\left|\mathbb{E}\left(x_{st_{k}}^{1}x_{st_{k^{\prime}}}^{1}\right)\right| \leq(t-s)^{2H}\). On the other hand, similar to the estimate of \(\mathbb{E}[\delta x_{t_{k}u}\delta x_{t_{k^{\prime}}u^{\prime}}]\) in Step 1 we have \(|\langle\mathbf{1}_{[t_{k},u]},\mathbf{1}_{[t_{k^{\prime}},u^{\prime}]} \rangle_{\mathcal{H}}|\lesssim n^{-2H}|k-k^{\prime}|^{2H-2}\). Substituting these two estimates into (3.25) we obtain \[|A_{1}|\lesssim(t-s)^{2H}|k-k^{\prime}|^{2H-2}n^{-2H}.\] The above estimate for \(|A_{1}|\) together with the relation \[\sum_{s\leq t_{k},t_{k^{\prime}}<t}|k-k^{\prime}|^{2H-2}\lesssim n(t-s)\] shows that \[\sum_{s\leq t_{k},t_{k^{\prime}}<t}\int_{t_{k}}^{t_{k+1}}\int_{t_{k^{\prime}}}^{ t_{k^{\prime}+1}}A_{1}du^{\prime}du\lesssim(t-s)^{2H+1}n^{-1-2H}. \tag{3.27}\] _Step 4: Estimate of \(A_{2}\)._ Recall that \(A_{2}\) is defined in (3.26). We first note that \[D^{2}(x^{1}_{st_{k}}x^{1}_{st_{k^{\prime}}})=\mathbf{1}_{[s,t_{k}]}\otimes \mathbf{1}_{[s,t_{k^{\prime}}]}+\mathbf{1}_{[s,t_{k^{\prime}}]}\otimes \mathbf{1}_{[s,t_{k}]}. \tag{3.28}\] Substituting relation (3.28) into (3.26) we obtain the decomposition \[\sum_{s\leq t_{k},t_{k^{\prime}}<t}\int_{t_{k}}^{t_{k+1}}\int_{t_{k^{\prime}}} ^{t_{k^{\prime}+1}}A_{2}du^{\prime}du=A_{21}+A_{22}\,, \tag{3.29}\] where \[A_{21} = \sum_{s\leq t_{k},t_{k^{\prime}}<t}\int_{t_{k}}^{t_{k+1}}\int_{t_ {k^{\prime}}}^{t_{k^{\prime}+1}}\langle\mathbf{1}_{[s,t_{k^{\prime}}]}\otimes \mathbf{1}_{[s,t_{k}]},\mathbf{1}_{[t_{k},u]}\otimes\mathbf{1}_{[t_{k^{\prime} },u^{\prime}]}\rangle_{\mathcal{H}^{\otimes 2}}du^{\prime}du\] \[A_{22} = \sum_{s\leq t_{k},t_{k^{\prime}}<t}\int_{t_{k}}^{t_{k+1}}\int_{t_ {k^{\prime}}}^{t_{k^{\prime}+1}}\langle\mathbf{1}_{[s,t_{k}]}\otimes\mathbf{1} _{[s,t_{k^{\prime}}]},\mathbf{1}_{[t_{k},u]}\otimes\mathbf{1}_{[t_{k^{\prime} },u^{\prime}]}\rangle_{\mathcal{H}^{\otimes 2}}du^{\prime}du. \tag{3.30}\] In the following we bound \(A_{21}\) and \(A_{22}\), which together will give us the estimate of \(A_{2}\). We first have \[|A_{21}|\leq 2\sum_{s\leq t_{k}\leq t_{k^{\prime}}<t}\int_{t_{k}}^{t_{k+1}} \int_{t_{k^{\prime}}}^{t_{k^{\prime}+1}}|\langle\mathbf{1}_{[s,t_{k^{\prime}}] },\mathbf{1}_{[t_{k},u]}\rangle_{\mathcal{H}}|\cdot|\langle\mathbf{1}_{[s,t_{ k}]},\mathbf{1}_{[t_{k^{\prime}},u^{\prime}]}\rangle_{\mathcal{H}}|du^{\prime}du.\] Invoking the elementary estimates \[|\langle\mathbf{1}_{[s,t_{k^{\prime}}]},\mathbf{1}_{[t_{k},u]} \rangle_{\mathcal{H}}|\leq n^{-2H}\qquad\text{and}\qquad|\langle\mathbf{1}_{[ s,t_{k}]},\mathbf{1}_{[t_{k^{\prime}},u^{\prime}]}\rangle_{\mathcal{H}}|\lesssim n ^{-2H}|k-k^{\prime}|^{2H-1}\] for \(t_{k}\leq t_{k^{\prime}}\) we obtain \[|A_{21}|\lesssim\sum_{s\leq t_{k}\leq t_{k^{\prime}}<t}\int_{t_ {k}}^{t_{k+1}}\int_{t_{k^{\prime}}}^{t_{k^{\prime}+1}}n^{-2H}\cdot n^{-2H} \cdot(k^{\prime}-k)^{2H-1}du^{\prime}du\] \[\lesssim(t-s)^{2H+1}n^{-2H-1}. 
\tag{3.31}\] We turn to the estimate of \(A_{22}\). A change of variables in (3.30) gives \[A_{22} = n^{-4H-2}\sum_{ns\leq k,k^{\prime}<nt}A_{22,kk^{\prime}}, \tag{3.32}\] where \[A_{22,kk^{\prime}}=\int_{k}^{k+1}\int_{k^{\prime}}^{k^{\prime}+1} \langle\mathbf{1}_{[ns,k]}\otimes\mathbf{1}_{[ns,k^{\prime}]},\mathbf{1}_{[k, u]}\otimes\mathbf{1}_{[k^{\prime},u^{\prime}]}\rangle_{\mathcal{H}^{\otimes 2}}du^{\prime}du. \tag{3.33}\] It is clear that \(|A_{22,kk^{\prime}}|\sim O(1)\). Therefore, from (3.32) we obtain the estimate \[\big{|}A_{22}\big{|}\lesssim(t-s)^{2}n^{-4H}. \tag{3.34}\] _Step 5: Estimate of \(\mathcal{J}^{t}_{s}(x^{1},h^{n})\). Putting together the estimates (3.27), (3.31), and (3.34) and taking into account the decompositions (3.23)-(3.24) and (3.29) we obtain the estimate_ \[|\mathcal{J}^{t}_{s}(x^{1},h^{n})|_{L_{2}}\lesssim(t-s)n^{-2H} \tag{3.35}\] _for \((s,t)\in\mathcal{S}_{2}([0,1])\) such that \(t-s>1/n\)._ _Step 6: Convergence of the second moment of \(\mathcal{J}^{t}_{s}(x^{1},h^{n})\). Let \((s,t)\in\mathcal{S}_{2}([0,1])\). In this step we show the convergence:_ \[n^{2H}|\mathcal{J}^{t}_{s}(x^{1},h^{n})|_{L_{2}}\to\frac{1}{4H+2}(t-s)\qquad \text{as }n\to\infty. \tag{3.36}\] Recall our decomposition of \(|\mathcal{J}^{t}_{s}(x^{1},h^{n})|_{L_{2}}^{2}\) in (3.23)-(3.24) and of \(A_{2}\) in (3.29). So the estimates in (3.27), (3.31) and (3.34) together shows that the convergence of \(|\mathcal{J}^{t}_{s}(x^{1},h^{n})|_{L_{2}}^{2}\) is dominated by that of \(A_{22}\). Namely, we have \[\lim_{n\to\infty}n^{4H}|\mathcal{J}^{t}_{s}(x^{1},h^{n})|_{L_{2}}^{2}=\lim_{n \to\infty}n^{4H}A_{22}. \tag{3.37}\] In the following we focus on the computation of \(\lim_{n\to\infty}n^{4H}A_{22}\). Recall the expression of \(A_{22}\) in (3.32)-(3.33). We first note that since \(|A_{22,kk^{\prime}}|\sim O(1)\) we can replace the summation \(\sum_{ns<k,k^{\prime}<nt}\) in (3.32) by \(\sum_{ns+n^{\varepsilon}\leq k,k^{\prime}<nt}\) for \(0<\varepsilon<1\) without changing the limit of \(A_{22}\). Next, by stationary increment and self-similarity of the fBm we have \[\langle\mathbf{1}_{[ns,k]}\otimes\mathbf{1}_{[ns,k^{\prime}]}, \mathbf{1}_{[k,u]}\otimes\mathbf{1}_{[k^{\prime},u^{\prime}]}\rangle_{\mathcal{ H}^{\otimes 2}}=\langle\mathbf{1}_{[ns-k,0]}\otimes\mathbf{1}_{[ns-k^{ \prime},0]},\mathbf{1}_{[0,u-k]}\otimes\mathbf{1}_{[0,u^{\prime}-k^{\prime}]} \rangle_{\mathcal{H}^{\otimes 2}}\] \[=\langle\mathbf{1}_{[ns-k,0]}\otimes\mathbf{1}_{[\frac{ns-k^{ \prime}}{u^{\prime}-k^{\prime}},0]},\mathbf{1}_{[0,1]}\otimes\mathbf{1}_{[0,1 ]}\rangle_{\mathcal{H}^{\otimes 2}}(u-k)^{2H}(u^{\prime}-k^{\prime})^{2H}\] \[=\langle\mathbf{1}_{(-\infty,0]}\otimes\mathbf{1}_{(-\infty,0]}, \mathbf{1}_{[0,1]}\otimes\mathbf{1}_{[0,1]}\rangle_{\mathcal{H}^{\otimes 2}}(u-k)^{2 H}(u^{\prime}-k^{\prime})^{2H}+o(1),\] where the last equation holds for \(k\) and \(k^{\prime}\) such that \(k-ns\geq n^{\varepsilon}\) and \(k^{\prime}-ns\geq n^{\varepsilon}\). Using the relation \(\langle\mathbf{1}_{(-\infty,0]},\mathbf{1}_{[0,1]}\rangle_{\mathcal{H}}=-1/2\) we obtain \[\langle\mathbf{1}_{[ns,k]}\otimes\mathbf{1}_{[ns,k^{\prime}]}, \mathbf{1}_{[k,u]}\otimes\mathbf{1}_{[k^{\prime},u^{\prime}]}\rangle_{ \mathcal{H}^{\otimes 2}}=\frac{1}{4}(u-k)^{2H}(u^{\prime}-k^{\prime})^{2H}+o(1). 
\tag{3.38}\] Substituting (3.38) into (3.32) we obtain \[A_{22} = n^{-4H-2}\frac{1}{4}\sum_{ns+n^{\varepsilon}\leq k,k^{\prime}<nt} \int_{k}^{k+1}\int_{k^{\prime}}^{k^{\prime}+1}(u-k)^{2H}(u^{\prime}-k^{\prime} )^{2H}du^{\prime}du+n^{-4H}o(1)\] \[= n^{-4H}(t-s)^{2}\cdot\frac{1}{4}(2H+1)^{-2}+n^{-4H}o(1).\] It follows that \[\lim_{n\to\infty}n^{4H}A_{22}=(t-s)^{2}\cdot\frac{1}{4}(2H+1)^{-2}.\] Recalling relation (3.37), we thus obtain the convergence in (3.36). _Step 7: Convergence of \(\mathcal{J}^{t}_{s}(x^{1},h^{n})\)._ In this step, we show the \(L_{2}\)-convergence of \(\mathcal{J}^{t}_{s}(x^{1},h^{n})\): \[n^{2H}\mathcal{J}^{t}_{s}(x^{1},h^{n})\to-\frac{1}{4H+2}(t-s). \tag{3.39}\] In view of the convergence (3.36), it suffices to show that: \[n^{2H}\mathbb{E}\mathcal{J}_{s}^{t}(x^{1},h^{n})\to-\frac{1}{4H+2}(t-s)\qquad \text{as }n\to\infty. \tag{3.40}\] The convergence (3.40) can be proved in the similar way as in Step 5. Indeed, we have: \[\mathbb{E}[\mathcal{J}_{s}^{t}(x^{1},h^{n})] = \mathbb{E}\sum_{s\leq t_{k}<t}x_{st_{k}}^{1}\int_{t_{k}}^{t_{k+1}} x_{tu}^{1}du\] \[= \sum_{s\leq t_{k}<t}\int_{t_{k}}^{t_{k+1}}\langle\mathbf{1}_{[s,t_ {k}]},\mathbf{1}_{[t_{k},u]}\rangle_{\mathcal{H}}du\] \[= n^{-2H-1}\sum_{ns\leq k<nt}\int_{k}^{k+1}\langle\mathbf{1}_{[ns, k]},\mathbf{1}_{[k,u]}\rangle_{\mathcal{H}}du\] \[= n^{-2H-1}(-1/2)\cdot\sum_{ns\leq k<nt}\int_{k}^{k+1}(u-k)^{2H} du+n^{-2H}o(1)\] \[= n^{-2H}(t-s)(-1/2)(2H+1)^{-1}+n^{-2H}o(1).\] The convergence (3.40) then follows. The two convergences (3.36) and (3.40) together implies that the convergence (3.39) holds. _Step 8: Convergence of \(\mathcal{J}_{s}^{t}(y,h^{n})\)._ Let \((s,t)\in\mathcal{S}_{2}([0,1])\). We start by taking a partition of \([s,t]\): \(s=s_{0}<s_{1}<\cdots<s_{m}=t\) such that \(\max_{j=0,\ldots,m-1}|s_{j+1}-s_{j}|\leq 1/m\) for some \(m<n\). Then we can write \[\mathcal{J}_{s}^{t}(y,h^{n})=\sum_{j=0}^{m-1}\mathcal{J}_{s_{j}}^{s_{j+1}}(y,h ^{n}). \tag{3.41}\] Since \((y,y^{\prime})\) is controlled by \((x,2,H-\varepsilon)\) in \(L_{2}\) we have the expansion \(y_{t_{k}}=y_{s_{j}}+y^{\prime}_{s_{j}}x_{s_{j}t_{k}}^{1}+r_{s_{j}t_{k}}^{(0)}\). Substituting this into \(\mathcal{J}_{s_{j}}^{s_{j+1}}(y,h^{n})\) in (3.41) we obtain \[\mathcal{J}_{s}^{t}(y,h^{n})=\sum_{j=0}^{m-1}y_{s_{j}}\mathcal{J}_{s_{j}}^{s_ {j+1}}(1,h^{n})+\sum_{j=0}^{m-1}y^{\prime}_{s_{j}}\mathcal{J}_{s_{j}}^{s_{j+1} }(x^{1},h^{n})+\sum_{j=0}^{m-1}\mathcal{J}_{s_{j}}^{s_{j+1}}(r^{(0)},h^{n}). \tag{3.42}\] In the following we consider the convergence of the three terms on the right-hand side of (3.42). We note that it follows from relations (3.22) and (3.35) that conditions (3.1)-(3.2) hold for \(h:=n^{2H}h^{n}\), \(\alpha=H-\varepsilon\), \(\beta_{0}:=1-H\) and \(\beta_{1}:=1\). Therefore, applying Proposition 3.1 we have \[n^{2H}|\mathcal{J}_{s}^{t}(r^{(0)},h^{n})|_{L_{1}}\lesssim(t-s)^{1+H-\varepsilon}.\] This implies that \[\lim_{m\to\infty}\limsup_{n\to\infty}n^{2H}\Big{|}\sum_{j=0}^{m-1} \mathcal{J}_{s_{j}}^{s_{j+1}}(r^{(0)},h^{n})\Big{|}_{L_{1}}\lesssim\lim_{m\to \infty}\sum_{j=0}^{m-1}(s_{j+1}-s_{j})^{1+H-\varepsilon}=0. \tag{3.43}\] We turn to the other two terms in the right-hand side of (3.42). Applying (3.22) we have \[n^{2H}\Big{|}\sum_{j=0}^{m-1}y_{s_{j}}\mathcal{J}_{s_{j}}^{s_{j+1}}( 1,h^{n})\Big{|}_{L_{1}}=n^{2H}\sum_{j=0}^{m-1}\Big{|}y_{s_{j}}h_{s_{j}s_{j+1}}^{n }\Big{|}_{L_{1}}\] \[\qquad\lesssim\sum_{j=0}^{m-1}|y_{s_{j}}|_{L_{2}}(s_{j+1}-s_{j})^ {1/2}(1/n)^{1/2-H}\to 0\qquad\text{as }n\to\infty. 
\tag{3.44}\] Finally, according to (3.39) we have the convergence in probability: \[\lim_{m\to\infty}\lim_{n\to\infty}n^{2H}\sum_{j=0}^{m-1}y_{s_{j}} ^{\prime}\mathcal{J}_{s_{j}}^{s_{j+1}}(x^{1},h^{n}) = -\frac{1}{4H+2}\lim_{m\to\infty}\sum_{j=0}^{m-1}y_{s_{j}}^{\prime }(s_{j+1}-s_{j}) \tag{3.45}\] \[= -\frac{1}{4H+2}\int_{0}^{t}y_{u}^{\prime}du.\] Putting together the convergences (3.43)-(3.45) and recalling the relation (3.42) we conclude the convergence (3.21). With Lemma 3.3 in hand, we are ready to consider the convergence of the Riemann sum for the integral \(\int_{0}^{t}y_{u}du\). **Proposition 3.4**.: _Let \(x\) be a one-dimensional fBm with Hurst parameter \(H<1/2\). Let \((y,y^{\prime},y^{\prime\prime})\) be a rough path controlled by \((x,3,H)\) almost surely.Then we have the convergence in probability:_ \[n^{2H}\left(\frac{1}{n}\sum_{0\leq t_{k}<t}y_{t_{k}}-\int_{0}^{t}y_{u}du \right)\to 0\qquad\text{as }n\to\infty. \tag{3.46}\] Proof.: The proof is divided into several steps. _Step 1: A decomposition of the error of Riemann sum._ We first have \[\int_{0}^{t}y_{u}du-\frac{1}{n}\sum_{0\leq t_{k}<t}y_{t_{k}} = \sum_{0\leq t_{k}<t}\int_{t_{k}}^{t_{k+1}}\delta y_{t_{k}u}du. \tag{3.47}\] Substituting the expansion \(\delta y_{t_{k}u}=y_{t_{k}}^{\prime}x_{t_{k}u}^{1}+y_{t_{k}}^{\prime\prime}x_ {t_{k}u}^{2}+r_{t_{k}u}^{(0)}\) into (3.47) we get the expansion: \[\int_{0}^{t}y_{u}du-\frac{1}{n}\sum_{0\leq t_{k}<t}y_{t_{k}} = I_{1}+I_{2}+I_{3}, \tag{3.48}\] where \[I_{1}=\sum_{0\leq t_{k}<t}y_{t_{k}}^{\prime}\int_{t_{k}}^{t_{k+1}}x_{t_{k}u}^ {1}du,\qquad I_{2}=\sum_{0\leq t_{k}<t}y_{t_{k}}^{\prime\prime}\int_{t_{k}}^{t _{k+1}}x_{t_{k}u}^{2}du,\qquad I_{3}=\sum_{0\leq t_{k}<t}\int_{t_{k}}^{t_{k+1} }r_{t_{k}u}^{0}du.\] In the following we consider the convergence of \(I_{1}\), \(I_{2}\) and \(I_{3}\) which together will give the desired convergence in (3.46). _Step 2: Convergence of \(I_{1}\) and \(I_{3}\)._ Since \(|r_{t_{k}u}^{(0)}|_{L_{1}}\lesssim n^{-3H}\) it follows that \[n^{2H}I_{3}\to 0 \tag{3.49}\] in probability as \(n\to\infty\). On the other hand, a direct application of Lemma 3.3 yields the convergence \[n^{2H}I_{1}\to-\frac{1}{4H+2}\int_{s}^{t}y_{u}^{\prime\prime}du \qquad\text{as }n\to\infty. \tag{3.50}\] _Step 3: Convergence of \(I_{2}\)._ We consider the following decomposition of \(I_{2}\): \[I_{2} = \sum_{0\leq t_{k}<t}y_{t_{k}}^{\prime\prime}\int_{t_{k}}^{t_{k+1} }x_{t_{k}u}^{2}du=I_{21}+I_{22},\] where \[I_{21} = \sum_{0\leq t_{k}<t}y_{t_{k}}^{\prime\prime}\int_{t_{k}}^{t_{k+1} }(x_{t_{k}u}^{2}-\frac{1}{2}(u-t_{k})^{2H})du \tag{3.51}\] \[I_{22} = \frac{1}{2}\sum_{0\leq t_{k}<t}y_{t_{k}}^{\prime\prime}\int_{t_{ k}}^{t_{k+1}}(u-t_{k})^{2H}du.\] It is clear that \[I_{22}=\frac{1}{2}\sum_{0\leq t_{k}<t}y_{t_{k}}^{\prime\prime} \cdot(1/n)^{2H+1}(2H+1)^{-1}.\] It follows that \[n^{2H}I_{22}\to\frac{1}{4H+2}\int_{0}^{t}y_{u}^{\prime\prime}du \qquad\text{in probability as }n\to\infty.\] We turn to the convergence of \(I_{21}\). 
We first note that a direct computation shows that \[\Big{|}\sum_{0\leq t_{k}<t}\int_{t_{k}}^{t_{k+1}}\Big{(}x_{t_{k}u }^{2}-\frac{1}{2}(u-t_{k})^{2H}\Big{)}du\Big{|}_{L_{2}}^{2} \lesssim \sum_{0\leq t_{k},t_{k^{\prime}}<t}\int_{t_{k}}^{t_{k+1}}\int_{t _{k^{\prime}}}^{t_{k^{\prime}+1}}n^{-4H}|\rho(k-k^{\prime})|^{2}du^{\prime}du \tag{3.52}\] \[\lesssim n^{-4H-2}n(t-s)=n^{-4H-1}(t-s).\] Applying Proposition 3.1 to \(I_{21}\) in (3.51) with \(\ell=1\) and \(\beta_{0}=1-H+\varepsilon\) and invoking the estimate (3.52) we obtain \[n^{2H}|I_{21}|_{L_{1}}\lesssim(t-s)^{1-H+\varepsilon}(1/n)^{H-\varepsilon}\] for any \(\varepsilon>0\). In particular, we have \(n^{2H}|I_{21}|_{L_{1}}\to 0\) as \(n\to\infty\). Combining the convergence of \(I_{21}\) and \(I_{22}\) we obtain \[n^{2H}I_{2}\to\frac{1}{4H+2}\int_{0}^{t}y_{u}^{\prime\prime}du \qquad\text{in probability as }n\to\infty. \tag{3.53}\] _Step 4: Conclusion._ Substituting the convergences of \(I_{i}\), \(i=1,2,3\) in (3.49), (3.50) and (3.53) into (3.48) we obtain the convergence (3.46). _Remark 3.5_.: The proof of Proposition 3.4 suggests that the exact rate of convergence in (3.46) is \(O(n^{-H-1/2})\) given that \(y\) satisfies some regularity conditions. But the rate \(o(n^{-2H})\) we have obtained in (3.46) is sufficient for our purpose in this paper, and it requires a weaker condition of \(y\). In the next result we consider the convergence rate of the Riemann sum under a weaker condition. We will also include the case when \(H=1/2\). **Proposition 3.6**.: _Let \(x\) be a one-dimensional fBm with Hurst parameter \(H\leq 1/2\). Let \((y,y^{\prime})\) be a rough path controlled by \((x,2,H-\varepsilon)\) in \(L_{2}\) for some \(\varepsilon>0\). Then there is a constant \(K\) independent of \(n\) such that:_ \[\Big{|}\frac{1}{n}\sum_{0\leq t_{k}<t}y_{t_{k}}-\int_{0}^{t}y_{u} du\Big{|}_{L_{1}}\leq Kn^{-2H+2\varepsilon} \tag{3.54}\] _for all \(t\in[0,1]\)._ Proof.: Because \(y\) is controlled by \((x,2,H)\) we have the relation \(\delta y_{t_{k}u}=y^{\prime}_{t_{k}}x^{1}_{t_{k}u}+r^{(0)}_{t_{k}u}\). So, similar to (3.48), we have the decomposition \[\int_{0}^{t}y_{u}du-\frac{1}{n}\sum_{0\leq t_{k}<t}y_{t_{k}} = I_{1}+I_{2}, \tag{3.55}\] where \[I_{1}=\sum_{0\leq t_{k}<t}y^{\prime}_{t_{k}}\int_{t_{k}}^{t_{k+1 }}x^{1}_{t_{k}u}du,\qquad I_{2}=\sum_{0\leq t_{k}<t}\int_{t_{k}}^{t_{k+1}}r^{0 }_{t_{k}u}du.\] It is readily checked that \(|I_{2}|_{L_{1}}\lesssim n^{-2H+2\varepsilon}\). Let \(h^{n}\) be defined in (3.20). Applying Proposition 3.1 with \(h=n^{2H-2\varepsilon}h^{n}\), \(\ell=1\) and \(\beta_{0}=1-H+2\varepsilon\) we obtain that \[n^{2H-2\varepsilon}|I_{1}|_{L_{1}}\lesssim 1.\] Combining the estimate of \(I_{1}\) and \(I_{2}\) in (3.55) we obtain the desired estimate (3.54). ### Weighted \(p\)-variations In this subsection we consider limit theorems for weighted random sums of some fBms functionals. For \(p>-1\), we denote \[c_{p}=\mathbb{E}(|N|^{p})=\frac{2^{p/2}}{\sqrt{\pi}}\Gamma\left( \frac{p+1}{2}\right). \tag{3.56}\] It is easy to see that \(c_{p+2}=(p+1)c_{p}\), and when \(p\) is an even integer we have \(c_{p}=\mathbb{E}(N^{p})=(p-1)(p-3)\cdots 1\). We define the sign function: \[\mathrm{sign}(x)=1,-1,0\text{ for }x>0,\,x<0\text{ and }x=0,\,\text{respectively}. \tag{3.57}\] **Lemma 3.7**.: _Let \(x\) be a fBm with Hurst parameter \(H<1/2\). Let \((y,y^{\prime})\) be a process controlled by \((x,2,H)\) almost surely. Take \(p>1/2\) and let_ \[f(x)=|x|^{p+1}\cdot\mathrm{sign}(x),\qquad x\in\mathbb{R}. 
\tag{3.58}\] _Then we have the following convergence in probability:_ \[n^{H-1}\sum_{0\leq t_{k}<t}y_{t_{k}}f(n^{H}x^{1}_{t_{k}t_{k+1}} )\to-\frac{1}{2}c_{p+2}\int_{0}^{t}y^{\prime}_{u}du\qquad\text{as }n\to\infty. \tag{3.59}\] Proof.: We prove the convergence (3.59) by applying [32, Theorem 4.14 (ii)]. It is easy to see that the function \(f\) in (3.58) belongs to \(L_{2}(\gamma)\) with Hermite rank \(d=1\) as long as \(p>-3/2\). Take \(\ell=d+1=2\). By assumption \((y,y^{\prime})\) is a rough path controlled by \((x,\ell,H)\) almost surely. Furthermore, it is easy to see that \(f\in W^{2,2}(\mathbb{R},\gamma)\) when \(p>1/2\). In summary, we have shown that the conditions in [32, Theorem 4.14 (ii)] hold for the weighted sum in (3.59). Since \(f\) is an odd function it has the decomposition \(f(x)=\sum_{q=0}^{\infty}a_{2q+1}H_{2q+1}(x)\). We compute the first coefficient: \[a_{1}=\mathbb{E}[|N|^{p+1}\cdot\operatorname{sign}(N)N]=\mathbb{E}[|N|^{p+2}]= c_{p+2}.\] Applying [32, Theorem 4.14] we thus obtain the convergence (3.59). Let \(f(x)=|x|^{p}-c_{p}\), \(x\in\mathbb{R}\). It is easily seen that \(f\in L_{2}(\gamma)\) with Hermite rank \(d=2\) when \(p>-\frac{1}{2}\). Furthermore, we have the decomposition \(f(x)=\sum_{q=1}^{\infty}a_{2q}H_{2q}(x)\), where the constants \(a_{2q}\) are given by: \[a_{2q} = \sum_{r=0}^{q}\frac{(-1)^{r}}{2^{r}r!(2q-2r)!}c_{2q-2r+p},\qquad q =1,2,\ldots. \tag{3.60}\] We also set the constant \(\sigma\): \[\sigma^{2}=\sum_{q=1}^{\infty}(2q)!a_{2q}^{2}\sum_{k\in\mathbb{Z}}\rho(k)^{2q}, \tag{3.61}\] where \(\rho\) is defined in (2.6). Note that when \(H=1/2\) we have \(\rho(0)=1\) and \(\rho(k)=0\) for \(k\neq 0\), and so (3.61) gives \(\sigma=\|f\|_{L_{2}(\gamma)}=(c_{2p}-c_{p}^{2})^{1/2}\). The next limit theorem result is an application of [32, Theorem 4.7 and Theorem 4.14]. The proof is similar to Lemma 3.7 and is omitted for sake of conciseness. In the following \(\xrightarrow{stable\ f.d.d.}\) stands for the stable convergence of finite dimensional distributions. That is, we say \(X_{t}^{n}\xrightarrow{stable\ f.d.d.}X_{t}\), \(t\in[0,1]\) if the finite dimensional distribution of the process \(X_{t}^{n}\), \(t\in[0,1]\) converges stably to that of the process \(X_{t}\), \(t\in[0,1]\) as \(n\to\infty\). **Proposition 3.8**.: _Let \(x\) be a fBm with Hurst parameter \(H\leq 1/2\). Let \((y^{(0)},\ldots,y^{(\ell-1)})\) be a process controlled by \((x,\ell,H)\) almost surely for some \(\ell\in\mathbb{N}\). Let \(a_{2q}\) and \(\sigma\) be constants given in (3.60)-(3.61). Then: (i) For \(\frac{1}{2}\geq H>\frac{1}{4}\), \(\ell=2\) and \(p\in(3/2,\infty)\) we have the convergence:_ \[\frac{1}{\sqrt{n}}\sum_{0\leq t_{k}<t}y_{t_{k}}(|n^{H}x_{t_{k}t_{k+1}}^{1}|^{p }-c_{p})\xrightarrow{stable\ f.d.d.}\sigma\int_{0}^{t}y_{t}dW_{t}\qquad\text {for $t\in[0,1]$,} \tag{3.62}\] _where \(W\) is a Wiener process independent of \(x\). (ii) For \(H=\frac{1}{4}\), \(\ell=3\) and \(p\in(7/2,\infty)\cup\{2\}\) we have the convergence:_ \[\frac{1}{\sqrt{n}}\sum_{0\leq t_{k}<t}y_{t_{k}}(|n^{H}x_{t_{k}t_{k+1}}^{1}|^{ p}-c_{p})\xrightarrow{stable\ f.d.d.}\sigma\int_{0}^{t}y_{u}dW_{u}+\frac{pc_{p}}{8} \int_{0}^{t}y_{u}^{\prime\prime}du\qquad\text{for $t\in[0,1]$.}\] _(iii) For \(H<\frac{1}{4}\), \(\ell=3\) and \(p\in(7/2,\infty)\cup\{2\}\) we have the convergence in probability:_ \[n^{2H-1}\sum_{0\leq t_{k}<t}y_{t_{k}}(|n^{H}x^{1}_{t_{k}t_{k+1}}|^{p}-c_{p}) \xrightarrow{pc_{p}}\frac{pc_{p}}{8}\int_{0}^{t}y^{\prime\prime}_{u}du\qquad \text{for $t\in[0,1]$.}\] ## 4. 
Limit theorem for \(p\)-variation of processes controlled by fBm In this section we consider the convergence of \(p\)-variation for processes controlled by fBm. Throughout the section we let \(\phi(x)=|x|^{p}\), \(x\in\mathbb{R}\) for \(p\geq 1\). We first state the following elementary result. **Lemma 4.1**.: _Denote by \(\phi^{(j)}\) the \(j\)th derivative of \(\phi\). For convenience we will also write \(\phi(x)=\phi^{(0)}(x)\), \(\phi^{\prime}(x)=\phi^{(1)}(x)\) and \(\phi^{\prime\prime}(x)=\phi^{(2)}(x)\). For \(j=0,1,\ldots,[p]\) we set_ \[\phi_{j}(x)=|x|^{p-j}\cdot\operatorname{sign}(x)^{j}\qquad\text{and}\qquad K _{j}=p\cdots(p-j+1)=\prod_{i=0}^{j-1}(p-i), \tag{4.1}\] _where recall that \(\operatorname{sign}(x)\) is defined in (3.57) and we use the convention that \(\prod_{i=0}^{-1}(p-i)=1\). For example, we have \(K_{0}=1\), \(K_{1}=p\), \(K_{2}=p(p-1)\). Then_ _(i) When_ \(p\) _is odd,_ \(\phi\) _has derivative up to order_ \([p]-1\)_, and_ \(\phi^{([p]-1)}\) _is Lipschitz. When_ \(p\) _is even,_ \(\phi\) _has derivative of all orders. When_ \(p\) _is non-integer,_ \(\phi\) _has derivative up to order_ \([p]\)_._ _(ii) For_ \(x\in\mathbb{R}\) _we have_ \[\phi^{(j)}(x) = K_{j}\cdot\phi_{j}(x),\qquad j=0,1,\ldots,[p], \tag{4.2}\] _with the exception that_ \(\phi^{(p)}(0)\) _is undefined when_ \(p\) _is an odd number. In particular, when_ \(p\) _is an odd number we have_ \(\phi^{(j)}(x)=K_{j}x^{p-j}\mathrm{sign}(x)\)_, while when_ \(p\) _is even we get_ \(\phi^{(j)}(x)=K_{j}x^{p-j}\)_._ Following is our main result. Recall that we define \(\phi(x)=|x|^{p}\), \(x\in\mathbb{R}\), and for a continuous process \(y\) the \(p\)-variation of \(y\) over the time interval \([0,t]\) is defined as \[\sum_{0\leq t_{k}<t}\phi(\delta y_{t_{k}t_{k+1}})=\sum_{0\leq t_{k}<t}|\delta y _{t_{k}t_{k+1}}|^{p},\] where \(t_{k}=k/n\), \(k=0,1,\ldots,n\) is a uniform partition of \([0,1]\). **Theorem 4.2**.: _Let \(x\) be a fBm with Hurst parameter \(H\leq 1/2\) and \((y^{(0)},\ldots,y^{(\ell-1)})\) be a process controlled by \((x,\ell,H)\) almost surely (see Definition 2.1) for some \(\ell\in\mathbb{N}\). Recall that \(\phi^{\prime}\) and \(\phi^{\prime\prime}\) are derivatives of \(\phi\) defined in (4.2), and \(c_{p}\) and \(\sigma\) are constants given in (3.56) and (3.61), respectively. Let_ \[U^{n}_{t}=n^{pH-1}\sum_{0\leq t_{k}<t}\phi(\delta y_{t_{k}t_{k+1}})-c_{p}\int_ {0}^{t}\phi(y^{\prime}_{u})du\qquad t\in[0,1].\] _Then_ _(i) When_ \(1/4<H\leq 1/2\)_,_ \(p\in[3,\infty)\cup\{2\}\) _and_ \(\ell\geq 4\) _we have the stable f.d.d. convergence_ \[\big{(}n^{1/2}U^{n},x\big{)}\to(U,x),\qquad\text{as $n\to\infty$,} \tag{4.3}\] _where_ \[U_{t}=\sigma\int_{0}^{t}\phi(y^{\prime}_{u})dW_{u}\qquad t\in[0,1]\,,\] _and \(W\) is a standard Brownian motion independent of \(x\). (ii) When \(H=1/4\), \(p\in[5,\infty)\cup\{2,4\}\) and \(\ell\geq 6\) we have the stable f.d.d. 
convergence_ \[\big{(}n^{1/2}U^{n},x\big{)}\to(U,x),\qquad\text{as $n\to\infty$,} \tag{4.4}\] _where_ \[U_{t}=\sigma\int_{0}^{t}\phi(y^{\prime}_{u})dW_{u}-\frac{c_{p}}{8}\int_{0}^{t }\phi^{\prime\prime}(y^{\prime}_{u})(y^{\prime\prime}_{u})^{2}du+\frac{(p-2)c _{p}}{24}\int_{0}^{t}\phi^{\prime}(y^{\prime}_{u})y^{\prime\prime\prime}_{u}du.\] _(iii) When \(H<1/4\), \(p\in[5,\infty)\cup\{2,4\}\) and \(\ell\geq 6\) we have the convergence in probability_ \[n^{2H}U^{n}_{t}\to U_{t}\qquad\text{as $n\to\infty$} \tag{4.5}\] _for \(t\in[0,1]\), where_ \[U_{t}=-\frac{c_{p}}{8}\int_{0}^{t}\phi^{\prime\prime}(y^{\prime}_{u})(y^{ \prime\prime}_{u})^{2}du+\frac{(p-2)c_{p}}{24}\int_{0}^{t}\phi^{\prime}(y^{ \prime}_{u})y^{\prime\prime\prime}_{u}du.\] Proof.: Take \(\varepsilon>0\) sufficiently small. Recall that \(y^{(i)}\), \(r^{(i)}\), \(i=0,\ldots,\ell-1\) and \(G_{\mathbf{y}}=G_{\mathbf{y},\varepsilon}\) are defined in Definition 2.1. Let \(G_{x}=G_{x,\varepsilon}\) be a finite random variable such that \(|x^{1}_{st}|\leq G_{x}(t-s)^{H-\varepsilon}\). By localization (cf. [26, Lemma 3.4.5]) we can and will assume that there exists some constant \(C_{0}>0\) such that \[\sum_{i=0}^{\ell-1}\Big{(}\sup_{t\in[0,1]}|y^{(i)}_{t}|+\sup_{s,t\in[0,1]}|r^{ (i)}_{st}|\Big{)}+G_{\mathbf{y}}+G_{x}<C_{0}\qquad\text{almost surely.} \tag{4.6}\] Note that under this assumption it is clear that \((y^{(0)},\ldots,y^{(\ell-1)})\) is controlled by \((x,\ell,H-\varepsilon)\) in \(L_{p}\) for any \(p>0\) (see Definition 2.1). We divide the proof into several steps. _Step 1: Taylor's expansion of the function \(\phi\)._ For convenience let us denote \[q=\begin{cases}p&\text{when $p$ is an even number.}\\ [p]-1&\text{otherwise.}\end{cases}\] Applying the Taylor expansion to \(\phi(\delta y_{t_{k}t_{k+1}})\) at the value \(y^{(1)}_{t_{k}}\delta x_{t_{k}t_{k+1}}\) we get \[\phi(\delta y_{t_{k}t_{k+1}}) = I_{1}+I_{2}, \tag{4.7}\] where \[I_{1} = \sum_{j=0}^{q}\frac{\phi^{(j)}(y^{(1)}_{t_{k}}x^{1}_{t_{k}t_{k+1} })}{j!}\cdot(\delta y_{t_{k}t_{k+1}}-y^{(1)}_{t_{k}}\delta x_{t_{k}t_{k+1}})^{j} \tag{4.8}\] \[I_{2} = \frac{\phi^{(q+1)}(\xi_{k})}{(q+1)!}\cdot(\delta y_{t_{k}t_{k+1} }-y^{(1)}_{t_{k}}\delta x_{t_{k}t_{k+1}})^{q+1}, \tag{4.9}\] where \(\xi_{k}\) is some value between \(\delta y_{t_{k}t_{k+1}}\) and \(y^{(1)}_{t_{k}}\delta x_{t_{k}t_{k+1}}\). _Step 2: Estimate of \(I_{2}\)._ We first note that when \(p\) is an even number \(\phi^{(q+1)}(\xi_{k})=\phi^{(p+1)}(\xi_{k})=0\), and so \(I_{2}=0\). In the following we assume that \(p\) is not even and by definition of \(q\) we have \(q+1=[p]\). It is clear that \[|\xi_{k}|\leq|\delta y_{t_{k}t_{k+1}}|+|y_{t_{k}}^{(1)}\delta x_{t_{k}t_{k+1}}|. \tag{4.10}\] The relation (4.10) together with the definition of \(\phi^{(q+1)}\) in (4.1) yields \[|\phi^{(q+1)}(\xi_{k})|=|\phi^{([p])}(\xi_{k})|\lesssim|\delta y_{t_{k}t_{k+1} }|^{p-[p]}+|y_{t_{k}}^{(1)}\delta x_{t_{k}t_{k+1}}|^{p-[p]}. \tag{4.11}\] Since \(y\) is controlled by \(x\), Definition 2.1 and the assumption (4.6) gives \[|\delta y_{t_{k}t_{k+1}}|\leq G_{\mathbf{y}}(1/n)^{H-\varepsilon}\leq C_{0}( 1/n)^{H-\varepsilon}.\] Similarly, we have \(|y_{t_{k}}^{(1)}\delta x_{t_{k}t_{k+1}}|\lesssim(1/n)^{H-\varepsilon}\). Substituting these two estimates into (4.11) we get \[|\phi^{([p])}(\xi_{k})|\lesssim(1/n)^{(p-[p])H-\varepsilon}\wedge 1, \tag{4.12}\] where we added \(\wedge 1\) to include the case when \(p\) is odd. 
By Definition 2.1 of controlled processes again we have the estimate \(|\delta y_{t_{k}t_{k+1}}-y_{t_{k}}^{(1)}\delta x_{t_{k}t_{k+1}}|_{L_{2}} \lesssim(1/n)^{2H-\varepsilon}\). Applying this estimate and the estimate (4.12) to (4.9) we obtain \[\Big{|}\sum_{0\leq t_{k}<t}I_{2}\Big{|}\leq\sum_{0\leq t_{k}<t}|I_{2}|\lesssim n \cdot(1/n)^{(p-[p])H-\varepsilon}\cdot(1/n)^{2[p]H-\varepsilon}=(1/n)^{pH+[p ]H-1-2\varepsilon}. \tag{4.13}\] It follows from (4.13) that when \(1/2\geq H>1/4\) and \(p\geq 2\) we have \[n^{pH-1/2}\sum_{0\leq t_{k}<t}I_{2}\to 0\qquad\text{in probability as $n\to\infty$} \tag{4.14}\] and when \(H\leq 1/4\) and \(p\geq 3\) we have \[n^{(p+2)H-1}\sum_{0\leq t_{k}<t}I_{2}\to 0\qquad\text{in probability as $n\to\infty$}. \tag{4.15}\] This shows that \(I_{2}\) does not have contribution in the limits of \(U^{n}\) in (4.3)-(4.5) under the given conditions in Theorem 4.2. _Step 3: Decomposition of \(I_{1}\)._ Recall that \(\phi^{(j)}(x)\) and \(\phi_{j}(x)\) are defined in (4.1)-(4.2). It is clear that \[\phi^{(j)}(a\cdot b)=K_{j}\phi_{j}(a\cdot b)=K_{j}\phi_{j}(a)\phi_{j}(b)\qquad \text{for any $a$ and $b\in\mathbb{R}$}. \tag{4.16}\] On the other hand, we rewrite the relation (2.3) as: \[\delta y_{t_{k}t_{k+1}}-y_{t_{k}}^{(1)}\delta x_{t_{k}t_{k+1}}=\sum_{i=2}^{ \ell-1}y_{t_{k}}^{(i)}x_{t_{k}t_{k+1}}^{i}+r_{t_{k}t_{k+1}}^{(0)}. \tag{4.17}\] Substituting (4.16)-(4.17) into (4.8) we obtain \[I_{1}=\sum_{j=0}^{q}\frac{K_{j}\phi_{j}(y_{t_{k}}^{(1)}\phi_{j}(x_{t_{k}t_{k+1 }}^{1})}{j!}\cdot\left(\sum_{i=2}^{\ell-1}y_{t_{k}}^{(i)}x_{t_{k}t_{k+1}}^{i}+ r_{t_{k}t_{k+1}}^{(0)}\right)^{j}. \tag{4.18}\] In the following we consider two different decompositions of \(I_{1}\) in (4.18) according to the value of \(H\). When \(H\leq 1/4\) we consider the decomposition: \[I_{1}=J_{1}+J_{2}+J_{3}+J_{4}+J_{5}+J_{6}, \tag{4.19}\] where \[J_{1} = K_{0}\phi_{0}(y_{t_{k}}^{(1)})\phi_{0}(x_{t_{k}t_{k+1}}^{1})=|y_{t _{k}}^{(1)}|^{p}\cdot|x_{t_{k}t_{k+1}}^{1}|^{p}\] \[J_{2} = K_{1}\phi_{1}(y_{t_{k}}^{(1)})\phi_{1}(x_{t_{k}t_{k+1}}^{1}) \cdot y_{t_{k}}^{(2)}x_{t_{k}t_{k+1}}^{2} \tag{4.20}\] \[J_{3} = K_{1}\phi_{1}(y_{t_{k}}^{(1)})\phi_{1}(x_{t_{k}t_{k+1}}^{1}) \cdot y_{t_{k}}^{(3)}x_{t_{k}t_{k+1}}^{3}\] \[J_{4} = \frac{K_{2}\phi_{2}(y_{t_{k}}^{(1)})\phi_{2}(x_{t_{k}t_{k+1}}^{1} )}{2!}\cdot\left(y_{t_{k}}^{(2)}x_{t_{k}t_{k+1}}^{2}\right)^{2}\] \[J_{5} = \sum_{j=0}^{q}\frac{K_{j}\phi_{j}(y_{t_{k}}^{(1)})\phi_{j}(x_{t_{ k}t_{k+1}}^{1})}{j!}\cdot\left(\sum_{i=2}^{\ell-1}y_{t_{k}}^{(i)}x_{t_{k}t_{k+1}} ^{i}\right)^{j}-\sum_{e=1}^{4}J_{e}.\] (4.21) \[J_{6} = I_{1}-\sum_{j=0}^{q}\frac{K_{j}\phi_{j}(y_{t_{k}}^{(1)})\phi_{j} (x_{t_{k}t_{k+1}}^{1})}{j!}\cdot\left(\sum_{i=2}^{\ell-1}y_{t_{k}}^{(i)}x_{t_{ k}t_{k+1}}^{i}\right)^{j}. \tag{4.22}\] When \(H>1/4\) we consider the decomposition \[I_{1}=J_{1}+J_{6}+(I_{1}-J_{1}-J_{6}). \tag{4.23}\] _Step 4: Estimate of \(J_{6}\)_. Recall that \(J_{6}\) is defined in (4.22). Note that \(J_{6}\) consists of the terms in (4.18) which contain \(r_{t_{k}t_{k+1}}^{(0)}\). 
Similar to the estimate in (4.12), invoking the definition of controlled processes (see Definition 2.1) and the assumption (4.6) we have \[|r_{t_{k}t_{k+1}}^{(0)}|\lesssim(1/n)^{\ell H-\varepsilon}\qquad\text{and} \qquad|y_{t_{k}}^{(i)}x_{t_{k}t_{k+1}}^{i}|\lesssim(1/n)^{2H-\varepsilon}, \quad i=2,\ldots,\ell-1.\] It follows that \[\left|\left(\sum_{i=2}^{\ell-1}y_{t_{k}}^{(i)}x_{t_{k}t_{k+1}}^{i}+r_{t_{k}t_{ k+1}}^{(0)}\right)^{j}-\left(\sum_{i=2}^{\ell-1}y_{t_{k}}^{(i)}x_{t_{k}t_{k+1}}^{ i}\right)^{j}\right|\lesssim(1/n)^{\ell H-\varepsilon}(1/n)^{2H\cdot(j-1)- \varepsilon}. \tag{4.24}\] On the other hand, by the definition of \(\phi_{j}\) in (4.1) we have the estimate \[\left|\frac{K_{j}\phi_{j}(y_{t_{k}}^{(1)})\phi_{j}(x_{t_{k}t_{k+1}}^{1})}{j!} \right|\lesssim(1/n)^{(p-j)H-\varepsilon}. \tag{4.25}\] Substituting the two estimates (4.24)-(4.25) into (4.22) we obtain \[|J_{6}|\lesssim\sum_{j=1}^{q}(1/n)^{(\ell+2(j-1)+(p-j))H-\varepsilon}\lesssim( 1/n)^{(\ell-1+p)H-\varepsilon}.\] It follows that \[|\sum_{0\leq t_{k}<t}J_{6}|\leq\sum_{0\leq t_{k}<t}|J_{6}|\lesssim(1/n)^{( \ell-1+p)H-1-\varepsilon}.\] It is readily checked that \[n^{pH-1/2}\sum_{0\leq t_{k}<t}J_{6}\to 0\qquad\text{when $\ell\geq 3$ and $H>1/4$} \tag{4.26}\] and \[n^{(p+2)H-1}\sum_{0\leq t_{k}<t}J_{6}\to 0\qquad\text{when $\ell\geq 4$ and $H\leq 1/4$}. \tag{4.27}\] Note that this shows that \(J_{6}\) does not have contribution in any of the limits of \(U^{n}\) in (4.3)-(4.5). _Step 5: Convergence of \(J_{1}\)._ We first consider the case when \(1/2\geq H>1/4\). According to Proposition 3.8 given that \(p>3/2\) and that \(|y^{\prime}|^{p}\) is controlled by \((x,2,H)\) we have the stable f.d.d. convergence: \[\frac{1}{\sqrt{n}}\sum_{0\leq t_{k}<t}(n^{pH}J_{1}-|y^{\prime}_{t_{k}}|^{p}c_{ p})\xrightarrow{}\sigma\int_{0}^{t}|y^{\prime}_{t}|^{p}dW_{t}\qquad\text{ as $n\to\infty$}. \tag{4.28}\] Note that by Lemma 2.4 (ii) and Lemma 4.1 (i) for \(|y^{\prime}|^{p}\) to be controlled by \((x,2,H)\) it requires \(p\geq 2\) and that \(y\) is controlled by \((x,\ell,H)\) for \(\ell\geq 3\). On the other hand, given that \(|y^{\prime}|^{p}\) is controlled by \((x,2,H)\) Proposition 3.6 implies that \[\frac{1}{\sqrt{n}}\sum_{0\leq t_{k}<t}|y^{\prime}_{t_{k}}|^{p}c_{p}-\sqrt{n} \cdot c_{p}\int_{0}^{t}|y^{\prime}_{u}|^{p}du\to 0\qquad\text{ as $n\to\infty$}. \tag{4.29}\] Combining (4.29) with the convergence in (4.28) we obtain the stable f.d.d. convergence \[\sqrt{n}\left(\sum_{0\leq t_{k}<t}n^{pH-1}J_{1}-c_{p}\int_{0}^{t}|y^{\prime}_ {u}|^{p}du\right)\xrightarrow{}\sigma\int_{0}^{t}|y^{\prime}_{u}|^{p}dW_{u} \qquad\text{ as $n\to\infty$}. \tag{4.30}\] We turn to the case when \(H=1/4\). According to Proposition 3.8 given that \(|y^{\prime}|^{p}\) is controlled by \((x,3,H)\), \(p\in(7/2,\infty)\cup\{2\}\) we have the stable f.d.d. convergence: \[\frac{1}{\sqrt{n}}\sum_{0\leq t_{k}<t}(n^{pH}J_{1}-|y^{\prime}_{t_{k}}|^{p}c_ {p})\xrightarrow{}\sigma\int_{0}^{t}|y^{\prime}_{u}|^{p}dW_{u}+\frac{pc_{p}} {8}\int_{0}^{t}(|y^{\prime}_{u}|^{p})^{\prime\prime}du\qquad\text{as $n\to\infty$},\] where \((|y^{\prime}_{u}|^{p})^{\prime\prime}=(\phi(y^{\prime}_{u}))^{\prime\prime}= \phi^{\prime\prime}(y^{\prime}_{u})(y^{\prime\prime}_{u})^{2}+\phi^{\prime}(y ^{\prime}_{u})y^{\prime\prime\prime}_{u}\). By Lemma 2.4 (ii) and Lemma 4.1 (i) again this requires \(p\in[3,\infty)\cup\{2\}\) and \(\ell\geq 4\). Similar to (4.30), we can apply Proposition 3.4 to obtain the stable f.d.d. 
convergence: \[\sqrt{n}\left(\sum_{0\leq t_{k}<t}n^{pH-1}J_{1}-c_{p}\int_{0}^{t}|y^{\prime}_ {u}|^{p}du\right)\xrightarrow{}\sigma\int_{0}^{t}|y^{\prime}_{u}|^{p}dW_{u}+ \frac{pc_{p}}{8}\int_{0}^{t}(|y^{\prime}_{u}|^{p})^{\prime\prime}du\,. \tag{4.31}\] When \(H<1/4\), given that \(p\in(7/2,\infty)\cup\{2\}\) and \(|y^{\prime}|^{p}\) is controlled by \((x,3,H)\) we have the convergence in probability: \[n^{2H-1}\sum_{0\leq t_{k}<t}(n^{pH}J_{1}-|y^{\prime}_{t_{k}}|^{p}c_{p}) \xrightarrow{}\frac{pc_{p}}{8}\int_{0}^{t}(|y^{\prime}_{u}|^{p})^{\prime \prime}du\,. \tag{4.32}\] Then it follows from Proposition 3.4 again that \[n^{2H}\left(\sum_{0\leq t_{k}<t}n^{pH-1}J_{1}-c_{p}\int_{0}^{t}|y^{ \prime}_{u}|^{p}du\right)\xrightarrow{pc_{p}}\frac{pc_{p}}{8}\int_{0}^{t}(|y^{ \prime}_{u}|^{p})^{\prime\prime}du\qquad\text{in probability}.\] _Step 6: Proof of (4.3) and the convergence of \((I_{1}-J_{1}-J_{6})\)._ In this step we show the convergence: \[n^{pH-1/2}\sum_{0\leq t_{k}<t}(I_{1}-J_{1}-J_{6})\to 0. \tag{4.33}\] Combining (4.33) with the convergences of \(I_{2}\), \(J_{6}\) and \(J_{1}\) respectively in (4.14), (4.26) and (4.30), and invoking the relations (4.7) and (4.23) we then obtain the convergence in (4.3). We first note that by the definition of \(I_{1}\), \(J_{1}\) and \(J_{6}\) we have \[\sum_{0\leq t_{k}<t}(I_{1}-J_{1}-J_{6})=\sum_{0\leq t_{k}<t}\sum_ {j=1}^{q}\frac{K_{j}\phi_{j}(y^{(1)}_{t_{k}})\phi_{j}(x^{1}_{t_{k}t_{k+1}})}{j! }\cdot\left(\sum_{i=2}^{\ell-1}y^{(i)}_{t_{k}}x^{i}_{t_{k}t_{k+1}}\right)^{j}. \tag{4.34}\] It is easy to see that (4.34) consists of weighted sums of the form \(\mathcal{J}^{t}_{0}(z,h^{n})\) for \[z=\frac{K_{j}\phi_{j}(y^{(1)}_{t_{k}})}{j!}\cdot\sum_{\begin{subarray}{c}2 \leq i_{1},\ldots,i_{j}\leq\ell-1\\ i_{1}+\cdots+i_{j}=r\end{subarray}}y^{(i_{1})}_{t_{k}}\cdots y^{(i_{j})}_{t_{k}} \tag{4.35}\] and \[h^{n}_{st}=\sum_{s\leq t_{k}<t}\phi_{j}(x^{1}_{t_{k}t_{k+1}}) \cdot(x^{1}_{t_{k}t_{k+1}})^{r}=\sum_{s\leq t_{k}<t}(x^{1}_{t_{k}t_{k+1}})^{r }|x^{1}_{t_{k}t_{k+1}}|^{p-j}\text{sign}(x^{1}_{t_{k}t_{k+1}})^{j}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad _Step 7: Convergence of \(J_{2}\) when \(H\leq 1/4\)._ Recall the definition of \(\phi_{j}\) and \(J_{2}\) in (4.1) and (4.20), respectively. So we have \[\sum_{0\leq t_{k}<t}J_{2}=\frac{p}{2}\sum_{0\leq t_{k}<t}\phi_{1}(y^{\prime}_{t_ {k}})y^{\prime\prime}_{t_{k}}|x^{1}_{t_{k}t_{k+1}}|^{p+1}\text{sign}(x^{1}_{t_{ k}t_{k+1}}).\] Suppose that \(\phi_{1}(y^{\prime}_{t})y^{\prime\prime}_{t}\) is controlled by \((x,2,H)\). 
According to Lemma 2.4(ii) this requires \(p\in[3,\infty)\cup\{2\}\) and \(\ell\geq 4\), and in this case we have \[(\phi_{1}(y^{\prime}_{t})y^{\prime\prime}_{t})^{\prime}=\phi^{\prime}_{1}(y^{ \prime}_{t})(y^{\prime\prime}_{t})^{2}+\phi_{1}(y^{\prime}_{t})y^{\prime \prime\prime}_{t}.\] Applying Lemma 3.7 we obtain the convergence in probability \[n^{(p+2)H-1}\sum_{0\leq t_{k}<t}J_{2}\to\frac{p}{2}(-\frac{1}{2 }c_{p+2})\int_{0}^{t}(\phi^{\prime}_{1}(y^{\prime}_{u})(y^{\prime\prime}_{u})^ {2}+\phi_{1}(y^{\prime}_{u})y^{\prime\prime\prime}_{u})du\] \[=-\frac{1}{4}(p+1)c_{p}\int_{0}^{t}(\phi^{\prime\prime}(y^{ \prime}_{u})(y^{\prime\prime}_{u})^{2}+\phi^{\prime}(y^{\prime}_{u})y^{\prime \prime\prime}_{u})du. \tag{4.38}\] _Step 8: Convergence of \(J_{3}\) when \(H\leq 1/4\)._ We first rewrite \(J_{3}\) as \[J_{3}=K_{1}\phi_{1}(y^{(1)}_{t_{k}})\phi_{1}(x^{1}_{t_{k}t_{k+1}})\cdot y^{(3) }_{t_{k}}x^{3}_{t_{k}t_{k+1}}=\frac{p}{6}\phi_{1}(y^{\prime}_{t_{k}})y^{\prime \prime\prime}_{t_{k}}|x^{1}_{t_{k}t_{k+1}}|^{p+2}.\] It is easy to see that we have the bound \(|\sum_{0\leq t_{k}<t}J_{3}|_{L_{p}}\lesssim(1/n)^{(p+2)H}.\) In the following we show that \(\sum_{0\leq t_{k}<t}J_{3}\) is also convergence under proper conditions of \(p\) and \(\ell\). We consider the following decomposition \[J_{3} = \frac{p}{6}\phi_{1}(y^{\prime}_{t_{k}})y^{\prime\prime\prime}_{t _{k}}\left(|x^{1}_{t_{k}t_{k+1}}|^{p+2}-c_{p+2}(1/n)^{(p+2)H}\right)+\frac{p} {6}c_{p+2}\phi_{1}(y^{\prime}_{t_{k}})y^{\prime\prime\prime}_{t_{k}}(1/n)^{(p +2)H} \tag{4.39}\] \[=: J_{31}+J_{32}.\] Applying Proposition 3.2 (ii) to \(J_{31}\) with \(d=2\) we obtain that \(n^{(p+2)H-1}\sum_{0\leq t_{k}<t}J_{31}\to 0\) in probability. Note that the application of Proposition 3.2 (ii) requires \(p\in[4,\infty)\cup\{2\}\) and \(\ell\geq 6\). On the other hand, by continuity of \(\phi_{1}(y^{\prime})y^{\prime\prime\prime}\) we have the convergence \(n^{(p+2)H-1}\sum_{0\leq t_{k}<t}J_{32}\to\frac{p}{6}c_{p+2}\int_{0}^{t}\phi_{1 }(y^{\prime}_{u})y^{\prime\prime\prime}_{u}du\). Substituting these two convergence into (4.39) we obtain \[n^{(p+2)H-1}\sum_{0\leq t_{k}<t}J_{3}\to\frac{p}{6}c_{p+2}\int_{0 }^{t}\phi_{1}(y^{\prime}_{u})y^{\prime\prime\prime}_{u}du\] \[=\frac{1}{6}(p+1)c_{p}\int_{0}^{t}\phi^{\prime}(y^{\prime}_{u})y^{ \prime\prime\prime}_{u}du. \tag{4.40}\] _Step 9: Convergence of \(J_{4}\) when \(H\leq 1/4\)._ We first rewrite \(J_{4}\) as \[J_{4}=\frac{K_{2}\phi_{2}(y^{(1)}_{t_{k}})\phi_{2}(x^{1}_{t_{k}t_ {k+1}})}{2!}\cdot\left(y^{(2)}_{t_{k}}x^{2}_{t_{k}t_{k+1}}\right)^{2}\] \[=\frac{p(p-1)}{8}\phi_{2}(y^{\prime}_{t_{k}})(y^{\prime\prime}_{ t_{k}})^{2}\cdot|x^{1}_{t_{k}t_{k+1}}|^{p+2}.\] Similar to \(J_{3}\) by applying Proposition 3.8 (ii)-(iii) with \(d=2\) we obtain the convergence \[n^{(p+2)H-1}\sum_{0\leq t_{k}<t}J_{4} \to\frac{p(p-1)}{8}c_{p+2}\int_{0}^{t}\phi_{2}(y^{\prime}_{u})(y^{ \prime\prime}_{u})^{2}du\] \[=\frac{1}{8}(p+1)c_{p}\int_{0}^{t}\phi^{\prime\prime}(y^{\prime}_{ u})(y^{\prime\prime}_{u})^{2}du. \tag{4.41}\] Note that the application of Proposition 3.8 (ii)-(iii) requires \(p\in[5,\infty)\cup\{2,4\}\) and \(\ell\geq 5\). _Step 10: Convergence of \(J_{5}\)._ Recall that \(J_{5}\) is defined in (4.21). 
It is easy to see that we have \[J_{5} = \frac{K_{1}\phi_{1}(y^{(1)}_{t_{k}})\phi_{1}(x^{1}_{t_{k}t_{k+1}} )}{1!}\cdot\left(\sum_{i=4}^{\ell-1}y^{(i)}_{t_{k}}x^{i}_{t_{k}t_{k+1}}\right)\] \[+\frac{K_{2}\phi_{2}(y^{(1)}_{t_{k}})\phi_{2}(x^{1}_{t_{k}t_{k+1} })}{2!}\cdot\left(\sum_{i=3}^{\ell-1}y^{(i)}_{t_{k}}x^{i}_{t_{k}t_{k+1}}\right) ^{2}\] \[+\sum_{j=3}^{q}\frac{K_{j}\phi_{j}(y^{(1)}_{t_{k}})\phi_{j}(x^{1}_ {t_{k}t_{k+1}})}{j!}\cdot\left(\sum_{i=2}^{\ell-1}y^{(i)}_{t_{k}}x^{i}_{t_{k}t _{k+1}}\right)^{j}\] \[= J_{51}+J_{52}+J_{53},\] where we use the convention that \(\sum_{j=3}^{q}=0\) when \(q<3\) and that \(\sum_{i=4}^{\ell-1}y^{(i)}_{t_{k}}x^{i}_{t_{k}t_{k+1}}=0\) when \(\ell-1<4\). In the following we bound each \(J_{5i}\), \(i=1,2,3\). For \(J_{51}\) a direct estimate shows that \[|J_{51}|\lesssim|\phi_{1}(y^{(1)}_{t_{k}})|\cdot|\phi_{1}(x^{1}_{t_{k}t_{k+1} })|\cdot\sum_{i=4}^{\ell-1}|y^{(i)}_{t_{k}}|\cdot|x^{i}_{t_{k}t_{k+1}}|\lesssim (1/n)^{(p-1)H+4H}=(1/n)^{(p+3)H}.\] So we have \(n^{(p+2)H-1}\sum_{0\leq t_{k}<t}J_{51}\to 0\). Similarly, we can bound \(J_{52}\) and \(J_{53}\) by \[|J_{52}|\lesssim(1/n)^{(p-2)H+6H}\qquad\text{and}\qquad|J_{53}|\lesssim\sum_{ j=3}^{\lfloor p\rfloor-1}(1/n)^{(p-j)H+2jH}\lesssim(1/n)^{(p+3)H}.\] We conclude that \(n^{(p+2)H-1}\sum_{0\leq t_{k}<t}(J_{52}+J_{53})\to 0\) in probability as \(n\to\infty\). We conclude that the convergence in probability \[n^{(p+2)H-1}\sum_{0\leq t_{k}<t}J_{5}\to 0\qquad\text{as }n\to\infty. \tag{4.42}\] _Step 11: Conclusion._ In Step 5 we have shown that the convergence in (4.3) holds. Putting together the convergences (4.15), (4.27), (4.31), (4.38), (4.40), (4.41), (4.42) for \(I_{2}\), \(J_{i}\), \(i=1,\ldots,6\) and invoking Lemma 2.6, and then taking into account the decompositions (4.7) and (4.19), we obtain the convergence in (4.4). Finally, replacing (4.31) by (4.32) in the argument we obtain the convergence (4.5).
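To make the statement of Theorem 4.2 concrete, the following numerical sketch (not part of the original argument) checks the first-order limit \(n^{pH-1}\sum_{0\leq t_{k}<t}|\delta y_{t_{k}t_{k+1}}|^{p}\to c_{p}\int_{0}^{t}|y^{\prime}_{u}|^{p}du\) for the example \(y_{t}=\sin(x_{t})\), \(y^{\prime}_{t}=\cos(x_{t})\), with \(p=2\) (so \(c_{p}=1\)) and \(H=0.35\). The fBm is simulated by a Cholesky factorization of its covariance; the path, parameters, and sample sizes are illustrative choices only.

```python
import numpy as np

# Numerical sketch of the first-order limit in Theorem 4.2 (illustrative only):
# n^{pH-1} * sum_k |delta y_{t_k t_{k+1}}|^p  ->  c_p * int_0^1 |y'_u|^p du.
H, p, n, n_mc = 0.35, 2.0, 1000, 50
c_p = 1.0                                   # c_2 = E[N^2] = 1 for a standard normal N

# Exact fBm covariance on the grid t_k = k/n, factorized once.
t = np.arange(1, n + 1) / n
s, u = np.meshgrid(t, t, indexing="ij")
cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))

rng = np.random.default_rng(0)
lhs, rhs = [], []
for _ in range(n_mc):
    x = np.concatenate(([0.0], L @ rng.standard_normal(n)))   # x[k] = x_{t_k}
    y = np.sin(x)                                             # controlled path, y' = cos(x)
    lhs.append(n**(p * H - 1) * np.sum(np.abs(np.diff(y))**p))
    rhs.append(c_p * np.mean(np.cos(x[:-1])**2))              # Riemann sum for int_0^1 |y'|^p du

print("normalized p-variation (avg):", np.mean(lhs))
print("c_p * int |y'|^p du    (avg):", np.mean(rhs))
```

With larger \(n\) the two averages move closer, which is exactly the first-order statement of the theorem; the finer fluctuation results (the stable CLT in case (i)) would instead require comparing the rescaled difference with the mixed normal limit.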
2307.16788
Congestion Analysis for the DARPA OFFSET CCAST Swarm
The Defense Advanced Research Projects Agency (DARPA) OFFensive Swarm-Enabled Tactics program's goal of launching 250 unmanned aerial and ground vehicles from a limited sized launch zone was a daunting challenge. The swarm's aerial vehicles were primarily multirotor platforms, which can efficiently be launched en masse. Each field exercise expected the deployment of an even larger swarm. While the launch zone's spatial area increased with each field exercise, the relative space for each vehicle was not necessarily increased, considering the increasing size of the swarm and the vehicles' associated GPS error; however, safe mission deployment and execution were expected. At the same time, achieving the mission goals required maximizing efficiency of the swarm's performance by reducing congestion that blocked vehicles from completing tactic assignments. Congestion analysis conducted before the final field exercise focused on adjusting various constraints to optimize the swarm's deployment without reducing safety. During the field exercise, data was collected that permitted analyzing the number and durations of individual vehicle blockages' impact on the resulting congestion. After the field exercise, additional analyses used the mission plan to validate the use of simulation for analyzing congestion.
Robert Brown, Julie A. Adams
2023-07-31T15:55:50Z
http://arxiv.org/abs/2307.16788v1
# Congestion Analysis for the DARPA OFFSET CCAST Swarm ###### Abstract The Defense Advanced Research Projects Agency's (DARPA) OFFensive Swarm-Enabled Tactics program's goal of launching 250 unmanned aerial and ground vehicles from a limited-sized launch zone was a daunting challenge. The swarm's aerial vehicles were primarily multi-rotor platforms, which can efficiently be launched en masse. Each field exercise expected the deployment of an even larger swarm. While the launch zone's spatial area increased with each field exercise, the relative space for each vehicle was not necessarily increased, considering the increasing size of the swarm and the vehicles' associated GPS error. However, safe mission deployment and execution were expected. At the same time, achieving the mission goals required maximizing the efficiency of the swarm's performance by reducing congestion that blocked vehicles from completing tactic assignments. Congestion analysis conducted before the final field exercise focused on adjusting various constraints to optimize the swarm's deployment without reducing safety. During the field exercise, data was collected that permitted analyzing the number and durations of individual vehicle blockages' impact on the resulting congestion. After the field exercise, additional analyses used the mission plan to validate the use of simulation for analyzing congestion. ## 1 Introduction The Defense Advanced Research Projects Agency (DARPA) OFFensive Swarm-Enabled Tactics (OFFSET) program was designed to enable a very large heterogeneous swarm of unmanned air and ground vehicles in complex urban environments (DARPA, nd). As swarm size increased, DARPA intentionally limited the launch zone size and allotted deployment time in order to "encourage" the teams to address swarm deployment logistics challenges. The OFFSET program's Command and Control of Aggregate Swarm Tactics (CCAST) team's swarm architecture was designed to enable a single operator to deploy and monitor a swarm of up to 250 unmanned vehicles for diverse missions (Clark et al., 2021). Over the course of the OFFSET program, the swarm size increased as the field exercises occurred at differing Department of Defense Combined Armed Collective Training Facilities (CACTF). Each CACTF presented different challenges when deploying a hardware swarm composed of heterogeneous ground and multi-rotor aerial vehicles. The CACTF's size and shape as well as its structures (e.g., buildings, light poles, power lines, street signs, and curbs), the designated launch/landing zone size, along with the swarm's size and composition influenced the distribution of vehicles and increased the likelihood of launch, en-route, and landing conflicts.
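For a rough sense of why the launch zone remained a constraint even as its area grew, the short sketch below computes the grid cell available to each vehicle and compares it with a conservative spacing requirement once GPS error is accounted for. All of the numbers (zone dimensions, GPS error, vehicle footprint) are hypothetical and are not taken from the field exercises.

```python
import math

# Back-of-the-envelope sketch (all numbers hypothetical, not from the field exercises):
# how the per-vehicle footprint in a launch zone shrinks as swarm size grows,
# once a GPS-error buffer is added around each vehicle.

def per_vehicle_spacing(zone_width_m, zone_depth_m, n_vehicles):
    """Side length of the square grid cell available to each vehicle."""
    return math.sqrt(zone_width_m * zone_depth_m / n_vehicles)

gps_error_m = 2.5        # assumed horizontal GPS error radius per vehicle
vehicle_radius_m = 0.5   # assumed physical footprint radius

# Worst case: two neighbors each drift toward one another by the GPS error,
# so a conservative minimum center-to-center spacing is 2 * (error + radius).
required_m = 2 * (gps_error_m + vehicle_radius_m)

for n in (100, 175, 250):
    cell = per_vehicle_spacing(60.0, 40.0, n)   # hypothetical 60 m x 40 m launch zone
    print(f"N={n:3d}: grid cell ~{cell:4.1f} m, conservative minimum spacing ~{required_m:.1f} m")
```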
2301.00264
Application Of ADNN For Background Subtraction In Smart Surveillance System
Object movement identification is one of the most researched problems in the field of computer vision. In this task, we try to classify a pixel as foreground or background. Even though numerous traditional machine learning and deep learning methods already exist for this problem, the two major issues with most of them are the need for large amounts of ground truth data and their inferior performance on unseen videos. Since every pixel of every frame has to be labeled, acquiring large amounts of data for these techniques gets rather expensive. Recently, Zhao et al. [1] proposed one of a kind Arithmetic Distribution Neural Network (ADNN) for universal background subtraction which utilizes probability information from the histogram of temporal pixels and achieves promising results. Building onto this work, we developed an intelligent video surveillance system that uses ADNN architecture for motion detection, trims the video with parts only containing motion, and performs anomaly detection on the trimmed video.
Piyush Batra, Gagan Raj Singh, Neeraj Goyal
2022-12-31T18:42:11Z
http://arxiv.org/abs/2301.00264v1
# Application Of ADNN For Background Subtraction In Smart Surveillance System University Of Alberta Department of Computing Science Piyush Batra, Gagan Raj Singh, Neeraj Goyal **Brief Abstract** Object movement identification is one of the most researched problems in the field of computer vision. In this task, we try to classify a pixel as foreground or background. Even though numerous traditional machine learning and deep learning methods already exist for this problem, the two major issues with most of them are the need for large amounts of ground truth data and their inferior performance on unseen videos. Since every pixel of every frame has to be labeled, acquiring large amounts of data for these techniques gets rather expensive. Recently, Zhao et al. [1] proposed one of a kind Arithmetic Distribution Neural Network (ADNN) for universal background subtraction which utilizes probability information from the histogram of temporal pixels and achieves promising results. Building onto this work, we developed an intelligent video surveillance system that uses ADNN architecture for motion detection, trims the video with parts only containing motion, and performs anomaly detection on the trimmed video. ## 1 Literature review Motion detection aims to find regions related to moving objects, and background subtraction is a widely used technique for this task. Herein, every pixel of each video frame is compared against its historical counterparts or a background model, depending on the technique, and then classified into foreground or background. Pixels that differ significantly from the reference are classified as moving objects or foreground, and static pixels are referred to as background. This section will discuss some of the previously proposed methods related to background subtraction, video surveillance, and anomaly detection. Many background modeling techniques based on mathematical theories, like the temporal average [2], temporal median [3], or the histogram over time [4], have been proposed for motion detection by a stationary camera. But these are not robust to challenges in surveillance videos such as dynamic background, object shadow, camera jitter, weather conditions (either snow or rain), and variations in illumination. To overcome these problems, several motion-detection methods, like Temporal Differencing [5], Three-frame Difference [6], Gaussian mixture model [7], DSTEI [9], etc., have been presented over the past years. Temporal Differencing, proposed by Cheung et al. [5], is used for detecting temporal changes in intensity in video frames. However, its main drawback is that the detected objects are incomplete and poorly presented. In the Gaussian mixture model proposed by Stauffer and Grimson [7], the temporal histogram of each pixel is modeled using a mixture of K Gaussian distributions to precisely model a dynamic background. This method produced a real-time tracker which can deal with lighting changes, repetitive motions from clutter, and long-term scene changes. Later, Chan et al. [8] proposed a Generalized Stauffer-Grimson (GSG) algorithm for background subtraction in dynamic scenes. In this method, the statistics required for online learning of dynamic texture are derived from generalizing the GMM proposed by Stauffer and Grimson [7]. The Difference-based Spatio-Temporal Entropy Image (DSTEI) by Jing et al. [9] is an entropy-based method for human motion detection. 
A Spatio-temporal histogram is generated by accumulated pixels obtained by the difference between consecutive images. This histogram is then normalized to calculate the degree of randomness and magnitude of entropy to denote the significance of motion. In this method, noises are assumed to follow Gaussian distribution. However, these assumptions, such as heavy shadows or sudden illumination changes, will be violated in some cases. Soumyadip Sengupta et al. [10] proposed a background matting technique that generated high-quality foreground and alpha mattes in natural settings. In this method, a deep learning framework is developed and trained on synthetic-composite data and then adapted to actual data using an adversarial network. Even though providing an additional photo of the background requires a small amount of foresight, it is far less tedious than creating a trimap for traditional matting methods. In 2017, Dan Yang et al. [11] proposed a multi-feature background approach for complex video scenes that measures the stability of features and then selects different dominant features to model the background from the pixel and time-sequence domains. This Stability of Adaptive Features approach showed promising results on both complex and baseline scenes. For applications of background subtraction in real time, Z. Kuang et al. [12] proposed a combination of the Horn-Schunck optical-flow estimation technique [13] and autoencoder neural networks that solve the problem of motion blur in real-time background subtraction during video conferencing. This method uses an optical-flow-based model to extract motion features between every two frames and then combine these features with the appearance feature from the original frame. An encoder-decoder network in combination with CNN is then used to learn and predict a mask output for the human head and shoulders for background subtraction. Similarly, for real-time background subtraction, DK Yadav et al. [14] proposed a Pixel Intensity Based (PIBBS) system that first models the background, then extracts moving objects with a threshold and updates the background using a feedback-based background updation scheme. To improve the detection quality, this system also uses morphological operators as the last step. Bruno Sauvalle et al. [15] proposed using an autoencoder to model the background of a video as a low-dimensional manifold. The output of this autoencoder is then compared with the original image to compute the segmentation masks. In this method, the autoencoder is also trained to predict the background noise, which allows it to compute a pixel-dependent threshold for each frame to perform the foreground segmentation. Without using temporal or motion information, this method could perform at par with state-of-the-art solutions on CDnet 2014 [16] and LASIESTA [17] datasets. To overcome the problem of camera jitter and sudden changes in illumination, Ye Tao et al. [18] proposed a generative architecture for unsupervised deep background modeling, which learns the parameters automatically and uses intensity and optical flow features between a reference and a target frame. This system generates a background with a probabilistic heat map of the color values for a given input frame. This method could also be applied to unseen videos without re-training. When tested, this method shows promising results over state-of-the-art [19][20][21] methods on the SBMnet dataset[22]. Guanfang Dong et al. 
[23] proposed a novel denoising neural network model called Feature-guided Denoising Convolutional Neural Network (FDCNN) to denoise the images produced by portable devices. This technique employed a hierarchical denoising framework driven by a feature masking layer. The feature extraction algorithm used in this method is based on Explainable Artificial Intelligence (XAI) for medical images. Similarly, Yingnan Ma et al. [24] proposed an Edge-guided Denoising Convolutional Neural Network which can preserve important edge information in ultrasound images when removing noise. This method increases the recognition of various organs in ultrasound images. Jhony H. Giraldo et al. [25] proposed a new algorithm called Graph Background Subtraction (GraphBGS). It is composed of instance segmentation, background initialization, graph construction, and graph sampling. Unlike Deep Learning methods for background subtraction which require vast amounts of data, this method is a semi-supervised algorithm inspired by the theory of recovery of graph signals. To generate descriptions of human actions and their interactions, Zijian Kuang et al. [26] proposed a technique that utilizes an Actor Relation Graph (ARG) based model with novel improvements for group activity recognition. This method also used MobileNet as the backbone to extract features from each video frame. To accurately perform background subtraction in a freely moving camera, Zhao et al. [27] developed a novel method called "the integration of foreground and background cues." The underlying motivation in this technique is to utilize the exclusiveness between these cues to compensate for their corresponding defects. The foreground is segmented by combining superpixels with proximity under multiple levels. As video resolution and, subsequently, the video size is increasing daily, Ruixing et al. [28] proposed a method to compute the optimal image resolution adaptively. This is achieved by exploiting the correlation between an image's gray-value distribution and resolution. This approach was proposed to increase the performance of multi-object online tracking and learning. A novel tracklet reliability assessment metric was also introduced in this paper to eliminate the incorrect samples and can recover occluded targets. As a unique application of neural networks in multimedia, C. Sun et al. [29] proposed a 2-step product re-identification (Re-ID) method which involves image feature extraction and a feature search and retrieval engine. To extract the features of the input image, a novel AlphaAlexNet, an extended version of the AlexNet, is being used. Vearch, a visual search system, is used as the image search similarity engine. The new model - AlphaAlexNet, demonstrated improved object detection accuracy of Vearch. To classify two distributions without using just histograms and incorporating a deep learning network to learn and classify distributions automatically, Chunqiu Zhao and Anup Basu [30] proposed a novel vessel segmentation method based on distribution learning using a spatial distribution descriptor (RPoSP) under multiple scales. Here, statistical distributions are indirectly forced as an input to a CNN for distribution learning. The proposed approach showed promising results when compared to existing state-of-the-art methods[31][32] on the DRIVE[33] dataset. Yongxin Ge et al. [34] proposed the Deep Variation Transformation Network (DVTN) model, which uses pixel variations to detect the background. 
This model assigns the probability to each pixel, and then by using thresholding, it computes whether it is background or foreground. This model compares the pixel variation instead of distributions. Previously used models in background detection usually fail when they encounter similar observations, causing false detections. The DVTN analyzes the pixel variations in a new space, where the above observations are classified easily. This model outperforms the traditional background detection models by showing astonishing results on the CDnet2014 dataset. However, all of the methods mentioned above require either a large amount of ground truth data or result in inadequate performance on unseen videos. Zhao and Basu [35] proposed a Deep Pixel Distribution Learning (DPDL) technique to overcome these issues. Unlike typical approaches, which compare new frames to a formulated background model, this technique focuses on comparing pixels' current and historical frames. This method uses a novel pixel-based feature called the Random Permutation of Temporal Pixels (RPoTP) to represent the distribution of past observations for a particular pixel. Subsequently, a CNN is used to learn whether the current pixel is foreground or background. Adding on to this method, Zhao et al. [36] later proposed a new Dynamic Deep Pixel Distribution Learning (D-DPDL) technique. In this method, the RPoTP feature is dynamically permuted for every training epoch. To compensate for the random noise generated in this process, a Bayesian Refinement model is used to improve the accuracy. Zhao et al. [1] also proposed an Arithmetic Distribution Neural Network architecture demonstrating even better performance than the D-DPDL method. The input in the ADNN method is histograms of subtractions between current pixels and their historical counterparts. The sum and product arithmetic distribution layers proposed here demonstrate a better ability to classify distributions than the convolutional layers in D-DPDL. Moreover, the number of learning parameters used in ADNN architecture (0.1 Million) is significantly less than that used in the D-DPDL method (7 Million). Moving on to detecting anomalies in videos, Virender Singh et al. [37] proposed an approach to detect variation from the norm in real-world CCTV recordings. This method uses two deep learning models (CNN and RNN) to learn a general anomaly detection model with a poorly labeled dataset. The training dataset has been doubled by flipping the videos horizontally, thus increasing the testing accuracy. The overall accuracy of the model is 97.23%. Y Fan et al. [38] proposed a technique that first converts the video clips of an ongoing event into Dynamic Images, which can simultaneously capture the appearance and temporal evolution of the occurrence. The approach uses dynamic images of two categories of video clips and involves training a detector based on deep-learning techniques. Yu Tian et al. [39] proposed a weakly-supervised anomaly detection algorithm, Robust Temporal Feature Magnitude learning (RTFM), aiming to identify snippets containing abnormal events. This method trains a feature magnitude learning function to effectively recognize the positive instances, substantially enhancing the robustness of this method to the negative instances from abnormal videos. RTFM achieves significantly improved subtle anomaly discriminability and sample efficiency. 
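The ADNN input mentioned above, a histogram of subtractions between a pixel's current value and its historical values, can be sketched in a few lines. This is our illustration, not code from [1]; the bin count and the intensity normalization to [0, 1] are assumed.

```python
import numpy as np

# Minimal sketch (our illustration, not code from [1]): the per-pixel ADNN input is
# a histogram of differences between the current frame value and that pixel's history.

def pixel_difference_histogram(history, current, n_bins=201):
    """Histogram of (current - past) values for one pixel, intensities assumed in [0, 1]."""
    diffs = current - np.asarray(history, dtype=float)        # values in [-1, 1]
    hist, _ = np.histogram(diffs, bins=n_bins, range=(-1.0, 1.0))
    return hist / max(len(history), 1)                        # normalize to a distribution

rng = np.random.default_rng(1)
background_history = 0.4 + 0.02 * rng.standard_normal(200)    # a stable (background) pixel
print(pixel_difference_histogram(background_history, 0.41).argmax())  # mode near the zero-difference bin
print(pixel_difference_histogram(background_history, 0.90).argmax())  # mode shifted for a foreground pixel
```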
The Weakly Supervised Video Anomaly Detection(WSVAD) [40]-[42] method for anomaly detection suffers from the wrong identification of normal and abnormal instances during the training process. Kapil Deshpande et al. [43] proposed better-quality transformer-based features named Videoswin Features, followed by an attention layer to capture long and short-range dependencies in the temporal domain. This method extracts better-quality features from available videos resulting in better performance. ## 2 Method In this work, we implemented an Arithmetic Distribution Neural Network [1] to develop a video surveillance system for identifying object movement in a static video. In this ADNN model, the arithmetic operations are utilized to introduce the arithmetic distribution layers, including the product and sum distribution layers. Outputs from these layers are combined and passed through a classifier for accurate classification. We chose this architecture because it requires training only one network, with limited training data, and it works well with unseen test videos. Upon successful object movement detection using background subtraction, we further analyzed the results obtained from ADNN to filter out their anomalous activities. ### Motion detection - Arithmetic Distribution Neural Network In this work, we used ADNN proposed by Zhao et al. [1] to detect motion in the input surveillance video. This paper proposed arithmetic distribution layers, which are a new type of network layer that is designed to improve distribution analysis in classification tasks. These layers, which include product and sum distribution layers, are an alternative to convolution layers. During the forward pass of the proposed arithmetic distribution layers, the input distributions are processed using the distributions in the learning kernels to generate the output distributions. In the backpropagation process, the gradient of the distributions in the learning kernels with respect to the network output is calculated to update the learning kernels. These operations are based on histograms and arithmetic distribution operations rather than the matrix arithmetic operations used in traditional convolution layers. To improve the accuracy of the foreground mask generated, an improved Bayesian refinement model is used. This model takes into account the correlations between pixels by using a mixture of Gaussian approximation functions rather than just Euclidean distance, as in the original Bayesian refinement model. The Bayesian refinement model is used to iteratively refine the foreground mask, with the output of the arithmetic distribution neural network serving as the initial binary mask for the iteration process. Figure 1: The flow diagram of our proposed approach After obtaining the refined foreground masks from the ADNN architecture, we utilize a python script to generate a trimmed video from a set of input frames by using a threshold value on the frames generated by the Bayesian refinement model. The threshold value determines the minimum number of white pixels (foreground pixels) that must be present in a frame in order for it to be included in the trimmed video. For this work, we are using a threshold value of 5% to generate the trimmed videos. ### Anomaly Detection Following the works of Waqas Sultani et al. [44], we have put into use their novel Multiple Instance Learning framework for the second part of our system. Once we obtain the trimmed video from the previous step, we use that as the input in this step. 
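As a concrete illustration of how the trimmed video above can be produced, the following sketch keeps a frame only when the refined mask marks at least 5% of its pixels as foreground (the threshold used in this work); the mask format and frame sizes are assumed, and this is our reconstruction rather than the project's actual script.

```python
import numpy as np

# Sketch of the trimming step (our reconstruction): keep a frame in the trimmed video
# only if at least `threshold` of its pixels are foreground (white) in the refined mask.

def select_motion_frames(masks, threshold=0.05):
    """masks: iterable of 2-D binary arrays (1 = foreground). Returns kept frame indices."""
    kept = []
    for i, mask in enumerate(masks):
        foreground_fraction = np.count_nonzero(mask) / mask.size
        if foreground_fraction >= threshold:
            kept.append(i)
    return kept

# Example: frames whose masks are mostly empty are dropped.
masks = [np.zeros((240, 320), dtype=np.uint8) for _ in range(3)]
masks[1][50:150, 100:220] = 1          # a moving object covering ~15% of the frame
print(select_motion_frames(masks))     # -> [1]
```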
In this, a training set of positive (containing an abnormality someplace) and negative (having no anomaly) videos are used to train the anomaly detection model. Then each video is divided into a sequence of non-overlapping temporal segments. Figure 3: Generation of trimmed video from input frames passed through the ADNN (arithmetic distribution neural network) Figure 2: Arithmetic distribution neural network for background subtraction Each video in the training set can be represented as a bag, and each video segment represents an instance in the bag. After extracting C3D features from video segments using a pre-trained 3D convNet, a fully connected neural network is trained using the novel ranking loss function; it computes the ranking loss between the top-rated occurrences in the positive bag and the negative bag. In conclusion, the proposed method for detecting anomalies in surveillance videos consists of two main steps. First, the ADNN architecture is used to detect motion in the input video and generate a refined foreground mask. This mask is then used to create a trimmed video, which is used as input for the second step of the system. In this step, we used a pre-trained multiple instance learning model trained on a set of positive and negative videos and used to classify each temporal segment in the test video as normal or anomalous. The predicted scores for each segment are then combined to generate a prediction (anomaly graph) for the entire video. By combining these two approaches, the system is able to effectively detect abnormalities in surveillance videos, even when they only occur for a short period of time or are only present in a small number of segments. ## 3 Results In this section, we will discuss our experimental results for two different videos. Table 1 compares the full video and trimmed video for two different videos, labeled Video 1 and Video 2. For Video 1, the full video had a duration of 06:37 minutes, a size of 90.5 MB, and contained 11937 frames. The anomaly detection process for this video took 789 seconds. The trimmed video for Video 2 had a duration of 04:09 minutes, a size of 68.5 MB, and contained 7470 frames. The anomaly detection process for this video took 540 seconds, which is lower than the time taken for the full video. For Video 2, the full video had a duration of 04:59 minutes, a size of 40.6 MB, and contained 8990 frames. The anomaly detection process for this video took 610 seconds. On the other hand, the trimmed video for Video 2 had a duration of 1:04 minutes, a size of 10.2 MB, and contained 1950 frames. The anomaly detection process for this video took 137 seconds, which is also lower than the time taken for the full video. Figure 4: Anomaly detection flow diagram The graphs obtained after anomaly detection are shown below. These are the relative anomaly scores of each video segment (32 in this case). We can see that the anomalous regions in the trimmed video are more focused, and there are comparatively fewer inactive regions. Moreover, the overall structure of the graphs is similar for both the original and trimmed videos, indicating that trimming down the video does not affect the anomaly identification and the relative scores of different segments. Overall, the results in Table 1 show that the trimmed videos had shorter durations and smaller sizes compared to the full videos. Additionally, the anomaly detection process for the trimmed videos took much less time than the full videos in both examples. 
This suggests that using trimmed videos leads to a more efficient anomaly detection process. ## 4 Discussion The results presented above demonstrate the effectiveness of our ADNN-based video surveillance system in identifying object movement and filtering out anomalous activities. As shown, the trimmed videos had shorter durations, smaller sizes and required less time for anomaly detection compared to the full videos in both examples. This suggests that the ADNN model and the use of trimmed videos lead to a more efficient and effective video surveillance system. Additionally, the ADNN model we employed has the advantage of requiring only limited training data and \begin{table} \begin{tabular}{|c|l|c|c|c|c|} \hline & & **Duration** & **Size** & **Frames** & **Anomaly Detection** \\ & & **(mm: ss)** & **(MB)** & & **(cpu - sec)** \\ \hline \multirow{3}{*}{Video 1} & Full Video & 06:37 & 90.5 & 11937 & 789 \\ \cline{2-6} & Trimmed Video (Combined) & 04:09 & 68.5 & 7470 & 540 \\ \hline \multirow{3}{*}{Video2} & Full Video & 04:59 & 40.6 & 8990 & 610 \\ \cline{2-6} & Trimmed Video (Combined) & 1:04 & 10.2 & 1950 & 137 \\ \hline \end{tabular} \end{table} Table 1: Comparison of results for trimmed and full-length videos Figure 5: The graphs indicate anomaly scores of the video2 (left) and its trimmed version (right) being able to perform well with unseen test videos. This makes it a suitable choice for practical implementation in real-world scenarios. In conclusion, our ADNN-based video surveillance system has demonstrated its ability to accurately detect object movement and filter out anomalous activities, making it a promising solution for video surveillance applications. ## 5 Future Work In the future, we plan to work on making the ADNN model more efficient at inferring foreground masks, as it currently takes a significant amount of time to process videos. This will be a major challenge, but we believe it is necessary in order to make the system more practical and useful in real-world scenarios. Additionally, we will work on generating a better test dataset to further evaluate the adaptability of this system. This will help us to better understand the limitations and potential improvements of the system. Overall, the goal would be to improve the efficiency of the ADNN model in order to make it a useful tool for video surveillance and anomaly detection applications.
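For completeness, the ranking objective from the Multiple Instance Learning framework used in the anomaly detection step can be sketched as follows. This is our simplified reconstruction of the loss of Sultani et al. [44] (a hinge ranking between the top-scoring segments of an anomalous bag and a normal bag, plus smoothness and sparsity terms); the regularization weights and example scores are illustrative.

```python
import numpy as np

# Simplified sketch of the MIL ranking loss of Sultani et al. [44] (our reconstruction):
# each video is a bag of segment scores; the hinge term pushes the top-scoring segment
# of an anomalous bag above the top-scoring segment of a normal bag.

def mil_ranking_loss(pos_scores, neg_scores, lam1=8e-5, lam2=8e-5):
    pos = np.asarray(pos_scores, dtype=float)   # segment scores of an anomalous video
    neg = np.asarray(neg_scores, dtype=float)   # segment scores of a normal video
    hinge = max(0.0, 1.0 - pos.max() + neg.max())
    smoothness = np.sum(np.diff(pos) ** 2)      # temporal smoothness on the anomalous bag
    sparsity = np.sum(pos)                      # anomalies should be rare within the bag
    return hinge + lam1 * smoothness + lam2 * sparsity

pos = np.array([0.10, 0.20, 0.90, 0.15])        # one clearly anomalous segment
neg = np.array([0.05, 0.10, 0.12, 0.08])
print(round(mil_ranking_loss(pos, neg), 4))     # hinge = 1 - 0.90 + 0.12 = 0.22, plus small regularizers
```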
2310.00525
Reinforcement learning adaptive fuzzy controller for lighting systems: application to aircraft cabin
Lighting requirements are subjective, and one light setting cannot work for all. However, there is little work on developing smart lighting algorithms that can adapt to user preferences. To address this gap, this paper uses fuzzy logic and reinforcement learning to develop an adaptive lighting algorithm. In particular, we develop a baseline fuzzy inference system (FIS) using domain knowledge. We use the existing literature to create a FIS that generates lighting setting recommendations based on environmental conditions, i.e., the daily glare index, and user information including age, activity, and chronotype. Through a feedback mechanism, the user interacts with the algorithm, correcting the algorithm output to their preferences. We interpret these corrections as rewards to a Q-learning agent, which tunes the FIS parameters online to match the user preferences. We implement the algorithm in an aircraft cabin mockup and conduct an extensive user study to evaluate the effectiveness of the algorithm and understand its learning behavior. Our implementation results demonstrate that the developed algorithm learns user preferences while successfully adapting to a wide range of environmental conditions and user characteristics. This underscores its viability as a potent solution for intelligent light management, featuring advanced learning capabilities.
Kritika Vashishtha, Anas Saad, Reza Faieghi, Fengfeng Xi
2023-09-30T23:55:49Z
http://arxiv.org/abs/2310.00525v1
Reinforcement learning adaptive fuzzy controller for lighting systems: application to aircraft cabin ###### Abstract The lighting requirements are subjective and one light setting cannot work for all. However, there is little work on developing smart lighting algorithms that can adapt to user preferences. To address this gap, this paper uses fuzzy logic and reinforcement learning to develop an adaptive lighting algorithm. In particular, we develop a baseline fuzzy inference system (FIS) using the domain knowledge. We use the existing literature to create a FIS that generates lighting setting recommendations based on environmental conditions i.e. daily glare index, and user information including age, activity, and chronotype. Through a feedback mechanism, the user interacts with the algorithm, correcting the algorithm output to their preferences. We interpret these corrections as rewards to a Q-learning agent, which tunes the FIS parameters online to match the user preferences. We implement the algorithm in an aircraft cabin mockup and conduct an extensive user study to evaluate the effectiveness of the algorithm and understand its learning behavior. Our implementation results demonstrate that the developed algorithm possesses the capability to learn user preferences while successfully adapting to a wide range of environmental conditions and user characteristics. and can deal with a diverse spectrum of environmental conditions and user characteristics. This underscores its viability as a potent solution for intelligent light management, featuring advanced learning capabilities. **Keywords:** fuzzy logic, reinforcement learning, Q-learning, adaptive algorithm ## 1 Introduction Interior lighting plays a crucial role in enhancing the comfort and well-being of aircraft passengers. Diverse lighting scenarios create a variety of atmospheres and experiences for passengers, and can significantly impact passengers' mood, perception, alertness, and circadian rhythm. This can range from creating a sense of relaxation to providing stimulation or entertainment. Additionally, lighting can assist passengers in adapting to different time zones and minimizing the effects of jet lag [1, 2]. With the advancement of artificial intelligence and control systems, intelligent lighting systems are becoming increasingly popular. Previous research in this area includes the development of a smart lighting system to elevate visual comfort while minimizing energy consumption [3]. The system maintains a desired light level where needed while minimizing it where not required. This concept is further explored in [4], using the Internet of Things (IoT) to create an autonomous and more efficient lighting management system in smart cities. Another notable work is [5], which details the design, implementation, and deployment of a smart emergency light system for buildings. The system's key advantage is its integration with existing building facilities (i.e., emergency lights), and it has been successfully implemented in smart buildings. While developing similar systems for aircraft interiors holds great promise for elevating the flight experience, progress in this area has been limited. The papers that we were able to find have primarily focused on the architecture of intelligent lighting systems with little or no emphasis on developing intelligent light management algorithms [6; 7; 8]. Our objective in this paper is to present an intelligent algorithm for light management in aircraft cabins. 
In particular, we aim to develop an algorithm that can automatically adjust the intensity of aircraft interior lights based on the perception of the environment, e.g., outdoor lighting and passengers' activities. Further, the algorithm should be able to learn the passengers' preferences and adapt light settings to their liking. A particular challenge in developing such an algorithm is that lighting preferences are highly individualistic and influenced by a multitude of factors. As pointed out in [9], factors such as age, gender, and even the type of activity being undertaken can significantly impact an individual's lighting needs. For instance, the light intensity required for reading a book can greatly differ from the light intensity preferred while watching a movie or enjoying a meal. Further, visual comfort and glare sensations are subjective feelings that cannot be accurately captured by a one-size-fits-all setting. In this intricate landscape where a comprehensive model is difficult to find, we propose to use a rule-based algorithm, namely a fuzzy inference system (FIS). The strength of a FIS lies in its ability to build a controller using intuitive natural rules. This becomes particularly advantageous in controlling processes that are difficult to model, such as the nuanced interplay of environmental conditions and personal preferences in defining optimal lighting conditions. In fact, FIS has been successfully implemented in related contexts such as IoT-based traffic light control [10], and also energy-saving strategies in smart LED lighting systems that account for lighting comfort and daylight [11]. In addition, our team has had success in using FIS to automatically adjust the transparency of electrochromic windows and control glare in aircraft cabins to maintain a comfortable visual environment amidst varying glare conditions [12]. As we intend to develop an adaptive algorithm that can learn and respond to the user's preferences, we need to enhance FIS with an adaptive mechanism that can interact with the user, interpret the interaction, and take action accordingly. A standard method of choice to achieve this is reinforcement learning (RL) [13]. The combination of RL and FIS is an active area of research with a wide range of applications; see for example [14; 15; 16; 17; 18]. The choice of the RL algorithm depends on the inherent characteristics of the problem domain. As will be clarified in our developments, for the intelligent lighting management system, the problem domain is relatively small, discrete, and model-free. One RL algorithm that is proven effective for such problems is Q-learning [13]. Therefore, a Q-learning FIS; hereafter referred to as QFIS, will be our method of choice to develop the intended adaptive light management algorithm. QFIS has been explored in numerous studies and has been successfully implemented for various applications; see for example [19; 20; 21; 22; 23; 24; 25]. The idea behind QFIS is to build a self-learning FIS whose parameters can be tuned online using Q-learning. In this study, we first develop a FIS for aircraft cabin light management. We develop the FIS using domain knowledge and consider inputs such as age, chronotype, and user activity type. Next, we make the FIS adaptive by designing a Q-learning algorithm that adjusts the FIS parameters online during interactions with the user. We present an extensive user study in an aircraft cabin to evaluate the effectiveness of the algorithm. 
In our developments, we adopt the standard formulation of QFIS given in [20]; however, this requires careful attention in determining the inputs, membership functions (MFs), FIS rules, the representation of problem domain using Q-tables, the choice of a reward function, and the learning rate, all of which are detailed in subsequent sections. The major contribution of this work is to develop an intelligent lighting management algorithm for aircraft cabins. To our knowledge, this algorithm is the first to automatically change lighting settings based on user activities and preferences. While our focus is on aircraft interiors, the developments here can be extended to different lighting where an intelligent light management system is desired, e.g., smart buildings. ## 2 Algorithm Overview This section provides an overview of the proposed QFIS algorithm for intelligent lighting management. Figure 1 illustrates the algorithm architecture. The inputs are categorized into three groups: (1) environment, (2) passenger, and (3) feedback for the RL agent. For the environmental information, we use photopic sensors to measure the current ambient lighting quantified in terms of the daily glare index (DGI). For the passenger information, we use age, chronotype, and the passenger's current activity. For any RL task, a mechanism for interaction with the environment is required. For this purpose, we use a control knob by which the passenger can correct the lighting setting at any time. The interactions with the control knob will constitute the reward feedback for the RL agent. In this approach, if the algorithm changes lighting conditions and the passenger manually overrides it via the control knob, the adjustment applied by the passenger constitutes a negative reward that will be used for learning and adaptation. Note that the selection of the above inputs was based on recommendations from our aircraft manufacturer partner. In practice, the passenger age and chronotype can be recorded in a user-specific account within the aircraft infotainment system. Subsequently, the passenger interactions with the control knob can be recorded in this account for continuous learning and adaptation over multiple flights. Also, passenger activity can be obtained using a camera and various algorithms available in the vast literature of vision-based activity recognition [26]. To generate an appropriate light setting, the environmental and passenger information enters the FIS module. The role of FIS is to control lighting based on fuzzy MFs and rule-base that are developed based on expert knowledge. Once FIS generates the light setting, the RL agent will monitor the passenger interactions with the control knob. The passenger adjustments will be used to adapt FIS parameters according to the passenger preferences. As will be described shortly, the adaptable parameters will modify the shape of MFs and the weights of rules in the fuzzy inference step. The next two sections provide the details of the FIS and RL modules. ### Fuzzy inference system Here, we explain the details of the proposed FIS in our algorithm. In the design of the FIS, we use the Gaussian MFs and the Takagi-Sugeno (TS) inference system. These choices ensure the smoothness and stability of the algorithm as it is detailed in [27]. The inputs include age, activity, DGI, and chronotype. The output is the intensity of interior lights. 
The fuzzy system comprises the following components: fuzzification, rule-base, inference engine, and defuzzification, as detailed below. ### Fuzzification We use the Gaussian MF for the fuzzification of all inputs. Let \(m\) and \(\sigma\) denote the mean and standard deviation of a Gaussian MF; then its output is expressed as \[\mu\left(x\right)=\exp\left(-\frac{1}{2}\left(\frac{x-m}{\sigma}\right)^{2}\right) \tag{1}\] Figure 2 illustrates the MFs designed for each input. While DGI and age are numerical inputs, passenger activity and chronotype are categorical. To process all inputs in the same framework, we consider MFs with zero standard deviation for the categorical data and pass them through the fuzzification step. The details of fuzzification for each input are given below. Figure 1: Architecture of the proposed QFIS algorithm for intelligent lighting management #### 2.2.1 Daily glare index DGI is a common metric used to assess the potential discomfort caused by glare from daylight sources in indoor spaces. According to [28], DGI values up to 22 are generally deemed acceptable, and higher values tend to become uncomfortable and even intolerable. As such, we set the DGI value of 22 as a point of symmetry for the fuzzification of DGI, and use the deviations from this value to categorize the level of comfort/discomfort of ambient light (Fig. 2a). Note that we overlap the MFs by 15 percent to ensure smooth transitions from one MF to another. Table 1 shows the parameter values of the DGI MFs. #### 2.2.2 Activity We consider three types of activities: meeting/entertainment, eating, and sleeping. As the activity is categorical data, we use MFs with zero standard deviations and rank them based on their light requirements from low to high. As shown in Fig. 2b, we consider the same level of light intensity for meeting and entertainment and rank them higher than eating, which itself ranks higher than sleeping. As mentioned above, passenger activity is recognized by a computer vision algorithm, and the set of activities considered in our algorithm can be easily extended to an arbitrary number of activity categories. \begin{table} \begin{tabular}{l l l} \hline MF & \(m\) & \(\sigma\) \\ \hline Negligible & 14 & 1.5 \\ Acceptable & 18 & 1 \\ Comfortable & 22 & 1 \\ Uncomfortable & 25 & 1 \\ Very Uncomfortable & 29 & 1.8 \\ \hline \end{tabular} \end{table} Table 1: Parameters of DGI MFs Figure 2: MFs designed for each input of the FIS module #### 2.2.3 Age For the fuzzification of age, we rely on the findings of [29], in which the glare thresholds for two major age groups, 20-40 and 40-60 years old, were studied. The luminosity thresholds for the 40-60 years old category are higher than those for the 20-40 years group. This information is helpful both in the design of the MFs and in setting up the rule base. The MFs for age are illustrated in Fig. 2c, and the details of their parameters are presented in Tab. 2. Note that, aside from the 60+ category, we used identical standard deviations to ensure all age groups have an equal range of 20 years. We also use a five percent overlap between MFs to handle transitions between the age brackets. #### 2.2.4 Chronotype Chronotype refers to the inherent preference that individuals have for certain times of the day. It is a concept used to describe an individual's natural sleep-wake patterns and their corresponding preferences for being active or resting during particular parts of the day.
Therefore, it can be used to tailor light settings to the passengers' preferences. There are three primary chronotypes: morning, evening, and intermediate. Morning chronotypes tend to feel more alert, awake, and productive during the early morning hours. Evening chronotypes feel more energetic, alert, and creative during the evening and nighttime hours. Intermediate chronotypes fall in between morning and evening chronotypes. The relationship between chronotypes and light exposure is well-studied in the literature [30, 31, 32]. It is shown that generally morning types prefer morning light exposure while evening types prefer evening light exposure. This allows us to rank the morning chronotype \(>\) intermediate chronotype \(>\) evening chronotype in terms of the tolerance of glare discomfort, leading to the MFs depicted in Fig. 2d. ### Rule-base With the above selection of MFs, there are five and four MFs allocated to DGI and age, respectively, while passenger activity and chronotype each have three MFs. Therefore, there exist \(5\times 4\times 3\times 3=180\) possible combinations of fuzzy sets that must be taken into account in constructing the rule base. The rules are created to cover all 180 combinations of inputs of the FIS with each rule corresponding to one of the fuzzy outputs. Due to the large number of rules, we present them in a supplementary document. \begin{table} \begin{tabular}{l c c} \hline \hline MF & \(m\) & \(\sigma\) \\ \hline 0-20 years old & 10 & 5 \\ 20-40 years old & 30 & 5 \\ 40-60 years old & 50 & 5 \\ 60+ years old & 75 & 8 \\ \hline \hline \end{tabular} \end{table} Table 2: Parameters of age MFs ### Inference engine and defuzzification We use the zero-order TS inference engine for the design of the FIS. As mentioned earlier, this choice ensures the stability of the QFIS algorithm as shown in [27]. With this choice, the FIS output will be a weighted average of the output values from the rules. To formulate the FIS, let \(\mathbf{x}_{t}=\left[x_{1,t},x_{2,t},x_{3,t},x_{4,t}\right]^{T}\) be the vector of FIS inputs at time \(t\). The \(j\)-th rule denoted by \(R_{j}\) takes the following form \[R_{j}=\text{IF}\quad x_{1,t}\text{ is }A_{1}^{j}\quad\text{AND}\quad\cdots \quad\text{AND}\quad x_{4,t}\text{ is }A_{4}^{j}\quad\text{THEN}\quad f_{j}\left(\mathbf{x}_{t} \right)=k_{j}, \tag{2}\] where \(A_{i}^{j}\) is the fuzzy set for \(x_{i},t\), and the \(k_{j}\) is the consequent parameter. Assuming a weight of \(w_{j}\) for \(R_{j}\), the the output of the FIS at time \(t\) will take the form \[f\left(\mathbf{x}_{t}\right)=\sum\limits_{j=1}^{N}\bar{w}_{j}k_{j} \tag{3}\] where \(\bar{w}_{j}=w_{j}\Bigg{(}\sum\limits_{j=1}^{N}w_{j}\Bigg{)}^{-1}\) is the normalized weight, and \(N=180\) is the number of rules. In the proposed algorithm, we make \(k_{j}\)s adaptable using Q-learning. Another set of adaptable parameters are the mean values of MFs to be discussed later. For the baseline FIS, we will use the \(k_{j}\) values presented in Tab. 3. The weights associated with each rule represent the degree of activation or membership of the input variables in the corresponding fuzzy sets. The weights will be kept constant during Q-learning adaptation. To provide an insight into the characteristics of the proposed FIS, we present the fuzzy surface plots for DGI and chronotype, and DGI and age versus light intensity in Fig. 3. 
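To illustrate how Equations (1) and (3) combine, the following is a minimal Python sketch of Gaussian fuzzification followed by zero-order TS inference. It assumes a product t-norm for the rule firing strengths (the t-norm is not stated explicitly here), and the two rules shown are illustrative examples built from the MF parameters of Tables 1 and 2 and the output values of Tab. 3, not entries of the actual 180-rule base.

```python
import math

def gaussian_mf(x, m, sigma):
    """Equation (1): Gaussian membership value; categorical inputs use sigma = 0."""
    if sigma == 0:
        return 1.0 if x == m else 0.0
    return math.exp(-0.5 * ((x - m) / sigma) ** 2)

def ts_output(x, rules):
    """Equation (3): zero-order Takagi-Sugeno output as a weighted average of consequents.
    Each rule is ({input_name: (m, sigma), ...}, k); a product t-norm is assumed
    for the firing strength w_j."""
    num, den = 0.0, 0.0
    for antecedents, k in rules:
        w = 1.0
        for name, (m, sigma) in antecedents.items():
            w *= gaussian_mf(x[name], m, sigma)
        num += w * k
        den += w
    return num / den if den > 0 else 0.0

# Two illustrative rules only (the actual rule base contains 180 rules):
rules = [
    ({"dgi": (22, 1.0), "age": (30, 5.0)}, 75.0),   # comfortable DGI, 20-40 years -> LU2
    ({"dgi": (29, 1.8), "age": (30, 5.0)}, 50.0),   # very uncomfortable DGI       -> D1
]
print(ts_output({"dgi": 24.0, "age": 27.0}, rules))
```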
## 3 Reinforcement learning In this section, we augment the aforementioned FIS with RL, to create the QFIS algorithm with learning and adaptation capabilities. As discussed earlier, given the relatively small, discrete, and model-free characteristics of the problem domain, we will base our approach on Q-learning. \begin{table} \begin{tabular}{l l} \hline FIS output category & \(k\) \\ \hline Lights Off (D5) & 0 \\ Darken 4 (D4) & 12.5 \\ Darken 3 (D3) & 25 \\ Darken 2 (D2) & 37.5 \\ Darken 1 (D1) & 50 \\ Light up 1 (LU1) & 62.5 \\ Light up 2 (LU2) & 75 \\ Light up 3 (LU3) & 87.5 \\ Light up 4 (LU4) & 100 \\ \hline \end{tabular} \end{table} Table 3: Output values chosen for the zero-order TS FIS Similar to conventional Q-learning algorithms, we represent the state space using Q-tables. Let \(\mathbf{Q}_{t}\) be the Q-table at time \(t\). Given the state \(\mathbf{x}_{t}=\left[x_{1,t},x_{2,t},x_{3,t},x_{4,t}\right]^{T}\in\mathbf{Q}_{t}\), the FIS module takes the action \(f\left(\mathbf{x}_{t}\right)\), leading to a reward \(r_{t}\) that is defined using the following proposed reward function \[r_{t}=-\frac{2}{\pi}\arctan\left(\epsilon(f\left(\mathbf{x}_{t}\right)-a)\right), \tag{4}\] where \(\epsilon\) is a bias value that controls how harsh or forgiving the reward function is, and \(a\) represents the desired light intensity change as determined by user preferences. Using \(r_{t}\), we update the Q-table as follows \[\mathbf{Q}_{t+1}\left(\mathbf{x}_{t},\;f\left(\mathbf{x}_{t}\right)\right)=\;\mathbf{Q}_{t}\left(\mathbf{x}_{t},\;f\left(\mathbf{x}_{t}\right)\right)+\;\eta\Delta_{t} \tag{5}\] where \(\eta>0\) is the learning rate, and \(\Delta_{t}\) is the temporal difference error defined as follows \[\Delta_{t}=r_{t}\left(\mathbf{x}_{t},\;f\left(\mathbf{x}_{t}\right)\right)+\;\gamma V_{t}(\mathbf{x}_{t})-\;\mathbf{Q}_{t}\left(\mathbf{x}_{t},\;f\left(\mathbf{x}_{t}\right)\right), \tag{6}\] where \(\gamma>0\) is known as the discount factor and \(V_{t}(\mathbf{x}_{t})=\max\limits_{f(\mathbf{x}_{t})}\;\mathbf{Q}_{t}\left(\mathbf{x}_{t},\;f\left(\mathbf{x}_{t}\right)\right)\). To adapt the FIS module to the user preferences, we construct adaptation rules for the MF mean values \(m\) and the output values \(k\). Let \(\mathbf{m}\) and \(\mathbf{k}\) be two vectors consisting of all MFs' mean values and FIS output values to be tuned. Then, the vector of all adaptable parameters is \(\boldsymbol{\phi}=[\mathbf{m}^{T}\;\mathbf{k}^{T}]^{T}\). To derive adaptation laws for these parameters, let us define the following objective function \[E=\frac{1}{2}\Delta_{t}^{2}. \tag{7}\] Using the gradient descent approach, the parameters can be updated as follows \[\boldsymbol{\phi}\left(t+1\right)=\boldsymbol{\phi}\left(t\right)-\eta\frac{\partial E}{\partial\boldsymbol{\phi}} \tag{8}\] We note that \[\frac{\partial E}{\partial\boldsymbol{\phi}}=\Delta_{t}\frac{\partial\Delta_{t}}{\partial\boldsymbol{\phi}}. \tag{9}\] Substituting (6) into (9) yields \[\frac{\partial E}{\partial\boldsymbol{\phi}}=-\Delta_{t}\frac{\partial\mathbf{Q}_{t}\left(\mathbf{x}_{t},f(\mathbf{x}_{t})\right)}{\partial\boldsymbol{\phi}} \tag{10}\] Figure 3: Fuzzy surface diagrams for the proposed FIS module Using (3), we have \[\frac{\partial\mathbf{Q}_{t}\left(\mathbf{x}_{t},f(\mathbf{x}_{t})\right)}{\partial k_{j}}=\bar{w}_{j}, \tag{11}\] where \(j\) corresponds to the \(j\)-th rule.
Moreover, we have \[\frac{\partial\mathbf{Q}_{t}\left(\mathbf{x}_{t},f(\mathbf{x}_{t})\right)}{\partial m_{i}^{j}}=\frac{\partial\mathbf{Q}_{t}\left(\mathbf{x}_{t},f(\mathbf{x}_{t})\right)}{\partial w_{j}}\frac{\partial w_{j}}{\partial m_{i}^{j}}=\frac{k_{j}-\mathbf{Q}_{t}\left(\mathbf{x}_{t},f(\mathbf{x}_{t})\right)}{\sum_{j}w_{j}}w_{j}\frac{x_{i}-m_{i}^{j}}{\left(\sigma_{i}^{j}\right)^{2}}, \tag{12}\] where the superscript refers to the \(j\)-th rule. Note that in deriving (12), we use (1). With the above developments, the parameter update rule can be summarized as \[\left[\begin{array}{c}m_{i,t+1}^{j}\\ k_{j,t+1}\end{array}\right]=\left[\begin{array}{c}m_{i,t}^{j}\\ k_{j,t}\end{array}\right]+\eta\Delta_{t}\left[\begin{array}{c}\frac{\left(k_{j}-\mathbf{Q}_{t}(\mathbf{x}_{t},f(\mathbf{x}_{t}))\right)}{\sum_{j}w_{j}}w_{j}\frac{x_{i}-m_{i}^{j}}{\left(\sigma_{i}^{j}\right)^{2}}\\ \bar{w}_{j}\end{array}\right]. \tag{13}\] ### Q-table formation For our intelligent light management algorithm, three scenarios are possible, depending on whether the FIS output is: (1) correct, i.e., the same as the user's preference, (2) too bright, in which case a negative reward is given to lower the intensity levels, or (3) too dark, in which case a negative reward is given to raise the intensity levels. Here, a single Q-table will be ineffective as it cannot differentiate between the two negative-reward scenarios. Therefore, we use two Q-tables to account for both scenarios. This way, we are able to appropriately reward the algorithm for suggesting lighting intensities that are either too bright or too dark. The combinations of all four inputs of the algorithm and their corresponding MFs create 180 different states in the Q-tables. ## 4 Implementation and results We conducted an extensive user study in an aircraft cabin mockup to assess the performance of the proposed algorithm. This section details our experimental setup followed by the test procedures and their results. ### Experiments setup For this purpose, we established an experimental setup using a real aircraft fuselage within a research laboratory at Toronto Metropolitan University, as depicted in Fig. 4. To emulate the effect of external sunlight inside the cabin, we positioned a 1000W Colortran LQF6 floodlight, featuring an output of roughly 57000 lm and a color temperature of 5000K, outside the aircraft fuselage. This setup can be seen in Fig. 5. Inside the cabin, illumination was provided by an intelligent LED strip, producing 1600 lm, which was controlled via a Raspberry Pi. We employed a photopic sensor (PMA 1130-S-420-150K) procured from Solar Light Company, Glenside, PA, to measure the daily glare index (DGI). The backbone of our system's computation was the NVIDIA Jetson TX2, operating on a Linux system. To make the testing process more efficient and enhance user engagement, we developed a mobile application. Using this platform, participants were able to enter their information and provide feedback; the FIS utilized these inputs to determine the light intensity output, which was subsequently conveyed to the LED strip in the form of a duty cycle. Participants were then exposed to the resulting lighting conditions. They were given the flexibility to either adjust the light intensity through the application or continue with the preset configurations. Such feedback became instrumental for the Q-learning algorithm, enabling it to optimize the parameters of the FIS. Figure 6 presents a comprehensive outline of our experimental procedures.
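As a concrete illustration of the feedback loop that the participants drive during these experiments, the following is a minimal Python sketch of one interaction step, implementing Equations (4)-(6) and the \(k\)-part of the update rule (13). The hyperparameter values are illustrative only, and the dual Q-table bookkeeping and the \(m\)-parameter update described above are omitted for brevity.

```python
import math

GAMMA, ETA, EPS = 0.9, 0.2, 0.1   # discount factor, learning rate, reward bias (illustrative values)
ACTIONS = [0, 12.5, 25, 37.5, 50, 62.5, 75, 87.5, 100]   # output categories of Tab. 3

def reward(fis_output, user_target):
    """Equation (4): the reward is negative whenever the user corrects the output."""
    return -(2.0 / math.pi) * math.atan(EPS * (fis_output - user_target))

def qfis_update(Q, state, action, fis_output, user_target, k, w_bar):
    """One user interaction: Equations (5)-(6) plus the k-part of update rule (13).
    Q maps (state, action) to a value; k[j] and w_bar[j] are the consequent and
    normalized firing strength of rule j for this state."""
    r = reward(fis_output, user_target)
    v = max(Q.get((state, a), 0.0) for a in ACTIONS)      # V_t(x_t)
    delta = r + GAMMA * v - Q.get((state, action), 0.0)   # temporal-difference error, Eq. (6)
    Q[(state, action)] = Q.get((state, action), 0.0) + ETA * delta   # Eq. (5)
    for j in k:                                           # gradient step on consequents, Eq. (13)
        k[j] += ETA * delta * w_bar[j]
    return delta
```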
We engaged 10 participants from the 20-40 age bracket and 8 from the 40-60 age bracket. These age groups were recommended by our industry partner, as most passengers, especially for business jets, belong to these two age groups. ### FIS results Before we test the proposed QFIS algorithm, it is important to evaluate the FIS module of the algorithm to ensure it generates appropriate light settings for an average individual in each age group. For this purpose, each participant was placed in a set scenario and asked to correct their preference for light intensity, using the aforementioned mobile application. The experiments encompassed three distinct activities, including entertainment, eating, and Figure 4: Aircraft cabin mockup for the experiments Figure 5: The setup for simulating exterior lighting sleeping. Based on the recommendations given in [9], the target light settings for these activities include 450 lux for entertainment, 150-200 lux for eating, and less than 50 lux for sleeping. Figure 7 illustrates the light intensity prescribed by the FIS module for each participant across different activities for the two age group categories. Following the completion of the trials, the results for each activity were averaged to obtain an understanding of the preferred average lighting settings in different activities and age ranges (Tab. 4). This data highlights the difference between the light intensity comfort levels in different circumstances. Additionally, this data plays an important role in understanding passengers whose preferences do not fall under the average patterns in each category. Figure 6: Flowchart of experimental procedure Figure 7: The baseline FIS light intensity output for the 20-40 years age group (left), and 40-60 years age group (right) ### QFIS Results This section explores the learning capabilities of the proposed algorithm. Detailed below are three different experimental sets, each encompassing participants from varying age groups with notably distinct lighting preferences and engaged in different activities: 1. _Experiments set 1_, featuring a participant aged between 20-40 years who prefers significantly dimmer light settings relative to the typical preferences of this age group, while engaged in entertainment. 2. _Experiments set 2_, including a participant aged between 40-60 years who leans towards much brighter light settings compared to the standard settings of this age range, while having a meal. 3. _Experiments set 3_, focusing on a participant from the 20-40 years age bracket, while attempting to sleep. The success criteria for the algorithm constituted two factors: 1. The correctness of the tuning: Does the algorithm correctly tune the output towards the desired passenger light intensity? 2. The effectiveness of the tuning: comparing different learning rates \(\eta\) and the number of trials required to tune different scenarios. #### 4.3.1 Experiments set 1 Table 5 presents the input state, the initial algorithm output, and the participant's preference in this set of experiments. For the first test case, the objective was to quickly tune the \(k\) parameters to align with user preferences. The user used the aforementioned mobile application to interact with the algorithm and provide feedback. The algorithm calculated a reward from the feedback at every iteration and adapted the \(k\) parameter, leaning toward the user preference. To explore the effects of varying learning rates, we evaluated learning rates from 0.1 to 0.5. 
Figure 8 illustrates experimental results. While the algorithm starts with the initial output of 75, it learns from the user inputs and adapts its parameters to reach the user preferences over time. It is evident that the value of \(\eta\) dramatically influences the number of trials necessary. For instance, when \(\eta=0.5\), the algorithm tuned with approximately 22 percent less number of trials compared to the case of \(\eta=0.1\). This highlights that by choosing an appropriate value for \(\eta\), one can effectively control the learning behavior of the algorithm. Exploring the variations in the \(m\) parameters during these trials revealed they behaved in three distinct manners: \begin{table} \begin{tabular}{l l l} \hline \hline Activity & 20 – 40 years age group & 40 - 60 years age group \\ \cline{2-3} Entertainment & 72.4 & 83.125 \\ Eating & 62.4 & 67.375 \\ Sleeping & 8.4 & 8.125 \\ \hline \hline \end{tabular} \end{table} Table 4: The average of light intensity values generated by the baseline FIS module 1. When the input directly corresponds to the mean of MF, the mean remains unchanged because it represents the desired value for the input. 2. If the input is positioned closely between two MFs, the specific state oscillates between these two MFs, thus fine-tuning both \(k\) parameters concurrently. 3. When the input predominantly aligns with one MF, the mean of that function undergoes sole adjustment to expedite the tuning, ensuring the output reflects the passenger's preference. To further explore the role of \(m\) parameters, we conducted additional experiments, observing the \(m\) values during prolonged trials. We set \(\eta=0.1\) for tuning all \(m\) and \(k\) parameters. Table 6 shows the final values of \(m\) for the age MFs and their corresponding \(k\) values. To elaborate on the results, recall that while this particular participant belongs to the age group 20 - 40 years old, their lighting preference aligns more with a younger age bracket. Therefore, the algorithm struggles to identify an optimal MF for the user. As \(\eta\) of all \(m\) and \(k\) parameters are identical, the algorithm tunes all the parameters simultaneously to generate the optimal output. However, this leads to large shifts in MFs, ultimately centering them around either 10 or 34. While this adaptation finally results in an output aligned with the user preference, a significant drawback is the extended duration required to reach the desired performance. This can be alleviated by setting a low learning rate (e.g., 0.002) for the \begin{table} \begin{tabular}{l l l} \hline Variable & Category & Value \\ \hline Age & 20-40 & 22 \\ DGI & Comfortable & 22 \\ Chronotype & Evening & 25 \\ Activity & Entertainment & 5 \\ Baseline FIS output & LU2 & 75 \\ Passenger Preference & \(\sim\)LU1 & \(\sim\)62 \\ \hline \end{tabular} \end{table} Table 5: Parameters for the experiments set 1 Figure 8: The change in the algorithm output with varying learning rates in experiments set 1 parameters while maintaining a relatively larger learning rate for the \(k\) parameters (e.g., 0.1). With such choices of learning rates, \(k\) parameters can quickly converge to a neighborhood of desired behavior, while slight changes in the \(m\) values will fine-tune the algorithm behavior. It is noteworthy that the DGI MFs were not affected in this test because the DGI in the test environment was 22, falling directly on the mean of the _Comfortable_ DGI MF. 
#### 4.3.2 Experiments set 2 Table 7 presents the parameters for this set of experiments. Note that this set of experiments involves a different level of DGI, and new participants with a different age category, chronotype, and activity compared to experiments set 1. Furthermore, this participant's lighting preferences align more with an older age bracket; therefore, the new experiments evaluate the algorithm in scenarios opposite to the ones mentioned in the previous section. We observed the learning behavior of the algorithm for different \(\eta\) values ranging from 0.1 to 0.5. Figure 9 illustrates the results, showing that the algorithm can effectively learn the passenger preferences and adapt its parameters correctly for cases where the user prefers more lighting. Furthermore, when comparing the varying learning rates, using \(\eta=0.5\) allowed for 25 percent faster tuning compared to the case of \(\eta=0.1\). These results, coupled with outcomes from experiments set 1, verify the algorithm's effectiveness in adapting to the preferences of users with varied characteristics. Further, upon closely examining the algorithm's performance near the target behavior, it is evident that there is minimal or no overshoot at all. Table 8 presents the final three trials from experiments set 2 with \(\eta=0.2\). The algorithm exceeds the desired value of 100 by a marginal amount, but this leads to another negative reward that is accounted for by our dual Q-table configuration. Consequently, once the algorithm exceeds the desired value, it promptly readjusts to the target in the subsequent trial. It is important to highlight that while higher learning rates might lead to more noticeable overshoots; these can be effectively handled with adaptive learning rates. \begin{table} \begin{tabular}{l l l} \hline \hline Number of trials & \(\eta\) & \(m\) \\ \hline 48 & 0.5 & 30 to 27.732 \\ 59 & 0.4 & 30 to 27.409 \\ 71 & 0.3 & 30 to 27.995 \\ 87 & 0.2 & 30 to 27.498 \\ 136 & 0.1 & 30 to 27.066 \\ \hline \hline \end{tabular} \end{table} Table 6: The shift in \(m\) values with different learning rates in experiments set 1 \begin{table} \begin{tabular}{l l l} \hline \hline Variable & Category & Value \\ \hline Age & 40-60 & 50 \\ DGI & Negligible & 14 \\ Chronotype & Morning & 5 \\ Activity & Eating & 3 \\ Baseline FIS output & LU3 & 87.5 \\ Passenger Preference & \(\sim\)LU4 & \(\sim\)100 \\ \hline \hline \end{tabular} \end{table} Table 7: Parameters for the experiments set 2 #### 4.3.3 Experiments set 3 This set of experiments focuses on sleeping, as it is a unique activity in which the interior lights need to shut off or function as a nightlight. Through the FIS tests, it was found that the majority of individuals prefer a light intensity of around 10. However, to challenge the algorithm, we asked a participant who prefers brighter lighting for sleeping to conduct this set of experiments. Table 9 shows the parameters for these trials. Figure 10 displays the test results highlighting the effects of various learning rates. This test displays a higher range of tuning as the participant required a higher light intensity when compared to the baseline FIS's initial output. Observing the behavior \(m\) parameters, it was revealed that the user age predominantly aligned with one MF, and the mean of that MF was tuned solely to accelerate tuning to match the user preference. This can be seen in Fig. 
10 by the slight increase in the slope of the curves over Table 10 tabulates the shift of the mean from the first test to the last for different \(\eta\) values. ### Summary of results Through the various experiments conducted, the algorithm's effectiveness in adapting to users with a wide spectrum of preferences and characteristics was verified. Furthermore, the effect of various learning rates was explored, showing that the learning behavior of the algorithm can be controlled easily using the learning rate. The detailed examination of the algorithm behavior revealed that the learning rate for the \(m\) parameters should be smaller than the ones for the \(k\) parameters, crucial for a balanced \begin{table} \begin{tabular}{l l l l} \hline \hline Trial number & \(\eta\) & \(r_{t}\) & \(f\left(\mathbf{x}_{t}\right)\) \\ \hline 103 & 0.2 & -0.0254 & 99.99203 \\ 104 & 0.2 & -0.0159 & 100.005 \\ 105 & 0.2 & 0 & 100 \\ \hline \hline \end{tabular} \end{table} Table 8: Algorithm overshoot and its correction Figure 9: The change in the algorithm output with varying learning rates in experiments set 2 adaptation of algorithm parameters and minimizing the number of trials needed to match the user preferences. The number of trials also depends on the difference between the baseline FIS output, and the passenger preference. This underscores the importance of using the domain knowledge to build a fine-tuned baseline FIS. ## 5 Conclusion This paper developed a QFIS algorithm for intelligent lighting management with a focus on aircraft interiors. Through a comprehensive user study, we have demonstrated the algorithm's efficacy in accommodating a diverse spectrum of user preferences and characteristics. Furthermore, we have conducted an in-depth analysis of the algorithm's behavior and studied the impact of its design parameters. While our focus was on aircraft interiors, the developed algorithm can be used in any environment that can benefit from a smart lighting algorithm. As revealed in our literature review, the existing work on intelligent lighting systems has primarily focused on hardware architecture, and the software developments are limited to on-off light settings with no capability to learn user preferences. As such, our work presents several elements of novelty. To \begin{table} \begin{tabular}{l l l} \hline \hline Variable & Category & Value \\ \hline Age & 20-40 & 27 \\ DGI & Comfortable & 22 \\ Chronotype & Night & 25 \\ Activity & Sleeping & 2 \\ Baseline FIS output & D4 & 12.5 \\ Passenger Preference & D2 & 35 \\ \hline \hline \end{tabular} \end{table} Table 9: Parameters for the experiments set 3 Figure 10: The change in the algorithm output with varying learning rates in experiments set 3 our knowledge, it is the first to explore the application of FIS and RL for light management. It uses the domain knowledge to develop an intelligent fuzzy controller, and at the same time, provides a mechanism for the user to interact with the algorithm, and thus create a learning behaviour. We were not able to find similar work with respect to both FIS and RL in similar applications. It is worth noting that the results presented in this paper can be extended in several directions. For example, an adaptive learning rate can be utilized to accelerate the learning process of the algorithm when the baseline FIS and user preferences are significantly different. Mimicking human decisions is very complex and there are numerous factors that stimulate visual comfort. 
While the baseline FIS developed here was based on thorough research in the literature and discussions with an industry partner active in developing aircraft lighting systems, a deeper study of human lighting preferences (e.g., light color, human eye conditions) can help design a more effective baseline FIS, subsequently reducing the gap between user preferences and the algorithm output, and reducing the need for prolonged learning. Moreover, our user study included 18 participants. While this relatively limited number of participants does not compromise the rigor of our developments, a larger user study can help improve our knowledge about humans' lighting preferences and potentially inform new design directions that can help accelerate the adoption of such algorithms in real practice. In the future, we will work on integrating the developed algorithm with smart window systems and will conduct a larger user study. ## Acknowledgement This research was partially supported by the Natural Sciences and Engineering Research Council of Canada Discovery Grants program.
2309.04634
Enhanced extra mixing in low-mass stars approaching the RGB tip and the problem of Li-rich red-clump stars
A few percent of red giants are enriched in Lithium with $A(\mathrm{Li}) > 1.5$. The evolutionary phase of the Li-rich red giants has remained uncertain because they could be placed both on the red-giant branch (RGB) near the bump luminosity and in the red clump (RC) region. However, thanks to asteroseismology, it has been found that most of them are actually RC stars. Starting at the bump luminosity, RGB progenitors of the RC stars experience extra mixing in the radiative zone separating the H-burning shell from the convective envelope followed by a series of convective He-shell flashes at the RGB tip, known as the He-core flash. Therefore, the He-core flash was proposed to cause fast extra mixing in the stars at the RGB tip that is needed for the Cameron-Fowler mechanism to produce Li. Alternatively, we propose that the RGB stars are getting enriched in Li by the same extra mixing that starts at the bump luminosity and initially leads to a decrease of the surface Li abundance but that is getting enhanced and begins to produce Li when the stars are approaching the RGB tip. We discuss five mechanisms of the RGB extra mixing, namely, the joint operation of rotation-driven meridional circulation and turbulent diffusion, the Azimuthal Magneto-Rotational Instability (AMRI), thermohaline convection, buoyancy of magnetic flux tubes, and internal gravity waves, and, based on results of (magneto-) hydrodynamics simulations, come to the conclusion that it is the mechanism of the AMRI that is most likely to support our hypothesis.
Pavel A. Denissenkov, Simon Blouin, Falk Herwig, Jacob Stott, Paul R. Woodward
2023-09-08T23:10:17Z
http://arxiv.org/abs/2309.04634v1
Enhanced extra mixing in low-mass stars approaching the RGB tip and the problem of Li-rich red-clump stars ###### Abstract A few percent of red giants are enriched in Lithium with \(A(\mathrm{Li})>1.5\). The evolutionary phase of the Li-rich red giants has remained uncertain because they could be placed both on the red-giant branch (RGB) near the bump luminosity and in the red clump (RC) region. However, thanks to asteroseismology, it has been found that most of them are actually RC stars. Starting at the bump luminosity, RGB progenitors of the RC stars experience extra mixing in the radiative zone separating the H-burning shell from the convective envelope followed by a series of convective He-shell flashes at the RGB tip, known as the He-core flash. Therefore, the He-core flash was proposed to cause fast extra mixing in the stars at the RGB tip that is needed for the Cameron-Fowler mechanism to produce Li. Alternatively, we propose that the RGB stars are getting enriched in Li by the same extra mixing that starts at the bump luminosity and initially leads to a decrease of the surface Li abundance but that is getting enhanced and begins to produce Li when the stars are approaching the RGB tip. We discuss five mechanisms of the RGB extra mixing, namely, the joint operation of rotation-driven meridional circulation and turbulent diffusion, the Azimuthal Magneto-Rotational Instability (AMRI), thermohaline convection, buoyancy of magnetic flux tubes, and internal gravity waves, and, based on results of (magneto-) hydrodynamics simulations, come to the conclusion that it is the mechanism of the AMRI that is most likely to support our hypothesis. keywords: stars: interiors, stars: evolution, stars: low-mass, stars: chemically peculiar, hydrodynamics, turbulence, waves, diffusion ## 1 Introduction The standard stellar evolution theory predicts that the surface Li abundance has to decrease in a star with a mass close to the solar one when it leaves the main sequence (MS) phase of H-core burning and begins to ascend the red giant branch (RGB). This happens during the first dredge-up (FDU) episode, when the base of the deepening convective envelope of the star on the lower RGB reaches the layers in which temperature was high enough, above 2.5 MK, to destroy Li on the MS, and now Li remaining in the surface layers gets diluted by this convective mixing (Iben, 1967). The FDU also reduces the surface \({}^{12}\)C to \({}^{13}\)C isotopic and C to N elemental abundance ratios. At the end of the FDU, when the base of the convective envelope stops deepening and begins retreating in front of the H-burning shell advancing in mass, it leaves behind a small discontinuity in the H and other abundance profiles at the mass coordinate of its deepest penetration. Later, when the H-burning shell crosses the discontinuity it has to adapt to the slightly different chemical composition forcing the star to make a small zigzag on the Hertzsprung-Russell diagram towards a lower luminosity and then to resume its RGB ascent (Figure 1a). This temporarily slows down its evolution and can be observed as a pile-up (a bump) of stars at the corresponding bump luminosity_ in luminosity functions of populous globular clusters. 
Observations show that above the bump luminosity, on the upper RGB, low-mass stars experience non-convective extra mixing in their radiative zones separating the H-burning shell from the base of the convective envelope (e.g., Gilroy & Brown, 1991; Gratton et al., 2000; Smith & Martell, 2003; Shetrone et al., 2019) that further reduces their surface Li abundances along with the \({}^{12}\)C/\({}^{13}\)C and C/N ratios. Later, at the tip of the RGB, the temperature in the He core becomes sufficiently high at some distance from the center to ignite the triple-\(\alpha\) reaction under electron-degenerate conditions. This _He-core flash_ consists of a series of convective He-shell burning events that gradually approach the center, lift the degeneracy and end up when He starts burning in a non-degenerate convective core. By this time, the star arrives at the red-clump (RC) region of the horizontal branch (HB). Finally, when He gets exhausted in the core, the star leaves the HB and begins to climb the asymptotic giant branch (AGB) where it will experience intermittent H- and He-shell burning, the latter occuring in the form of thermal pulses. The outlined standard scenario of the evolution of a low-mass star, which is illustrated by Figure 1, with observationally constrained mean rate and depth of extra mixing on the upper RGB (Denissenkov & VandenBerg, 2003) leads to a conclusion that the surface Li abundances in low-mass RC stars have to be significantly reduced compared to their initial MS values of \(A(\mathrm{Li})=\log_{10}[N(\mathrm{Li})/N(\mathrm{H})]+12=3.3\) for the solar composition. This conclusion does not even take into account the fact that the surface Li abundance in solar-type stars is observed to decline already on the MS, e.g. by nearly 2 orders of magnitude in the Sun (Carlos et al., 2019), as a result of another extra mixing of as yet unknown nature operating below their convective envelopes (Richard et al., 2005; Dumont et al., 2021). Therefore, it was a surprise when Kumar et al. (2020) discovered that all RC stars studied by them had much higher Li abundances, by the factor of 40 on average, than those predicted by the standard stellar evolution theory, and that the previously known population of a few percent of red giants with \(A(\mathrm{Li})>1.5\), the so-called _Li-rich giants_, all belonged to an extended tail of the Li abundance distribution in RC stars. Until about a decade ago, at least some of the Li-rich giants were thought to be RGB stars at the bump luminosity, because their luminosities and effective temperatures did not allow to distinguish them from the RC stars. Their high Li abundances used to be interpreted either as a consequence of their swallowing a giant planet or a brown dwarf with a preserved initial Li abundance (Siess & Livio, 1999) or as a result of enhanced RGB extra mixing with a rate significantly exceeding the one required for the explanation of the observational evidence of the operation of RGB extra mixing in the majority of low-mass stars (Charbonnel & Balachandran, 2000; Denissenkov & Weiss, 2000; Denissenkov & Herwig, 2004; Denissenkov, 2012). 
In the second case, Li is synthesized via the mechanism proposed by Cameron & Fowler (1971), in which \({}^{7}\)Be produced in the vicinity of the H-burning shell in the reaction \({}^{3}\)He\((\alpha,\gamma)^{7}\)Be has to be quickly transported by convection or extra mixing to colder layers, where it will have enough time to capture electrons to make \({}^{7}\)Li instead of being destroyed by proton captures. The identification of most of the Li-rich giants with RC stars was made possible thanks to asteroseismology, which demonstrated a clear separation of RGB and RC stars on the diagram displaying their gravity-mode period spacing \(\Delta\Pi_{1}\) versus large frequency separation \(\Delta\nu\), constructed using photometric data obtained with the space telescope _Kepler_ (Borucki et al., 2010; Bedding et al., 2011; Mosser et al., 2014; Kumar et al., 2020; Deepak & Lambert, 2021; Mallick et al., 2023). Li abundances in these stars were determined using ground-based stellar spectroscopy data from the LAMOST and GALAH surveys (Cui et al., 2012; De Silva et al., 2015). For reviews of Li-rich giants, readers are referred to Martell et al. (2021) and Yan & Shi (2022). Recently, Chaname et al. (2022) have questioned the conclusion that most of the Li-rich giants are low-mass RC stars. Their doubt is based on the fact that the RC region can be occupied not only by low-mass stars with \(M\la 2\,M_{\odot}\), which are indeed expected to undergo the He-core flash preceded by extra mixing on the upper RGB, but also by more massive stars with \(M\ga 2\,M_{\odot}\) that may only experience the FDU, resulting in a moderate decrease of their surface Li abundances. Figure 1: The track (panel a) and Kippenhahn diagram (panel b) for the evolution of a star with the initial mass \(1.2\,M_{\odot}\) and metallicity \(\mathrm{[Fe/H]=-0.3}\) with model numbers (circles and vertical line segments of same colors) indicating its main phases, such as the main sequence (680), the end of the first dredge-up (1390), the bump luminosity on the red giant branch (2040), the first He-shell flash (12660), the He-core burning in the red-clump region of the horizontal branch (14540), and the beginning of the He thermal pulses on the asymptotic giant branch (16390). The gray-shaded regions in panel b are convective zones (envelope, cores and shells). However, this controversy was soon resolved by Mallick et al. (2023), who show that whereas in
those results of which are shown in Figure 1, have been done for a star with the initial mass \(1.2\,M_{\odot}\) and metallicity \(\rm[Fe/H]=-0.3\)1, close to those of the majority of Li-rich RC stars studied by Deepak and Lambert (2021), using the MESA revision 7624 code (Paxton et al., 2011, 2013) with the same input physics as described in Section 2 of Densenselkov et al. (2017), except that here we have included extra mixing on the upper RGB and used the parameter \(\eta_{\rm R}=0.36\) in the Reimers formula for the RGB mass-loss rate (Reimers, 1975). Footnote 1: We use the standard stellar spectroscopy notation \(\rm[A/B]=\log_{10}[N_{\star}(A)/N_{\star}(B)]-\log_{10}[N_{\odot}(A)/N_{ \odot}(B)]\), where \(N_{\star}\) and \(N_{\odot}\) are number densities of elements A and B in a star and the Sun. Our paper is organized as follows. In Section 2, we review various physical mechanisms that were proposed to explain RGB extra mixing. Section 3 summarizes the previously proposed hypotheses explaining the Li enrichment of RC stars. In Section 4, we identify the RGB extra mixing mechanisms with diffusion coefficients that are expected to be rapidly increasing with the luminosity and select the Azimuthal Magneto-Rotational Instability (AMRI; Rudiger et al., 2014, 2015) as the most promising one to support our hypothesis. Then, we present and discuss results of our computations of the evolution of a low-mass stellar model representing the Li-rich RC stars with RGB extra mixing getting enhanced when it approaches the RGB tip. Section 5 concludes the paper with a brief discussion of our hypothesis and its supporting arguments. ## 2 Mechanisms for the RGB extra mixing The radiative zones of low-mass stars, where the operation of extra mixing on the upper RGB is evidenced by the evolutionary declines of the surface Li abundance, the isotopic \({}^{12}\)C/\({}^{13}\)C and the elemental C/N abundance ratios in field, open and globular cluster stars, are convectively stable. Therefore, rising and sinking fluid elements in these zones have to exchange heat by radiation diffusion with their surrounding stably stratified (with a positive entropy gradient) stellar layers to be able to reduce the buoyancy force trying to keep them in place. This means that the corresponding rate of mixing, expressed as a diffusion coefficient \(D_{\rm mix}=fK\), should be proportional to the radiative diffusivity \[K=\frac{4acT^{3}}{3\varkappa C_{P}\rho^{2}}, \tag{1}\] where \(a\) is the radiation density constant, \(c\) the speed of light in vacuum, \(T\) the temperature, \(\varkappa\) the Rosseland mean opacity, \(C_{P}\) the specific heat at constant pressure, and \(\rho\) the density. The factor \(f\), which should be smaller than one, depends on physical parameters associated with a specific mixing mechanism. ### Rotationally-induced mixing Sweigart and Mengel (1979) and Smith and Tout (1992) considered rotationally-induced meridional circulation as a mechanism for the RGB extra mixing. In fact, Sweigart and Mengel (1979) were the first to conclude that RGB extra mixing could begin to reach the H burning shell only after the latter had crossed and erased the chemical composition discontinuity left behind by the base of the convective envelope at the end of the FDU. 
After that, above the bump luminosity, the mean molecular weight gradient \(\nabla_{\mu}=(\partial\ln\mu/\partial\ln P)\) would remain zero in the bulk of the radiative zone down to the vicinity of the H-burning shell where its increasing positive value serves as a barrier for all types of extra mixing, thereby establishing their maximum depth. Zahn (1992) conjectured that in rotating stellar radiative zones the meridional circulation had to compete in transporting angular momentum with rotationally-induced turbulent diffusion. He assumed that the latter was much stronger in the horizontal than in vertical direction and that, as a result, the radiative zones were in a state of shellular rotation with the angular velocity \(\Omega\) depending only on the radius \(r\). In the radiative zone of an upper RGB star this competition is described by the following differential equation of the angular momentum transport in Eulerian coordinates: \[\frac{\partial}{\partial t}\left(\rho r^{2}\Omega\right) =\frac{1}{5r^{2}}\frac{\partial}{\partial r}\left[\rho r^{4} \Omega(U-5\dot{r})\right]\] \[+\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(\rho r^{4}\nu_{ \nu}\frac{\partial\Omega}{\partial r}\right), \tag{2}\] where \(\dot{r}=(\partial r/\partial t)_{M_{r}}\) is a rate with which a mass shell \(M_{r}\) is approaching the H-burning shell, \(U\) is the radial component of the meridional circulation velocity, and \[\nu_{\rm v}=D_{\rm v}=\eta\frac{(r\frac{\partial\Omega}{\partial r})^{2}}{N^{ 2}}K \tag{3}\] is the vertical component of both turbulent viscosity (\(\nu_{\rm v}\)) and turbulent diffusion coefficient (\(D_{\rm v}\)) produced by the differential shellular rotation in a radiative zone with \(\nabla_{\mu}=0\). In Equation (3), \(\eta\sim 0.01-0.1\) is a parameter, whose values in the indicated range were confirmed by hydrodynamics simulations (Prat and Lignieres, 2013, 2014; Prat et al., 2016; Garaud et al., 2017), and \[N^{2}=N_{T}^{2}+N_{\mu}^{2}=\frac{g}{H_{P}}\delta(\nabla_{\rm ad}-\nabla_{\rm rad })+\frac{g}{H_{P}}\varphi\nabla_{\mu} \tag{4}\] is the square of the Brunt-Vaisala (buoyancy) frequency represented as a sum of its thermal (\(N_{T}^{2}\)) and chemical composition (\(N_{\mu}^{2}\)) parts. In the expressions for the last two terms, \(g\) is the local gravity, \(H_{P}\) the pressure-scale height, \(\nabla_{\rm ad}\) and \(\nabla_{\rm rad}\) the adiabatic red radiative temperature gradients, logarithmic and with respect to pressure, while \(\delta=-(\partial\ln\rho/\partial\ln T)_{P,\mu}\) and \(\varphi=(\partial\ln\rho/\partial\ln\mu)_{P,T}\) are determined by the equation of state. At the same time, Chaboyer & Zahn (1992) showed that the strong horizontal turbulence had to reduce the efficiency of radial mixing by meridional circulation making it possible to describe it as a diffusion, rather than advection, process with the following coefficient: \[D_{\rm eff}=\frac{|rU|^{2}}{30D_{\rm h}}, \tag{5}\] where the coefficient of horizontal turbulent diffusion \(D_{\rm h}\) was calculated using the radial and horizontal components of the meridional circulation velocity, assuming that \(D_{\rm h}\gg D_{\rm v}\) and \(D_{\rm h}\gg|rU|\). 
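For readers who wish to evaluate these expressions numerically, the following is a minimal Python sketch of Equations (1), (3) and (5) in cgs units. The physical constants are standard; all other inputs are placeholders to be taken from a stellar model, and no values from the models discussed here are assumed.

```python
# Sketch of Equations (1), (3) and (5) in cgs units; inputs are placeholders.
A_RAD = 7.5657e-15   # radiation density constant [erg cm^-3 K^-4]
C_LIGHT = 2.9979e10  # speed of light [cm s^-1]

def radiative_diffusivity(T, kappa, c_P, rho):
    """Equation (1): K = 4acT^3 / (3 kappa C_P rho^2)."""
    return 4.0 * A_RAD * C_LIGHT * T**3 / (3.0 * kappa * c_P * rho**2)

def d_vertical(eta, r, dOmega_dr, N2, K):
    """Equation (3): vertical turbulent diffusivity from differential shellular rotation."""
    return eta * (r * dOmega_dr)**2 / N2 * K

def d_effective(r, U, D_h):
    """Equation (5): effective diffusivity of meridional circulation limited by horizontal turbulence."""
    return abs(r * U)**2 / (30.0 * D_h)
```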
Denissenkov & Tout (2000, hereafter DT00) solved Equation (2) with the prescriptions for the meridional circulation velocity and turbulent diffusion from Zahn (1992) and their updates by Maeder & Meynet (1996) and Maeder & Zahn (1998) for the radiative zone of an upper RGB star and found that for reasonable surface rotational velocities of RGB stars the combined diffusion coefficient \(D_{\rm mix}=D_{\rm v}+D_{\rm eff}\), with the first term dominating, provided a sufficiently fast rate for the RGB extra mixing. The magnitude of the circulation velocity calculated by DT00, \(2\times 10^{-3}\) cm s\({}^{-1}\), was very close to its value estimated by Smith & Tout (1992), \(3.6\times 10^{-3}\) cm s\({}^{-1}\), necessary to reproduce the evolutionary decline of the carbon abundance on the upper RGB in the globular cluster M92 measured by Carbon et al. (1982). Those results were obtained by DT00 under the assumption that the convective envelopes of RGB stars rotated differentially keeping the specific angular momentum conserved through them. This assumption is supported by the results of 3D hydrodynamics simulations of turbulent convection in the convective envelope of a low-mass RGB star reported by Brun & Palacios (2009) and by the models of the evolution of low-mass stars with rotation that explain the relatively fast rotational velocities of HB stars in the globular cluster M13 (Sills & Pinsonneault, 2000). The assumption of solid-body rotation in their convective envelopes would require unrealistically high surface rotational velocities for RGB stars. This conclusion agrees with the simulations of rotationally-induced mixing by Charbonnel & Lagarde (2010) who used equations similar to those of Zahn (1992) but assumed solid-body rotation in convective envelopes and, as a result, found that their rotationally-induced mixing on the upper RGB was too slow to explain the observational data. Palacios et al. (2006) questioned the results of DT00 on the efficiency of rotationally-induced mixing in low-mass stars on the upper RGB. They solved the equation of the angular momentum transport by the meridional circulation and turbulent diffusion in Lagrangian coordinates for the entire evolution of a low-mass star from the MS to the upper RGB, taking into account magnetic breaking of its envelope rotation on the MS and using different prescriptions for the coefficient of turbulent diffusion. The angular velocity profiles in the radiative zones of the RGB bump luminosity models M2 and M6 displayed by Palacios et al. (2006) in their Fig. 2 agree surprisingly well with the corresponding profile from Fig. 6 of DT00. Therefore, it is not clear why the coefficient of vertical turbulent diffusion presented in Fig. 9 of Palacios et al. (2006) for the model M4, which is the closest one to the model considered by DT00, is smaller by at least a factor of 10 than its counterpart presented in Fig. 5b of DT00. In any case, new data on internal rotation of low-mass subgiant and lower RGB stars obtained by asteroseismology have revealed much flatter angular velocity profiles than those predicted by all 1D models that include only rotationally-induced transport of angular momentum. ### Angular momentum transport and mixing driven by the azimuthal magneto-rotational instability Beck et al. (2012) detected rotational splittings of mixed modes of solar-like oscillations in three lower RGB stars observed by the _Kepler_ space telescope. 
From an analysis of dipole modes they figured out that the cores of those stars rotate at least ten times faster than their envelopes (for rotationally-induced RGB extra mixing this difference has to be nearly two orders of magnitude larger, e.g. see Fig. 6 of DT00), assuming that the former and latter each rotate as a solid body. That conclusion was soon confirmed by Mosser et al. (2012) who measured rotational splittings for a much larger sample of low-mass red giants. Deheuvels et al. (2014) added six low-mass subgiant stars to the sample of lower RGB stars studied by Mosser et al. (2012) and showed that their cores rotated by an order of magnitude faster than the cores of the RGB stars. They interpreted that as a signature of a more efficient than rotationally-induced transport of angular momentum occurring in the latter. Spada et al. (2016) modelled that transport as a diffusion process and demonstrated that the observed evolutionary changes of the radiative core and convective envelope angular velocities \(\Omega_{\rm core}\) and \(\Omega_{\rm env}\) of the subgiant and RGB stars could be reproduced simultaneously with the angular momentum transport (AMT) diffusion coefficient \[D_{\rm AMT}=D_{0}\left(\frac{\Omega_{\rm core}}{\Omega_{\rm env}}\right)^{ \alpha}, \tag{6}\] where \(D_{0}\approx 1\) cm\({}^{2}\) s\({}^{-1}\) and \(\alpha\approx 3\). They argued that such a power-law scaling with \(\alpha\approx 2\) - 3 was consistent with the dependence of a coefficient of turbulent viscosity on differential rotation obtained in numerical simulations of the Azimuthal Magneto-Rotational Instability (AMRI) by Rudiger et al. (2015). According to results reported by Spada et al. (2016), this angular momentum transport begins to manifest itself approximately at the age when the H-burning shell is just established, i.e. long before the RGB star has reached the bump luminosity. An extrapolation of the \(\alpha=3\) curve from their Fig. 4 to a value of \(\log g=2.3\) at the bump luminosity of our \(1.2M_{\odot}\) model star leads to an estimate of \(D_{\rm AMT}\sim 10^{6}\) cm\({}^{2}\) s\({}^{-1}\) which is comparable to the value of \(D_{\rm mix}\) near the H-burning shell in Fig. 5d of DT00. According to Rudiger et al. (2014), \(D_{\rm AMT}/D_{\rm mix}\propto\sqrt{\rm Pm}\), where Pm is the magnetic Prandtl number. Its value is between 0.01 and 10 in the radiative zone of low-mass RGB stars (Rudiger et al., 2015), therefore \(D_{\rm mix}\sim D_{\rm AMT}\) for the AMRI turbulence, unless a stable thermal stratification prevents mixing. Moyano et al. (2023) have recently found additional observational support for the AMRI as the possible mechanism of angular momentum transport in both low- and intermediate-mass stars on different evolutionary phases, including the HB. With their updated values of \(D_{0}\approx 50\) cm\({}^{2}\) s\({}^{-1}\) and \(\alpha\approx 2\) for the low-mass stars the value of \(D_{\rm AMT}\approx 5\times 10^{5}\) cm\({}^{2}\) s\({}^{-1}\) at the bump luminosity still remains sufficiently high to be consistent with the AMRI as the RGB extra mixing mechanism. ### Thermohaline mixing While doing 3D hydrodynamics simulations of convection triggered by the He-core flash in a \(1\,M_{\odot}\) star at the RGB tip Dearborn et al. (2006) noticed some fluid motion outside the H-burning shell. In their follow-up paper Eggleton et al. 
(2006) found that a local inversion of the mean molecular weight profile \(\mu(r)\) at the outer part of the H-burning shell produced by the reaction \({}^{3}\)He(\({}^{3}\)He,2p)\({}^{4}\)He was driving that fluid motion, but they mistakenly attributed the cause of it to the Rayleigh-Taylor instability. Charbonnel & Zahn (2007) correctly interpreted that motion as thermohaline, or salt-fingering, convection driven by a double-diffusive instability. It develops when diffusion of a destabilizing ingredient (salt in the ocean, nuclei contributing to the difference in \(\mu\) in the star) is less efficient than diffusion of heat that reduces the stabilizing effect of a difference in temperature between rising (and sinking) salt fingers and their surroundings. However, for a model of thermohaline convection to be able to reproduce the observed evolutionary declines of the \({}^{12}\)C/\({}^{13}\)C and C/N ratios in upper RGB stars its salt fingers should have an aspect ratio of their radial length to diameter \(a\ga 7\). For the ideal gas equation of state, a simple linear analysis gives the following expression for the thermohaline diffusion coefficient: \[D_{\rm th}=C_{\rm th}\frac{\nabla_{\mu}}{\nabla_{\rm rad}-\nabla_{\rm ad}}K, \tag{7}\] where \(C_{\rm th}=2\pi^{2}a^{2}\) (Denissenkov, 2010) or \(C_{\rm th}=(8/3)\pi^{2}\alpha^{2}\) (Ulrich, 1972; Charbonnel & Zahn, 2007), with \(\alpha\) also representing the salt-finger aspect ratio. The MESA stellar evolution code that we have used in this work uses the parameterization \(C_{\rm th}=(3/2)\alpha_{\rm th}\), referring to it as the "Kippenhahn" option motivated by the work of Kippenhahn et al. (1980). The same observationally constrained value of \(D_{\rm th}\) is obtained with \(C_{\rm th}\approx 1000\) (for \(a\approx 7\) or \(\alpha\approx 6\)), and \(\alpha_{\rm th}\approx 667\).

When using the MESA "Kippenhahn" prescription for thermohaline mixing with \(\alpha_{\rm th}\approx 667\) in our \(1.2\,M_{\odot}\) model star we have obtained a too steep decline of the surface [C/N] ratio on the upper RGB, with its total decrease produced by the FDU and RGB extra mixing between the bump luminosity and the RGB tip \(\Delta\)[C/N] \(\approx-0.8\) (dot-dashed red line in Figure 2). This contradicts the moderate change of \(-0.3\la\Delta\)[C/N]\(\la-0.2\) in these mixing events observed in low-mass RGB stars with metallicities in the range of \(-0.4\leq\) [Fe/H] \(\leq-0.2\) (Fig. 3 and Table 2 in Shetrone et al., 2019). We have found that this discrepancy is caused by the MESA revision 7624 code overestimating the depth of thermohaline mixing because it uses only the H abundance to calculate \(\nabla_{\mu}\) in Equation (7). As illustrated by the left and right vertical dotted lines in Figure 3, the radius in the vicinity of the H-burning shell, below the local inversion of \(\mu(r)\), at which the now increasing \(\mu\) approaches its value in the bulk of the radiative zone, outside the H-burning shell, changes from \(r_{\rm mix}\approx 0.045\,R_{\odot}\) to \(r_{\rm mix}\approx 0.055\,R_{\odot}\) when \(\mu\) is calculated using all available isotopes. Therefore, in this work we fix the depth of RGB extra mixing in all our models at \(r_{\rm mix}\approx 0.055\,R_{\odot}\), assuming that gas at \(r<r_{\rm mix}\) with a higher \(\mu\) than in the bulk of the radiative envelope cannot rise up.
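For reference, the correspondence between the thermohaline efficiency parameterizations quoted earlier in this section can be checked in a few lines of Python; the sketch below reproduces the values \(C_{\rm th}\approx 1000\) and \(\alpha_{\rm th}\approx 667\) used here, with the aspect-ratio inputs taken directly from the text.

```python
# Conversions between the thermohaline efficiency parameterizations quoted in the text.
import math

def C_th_denissenkov(a):
    """C_th = 2 pi^2 a^2 (Denissenkov 2010); a = salt-finger aspect ratio."""
    return 2.0 * math.pi**2 * a**2

def C_th_ulrich(alpha):
    """C_th = (8/3) pi^2 alpha^2 (Ulrich 1972; Charbonnel & Zahn 2007)."""
    return (8.0 / 3.0) * math.pi**2 * alpha**2

def alpha_th_mesa(C_th):
    """MESA 'Kippenhahn' option: C_th = (3/2) alpha_th, so alpha_th = (2/3) C_th."""
    return 2.0 * C_th / 3.0

print(round(C_th_denissenkov(7.0)))   # ~ 967  -> C_th ~ 1000 for a ~ 7
print(round(C_th_ulrich(6.0)))        # ~ 947  -> C_th ~ 1000 for alpha ~ 6
print(round(alpha_th_mesa(1000.0)))   # ~ 667  -> the alpha_th used in this section
print(round(C_th_denissenkov(0.5)))   # ~ 5    -> aspect ratio ~0.5 found in 3D simulations
```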
A relatively small variation of [C/N] on the upper RGB obtained with the "Kippenhahn" prescription using this reduced mixing depth is shown as a dot-dashed gray line in Figure 2. In lower-metallicity or in C-enhanced low-mass stars the depth of the RGB thermohaline mixing has a different value, a smaller \(r_{\rm mix}\) for a lower [Fe/H] and a larger \(r_{\rm mix}\) for a C-enhanced mixture, but in all cases it remains in a range between \(r_{\rm mix}\approx 0.045\,R_{\odot}\) and \(r_{\rm mix}\approx 0.06\,R_{\odot}\)(Denissenkov & Pinsonneault, 2008; Denissenkov et al., 2009; Denissenkov, 2010). We have checked that the same algorithm for the calculation of \(\nabla_{\mu}\) is implemented in MESA at least up to the revision 15140, while the most recent MESA revisions employ a different algorithm. Note that the problem with MESA overestimating the depth of RGB extra mixing could also be revealed by comparing the observed and predicted carbon isotopic ratios (Figure 4). Besides the technical difficulty in estimating a correct depth of the RGB thermohaline mixing, for which some observational constraints still need to be used, a bigger problem is that it needs a too large salt-finger aspect ratio of \(a\ga 7\) to reproduce the observational data. In the very low viscosity environment of stellar radiative zones, the rising and sinking salt fingers are subject to the shear instability at their separating boundaries that should destroy their radially elongated structure. Such "self destruction" of salt fingers was first predicted by Kippenhahn et al. (1980) based on a simple analytical model and then it was demonstrated in 2D and 3D numerical simulations by Denissenkov (2010) and Denissenkov & Merryfield (2011), results of which were independently confirmed by Traxler et al. (2011). In those simulations it was shown that the effective salt-finger aspect ratio is \(\sim 0.5\), which means that the efficiency of the RGB thermohaline mixing is \(\sim 200\) times lower than what is required to explain the observations. Therefore, at present we do not consider thermohaline convection as a suitable model of RGB extra mixing. ### Magnetic-buoyancy mixing Harrington & Garaud (2019) invoked a radial magnetic field of \(\sim 100\) G to show that it could stabilize the growth of salt fingers driven by the primary double-diffusive instability against their destruction by the secondary shear instability in the radiative zone of an upper RGB star. That would increase the efficiency of RGB thermohaline mixing by two orders of magnitude in agreement with the observations. However, Fraser & Garaud (2023) have found that a decrease of the magnetic Prandtl number from the value of Pm = 1 that was used by Harrington & Garaud (2019) to Pm = 0.01 or lower values that are typical for the bulk of the radiative zone reduces the salt-finger growth to amplitudes similar to those obtained in the absence of magnetic fields. It is interesting that even much stronger radial magnetic fields, with magnitudes of \(\sim 100\) kG, have recently been measured in the vicinity of the H-burning shell in 11 _Kepler_ low-mass RGB stars using asteroseismology methods (Deheuvels et al., 2023). Therefore, it is possible that magnetism has an important role to play in RGB extra mixing. One of such magnetic mixing mechanisms was proposed by Busso et al. (2007) and developed by Nucci & Busso (2014). 
It assumes that the radiative zones of upper RGB stars are filled with thin donut-shaped magnetized flux tubes that are rising thanks to a difference in the densities inside and outside them caused by a magnetic pressure contribution that makes them buoyant. However, the main drawback of those works was their omission to consider the relatively lengthy process of radiative heat exchange between the rising flux tubes and their surroundings that was needed to maintain the difference in the densities. As a result of that omission, the radial velocity of the flux tubes was overestimated by Busso et al. (2007) by several orders of magnitude (Denissenkov et al., 2009). Indeed, for the initial values of the radius \(a_{0}\) of the smaller circle of the donut-shaped flux tube and its radial velocity \(v_{0}\) presented in Table 2 of Busso et al. (2007) for the two RGB cases, and for the radiative diffusivity \(K\sim 10^{8}\) cm\({}^{2}\) s\({}^{-1}\) at the RGB mixing depth estimated from Figure 5 or taken from Table 1 of Denissenkov (2010), we find that their corresponding ratios (Peclet numbers) of the thermal diffusion time \(\sim a_{0}^{2}/K\) to the advection time \(\sim a_{0}/v_{0}\) per unit length of the larger circle of the flux tube are equal to 2184 and 930. This means that the flux tubes with the parameters adopted by Busso et al. (2007) rise too fast to be able to exchange heat with their surroundings, which contradicts the assumption made by Busso et al. (2007) that there is no difference in the temperature inside and outside the tubes. Another drawback was that neither of those works discussed how the magnetized flux tubes could be formed in the vicinity of the H-burning shell. Denissenkov et al. (2009) suggested that such flux tubes were products of the undular buoyancy instability (Fan, 2001, and references therein), and that the azimuthal magnetic field of \(\sim 100\) kG needed for its development was generated by a strong differential rotation acting on and winding up a relatively weak (\(\sim 10\) G) poloidal field. Denissenkov et al. (2009) also took into account heat exchange by radiative diffusion between rising flux tubes and their surroundings and showed that it slowed down the magnetic-buoyancy mixing by nearly 5 orders of magnitude compared to the estimates of Busso et al. (2007). At the same time, they found that a reduced mean molecular weight inside flux tubes formed in the region of its local inversion produced by the reaction \({}^{3}\)He(\({}^{3}\)He,2p)\({}^{4}\)He compensated a substantial part of the negative effect of slow heat exchange. However, given that the magnetic-buoyancy model needs a rather strong differential rotation to be present in the radiative zone, similar to that discussed in Section 2.1, which is not supported by the recent asteroseismology data, we are forced to reject it as a plausible mechanism for RGB extra mixing.

Figure 2: The evolutionary changes (the surface gravity \(g\) decreases with time) of the surface [C/N] elemental abundance ratio in our \(1.2M_{\odot}\) model star between the MS and the RGB tip produced by the first dredge-up (FDU) only (the solid blue curve followed by the dashed orange line) and by the RGB extra mixing modelled as thermohaline convection using the MESA “Kippenhahn” option with the efficiency parameter \(\alpha_{\rm th}=667\) (the dot-dashed red curve), as the same thermohaline convection but with the mixing depth fixed at \(r_{\rm mix}=0.055R_{\odot}\) (the dot-dashed gray line), and with the diffusion coefficient \(D_{\rm mix}=f(L_{\rm bump})(L/L_{\rm bump})^{4/3}K\) for \(f(L_{\rm bump})=0.0025\) (the dotted blue line), \(f(L_{\rm bump})=0.005\) (the solid black line), and \(f(L_{\rm bump})=0.010\) (the dashed blue line), and \(r_{\rm mix}=0.055R_{\odot}\). For comparison, we also show a model with the fixed \(D_{\rm mix}=0.02K\) and \(r_{\rm mix}=0.055R_{\odot}\) (the dot-dashed orange line). The last five lines overlap each other in this Figure, but they can all be seen detached in Figures 4 and 6.

Figure 3: The stellar evolution code of MESA revision 7624 used in this work overestimates the depth of thermohaline mixing placing it at \(r_{\rm mix}\approx 0.045\,R_{\odot}\) (the left vertical dotted line) because it uses for this the mean molecular weight based only on the H abundance. This results in too low [C/N] elemental abundance ratios predicted for the metallicity [Fe/H]\(=-0.3\) (Figure 2) compared to those observed in upper RGB stars with metallicities in the range \(-0.4\leq\) [Fe/H] \(\leq-0.2\) (Shetrone et al., 2019). Therefore, we fix the RGB extra mixing depth at \(r_{\rm mix}\approx 0.055\,R_{\odot}\) (the right vertical dotted line) for all our models in this work.

Figure 4: Same as in Figure 2, but for the surface carbon isotopic ratio. Green circles with errorbars are APOGEE data for open-cluster RGB stars with initial masses lower than \(2M_{\odot}\), \(\log_{10}g<2\), and metallicities in the range \(-0.36\la\)[Fe/H] \(\la\) 0.28 from McCormick et al. (2023).

### Mixing by internal-gravity waves

Internal gravity waves (IGWs) or g modes of stellar oscillations are stochastically excited by turbulent fluid motion at a convective-zone boundary and some of them can propagate through an adjacent radiative zone (Press, 1981). Following the work of Garcia Lopez & Spruit (1991), Denissenkov & Tout (2003) proposed that IGWs could produce partial mixing of a He- and C-rich radiative zone below the convective envelope in low-mass AGB stars. It could gently inject protons into that zone, where the reactions \({}^{12}\)C(p,\(\gamma\))\({}^{13}\)N and \({}^{13}\)N(e\({}^{+}\nu\))\({}^{13}\)C would then form a sufficiently wide \({}^{13}\)C pocket necessary for the main slow (\(s\)) neutron-capture process to occur there under radiative conditions between He-shell thermal pulses (Straniero et al., 1995).
For a g mode with the angular frequency \(\omega\) and degree \(l\), the horizontal wave number is \(k_{\rm h}=\sqrt{l(l+1)}/r\), while the vertical wave number \(k_{\rm v}\) can be determined from the well-known dispersion relation for IGWs \[\frac{\omega^{2}}{N^{2}}=\cos^{2}\theta=\frac{k_{\rm h}^{2}}{k_{\rm h}^{2}+k_{\rm v}^{2}}, \tag{8}\] where \(N\) is again the Brunt-Vaisala (buoyancy) frequency, and \(\theta\) is the angle between the vertical (radial) direction and a plane of constant phase, the latter being parallel to directions of both the wave's fluid oscillation and group velocity. IGWs can propagate only in a region where \(\omega<N\) and \(\omega<S_{l}\), where \(S_{l}=k_{\rm h}c_{s}=\sqrt{l(l+1)}c_{s}/r\) is the Lamb frequency for a p mode (sound wave) with the angular degree \(l\). For our \(1.2\,M_{\odot}\) model star the radial profiles of \(N\) and \(S_{1}\) are plotted in Figures 5 and 7 for the evolutionary stages immediately above the bump luminosity and at the beginning of the He-core flash, respectively. For progressive and standing IGWs with \(\omega\ll N\) their transverse nature and Equation (8) lead to the following relation between their vertical and horizontal velocity components: \[\frac{u_{\rm v}}{u_{\rm h}}=\frac{k_{\rm h}}{k_{\rm v}}\approx\frac{\omega}{N}\ll 1. \tag{9}\] Hence, fluid in such waves oscillates in nearly horizontal directions, thus producing a horizontal velocity shear. Like in the case of differential shellular rotation that we discussed in Section 2.1, in the presence of a strong radiative heat diffusion this shear may drive small-scale turbulent mixing in the radial direction. Using Equation (3), its diffusion coefficient can be estimated as \[D_{\rm IGW}=\eta\frac{(k_{\rm v}u_{\rm h})^{2}}{N^{2}}K\approx\eta\frac{(\nabla\times{\bf u})^{2}}{N^{2}}K. \tag{10}\]

The second mechanism by which IGWs may produce some mixing was discussed in detail by Schatzman (1996). It considered IGW mixing as a result of a random walk of tracer particles being pushed by an ensemble of g-mode oscillations with different wavelengths, frequencies, and amplitudes. In the ideal case of adiabatic oscillations, the root-mean-square displacements of the tracer particles are all equal to zero, but in the presence of heat exchange between oscillating g modes and their surroundings by radiative diffusion this is not necessarily true. An analytical prescription for the diffusion coefficient associated with this mechanism of IGW mixing was developed and implemented to interpret the Li and Be depletions in the Sun and other low-mass MS stars of different ages by Montalban (1994) and Montalban & Schatzman (2000), and it was also used to model the formation of the \({}^{13}\)C pocket in low-mass AGB stars by Denissenkov & Tout (2003). The weakest part of such analytical prescriptions, including the one based on Equation (10), is that their estimated final diffusion coefficients strongly depend on power (or velocity) spectra of IGWs generated at the convective-zone boundary and on their attenuation in the adjacent radiative zone through which they propagate. Until recently, only simple semi-empirical and analytical approaches have been employed to model IGW spectra and attenuation.
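A minimal numerical reading of Equations (9) and (10) is sketched below. The vorticity, buoyancy frequency, radiative diffusivity, and \(\eta\) are assumed, purely illustrative numbers (they are not taken from the simulations discussed next), so only the functional form of the estimate should be read from this sketch.

```python
# Minimal sketch of Eqs. (9)-(10); all input numbers are assumed for illustration (cgs).
def velocity_anisotropy(omega, N):
    """Eq. (9): u_v/u_h ~ omega/N for low-frequency IGWs."""
    return omega / N

def D_igw(eta, vorticity, N, K):
    """Eq. (10): D_IGW = eta * |curl u|^2 / N^2 * K."""
    return eta * vorticity**2 / N**2 * K

eta = 0.1           # shear-mixing efficiency parameter
vorticity = 1.0e-5  # |curl u| of the wave field [s^-1]
N = 1.0e-3          # buoyancy frequency [s^-1]
K = 1.0e12          # radiative diffusivity [cm^2 s^-1]

print("u_v/u_h ~", velocity_anisotropy(omega=1.0e-5, N=N))  # ~ 0.01
print("D_IGW   ~", D_igw(eta, vorticity, N, K), "cm^2/s")   # ~ 1e7 for these inputs
```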
This work has motivated us to perform the first 3D hydrodynamics simulations of turbulent convection and IGWs in a \(4\pi\) sphere encompassing the lower part of the convective envelope and a substantial part of the radiative zone of our \(1.2\,M_{\odot}\) upper RGB model star. Results of these simulations with nominal heating and realistic opacities for the RGB-tip model are presented elsewhere (Blouin et al., 2023), while here we summarize only the most important of them that are relevant for the present work.

Figure 5: A snapshot of the internal structure of our \(1.2M_{\odot}\) model star, immediately above the bump luminosity, that includes an outer part of its He electron-degenerate core (where the mean molecular weight \(\mu\) has reached its maximum value and stopped changing) and an inner part of its convective envelope (above the vertical dashed red line representing the convective boundary). The radial profiles of the stellar structure parameters relevant to the different mechanisms of RGB extra mixing discussed in the text are plotted, including the buoyancy frequency \(N\) and the radiative diffusivity \(K\). In the radiative zone, the diffusion coefficient \(D_{\rm mix}\) is calculated using the MESA “Kippenhahn” option with the efficiency parameter \(\alpha_{\rm th}=667\) (the dashed blue curve at \(r<r_{\rm cb}\)) but with the mixing depth fixed at \(r_{\rm mix}=0.055R_{\odot}\), as guided by Figure 3.

First, the mean vorticity of fluid motion in radiative layers at about one pressure scale height below the convective envelope scales with the total luminosity as \(|\nabla\times{\bf u}|\propto L^{1/4}\), rather than obeying our anticipated scaling law \(|\nabla\times{\bf u}|\propto L^{2/3}\) (see Section 4.1). Second, when the vorticities measured in the radiative zone of the RGB-tip model from its highest-resolution 3D hydrodynamics simulation are substituted in Equation (10) with a probably upper-limit estimate of \(\eta=0.1\), the resulting values of \(D_{\rm IGW}\) are at least 1 order of magnitude smaller and they decrease with depth much faster than what is needed to reproduce the observational data. Third, a diffusion coefficient in the radiative zone measured directly using the tracer fluid Gaussians has even smaller values. Therefore, at present it seems unlikely that the RGB extra mixing is produced by IGWs either. One caveat of our hydrodynamics simulations is that they only include a small portion of the convective envelope, and simulations with a larger portion of the convective envelope have higher IGW velocities.

## 3 Previous hypotheses proposed to explain the lithium enrichment of RC stars

In Figure 6 we compare the evolution of the surface Li abundance in our \(1.2\,M_{\odot}\) model star between the bump luminosity and the RGB tip or its arrival at the RC region predicted using different prescriptions for extra mixing on the upper RGB and during the He-core flash with the Li abundances in RC stars having masses in the range \(0.77\leq M/M_{\odot}\leq 1.96\) and metallicities close to [Fe/H] \(=0\) taken from Deepak and Lambert (2021). The dashed orange and dot-dashed gray lines in this figure show that neither the evolution with no extra mixing nor the one with thermohaline mixing on the upper RGB can reproduce the observed Li enrichment of RC stars. In this Section we briefly discuss previous hypotheses that were put forward to explain this discrepancy, while our own alternative explanation of it will be presented in the next Section.
### Lithium enrichment by IGW mixing triggered by the He-core flash

Schwab (2020) put forward the hypothesis that the Li enrichment of RC stars could be a result of IGW mixing that occurred in their precursors when they experienced the He-core flash at the RGB tip. Those IGWs could be excited by the first and strongest He-shell flash at its top convective boundary. However, for the IGW mixing to be able to produce, via the Cameron-Fowler mechanism, sufficiently high Li abundances with \(A({\rm Li})\ga 1.5\), as observed in many RC stars (e.g., Deepak and Lambert 2021), during this relatively short event it has to be very fast. We have repeated the calculations of Schwab (2020) for our \(1.2\,M_{\odot}\) RGB-tip model using his assumed values of IGW mixing rate and depth, namely, with the diffusion coefficient \(D_{\rm mix}=10^{14}\) cm\({}^{2}\) s\({}^{-1}\) being kept constant in the radiative zone above the radius \(r_{\rm mix}(X=0.5)\), where the H mass fraction has dropped to \(X=0.5\), while the luminosity of the He-burning shell is exceeding \(10^{4}\,L_{\odot}\). This extra-mixing setup and its corresponding evolution of the surface Li abundance are shown in Figure 7 and Figure 6 (the solid brown line).

In order to understand if this hypothesis is plausible we first note that its required IGW diffusion coefficient is significantly larger than the radiative diffusivity \(K\) in the entire radiative zone at \(r>r_{\rm mix}(X=0.5)\) (compare the dashed blue and solid black curves in Figure 7). This means that the IGW mixing should proceed on a dynamical, rather than on a thermal, timescale and, like convection, it should then dominate in the radial heat transport over the radiative diffusion, which would change the thermal structure of the radiative zone, making it closer to adiabatic. As a result, the evolution of the star during the He-core flash could change in a way that may have never been observed. Such drastic changes were discussed by Denissenkov (2012) for low-mass RGB stars at the bump luminosity in which Li was assumed to be produced by extra mixing with \(D_{\rm mix}\gg K\). Second, for IGW mixing to operate on a dynamical timescale, the waves should become non-linear and break. This could only happen if the Richardson number associated with the IGW horizontal velocity shear \(Ri\approx N^{2}/|\nabla\times{\bf u}|^{2}\) would drop below its critical value \(Ri_{\rm crit}=0.25\) (Press, 1981).

Figure 6: Similar to Figure 2, but for the surface Li abundance and with the surface gravity replaced by the luminosity that increases with time. Additionally, the solid brown line represents the results of our attempt to repeat the IGW mixing calculations of Schwab (2020). We used his diffusion coefficient \(D_{\rm mix}=10^{14}\) cm\({}^{2}\) s\({}^{-1}\) in the radiative zone of our \(1.2M_{\odot}\) model star where the H mass fraction was \(X>0.5\) and kept it constant while the luminosity of the first convective He-shell flash remained between \(10^{4}L_{\odot}\) and \(10^{9}L_{\odot}\) during the He-core flash at the RGB tip. Our predicted Li abundances are compared with those measured by Deepak and Lambert (2021) in RC stars that have masses between \(0.77M_{\odot}\) and \(1.96M_{\odot}\) and metallicities close to [Fe/H] \(=0\).

We can use the results of the 3D hydrodynamics simulation of IGW mixing in our RGB-tip model, some of which are summarized at the end of Section 2.5, to demonstrate that this is highly unlikely. Indeed, from Figure 19 of Blouin et al.
(2023) we read a value of \(D\sim 10^{8}\) cm\({}^{2}\) s\({}^{-1}\) for the IGW diffusion coefficient estimated using the tracer fluid Gaussians at \(r=450\) Mm. It is lower by a factor of 10 than a value of the diffusion coefficient provided by Equation (10) and the measured vorticities at the same radius. This difference vanishes if we reduce the value of the parameter \(\eta\) in Equation (10) by the same factor, which is still a reasonable choice. Now, if the scaling law \(|\nabla\times\mathbf{u}|\propto L^{1/4}\) obtained for the bump luminosity model with IGWs generated by the envelope convection can be applied to IGWs generated by the He-shell convection in the RGB-tip model then from Equation (10) for the ratio \(\sim 10^{6}\) of the maximum luminosity of the first He-shell flash to the luminosity of our star at the RGB tip we obtain an estimate of \(D_{\rm IGW}\propto L^{1/2}\sim 10^{3}D\sim 10^{11}\) cm\({}^{2}\) s\({}^{-1}\ll D_{\rm mix}\sim 10^{14}\) cm\({}^{2}\) s\({}^{-1}\). From the same equation we can also estimate values of the IGW Richardson number \(Ri\sim\eta(K/D_{\rm IGW})\). For \(K\approx 3.2\times 10^{12}\) cm\({}^{2}\) s\({}^{-1}\) at \(r=450\) Mm taken from our RGB-tip model (Figure 7) and \(D_{\rm IGW}\sim 10^{11}\) cm\({}^{2}\) s\({}^{-1}\), we obtain \(Ri\approx 3.2\) and \(Ri\approx 0.32\) for \(\eta=0.1\) and \(\eta=0.01\), respectively. The second Richardson number is actually close to the critical value of \(Ri_{\rm crit}=0.25\), but it was obtained for the He-shell luminosity at its maximum value of \(\sim 10^{9}L_{\odot}\), while the IGW mixing model proposed by Schwab (2020) assumed that the value of \(D_{\rm mix}\sim 10^{14}\) cm\({}^{2}\) s\({}^{-1}\) was maintained all the time while the He-shell luminosity exceeded the much lower value of \(10^{4}L_{\odot}\), so the IGW mixing with this high diffusion coefficient had to start operating with \(Ri\approx 95\gg Ri_{\rm crit}\). We find this highly unlikely, therefore we doubt that such fast IGW mixing is actually activated during the He-core flash.

### Lithium enrichment by rotation-induced mixing in binaries tidally locked on the RGB

Denissenkov et al. (2006) proposed that tidal interaction of an RGB star with its close binary companion leading to a spin-up of that star via synchronization of its rotational and orbital periods could enhance rotationally-induced mixing in its radiative zone and, as a result, enrich its convective envelope in Li via the Cameron-Fowler mechanism. Casey et al. (2019) elaborated on that idea to argue that the Li-rich RC stars had been tidally spun-up on the RGB, then their internal rotation was further accelerated during their contraction between the RGB tip and the RC phase with the total angular momentum conserved, and Li was produced by enhanced rotational mixing. Their conclusion is based on the facts that most of the Li-rich giants are RC stars and that the Li enrichment caused by enhanced extra mixing on the RGB cannot last longer than a few million years (Denissenkov & Herwig, 2004). However, they seem to still admit the possibility that in the upper RGB stars not spun-up by tidal synchronization the mechanism for extra mixing is thermohaline convection. Besides, their hypothesis does not explain why most, if not all, of the RC stars have higher Li abundances than the ones predicted by the stellar evolution theory with RGB extra mixing.

## 4 Our hypothesis for the RGB extra mixing and Li enrichment of RC stars

If Casey et al.
(2019) were right we would have to invoke at least three different angular-momentum and chemical-element transport mechanisms in low-mass stars, namely, something like the AMRI (Section 2.2), thermohaline and rotationally-induced mixing, to explain, respectively, their post-MS moderate core-envelope differential rotation, RGB extra mixing and its enhanced Li-producing mode. Here, applying the principle of Occam's razor, we assume that the same physical mechanism is responsible for both the coupled evolutionary changes of the core and envelope rotation in low-mass subgiant and early-RGB stars, as measured by asteroseismology, and the evolutionary declines of the \({}^{12}\)C/\({}^{13}\)C and C/N ratios in upper RGB stars, as revealed by stellar spectroscopy. Furthermore, we speculate that the same RGB extra mixing mechanism that begins to manifest itself at the bump luminosity and initially leads to a further depletion of the surface Li abundance in upper RGB stars gets enhanced and begins to produce Li in all these stars when they approach the RGB tip.

Initially, we focused only on the upper RGB extra mixing and considered the following two candidates for its mechanism: mixing by small-scale turbulence driven by horizontal velocity shear produced by internal gravity waves (IGWs), as described in Section 2.5, and the joint operation of rotation-driven meridional circulation and turbulent diffusion, as described by DT00. We have excluded thermohaline convection from our consideration from the outset because it results in a strong depletion of the surface Li abundance in upper RGB stars (the dot-dashed gray curve in Figure 6), even when its diffusion coefficient is artificially magnified by a large factor of \(\sim 200\) not supported by hydrodynamics simulations (Denissenkov, 2010; Traxler et al., 2011; Denissenkov & Merryfield, 2011), and because it is not clear how it can be enhanced and produce Li near the RGB tip to support the main idea of this work.

Figure 7: Similar to Figure 5, but for the RGB-tip model with the first convective He-shell zone developing and with IGW mixing in the radiative zone modelled as explained in the caption to Figure 6.

Diffusion coefficients corresponding to these mechanisms can all be represented as \(D_{\rm mix}=fK\), where \(f=f(L_{\rm bce})\) for mixing by IGWs generated by the envelope convection, \(f=f(\Omega_{\rm bce},(\partial\Omega/\partial r)_{\rm rad})\) for rotation-driven mixing, and \(f=f(X_{\rm bce}({}^{3}{\rm He}),(\partial X({}^{3}{\rm He})/\partial r)_{\rm rad})\) for thermohaline convection, where the parameters \((\ldots)_{\rm bce}\) and \((\ldots)_{\rm rad}\) refer to the base of the convective envelope and to the radiative zone, and \(L_{\rm bce}=L\). Here, we use the same MESA model of a low-mass star with the initial mass \(1.2M_{\odot}\) and metallicity [Fe/H]\(=-0.3\) that was introduced in Section 2.3 as a representative of the Li-rich RC stars and whose evolution from the zero-age MS through to the thermally-pulsing AGB phase was shown in Figure 1. Like for the case of thermohaline convection, the mixing depth is fixed at the radius \(r_{\rm mix}=0.055R_{\odot}\) in our parametric models of RGB extra mixing. The initial value of the factor \(f\) at the bump luminosity \(f(L_{\rm bump})\) is treated as a free parameter. Our goal is to find reasonable physically-motivated scalings of \(f\) with the luminosity, e.g.
a power-law form of \(f(L)=f(L_{\rm bump})(L/L_{\rm bump})^{p}\) with \(p>1\), for IGW, rotationally-induced, and AMRI (Section 2.2) mixing mechanisms and demonstrate that with these scalings they can produce the surface Li abundances in our models, by the time they will reach the RGB tip, comparable to those measured in RC stars. We artificially limit the value of \(f\leq 1\), so that extra mixing does not change the thermal stratification of the radiative zone. Obviously, thermohaline mixing does not satisfy the requirement of our hypothesis that its corresponding factor \(f\) should increase with the luminosity on the upper RGB, so that Li is produced by the Cameron-Fowler mechanism when the factor \(f\), and therefore the RGB extra mixing diffusion coefficient \(D_{\rm mix}=fK\), increases in the star approaching the RGB tip. Instead, \(f=f(X_{\rm bce}({}^{3}{\rm He}),(\partial X({}^{3}{\rm He})/\partial r)_{\rm rad})\) decreases when a low-mass star climbs the upper RGB following a decline of both \(X_{\rm bce}({}^{3}{\rm He})\) and \((\partial X({}^{3}{\rm He})/\partial r)_{\rm rad}\) caused by the \({}^{3}{\rm He}\) burning in the vicinity of the H-burning shell and RGB extra mixing. In Figure 6, its predicted Li-production curve is compared with the one for the model with \(D_{\rm mix}=0.02K\) and the same value of \(r_{\rm mix}=0.055R_{\odot}\). Both of the models actually begin to produce some Li towards the RGB tip simply because the radiative diffusivity \(K\) is proportional to \(L\), but the latter model (DK02const_rmx055) makes more Li because it has a constant value of \(f=0.02\), whereas \(f\) decreases with \(L\) in the former model.

### IGW mixing on the upper RGB enhanced by the increasing luminosity

At the beginning of this work, before we had done the corresponding hydrodynamics simulations, we thought that IGWs could provide a mechanism for both the RGB extra mixing and Li enrichment by its enhanced efficiency at the RGB tip. Our line of reasoning was based on the following data. Under certain assumptions, it can be anticipated that the horizontal and vertical components of IGW velocity, and therefore its vorticity in Equation (10), are all proportional to \(L^{2/3}\) (e.g., Section 5.2 in Herwig et al. 2023). Hence, the diffusion coefficient for IGW mixing driven by the IGW horizontal velocity shear was expected to increase with the luminosity as \(D_{\rm IGW}=f(L)K\), where \(f(L)\propto L^{4/3}\). Schwab (2020) needed a diffusion coefficient \(D_{\rm mix}\sim 10^{14}\) cm\({}^{2}\)s\({}^{-1}\) for IGW mixing triggered by the first He-shell flash at the RGB tip and then maintained at this high value for luminosities between \(L_{\rm He}\sim 10^{4}L_{\odot}\) and \(L_{\rm He}\sim 10^{9}L_{\odot}\) to be able to produce Li in amounts comparable to those observed in RC stars. If we take a middle value of the He-shell luminosity \(L_{\rm He,mid}\sim 10^{6}L_{\odot}\) and use the power law \(D_{\rm IGW}\propto L^{4/3}K\) then we obtain estimates of \(D_{\rm IGW}(L_{\rm tip})\sim 10^{14}(L_{\rm tip}/L_{\rm He,mid})^{4/3}\approx 2.5\times 10^{10}\) cm\({}^{2}\)s\({}^{-1}\) and \(D_{\rm IGW}(L_{\rm bump})\sim 10^{14}(L_{\rm bump}/L_{\rm He,mid})^{4/3}\approx 2.5\times 10^{8}\) cm\({}^{2}\)s\({}^{-1}\) at the RGB tip and bump luminosities, \(\log_{10}(L_{\rm tip}/L_{\odot})=3.3\) and \(\log_{10}(L_{\rm bump}/L_{\odot})=1.8\), respectively. These IGW diffusion coefficients would be sufficiently large to explain the RGB extra mixing (Denissenkov & VandenBerg, 2003).
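The order-of-magnitude estimates just quoted are easy to reproduce; the sketch below simply rescales the anchor value \(D\sim 10^{14}\) cm\({}^{2}\) s\({}^{-1}\) at \(L_{\rm He,mid}\sim 10^{6}L_{\odot}\) with the assumed \(L^{4/3}\) power law.

```python
# Reproduce the L^(4/3) scaling estimates quoted above (diffusivities in cm^2/s, L in L_sun).
def scale_D(D_ref, L, L_ref, p=4.0 / 3.0):
    """D = D_ref * (L / L_ref)**p."""
    return D_ref * (L / L_ref)**p

D_ref = 1.0e14    # Schwab (2020) value during the He-shell flash
L_ref = 1.0e6     # adopted mid-range He-shell luminosity
L_tip = 10**3.3   # RGB-tip luminosity of the 1.2 Msun model
L_bump = 10**1.8  # bump luminosity

print(f"D_IGW(tip)  ~ {scale_D(D_ref, L_tip, L_ref):.1e}")   # ~ 2.5e10
print(f"D_IGW(bump) ~ {scale_D(D_ref, L_bump, L_ref):.1e}")  # ~ 2.5e8
```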
Moreover, extra mixing on the upper RGB with the diffusion coefficient \(D_{\rm mix}=D_{\rm IGW}=f(L)K\) could reproduce the evolutionary declines of [C/N] and \({}^{12}\)C/\({}^{13}\)C as well as the Li enrichment of RC stars for the fixed mixing depth \(r_{\rm mix}=0.055R_{\odot}\) and \(f(L)=f(L_{\rm bump})(L/L_{\rm bump})^{4/3}\) with \(f(L_{\rm bump})=0.0025\), \(f(L_{\rm bump})=0.005\), and \(f(L_{\rm bump})=0.010\) (the dotted blue, solid black, and dashed blue curves in Figures 2, 4, and 6). Note that on the Li abundance plot these three models arrive at values of \(A({\rm Li})\ga 0.5\) only when \(\log_{10}(L/L_{\odot})\ga 2.75\) where there are almost no measurements of the Li abundance in field low-mass stars (e.g., Figure 7 in Deepak & Lambert2021b) to verify or reject our hypothesis. However, our recent 3D hydrodynamics simulations of convection and IGWs in the \(1.2M_{\odot}\) upper RGB star (Blouin et al.2023) have shown that even in its highest-luminosity RGB-tip model the efficiency of IGW mixing is much lower than the observationally constrained rates of the RGB extra mixing and that the vorticity of IGW motion in the radiative zone is proportional to \(L^{1/4}\) (again, if this scaling law obtained for the bump luminosity model can be applied to the RGB-tip model), rather than to \(L^{2/3}\) as we anticipated. Therefore, IGW mixing is probably the wrong mechanism for our Li-enrichment hypothesis to work. ### Rotationally-induced mixing on the upper RGB enhanced by the increasing mass loss When a rotating star does not lose any mass, and therefore angular momentum, the competition between angular momentum transport by the rotationally-induced meridional circulation and turbulent diffusion in its radiative zone, as described by Equation (2), may reach a state of equilibrium (e.g. Denissenkov et al.1999; Denissenkov & Tout 2000). Zahn (1992) suggested that such equilibrium could be broken by a magnetized stellar wind and its associated strong angular momentum loss by the star, in which case the angular velocity profile \(\Omega(r)\) for the shellular rotation in the radiative zone would become steeper and that would enhance the efficiency of rotational mixing. For the case of moderate wind, Zahn (1992) obtained the following estimate of an effective diffusivity for rotational mixing in the asymptotic regime: \[D_{\rm eff}=\frac{C_{\rm h}}{50}\frac{r|U|}{\alpha}=\frac{C_{\rm h}}{20}\frac{ \Omega_{\rm s}}{\Omega(r)}\frac{k^{2}}{\alpha}\frac{R^{2}}{t_{3}}\frac{\rho_{ \rm m}}{\rho}, \tag{11}\] where \(C_{\rm h}\la 1\) is a free parameter, \(\alpha=d\ln(r^{2}\Omega)/d\ln r\), \(\Omega_{\rm s}=\Omega(R)\) the surface angular velocity, \(t_{3}=k^{2}MR^{2}\Omega_{\rm s}/(-dJ/dt)\) the timescale of the angular-momentum loss with \(k^{2}=(2/3)\int r^{2}dM_{r}/(MR^{2})\) representing a dimensionless moment of inertia of the star, and \(\rho_{\rm m}=3M_{r}/(4\pi r^{3})\) the mean density of a sphere of the radius \(r\). Charbonnel (1995) used Equation (11) in a model of RGB extra mixing that was able to successfully reproduce the observed \({}^{12}\)C/\({}^{13}\)C ratios in globular-cluster and field population-II upper RGB stars simultaneously with an initial strong depletion of their surface Li abundances. Our hypothesis assumes that Li enrichment of RC stars occurs in their progenitors when they approach the RGB tip where large amounts of Li are produced via the Cameron-Fowler mechanism by enhanced RGB extra mixing. 
For IGW mixing such enhancement could be a direct consequence of the increasing luminosity but, according to the results of 3D hydrodynamics simulations of Blouin et al. (2023), this does not seem to be the case (Section 4.1). Alternatively, if it could still be possible to associate RGB extra mixing with rotationally-induced meridional circulation and turbulent diffusion, as we discussed in Section 2.1, then Equation (11) would provide a mechanism for the RGB extra mixing to get enhanced near the RGB tip. Indeed, this equation can be written in the following form: \[D_{\rm eff}=\frac{C_{\rm h}}{20\alpha}\frac{\Omega_{\rm s}}{\Omega(r)}\frac{R^{2}}{M}\frac{\rho_{\rm m}}{\rho}\left(-\frac{dM}{dt}\right), \tag{12}\] where \(dM=dJ/(R^{2}\Omega_{\rm s})\) is a mass lost with the angular momentum \(dJ\). There are a number of different prescriptions for the mass-loss rate \(dM/dt\) by low-mass stars on the RGB. Six of them are listed in Table 1 of Catelan (2009) and presented in his Figure 4 in the form of an integrated mass loss along the RGB as a function of metallicity [Fe/H] for a fixed age of 12 Gyr (also, see Appendix in Catelan, 2000). The widely-used Reimers formula \(dM/dt=-4\times 10^{-13}\eta_{\rm R}\,(L/gR)\,\,M_{\odot}\,{\rm yr}^{-1}\) (Reimers, 1975), where the surface luminosity \(L\), radius \(R\), and gravity \(g\) are all expressed in solar units, and \(\eta_{\rm R}\) is a free parameter (we use the value of \(\eta_{\rm R}=0.36\)), gives the minimum RGB-integrated mass loss. For this prescription, our stellar evolution calculations predict the scaling relation \(dM/dt\propto L^{5/3}\propto L^{1/3}K\), since \(K\propto L\), and therefore \(D_{\rm eff}\propto L^{1/3}K\). However, if we take the modified Reimers mass-loss rate from Table 1 of Catelan (2009), \(dM/dt\propto(L/gR)^{1.4}\), that gives the second-largest RGB-integrated mass loss, then from the result obtained for the standard Reimers rate we immediately find that it increases with the luminosity as \(dM/dt\propto L^{7/3}\propto L^{4/3}K\), and \(D_{\rm eff}\propto L^{4/3}K\). This is similar to our anticipated scaling relation for the IGW diffusion coefficient that we discussed in the previous section, therefore the results of our stellar evolution computations with the RGB extra mixing diffusion coefficient \(D_{\rm mix}=f(L_{\rm bump})(L/L_{\rm bump})^{4/3}K\) that are presented there, in particular the dotted blue, solid black, and dashed blue Li-production curves in Figure 6, can directly be applied here. Note that in the case of a magnetized stellar wind \(dJ\) can significantly exceed \((R^{2}\Omega_{\rm s})dM\) because its magnetic coupling with the stellar envelope can extend to distances \(r\gg R\), then even the standard Reimers mass-loss rate may be accompanied by a sufficiently strong loss of angular momentum to make Zahn's rotationally-induced mixing in the asymptotic regime of moderate stellar wind fast enough to produce large amounts of Li at the RGB tip.
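As a quick numerical illustration of the Reimers prescription and of Equation (12), the sketch below evaluates \(dM/dt\) for approximate surface parameters near the RGB tip of a \(1.2\,M_{\odot}\) star (our assumed round numbers, not the exact model values) and defines \(D_{\rm eff}\) as a function whose structure-dependent inputs (\(\Omega_{\rm s}/\Omega(r)\), \(\rho_{\rm m}/\rho\)) would have to be taken from a stellar model.

```python
# Reimers (1975) mass-loss rate and Eq. (12); the RGB-tip inputs are rough assumptions.
def reimers_mdot(L, g, R, eta_R=0.36):
    """dM/dt = 4e-13 * eta_R * L/(g R) in Msun/yr; L, g, R in solar units (sign dropped)."""
    return 4.0e-13 * eta_R * L / (g * R)

M, L, R = 1.0, 10**3.3, 120.0   # solar units; assumed mass left and radius near the RGB tip
g = M / R**2                    # surface gravity in solar units
print(f"Reimers dM/dt ~ {reimers_mdot(L, g, R):.1e} Msun/yr")

def D_eff_wind(C_h, alpha, Omega_ratio, R_cm, M_g, rho_ratio, mdot_cgs):
    """Eq. (12): D_eff = C_h/(20 alpha) * (Omega_s/Omega) * (R^2/M) * (rho_m/rho) * (-dM/dt), cgs.
    Omega_ratio and rho_ratio must be read from a stellar model; they are not set here."""
    return C_h / (20.0 * alpha) * Omega_ratio * R_cm**2 / M_g * rho_ratio * mdot_cgs
```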
### AMRI as a possible mechanism for the RGB extra mixing and Li enrichment of RC stars

In Section 2.2, we suggested that the azimuthal magneto-rotational instability (AMRI) could actually be the right physical mechanism not only for the angular-momentum transport in low-mass stars but also for mixing in their radiative zones on the upper RGB. The dependence of its diffusion coefficient (6) on the ratio of the mean angular velocities of the radiative core \(\Omega_{\rm core}\) and convective envelope \(\Omega_{\rm env}\) indicates that the AMRI mixing could be significantly enhanced towards the RGB tip. Indeed, between the bump luminosity and the RGB tip the radius of our \(1.2M_{\odot}\) model star increases by an order of magnitude on a timescale of 45 Myr. If the characteristic timescale of core-envelope rotational coupling (e.g., Denissenkov et al., 2010) is longer than this, then rotation of the expanding convective envelope would slow down by a factor of 100, because of the conservation of its angular momentum, and the AMRI diffusion coefficient might increase by a factor of \(10^{4}\). That would be sufficient for the AMRI mixing to produce large amounts of Li at the RGB tip, provided that it could start to manifest itself as the RGB extra mixing at the bump luminosity. Obviously, the efficiency of the AMRI mixing on the upper RGB should depend not only on the \(\Omega_{\rm core}/\Omega_{\rm env}\) ratio, but also on the initial rotational velocity of the star as well as on its mass and metallicity.

## 5 Discussion

We have discussed the five different mechanisms of extra mixing in the radiative zones of low-mass stars on the upper RGB, above the bump luminosity, namely, the rotationally-induced meridional circulation with turbulent diffusion and their enhanced mixing mode in the asymptotic regime of moderate stellar wind, the azimuthal magneto-rotational instability (AMRI), thermohaline convection, buoyancy of magnetic flux tubes, and mixing by internal gravity waves. We have come to the conclusion that among these five options only AMRI can potentially explain not only the angular momentum transport between the rapidly-rotating radiative cores and the slower-rotating convective envelopes in these stars during their evolution from the MS through to the HB, as revealed by asteroseismology (e.g., Dumont, 2023; Spada et al., 2016; Moyano et al., 2023), but also the RGB extra mixing and potentially its enhancement near the RGB tip that may be responsible for the Li-enrichment of RC stars.

It is interesting that approximately at the same time, in the autumn of 2021, when we had started to develop our idea of Li production by the Cameron-Fowler mechanism with IGW mixing in low-mass stars on the upper RGB, assuming that its efficiency should increase with the luminosity as \(D_{\rm mix}=f(L_{\rm bump})(L/L_{\rm bump})^{4/3}K\), Li et al. (2023) had submitted a paper where they put forward a similar hypothesis. However, instead of mixing by IGWs driven by their produced horizontal velocity shear that has led us to the above scaling relation, they used the model of IGW mixing based on the consideration of a random walk of tracer particles pushed by an ensemble of IGWs and assisted by radiative heat diffusion that had been proposed and developed by Montalban (1994) and Montalban & Schatzman (2000). Like in our case, it is not a surprise that for their IGW mixing they have found a diffusion coefficient increasing with the luminosity along the upper RGB, since the radiative diffusivity and kinetic energy of IGWs are both increasing with \(L\). Because, following Kumar et al. (2020), they adopted a very low efficiency of thermohaline convection with \(\alpha_{\rm th}=100\), which corresponds to the finger aspect ratio \(a\approx 1\) supported by the hydrodynamics simulations of Denissenkov (2010), Traxler et al.
(2011), and Denissenkov & Merryfield (2011), they had to adjust parameters for their model of IGW mixing that would make it as efficient as the RGB extra mixing. Therefore, in this respect their model is also similar to our parametric model of IGW mixing for which we have used the observationally constrained values of the parameter \(f(L_{\rm bump})\). However, we have decided to postpone the completion of our work until the results of our 3D hydrodynamics simulations of convection and IGWs in our \(1.2M_{\odot}\) upper RGB model star were obtained and published (Blouin et al., 2023). These results have shown that even in the highest-luminosity RGB-tip model the rate of IGW mixing is much slower than what is required to identify it with the RGB extra mixing, therefore we have rejected IGWs as the main mechanism of extra mixing in upper RGB stars and Li-enrichment of RC stars.

Our \(1.2M_{\odot}\) model star spends only 14% of its upper RGB lifetime (6 Myr out of 44 Myr) between the luminosities \(\log_{10}\left(L/L_{\odot}\right)\approx 2.75\) and \(\log_{10}\left(L/L_{\odot}\right)=3.3\), where \(A\)(Li) in its atmosphere becomes positive and continues to grow in our parametric model of RGB extra mixing (Figure 6). Values of \(A\)(Li) exceed 1.5, i.e. the star becomes a Li-rich red giant, only at \(\log_{10}\left(L/L_{\odot}\right)\gtrsim 2.9\) during the last 9% of its upper RGB lifetime (4 Myr out of 44 Myr). These lifetimes are reduced to 6% and 4%, respectively, if we compare them with the RGB evolutionary time between the end of the FDU and the RGB tip (100 Myr). The time our star has \(A\)(Li) \(>1.5\) on the upper RGB is only 7% of its HB lifetime (56 Myr), which is not very different from the 17% fraction of Li-rich objects among the "highly-evolved AGB or RGB stars" in the sample of high-resolution spectroscopic Li-rich red-giant targets studied by Yan et al. (2021). Note that it is difficult to distinguish AGB and RGB stars by asteroseismology methods because they have a similar structure. This similarity also means that if the IGW mixing were as efficient in upper RGB stars as the RGB extra mixing it would prevent the formation of the \({}^{13}\)C pockets for the main \(s\) process (Straniero et al., 1995) in their AGB descendants that needs much slower mixing to gently inject protons into the \({}^{4}\)He- and \({}^{12}\)C-rich layers (Denissenkov & Tout, 2003). The 3D hydrodynamics simulations of IGW mixing in upper RGB stars have not ruled out the possibility of such slow IGW mixing in AGB stars (Blouin et al., 2023).

Unfortunately, there is not yet a model or magneto-hydrodynamics simulations of AMRI applicable to the magnetic and thermodynamic conditions in the radiative zones of low-mass RGB stars. The simulations of AMRI by Rudiger et al. (2015) assumed a uniform density distribution, therefore they neglected the stabilizing effect of the buoyancy produced by the stable thermal stratification in the radiative zone of an upper RGB star. To compensate for this deficiency of the simulations, we have assumed that the AMRI diffusion coefficient is proportional to the radiative diffusivity \(K\) that reduces the stabilizing buoyancy force via the exchange of heat between rising fluid and its surroundings. For the same reason, we have also assumed that the AMRI mixing, like the other RGB extra mixing mechanisms, may become active only in the chemically-uniform radiative zone above the bump luminosity.
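A minimal sketch of the Equation (6) scaling and of the factor-of-\(10^{4}\) enhancement estimated in Section 4.3 is given below. The \(D_{0}\) and \(\alpha\) values are the ones quoted from Spada et al. (2016) and Moyano et al. (2023); the core-to-envelope spin ratio of 100 at the bump luminosity is our reading of the extrapolation discussed in Section 2.2, and the tip value assumes the envelope spins down by a further factor of 100.

```python
# Eq. (6): D_AMT = D0 * (Omega_core / Omega_env)**alpha  [cm^2/s].
def D_amt(spin_ratio, D0=50.0, alpha=2.0):
    return D0 * spin_ratio**alpha

ratio_bump = 100.0               # assumed core/envelope spin ratio near the bump luminosity
ratio_tip = ratio_bump * 100.0   # envelope spun down ~100x by expansion (Section 4.3)

print(D_amt(ratio_bump))                     # ~ 5e5, as quoted in Section 2.2 (Moyano et al. values)
print(D_amt(ratio_tip))                      # ~ 5e9, i.e. the ~10^4 enhancement near the tip
print(D_amt(ratio_bump, D0=1.0, alpha=3.0))  # ~ 1e6 with the Spada et al. (2016) values
```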
It is not clear yet how the empirically constrained equation \(D_{\rm mix}\propto(\Omega_{\rm core}/\Omega_{\rm env})^{2}\) for the AMRI diffusion coefficient is transformed into the scaling relation \(D_{\rm mix}\propto(L/L_{\rm bump})^{p}\) used in our parametric model of RGB extra mixing. The value of \(p\) for the AMRI mechanism probably depends on the ratio of the core-envelope rotational coupling time to the evolutionary time on the RGB. Given that \(R\propto L\) on the upper RGB, it may become as large as \(p\approx 4\), when the RGB evolution significantly speeds up near the RGB tip, and the AMRI angular-momentum transport may not be fast enough to decrease the resulting differential rotation between the rapidly-rotating core and the convective envelope whose rotation is slowed down by its expansion and conservation of angular momentum. On the other hand, the much slower evolution of the star at the beginning of its ascent of the upper RGB may give the AMRI enough time to reduce the ratio \((\Omega_{\rm core}/\Omega_{\rm env})\), which could result in \(p<4\). Therefore, in this work we have used the parametric model only with the value of \(p=4/3\), which we expected to be appropriate for the models of IGW and rotational mixing with the modified Reimers mass-loss rate, just as a proof of concept.

When we look for observational data that could be used to substantiate or disprove our hypothesis of Li-enrichment of RC stars based on the AMRI mechanism of RGB extra mixing, we find the correlations of enhanced Li abundances in field and open-cluster red giants with their rotation (e.g., Drake et al., 2002; Ming-hao et al., 2021; Tsantaki et al., 2023) to be supporting evidence. On the other hand, the scarcity of Li-rich HB stars in both globular clusters and open clusters with low MS turn-off masses (Kirby et al., 2016; Sanna et al., 2020; Tsantaki et al., 2023) can be considered as an argument against our hypothesis. However, the last observational fact could be even more difficult to explain if the Li-enrichment of RC stars were associated with the He-core flash in their RGB progenitors because this is a universal physical process occurring in all low-mass stars, whereas the AMRI mixing mechanism may depend on different rotational, mass-loss, and magnetic-field evolution histories of low-mass stars which may be responsible for the observed diversity of Li-abundance distributions in HB stars in the field and in stellar clusters.

## Data availability

The data underlying this article will be shared on reasonable request to the corresponding author.

## Acknowledgements

SB is a Banting Postdoctoral Fellow and a CITA National Fellow, supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). FH acknowledges funding through an NSERC Discovery Grant. PRW acknowledges funding through NSF grants 1814181 and 2032010. FH and PRW have been supported through NSF award PHY-1430152 (JINA Center for the Evolution of the Elements). The computations and data analysis were carried out on the Astrohub online virtual research environment ([https://astrohub.uvic.ca](https://astrohub.uvic.ca)) developed and operated by the Computational Stellar Astrophysics group ([https://csa.phys.uvic.ca](https://csa.phys.uvic.ca)) at the University of Victoria and hosted on the Compute Canada Arbutus Cloud at the University of Victoria.
2309.13348
Accelerating Particle and Fluid Simulations with Differentiable Graph Networks for Solving Forward and Inverse Problems
We leverage physics-embedded differentiable graph network simulators (GNS) to accelerate particulate and fluid simulations to solve forward and inverse problems. GNS represents the domain as a graph with particles as nodes and learned interactions as edges. Compared to modeling global dynamics, GNS enables learning local interaction laws through edge messages, improving its generalization to new environments. GNS achieves over 165x speedup for granular flow prediction compared to parallel CPU numerical simulations. We propose a novel hybrid GNS/Material Point Method (MPM) to accelerate forward simulations: interleaving MPM in GNS rollouts satisfies conservation laws and reduces the error of a pure surrogate model, achieving a 24x speedup compared to pure numerical simulations. The differentiable GNS enables solving inverse problems through automatic differentiation, identifying material parameters that result in target runout distances. We demonstrate the ability of GNS to solve inverse problems by iteratively updating the friction angle (a material property) using the gradient of a loss function based on the final and target runouts, thereby identifying the friction angle that best matches the observed runout. The physics-embedded and differentiable simulators open an exciting new paradigm for AI-accelerated design, control, and optimization.
Krishna Kumar, Yongjin Choi
2023-09-23T11:52:43Z
http://arxiv.org/abs/2309.13348v1
Accelerating Particle and Fluid Simulations with Differentiable Graph Networks for Solving Forward and Inverse Problems ###### Abstract. We leverage physics-embedded differentiable graph network simulators (GNS) to accelerate particulate and fluid simulations to solve forward and inverse problems. GNS represents the domain as a graph with particles as nodes and learned interactions as edges. Compared to modeling global dynamics, GNS enables learning local interaction laws through edge messages, improving its generalization to new environments. GNS achieves over 165x speedup for granular flow prediction compared to parallel CPU numerical simulations. We propose a novel hybrid GNS/Material Point Method (MPM) to accelerate forward simulations: interleaving MPM in GNS rollouts satisfies conservation laws and reduces the error of a pure surrogate model, achieving a 24x speedup compared to pure numerical simulations. The differentiable GNS enables solving inverse problems through automatic differentiation, identifying material parameters that result in target runout distances. We demonstrate the ability of GNS to solve inverse problems by iteratively updating the friction angle (a material property) using the gradient of a loss function based on the final and target runouts, thereby identifying the friction angle that best matches the observed runout. The physics-embedded and differentiable simulators open an exciting new paradigm for AI-accelerated design, control, and optimization. Keywords: GNS, MPM, in situ viz, simulation ## 1. Introduction Simulators that realistically capture complex physics, such as particulate and fluid flow, provide immense value across scientific and engineering fields.
Figure 1. Graph network and MeshNet simulator for accelerating particulate and fluid simulations (modified after (Krishnakumar and Choi, 2023)).
Particulate systems such as granular media show complex transitionary behavior between solid-like and fluid-like responses. Additionally, the turbulent behavior of fluid flow poses unique challenges in modeling its flow around complex boundaries. These simulations require fine-mesh resolutions to capture intricate geometries and long compute times to converge on solutions. To make such simulations more practical, approaches like reduced-order models are often used but sacrifice accuracy for efficiency. Conventional continuum-based simulation techniques, such as the finite element or finite difference methods, can model small-strain problems and face mesh distortion issues in modeling large-deformation flow problems (Kumar and Cunduz, 2017). Although hybrid Eulerian-Lagrangian methods such as the Material Point Method (MPM) can simulate large deformation particulate flow problems, such as landslides, they are computationally expensive and are limited to representative elemental volumes with at most 1M particles; in contrast, a cubic meter of soil has more than 1 billion grains. AI algorithms are widely adopted in building data-only surrogate models; however, they are often used as black boxes to predict a single outcome, such as failure or no failure, and lack physics (Bianchi et al., 2017). We develop a physics-embedded graph network simulator (GNS) that represents the domain as nodes and interactions as learned edge functions, allowing it to generalize beyond training regimes.
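To make the graph representation just described concrete, the sketch below shows one common way a particulate domain is encoded for a GNS-style model: particles become nodes, and an edge is created between every pair of particles within a connectivity radius, so the learned edge function only ever sees local interactions. The function name, the radius-based connectivity rule, and the choice of node and edge features are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

# Illustrative sketch of encoding a particulate domain as a graph for a
# GNS-style simulator. Nodes carry particle state; edges connect particles
# closer than a connectivity radius and carry relative geometry as features.

def build_graph(positions, velocities, connectivity_radius):
    """positions, velocities: (n_particles, dim) arrays."""
    n = positions.shape[0]
    senders, receivers, edge_features = [], [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rel = positions[j] - positions[i]
            dist = np.linalg.norm(rel)
            if dist < connectivity_radius:
                senders.append(i)
                receivers.append(j)
                edge_features.append(np.append(rel, dist))  # local interaction features
    node_features = np.concatenate([positions, velocities], axis=1)
    return (node_features, np.array(senders), np.array(receivers),
            np.array(edge_features))
```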
Using an attention-based GNS surrogate, we propose a novel physics-embedded framework for developing surrogate models. Furthermore, we improve the computational efficiency of traditional simulators while minimizing the errors of a data-only surrogate by proposing a novel hybrid GNS/MPM. The hybrid GNS/MPM combines the best of both worlds by interleaving MPM with GNS to satisfy conservation laws while accelerating the forward simulations with GNS, offering an order of magnitude better computational efficiency over a pure numerical simulation. A major challenge in studying particulate and fluid flow is solving optimization and inverse problems. This inverse analysis of identifying the optimal configuration or material properties that yield a specific response requires incrementally varying input parameters or the model configuration and rerunning models to match observations, an inefficient trial-and-error approach. By exploiting automatic differentiation in GNS, we solve the inverse analysis with gradient-based optimization. Furthermore, the next-generation differentiable GNS opens a new AI-embedded design, control, and optimization paradigm. ## 2. State of the Art Numerical methods provide approximate solutions to partial differential equations (PDEs) by discretizing the solution space into finite entities. Particle-based approaches like the discrete element method (DEM) offer the advantage of modeling the microscale grain-grain interactions, albeit constrained to representative elemental volumes (Kumar and Cunduz, 2017). Traditional continuum-based methodologies, such as the finite element method (FEM), are proficient in predicting failure initiation but fall short due to mesh distortions when handling large-deformation runouts (Kumar and Cunduz, 2017). Hybrid Eulerian-Lagrangian methods like the material point method (MPM) alleviate mesh distortion issues but necessitate grid and material point tracking, proving computationally expensive (Kumar and Cunduz, 2017). However, these methods only leverage CPU parallelization, and the hybrid particle-mesh transfer degrades the scaling performance of MPM, limiting its applicability for exa-scale simulations. Furthermore, traditional forward simulators cannot solve inverse and design problems, as they are limited to computing gradients in the forward mode. Solving inverse problems requires a special adjoint method that manually defines the derivatives of the forward model equations. The lack of reverse-mode differentiation limits the AI-embedded simulation paradigm. Neural network (NN)-based ML models have shown promising results in predicting soil deformations under specific load conditions (Bianchi et al., 2017; Kumar and Cunduz, 2017; Cunduz, 2017; Cunduz, 2017). However, these models' 'black-box' nature impedes interpretability, necessitating significant training data and leaving them vulnerable to adversarial attacks. Physics-informed neural networks (PINNs) embed prior knowledge, such as PDEs and boundary conditions, as a loss function in model training (Hai et al., 2017). However, PINNs are limited to the boundary conditions of the training data and may not yield PDE-compliant predictions during extrapolation. Graph network simulators (GNS) offer a promising alternative that exploits graph networks to represent the underlying domain and learn the local interactions rather than the global dynamics, thus allowing extrapolation to geometries beyond the training regime (Bianchi et al., 2017; Kumar and Cunduz, 2017; Cunduz, 2017).
Haeri and Skonieczny (Hai et al., 2017) reduced the dimensionality of the data using Principal Component Analysis to model graph networks. Mayr et al. (Mayr et al., 2018) developed a contact boundary in GNS to model complex boundary interactions with granular media. Kumar and Vantassel (Kumar and Vantassel, 2017) developed a multi-GPU parallel GNS to achieve linear strong scaling during GNS training. Kumar et al. (Kumar and Cunduz, 2017) exploited GNS as an oracle for large-scale in situ visualization of regional-scale landslides. A new class of differentiable simulators offers a promising solution to solve complex inverse problems by enabling differentiation in forward and reverse modes through automatic differentiation. Initiatives like JAX-MD (Kumar and Vantassel, 2017) and JAX-FLUIDS have made strides towards creating differentiable simulators (DiffSim) for particulate and fluid systems (Bianchi et al., 2017). Differentiable simulation allows the incorporation of physics and domain knowledge into ML models, leading to better generalization. Differentiable simulators also enable end-to-end gradient-based optimization, facilitating continuous adaptation and meta-learning. Nevertheless, the object-oriented design of numerical methods, replete with branching conditions, poses challenges in automatic differentiation, requiring stateless implementations for acceleration with Just-In-Time compilations. Integrating AI acceleration with traditional numerical methods can revolutionize numerical simulations and achieve new simulation frontiers. ## 3. Graph Network Simulation Graphs can represent interactions in physical systems (Bianchi et al., 2017; Kumar and Cunduz, 2017). We represent the particulate media as a graph \(G=(\mathbf{V},\mathbf{E})\) consisting of a set of vertices \((\mathbf{v}_{i}\in\mathbf{V})\) representing the particles or aggregations of particles and edges \((\mathbf{e}_{i,j}\in\mathbf{E})\) connecting a pair of vertices (\(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\)) representing the interaction between them. Graphs offer a permutation-invariant form of encoding data, where the interaction between vertices is independent of the order of vertices or their position in Euclidean space. A graph neural network (GNN) takes a graph \(G=(\mathbf{V},\mathbf{E})\) as an input, computes properties and updates the graph, and outputs an updated graph \(G^{\prime}=(\mathbf{V}^{\prime},\mathbf{E}^{\prime})\) with an identical structure, where \(\mathbf{V}^{\prime}\) and \(\mathbf{E}^{\prime}\) are the sets of updated vertex and edge features (\(\mathbf{v}^{\prime}_{i}\) and \(\mathbf{e}^{\prime}_{i,j}\)). The GNN generates an updated graph by propagating information through the graph, termed _message passing_. Graph Network Simulators (GNS) (Garon et al., 2016; Chen et al., 2017; Li et al., 2018; Li et al., 2018) operate on graphs to learn the physics of the dynamic system and predict rollouts. The graph network spans the system domain, with nodes representing a collection of particles and the links connecting the nodes representing the local interaction between particles or clusters of particles. The GNS learns the physics of the system dynamics, such as momentum and energy exchange, through message passing on the graph. GNS has three components (see fig.
1a): (a) Encoder, which embeds particle information into a latent graph, where the edges are learned functions; (b) Processor, which allows data propagation and computes the nodal interactions across steps; and (c) Decoder, which extracts the relevant dynamics (e.g., particle acceleration) from the graph. We introduce physics-inspired inductive biases, such as an inertial frame that allows learning algorithms to prioritize one solution (constant gravitational acceleration) over another, reducing learning time. The GNS implementation uses semi-implicit Euler integration to update the next state based on the predicted accelerations. We extend GNS with an attention mechanism to focus on the local interaction law and generate physically consistent predictions by enforcing conservation laws (mass, momentum, and energy) as soft constraints. The attention coefficient between nodes is defined as a weighted function of the node features over its neighbors. The graph attention mechanism improves predictions over long time scales, with weight-sharing properties to represent the dynamically changing neighbors typical of large-deformation particulate flows. ### Training and rollout The training datasets include 26 square-shaped granular mass flow trajectories in a two-dimensional box boundary simulated using the Material Point Method (CB-Geo MPM) code (Li et al., 2018). Each simulation has a different initial configuration regarding the size of the square granular mass, position, and velocity. We used a learning rate \(\eta=1E-4\) and trained for 20M epochs on Nvidia A100 GPU nodes on TACC LoneStar6. GNS successfully predicts the rollout of granular media within 5% particle location error compared to MPM simulations (see fig. 3). Additionally, GNS achieves a speed-up greater than 165x compared with the distributed-memory parallel CB-Geo MPM code. ### MeshGraphNet We describe the state of the system at time t using a simulation mesh \(M_{t}=(V,E_{M})\) with nodes \(V\) connected by mesh edges \(E_{M}\). Each node \(i\in V\) is associated with a reference mesh-space coordinate \(x_{i}\), which spans the simulation mesh, and additional dynamical quantities \(q_{i}\) that we want to model. The task is to learn a forward model of the dynamic quantities of the mesh at time \(t+1\) given the current mesh state \(M_{t}\) and a history of previous meshes \(M_{t-1},\dots,M_{t-n}\). We employ an Encode-Process-Decode architecture similar to the graph network simulator, followed by an integrator, as shown in fig. 1b. Figure 2 shows the prediction of von Karman vortex shedding from the MeshGraphNet compared with a ground truth Computational Fluid Dynamics (CFD) solution. ## 4. Accelerating forward problems with GNS We develop a hybrid GNS-accelerated numerical simulation with the Material Point Method for a fast solution to forward problems. We design a hybrid GNS-MPM approach incorporating domain-specific knowledge and conservation laws to achieve improved convergence. Figure 3 shows the hybrid GNS-MPM framework, which includes three main stages. _Warm-up_: GNS prediction requires the previous five steps to predict a rollout. We first generate the initial five velocity steps using MPM with specified boundary conditions. We run the physics solver with a predefined 'K' of five steps. _GNS rollout_: After the warm-up step, we predict the rollout, as described in section 3, based on the previous K timesteps for further '\(M\)' steps.
_Iterative Refinement_: The output of the GNS rollout may not satisfy known conservation laws, despite inductive biases and constraints. We feed the output of the GNS rollout to the MPM physics solver to perform 'K' iterations. The data-driven model integrated with MPM will generate physics-conserving simulations in less time. We achieve a speed-up of 20x compared to traditional explicit simulation, while most of the computation time is still spent on the '\(n*K\)' runs. Figure 3 shows that the hybrid GNS+MPM reduces displacement errors compared to pure GNS-only runs. Figure 4 shows the effect of hybrid GNS+MPM in reducing the final error of pure-GNS-only models. Further research could explore different criteria for adaptive switching between GNS/MPM based on error metrics.
Figure 2. MeshNet for simulating fluid flow.
## 5. Accelerating inverse problems with differentiable GNS A critical challenge in engineering design and optimization is solving the inverse problem, which involves identifying the parameters that lead to a desired result. Traditional simulators like MPM can differentiate in the forward mode to compute gradients of PDEs. However, they cannot compute gradients needed for inverse problems using reverse-mode differentiation. Inverse problems require techniques like the adjoint method to manually define derivatives of the forward model equations to calculate gradients in reverse mode. We leverage automatic differentiation (AD) in the PyTorch version of GNS to solve inverse problems. AD uses the chain rule to compute gradients of complex differentiable functions efficiently. AD enables accurate and fast gradient calculations by breaking down functions into elementary operations. Our goal is to solve an inverse problem in granular flow to identify material properties that, given an initial geometry, result in a desired runout. We demonstrate this in the granular column collapse experiment. In this experiment, a rectangular granular column is released on a flat surface and collapses under gravity. The runout depends on the initial aspect ratio and material properties like friction angle. The inverse problem is to find the optimal friction angle (\(\phi\)) that gives a target runout distance (\(L_{f}^{\phi_{target}}\)) for a given initial column geometry (aspect ratio \(a\)). An optimizer computes the squared error between the target and simulated runout distances (\(J(L_{f}^{\phi_{target}},L_{f}^{\phi})=(L_{f}^{\phi_{target}}-L_{f}^{\phi})^{2}\)) and updates \(\phi\) to minimize the error. We use AD to directly compute \(\frac{\partial J}{\partial\phi}\) rather than finite differences. The downside of using AD is that it requires a significant amount of memory for large-scale inversion of neural networks (Beng et al., 2015) because it retains the gradients of the parameters of all the intermediate layers during the backward pass. Since GNS contains multiple MLPs with multiple layers, and the entire simulation entails accumulating positions \(GNS(\mathbf{X}_{t}\rightarrow\mathbf{X}_{t+1})\) over \(k\) steps, computing \(\frac{\partial J}{\partial\phi}\) requires extensive memory capacity. We found that conducting AD over all timesteps is not feasible within the currently available GPU memory capacity (40 GB). For this reason, we conduct the AD on the CPU and restrict the forward pass to \(k\)=30 steps in the optimization process. Accordingly, our target runout corresponds to the runout at 30 steps, not at the final timestep when the flow reaches static equilibrium.
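The optimization loop just described fits in a few lines of PyTorch. The sketch below is illustrative, assuming `gns_rollout` stands for the trained differentiable GNS forward model and that the runout is read off the final predicted positions; the function and variable names are ours, not the paper's API.

```python
import torch

# Minimal sketch (assumed names, not the authors' code) of the inverse problem:
# find the friction angle phi whose k-step GNS rollout reproduces a target
# runout, using reverse-mode automatic differentiation through the rollout.

def solve_inverse(gns_rollout, x0, target_runout, phi_init=45.0,
                  lr=1.0, n_iterations=20, k_steps=30):
    phi = torch.tensor(phi_init, requires_grad=True)
    optimizer = torch.optim.SGD([phi], lr=lr)  # plain gradient descent on phi
    for _ in range(n_iterations):
        optimizer.zero_grad()
        positions = gns_rollout(x0, phi, steps=k_steps)  # differentiable forward pass
        runout = positions[-1, :, 0].max()               # placeholder runout measure
        loss = (target_runout - runout) ** 2             # J = (L_target - L_phi)^2
        loss.backward()                                   # dJ/dphi via autograd
        optimizer.step()
    return phi.detach()
```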
Figure 4(a) shows the target profile for a friction angle \(\phi=30^{\circ}\). We use an initial guess of \(\phi=45^{\circ}\) to solve the inverse problem of estimating the friction angle based on the final runout profile. We use a simple gradient descent algorithm to update the friction angle at each step based on the gradient of the loss function with respect to the friction angle. After 17 iterations, the solution converges to \(\phi=30.7^{\circ}\) (see fig. 4(b)). Figure 4 shows the evolution of the friction angle with each inverse iteration step. The friction angle converges quickly, in about six iterations. We demonstrate that a single-parameter inversion successfully identifies the initial material properties based only on the final runout by computing gradients using automatic differentiation.
Figure 3. Accelerating forward simulation with hybrid GNS/MPM.
Figure 4. Hybrid GNS/MPM error evolution compared to GNS.
## 6. Interpretable GNS When a GNS successfully replicates a physics system's dynamics, we hypothesize that the messages encoding the latent information preserve the interaction laws. The sparse representation of the GNS messages (\(\mathbf{e}_{k}^{\prime}\leftarrow\phi^{\mathcal{E}}(\mathbf{e}_{k},v_{\mathbf{r}_{k}},v_{\mathbf{s}_{k}},u)\)) is a learned linear combination of the true forces. We predict the n-body dynamics using the open-source data-parallel PyTorch GNS code developed by the PI (Peters et al., 2017; Krizhevsky et al., 2017). The GNN is trained on 30 different trajectories of n-body dynamics (\(\sim\) 10 particles) for 1 million steps.
We identify the symbolic expression as the one that maximizes the fractional drop in MAE over an increase in complexity from the next best model (\(-\Delta\log(MAE_{c})/\Delta c\)). In this work, we extract the GNS edge messages of a small-scale system (10 bodies) interacting via linear springs. We then apply SR on GNS messages to identify the most accurate closed-form expression that describes the encoded interaction law as shown in table 1. SR on GNS messages successfully derived (Eq 8 in table 1) the force interaction law \(F_{n}=k_{n}*abs(\Delta x-r_{i}-r_{j})\) of a linear spring with stiffness \(k_{n}=100\), relative position \(\Delta x\) between two particles (\(i\) and \(j\)) and their radii \(r\). ## 7. Limitations While the graph neural network simulator demonstrates promising acceleration for simple particulate systems, applying it to diverse large-scale multi-physics problems poses significant research challenges. The current node-level attention mechanism needs further analysis on its ability to learn interaction physics and generalize across problems effectively. Scaling GNS using graph partitioning and advanced sampling techniques are essential for training GNS on millions of particles. Furthermore, orchestrating hybrid Figure 5. Solving inverse problems with GNS. Figure 6. GNN simulation of N-body dynamics and Symbolic Regression explanation of edge interaction. GNS/MPM framework using accurate error metrics to determine when to switch between data-driven prediction and physical solvers is an important direction. Overcoming these limitations in generalization, scalability, physical fidelity, and hybrid modeling will be vital to unlocking the potential of differentiable GNS for accelerating scientific discoveries. ## 8. Conclusions This work introduces novel physics-embedded differentiable graph network simulators (GNS) to accelerate particle and fluid simulations and solve challenging inverse problems. The graph representation allows learning localized physics interactions compared to global dynamics, improving generalization. GNS achieves over 165x speedup compared to parallel CPU MPM simulations for granular flow prediction. The differentiable GNS enables solving inverse problems through automatic differentiation, identifying material parameters that result in target runout distances. The physics-embedded and differentiable simulators open an exciting new paradigm for AI-accelerated design, control, and optimization. ## 9. Acknowledgments This material is based upon work supported by the National Science Foundation under Grant No.\(\pi\)2103937. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
2309.10127
Effects of Explanation Strategies to Resolve Failures in Human-Robot Collaboration
Despite significant improvements in robot capabilities, they are likely to fail in human-robot collaborative tasks due to high unpredictability in human environments and varying human expectations. In this work, we explore the role of explanation of failures by a robot in a human-robot collaborative task. We present a user study incorporating common failures in collaborative tasks with human assistance to resolve the failure. In the study, a robot and a human work together to fill a shelf with objects. Upon encountering a failure, the robot explains the failure and the resolution to overcome the failure, either through handovers or humans completing the task. The study is conducted using different levels of robotic explanation based on the failure action, failure cause, and action history, and different strategies in providing the explanation over the course of repeated interaction. Our results show that the success in resolving the failures is not only a function of the level of explanation but also of the type of failure. Furthermore, while novice users rate the robot higher overall in terms of their satisfaction with the explanation, their satisfaction is not only a function of the robot's explanation level at a certain round but also of the prior information they received from the robot.
Parag Khanna, Elmira Yadollahi, Mårten Björkman, Iolanda Leite, Christian Smith
2023-09-18T20:04:16Z
http://arxiv.org/abs/2309.10127v1
# Effects of Explanation Strategies to Resolve Failures in Human-Robot Collaboration ###### Abstract Despite significant improvements in robot capabilities, they are likely to fail in human-robot collaborative tasks due to high unpredictability in human environments and varying human expectations. In this work, we explore the role of explanation of failures by a robot in a human-robot collaborative task. We present a user study incorporating common failures in collaborative tasks with human assistance to resolve the failure. In the study, a robot and a human work together to fill a shelf with objects. Upon encountering a failure, the robot explains the failure and the resolution to overcome the failure, either through handovers or humans completing the task. The study is conducted using different levels of robotic explanation based on the failure action, failure cause, and action history, and different strategies in providing the explanation over the course of repeated interaction. Our results show that the success in resolving the failures is not only a function of the level of explanation but also of the type of failure. Furthermore, while novice users rate the robot higher overall in terms of their satisfaction with the explanation, their satisfaction is not only a function of the robot's explanation level at a certain round but also of the prior information they received from the robot. ## I Introduction Robots' and artificial agents' capabilities are rapidly growing as they are deployed in real-world environments like factories, hospitals, and schools. Nevertheless, failures inevitably occur during task execution and collaboration [1], and with the increasing use of robots in in-the-wild environments, where robots are more prone to collaborate with novice and non-expert users, studying failures and mitigating their impact becomes imperative. While in many failure scenarios robots can recover by themselves, there are cases where human assistance is required to resolve failures for task continuity [2]. For a non-expert user, understanding _why_ a robot failure has occurred and if and _how_ they could contribute to the recovery is essential for smooth human-robot collaboration. The emergence of studies on robot failures attests to the evolution of research from exploring people's perception and resolution of failures to the robot's role in identifying and mitigating them [3]. While the topic has expanded to include the use of holistic approaches such as explanation, apology, denial, and promise that identify, resolve, and mitigate failures for untrained users [1, 4, 5], few have studied the effect of these approaches in repeated interactions to the best of our knowledge [6]. Providing an explanation is a practical approach to mitigating failures in collaborative scenarios, particularly when failures require human intervention or assistance. Advances in the field of Explainable AI (XAI) [7] and its extension to goal-driven explanations [8] for robots and agents contribute to research on explanation generation for failures. Currently, the literature on the topic of resolving failures via explanations focuses on determining _what_ type of information should be presented [4] and _how_ the explanations should be automatically generated [9, 10].
In our research, we address the missing link between explanation generation and participant satisfaction in repeated interactions with recurring failures: for example, do we need to be consistent with the explanations as the failures reoccur, or should we provide more details early on and reduce them as the interaction continues? As a result, we developed a study to understand how different strategies of providing explanations in repeated interaction influence non-expert users' performance and satisfaction. We developed a collaborative pick-and-place task, where the robot and human had to place objects from four baskets on a shelf. We counted each basket as one round of interaction and aimed to have four rounds of interaction. We designed two types of strategies for providing the explanations: 1) maintaining the details of the explanation during the rounds, i.e. the _fixed strategy_, and 2) reducing the details of the explanation, i.e. the _decaying strategy_. To develop the strategies, we first defined the explanation levels inspired by the previous work by [4] and labeled them as low, mid, and high. Subsequently, we conducted a between-subject user study where participants experienced either of these strategies in four rounds of interaction. We aimed at evaluating how participants' performance in resolving failures and satisfaction with the explanations were impacted by the explanation levels and strategies, which led us to the following research questions: * _RQ1: How does explanation level impact participants' performance in the task and satisfaction with the explanation?_ * _RQ2: Which explanation strategy (fixed vs. decaying) leads to better performance and satisfaction in participants?_
Fig. 1: The pick and place task: Human places the objects on the table, then the robot's goal is to place them on the shelf while providing an explanation in case a failure occurs. The zoomed views show (top right) two levels of the shelf and (bottom right) the markers for object placement on the table.
## II Related Work Several studies in the fields of human-robot interaction (HRI) and collaboration (HRC) have addressed the importance of understanding the effect of failures on trust [11] and perception [12], and mitigating their impact through failure recovery [13], explainability [4], and promise [5]. In [11], a user study was designed to investigate the effects of a collaborative robot's failure on human trust and the impact of justification strategies. Altogether, the results indicated that a faulty robot is regarded as far less trustworthy. It was also shown that the impact of failures was reduced with justifications when the consequence of failure was less significant. With the change of trend in using holistic approaches to identify and resolve failures, one of the approaches used more commonly in recent years is generating and providing explanations, studied on both the computational front, e.g., XAI [4, 14], and the social front, e.g., behavioral [15]. Much of the research in explainability has been inspired by the sociocognitive definitions of explanations in various fields and their social implications [15]. A recent review by Wallkotter et al. [16] identified three research directions on the topic that contribute to understanding the explainability mechanisms and how they can be integrated into the interaction context, with occasional overlaps with the field of XAI. On the topic of studying explainability in robot failures, a study by Das et al.
investigated the types of explanation that helped non-experts to identify robot failures and assist the recovery by extending the XAIP algorithms via introducing failure explanation [4]. The goal was to produce explanations for unexpected failures in a pick-and-place task for a robot in a household environment. Failure and solution identification has been observed to be most effective when explanations include the context of the failure action and the history of previous actions. Another study in [14] used machine-learning models to predict robot grasp failures and study the tradeoff between accuracy using black-box models and interpretability using explainable models. They showed that an explanation of predicted faults could contribute to the efficiency of designing the robot and avoiding future failures. Diel et al. proposed a causal-based method to develop explanations for robot failures in collaborative scenarios [10]. Their approach incorporated learning from a causal Bayesian network that enabled the robot to generate the explanation by contrasting a failure state against the closest successful state and by using a breadth-first search. Beyond the studies focusing on generating explanations, the effects of different types and amounts of explanation by an XAI system on human understanding of the system were discussed in [17], where an increase in the information contained in the explanation resulted in the users' better understanding and prediction of the system behavior, as well as increased user performance. However, this came at the cost of increased time and attention needed by users to comprehend the explanation. ## III Design ### _Collaborative Task Design_ We designed a pick-and-place task where a Baxter robot and a human had the goal of collaboratively picking objects from a basket and placing them on the shelf (Fig. 1). We created four baskets (numbered 1 to 4), each including a combination of four household items presented in Fig 2. This resulted in a total of 16 objects that needed to be placed on the shelf during the whole duration of the experiment. In our design, each round of the experiment started with picking the items from one basket and putting them on the table, and ended with placing them on the shelf when the task was successfully executed. The placement of an object was deemed unsuccessful if it was not placed on the shelf. We marked each object in the basket with an A, B, C, or D tag on one face and a fiducial tag [18] on the other to let the robot detect the object. At the start of each round, the human collaborator placed all objects from the basket in the corresponding positions as they were marked (see Fig. 1). For handling each object, the robot executed the following steps: _detect_ the object, _pick_ it up, _carry_ it, and finally _place_ it on the shelf. A possible failure could happen at each step during collaboration with the robot. As a result, we defined the following failures and possible resolutions that could help complete the task despite a failure. In the next section, the explanations generated based on these failures and resolutions are provided. 1. **Detect Failure (\(f_{0}\)):** Robot failed to detect the object on the table, e.g. not being able to scan the tag. **Resolution Action (\(r_{0}\)):** Human moves or rotates the object to ensure the tag is visible to the robot. 2. **Pick Failure (\(f_{1}\))**: Robot failed to pick up an object, e.g. not fitting in the gripper, based on its placement or size.
**Resolution Action (\(r_{1}\))**: Human picks up and hands over the object to the robot. 3. **Carry Failure (\(f_{2}\))**: Robot failed to carry an object, e.g. weight beyond the limit the robot can handle. **Resolution Action (\(r_{2}\))**: Robot hands over the object to the human and the human places it on the shelf. 4. **Place Failure (\(f_{3}\))**: Robot failed to place an object, e.g. the desired destination is beyond the robot's reach. **Resolution Action (\(r_{3}\))**: Robot hands over the object to the human, and they place it at the desired location.
Fig. 2: Objects as they were required to be placed in front of the robot
Fig. 3 shows the workflow for the placement of an object with possible failures denoted in red, which were accompanied by an explanation from the robot and resolved from the human side.
Fig. 3: Description of human-robot collaborative task with the robot and human action spaces. Arrows in green represent transitions due to action success. Arrows in red represent transitions due to action failure.
In the task design, we included 7 objects for which the robot successfully executed all steps and 9 objects involving some robotic failures, spread out across the four rounds as shown in Table II. We did not intentionally incorporate detection failures (\(f_{0}\)), but as they might occur due to the way the object is placed on the table, we provide the appropriate resolution. For any other unintended failure (\(f_{4}\)), the resolution (\(r_{4}\)) in the form of asking the human to place the object on the shelf was also integrated. ### _Explainability Mechanisms_ We considered three verbal explanation levels: low, medium, and high. Additionally, we included a nonverbal baseline to complement the explanations as a result of initial pilot studies, where we noticed users needed some baseline behaviors to understand the failures, particularly when given low-level explanations. As a result, we designed the following explanation levels inspired by [4], and Table I presents each explanation for each failure type. * Low Level: Based on _action-based_ explanation in [19]. After the failure, the robot states the failure action and its resolution. * Medium Level: Based on _context-based_ explanation in [19]. Post failure, the robot states the failed action and the cause of failure, followed by a resolution statement. * High Level: Based on _context-based + history-based_ explanation in [19]. After failure, the robot states the previous successfully completed action, the current failure action, and its cause. The resolution statement also includes the resolution action. Informed by our pilot studies, we included a nonverbal baseline to help with identifying the failure in lower explanation levels. * Zero (Nonverbal): This only includes the robot head shaking at each failure, with more specific robotic actions based on the failure type. ### _Interaction Details_ The Baxter robot was programmed in ROS and only used its left arm. More detail on the technical developments and the interaction is available in [20] and in the video accompanying this work. Each round started with the robot receiving verbal confirmation that all objects were placed on the table, after which the robot proceeded to pick up the objects by following the action sequence depicted in Fig. 3. Once a failure occurred, the robot exhibited non-verbal actions described in Table I, followed by an explanation based on the current strategy, and waited for the human to resolve the failure before moving to the next step.
If the failure was not resolved in a predefined amount of time, the robot repeated itself up to five times, spaced at three-second intervals. The system is autonomous, but the experimenter (unbeknownst to the participant) made the decision to move to the next step when participants failed to complete the task after five repetitions (something that might happen in low explanation cases). To avoid handover failures, the human-to-robot handover was completed after the robot received a verbal confirmation to close its gripper once the human handed over the object, and the robot-to-human handover used sufficient pull-force, in line with a prior study [21]. ## IV Methodology ### _Experiment Design_ We investigated two explanation strategies (fixed and decaying) using the three levels of explanations (low, mid, and high). For the fixed explanation strategy, we tested each explanation level using three conditions: C1, C2, and C3, presented in Table III. For the decaying explanation strategy, we focused on the rate of decay. Given four rounds of interactions, we defined two types of decay: _slow_ (D1) and _rapid_ (D2). Slow decay was implemented by reducing the level of explanation once per round, which resulted in the following combination: high, medium, low, and none. In rapid decay, the explanation was reduced from high to low and then kept at a low level, as presented in Table III. ### _Hypotheses_ Prior research on the topic of XAI and explainability in robotics has shown mixed results in how humans perceive explanations. In the study by Das et al. [4], explanations that encompassed context and history of past successful interactions were able to improve failure identification and resolution. Their context-based explanation including history corresponds to our high-level explanation. Accordingly, with regard to our first research question, we have the following hypotheses: * _H1a: participants show better performance, e.g. shorter task resolution time and more successful failure resolution, in the high explanation level compared to the low and mid levels._ * _H1b: participants are more satisfied when given more detailed explanations compared to lower or intermediate explanations._ Given our second research question, we hypothesize: * _H2a: In final rounds, participants' performance and satisfaction in decaying conditions (with low explanations) is better than in the fixed-low explanation condition._ * _H2b: In final rounds, participants have comparable performance and satisfaction in decaying conditions (with low explanations) compared to fixed-high explanation conditions (with high explanations)._ For H2a, we specifically focus on Low-level explanations in round 3 and expect participants to perform better in the decaying conditions (D1, D2) compared to the fixed low explanation condition C1, as they were given higher explanations in the previous rounds. We also compare the performance in round 3 of C3 with that of (D1, D2), for which we expect participants to have similar perceptions and performances despite the low level of explanation, as they have already been exposed to higher explanations in earlier rounds (H2b). The level of explanation is a between-subject variable for the fixed strategy conditions (C1, C2, and C3). The level of explanation also varies within the decaying strategy conditions (D1 and D2), as it changes both between conditions and within the decaying conditions. The dependent variables are participants' performance in the task and their explanation satisfaction rating.
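The mapping from condition and round to explanation level described above is simple enough to state as a lookup table. The sketch below encodes it; the function name and the string labels are illustrative, and the schedule follows the fixed (C1-C3) and decaying (D1 slow, D2 rapid) strategies as described in the text and Table III.

```python
# Minimal sketch of the explanation-level schedule described above
# (not the study's actual control code; labels and names are illustrative).

SCHEDULE = {
    "C1": ["low", "low", "low", "low"],      # fixed, low level
    "C2": ["mid", "mid", "mid", "mid"],      # fixed, mid level
    "C3": ["high", "high", "high", "high"],  # fixed, high level
    "D1": ["high", "mid", "low", "none"],    # slow decay: one level per round
    "D2": ["high", "low", "low", "low"],     # rapid decay: high, then kept low
}

def explanation_level(condition, round_number):
    """Return the verbal explanation level for a condition and round (1-4)."""
    return SCHEDULE[condition][round_number - 1]

# Example: the level a D1 participant hears in round 3
# explanation_level("D1", 3) -> "low"
```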
### _Measures_ In line with our hypotheses, the measures for this experiment were _participants' performance_ and _participants' satisfaction_, collected through multiple variables after the completion of each round of interaction. #### IV-C1 Participants' performance We measure the performance over two dimensions corresponding to the instances where a failure occurs: 1) the time participants take to intervene and resolve a failure, and 2) their success rate in resolving the failure, e.g. placing the object on the shelf. **Failure resolution time:**\(T_{res}\) is calculated from when the robot completes the explanation statement to when the participant completes the resolution, e.g. placing the object on the shelf. **Success rate of failure resolution:** This is a measure of the successful resolution of each failure, and it is measured differently depending on the type of failure, as presented in the Collaborative Task Design section. #### IV-C2 Participants' perception The participants' perception was measured using an explanation satisfaction survey and some task-related questions. The task-related questions included more open-ended questions, designed to understand participants' approaches to resolving the failure beyond the robot's explanations. We are not reporting the qualitative analyses of the responses in this paper. **Explanation satisfaction scale:** To measure how satisfied participants were with the explanations at each round, we asked them to respond to 8 questions after completing each round. The questions were originally introduced and evaluated in [22]. They define explanation satisfaction as "the degree to which users feel that they understand the AI system or process being explained to them". The questions were derived from the psychological literature on explanation and include several key attributes of explanations: _understandability_, _feeling of satisfaction_, _sufficiency of detail_, _completeness_, _usefulness_, _accuracy_, _trustworthiness_. ### _Participants and Procedure_ We recruited sixty-nine participants via advertisement on campus. Our main criterion was that the participants had no prior experience in physical collaboration with a robot. Twelve participants had to be excluded from the analysis due to unaccounted robot failures beyond the failures designed for the experiment. The final sample size was N = 55 (\(M=26.63,SD=7.42\)) (21 Female, 33 Male, 1 Other), resulting in 11 participants per condition. At the start, the participants filled out the consent form for data and video collection and read the procedural instructions. They were briefed about their role to place objects on the table and the robot's role to pick them up and place them on the shelf; however, no mention of the possible failures and related resolutions was presented. After the completion of the experiment, they were given a debriefing sheet describing the aim of the study. ## V Results To prepare the data, we first evaluated the internal consistency of the questionnaires using Cronbach's alpha. The _explanation satisfaction_ questionnaire presented high internal consistency, with Cronbach's \(\alpha=0.79\), \(\alpha=0.91\), \(\alpha=0.92\), and \(\alpha=0.92\) for each round, respectively. ### _Impact of Explanation Level_ To investigate _H1a_ and _H1b_, we only looked at the first round of interaction and grouped participants into groups of low, mid, and high explanation levels.
This implied grouping the participants in conditions C3, D1, and D2 into _High-level_, C2 into _Mid-level_, and C1 into _Low-level_. This decision was made to get a baseline for the explanation levels; additionally, analyzing strategies requires multiple rounds of interaction, which we address in the next section. Given that each failure type required a different resolution and intervention to resolve that failure successfully, we analyzed the performances separately for each failure type. Table IV(a) shows the success rate in resolving the failures for each failure type in all three levels. For carry failures, Fisher's exact test (\(p=0.0023\)) shows a significant difference between the low, mid, and high explanation levels in successfully resolving the failure (Fig. 3(a)). According to post hoc tests (\(p=0.0022\)), this difference is significant between the _High-level_ and the _Mid-level_. For place failures (Fig 3(c)), according to Fisher's exact test (\(p=0.0339\)), participants that received the _High-level_ explanation were significantly more successful than the ones receiving the _Low-level_ explanation. For our second measure of performance, we looked at the time participants took to resolve failure cases. For pick and place failures, we observed no significant difference in the resolution times based on the explanation levels. For carry failures, the Kruskal-Wallis chi-squared test \(H(2)=x,p=0.0075\) indicated that the resolution time significantly differed based on the explanation level. Post hoc tests and Figure 3(b) show the difference is significant between Low-level and Mid-level (\(p=0.0061\)), and Mid-level and High-level (\(p=0.045\)). **Results for _H1a_:** Overall, the results partially support _H1a_, where we expected participants to perform better in High-level explanations compared to Mid and Low-level. However, the analyses show that failure type and how much the immediate resolution could be inferred from the environment, irrespective of the explanation, are important factors in the participants' performance. **Results for _H1b_:** Regarding _H1b_, we analyzed participants' responses to the explanation satisfaction questionnaire after the first round. The Kruskal-Wallis chi-squared test indicated no significant difference in the explanation satisfaction between the explanation levels (\(H(2)=2.47,p=0.2903\)), rejecting our hypothesis. The distribution of the satisfaction rating in round 1 is presented in Fig. 6(a).
Fig. 4: Performance in terms of success rate and resolution time for round 1
### _Impact of Explanation Strategy_ To analyze the impact of the explanation strategy, we looked at participants' performance and satisfaction ratings in rounds 3 and 4 for conditions C1, C2, C3, D1, and D2. In _H2a_, we compare the final round performances in the _decaying_ conditions, i.e. D1 and D2, versus the _fixed_ condition C1. In round three of these conditions, participants received Low-level explanations but with different prior exposure. In round four, participants received Low-level explanations in conditions C1 and D2, and Zero-level explanations in D1. The percentages for the success rates in rounds 3 and 4 are presented in Table IV-(b),(c). For pick and carry failures, participants showed success rates above \(80\%\) in all conditions. For place failures, while we observed better performances in the D1 and D2 conditions compared to C1, as shown in Fig. 4(c), the difference was not statistically significant.
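For readers who want to reproduce this style of analysis, both tests reported here are available in scipy. The sketch below shows the pattern on placeholder numbers; the counts and times are purely illustrative and are not the study's data.

```python
from scipy import stats

# Illustrative only: placeholder data, not the study's measurements.
# 2x2 contingency table of resolved / not-resolved counts for two
# explanation levels being compared post hoc for one failure type.
table = [[9, 4],
         [2, 7]]
_, p_fisher = stats.fisher_exact(table)

# Failure resolution times (seconds) per explanation level for one failure type.
t_low = [12.1, 9.8, 15.3, 11.0]
t_mid = [20.4, 18.9, 22.0, 19.5]
t_high = [8.2, 7.5, 9.9, 8.8]
h_stat, p_kw = stats.kruskal(t_low, t_mid, t_high)

print(f"Fisher's exact p = {p_fisher:.4f}; Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.4f}")
```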
Regarding the failure resolution times in round 3, no significant difference was observed for pick and carry failures between the C1, D1, and D2 conditions. However, for place failures, the Kruskal-Wallis chi-squared test indicated that there was an overall difference in the resolution times between the three conditions \(H(2)=2.47,p=0.2903\). The pairwise comparison confirmed that this difference was significant between the C1 and D2 conditions \(H(2)=2.47,p=0.0386\). Furthermore, we explored the data in round 4, where the explanation level for condition D1 was reduced to the baseline, or none. As shown in Fig. 6, the performances for the D1 condition decreased for all failure types, with a significant difference for place failure cases. By only looking at the performances for place failures in condition D1, we observe that after three rounds of interaction, participants were still not ready to resolve the failures without any explanation.
Fig. 5: Performance in terms of success rate and resolution time for round 3
Fig. 6: Performance in terms of success rate and resolution time for round 4
Furthermore, the explanation satisfaction ratings for rounds 3 and 4 are presented in Fig. 6(b) and 6(c), and show no significant difference between the discussed conditions. **Results for _H2a_ and _H2b_:** Overall, based on the performance and satisfaction results in the last rounds, we reject _H2a_. However, we can accept _H2b_, showing that participants in the last rounds of the decaying explanation conditions, i.e. D1 and D2, showed performances comparable to the fixed-high explanation condition, i.e. C3. ## VI Discussion ### _Impact of Explanation Level_ We observed that there is a significant effect of explanation level on the participants' performance. Participants showed an overall higher success rate in resolving failures when given context-based high-level explanations with the history of past successful actions, which was also evident from the shorter time taken to resolve the failures and complete the task. This is aligned with the results from [4], where participants watched videos of the failures and respective explanations, and their performance was evaluated based on success in identifying the cause of the failure and its resolution. Nevertheless, we noticed that the results are not generalizable across different types of failures. For example, participants showed above \(80\%\) success rates in resolving _pick failures_ irrespective of the given level of explanation. One reason lies in the nature of the failure to pick an object, which, regardless of its cause, e.g. size, shape, or slippery edges, can be easily detected by a collaborator. On the other hand, the performances in resolving carry and place failures exhibited some significant differences based on explanation level. We identify that in _carry failure_ cases, the cause was not explicit, i.e. the object weight was beyond the robot arm's limit. However, in contrast to our expectations, participants in _Mid-level_ conditions had the worst performances compared to the _Low_ and _High-levels_ (Fig. 3(a), 3(b)). This finding contributes to the argument that giving additional information without pointing to a cause or resolution can hinder human performance, which is also aligned with Thagard's theory of explanatory coherence [23], according to which people prefer simpler explanations with fewer causes and more general explanations. In _place failure_ cases, we observed significant performance improvement with increasing explanation (Fig. 3(c), 3(d)).
Several factors could contribute to this, including the harder detection of the resolution without receiving the appropriate explanation. It is plausible that participants understood the robot's failure to place the object on the shelf; however, they missed the exact reason, i.e., the inaccessibility of the lower shelf, and managed to place the object on the upper shelf, which was not the goal. Overall, our findings guide us to further investigate factors such as _failure type_, with respect to its severity, and _information availability_ as critical factors in estimating the need for an explanation and generating the appropriate explanation upon failure. While the literature on robot failures and trust evaluation considers failure severity to be an important factor that influences trust [24], we further observe that situational awareness [25] and information availability play an important role too. As a result, we conclude that: 1) if people can understand the failure and its resolution from the onset of failure, their performance is not influenced by the amount of provided explanations, and 2) more explanation does not automatically lead to better performance. ### _Impact of Explanation Strategy_ To understand how different explanation strategies performed, we analyzed participants' performances in later rounds, i.e., 3 and 4. In round 3, conditions C1, D1, and D2 had low-level explanations with different prior explanation levels. We observed that for _carry failures_, performances were not significantly impacted by the explanation strategy. At this point, participants were already familiar with the cause of this type of failure and its resolution, and given their quite high success rate in the first round, they just kept improving. On the other hand, we noticed that for _place failures_, where the _High-level_ explanation was crucial to understanding the resolution, having a prior High-level explanation in conditions D1 and D2 improved the success rate. Consequently, the same improvement was observed in completing the task in a shorter time, which was significant between conditions C1 and D2. Overall, we conclude that in a repeated interaction scenario, a user responds better to a low level of explanation after being exposed to a higher level of explanation in prior rounds. This presents a strong justification for explanation strategies that reduce the level of explanation, which in turn reduces the overall task completion time. Considering the results in condition D1, which included a Zero-level explanation in round 4, we conclude that not only the rate at which the explanation is reduced is important, but also the baseline level to which it is reduced.
Fig. 7: Explanation satisfaction ratings for all rounds
### _Limitations and Future Work_ Due to the exploratory nature of the study, we limited the number of possible conditions via pilot testing. Nevertheless, testing 5 conditions with 55 participants restricted us from drawing firm conclusions. While we observed some trends in the satisfaction ratings, having more participants will enable us to surpass participants' personal differences. This study was the first step in identifying the variables involved in how non-expert users perceive explanations after robot failures, and the findings help us improve our understanding of robotic failures and explanation strategies. Next, we plan to isolate some of these variables to determine the optimal adaptation that leads to higher human satisfaction and performance, and
to better evaluate how humans perceive the explanation and what type of adaptation is needed. We are extending this research by analyzing the dataset of participants' behaviors recorded during the study when they encountered failures. We aim to use social cues to recognize whether participants have detected the failures [26] and to utilize that information in a closed-loop system that adapts the explanation in response to the human's reaction to the failure. Furthermore, we plan to conduct more user studies, investigating in more detail the conditions showing high variance in performance and satisfaction ratings, including a detailed comparison of conditions C1, D1, and D2. ## VII Conclusion In this work, we investigated what levels of explanation and what explanation strategies in repeated interactions help non-experts assist a robot in recovering from failures in a collaborative task. We introduced two types of explanation strategies in the context of repeated interactions, i.e., fixed and decaying, and designed a collaborative task, i.e., picking and placing objects from a table to shelves, in which we incorporated three types of commonly occurring failures in such tasks. A user study with 55 participants evaluated three variations of the fixed and two variations of the decaying strategies, with failures in four rounds of interaction. The results portrayed a bigger picture of how participants' performance in resolving the failures and their satisfaction with the robot's explanations are a function of the type of failure, the level of explanation, and the strategy. We observed that, for failures with a more explicit resolution, the level of explanation did not influence participants' performance or satisfaction. However, for failures where knowing the cause of the failure contributed to resolving it, performance in the task and satisfaction were directly impacted by the context of the explanation. With regard to explanation strategies, we noticed that, specifically for complex failures that can be resolved with a high-level explanation, we can aim for decaying strategies, which avoid repetition and reduce overall collaboration time. However, more modalities could be incorporated to decide the reduction rate, e.g., the success rate in the previous rounds. ## Acknowledgment This work was partially funded by the Digital Futures Research Center and the Vinnova Competence Center for Trustworthy Edge Computing Systems and Applications at KTH.
2309.11239
Data-Driven Analysis of Gender Fairness in the Software Engineering Academic Landscape
Gender bias in education gained considerable relevance in the literature over the years. However, while the problem of gender bias in education has been widely addressed from a student perspective, it is still not fully analysed from an academic point of view. In this work, we study the problem of gender bias in academic promotions (i.e., from Researcher to Associate Professor and from Associate to Full Professor) in the informatics (INF) and software engineering (SE) Italian communities. In particular, we first conduct a literature review to assess how the problem of gender bias in academia has been addressed so far. Next, we describe a process to collect and preprocess the INF and SE data needed to analyse gender bias in Italian academic promotions. Subsequently, we apply a formal bias metric to these data to assess the amount of bias and look at its variation over time. From the conducted analysis, we observe how the SE community presents a higher bias in promotions to Associate Professors and a smaller bias in promotions to Full Professors compared to the overall INF community.
Giordano d'Aloisio, Andrea D'Angelo, Francesca Marzi, Diana Di Marco, Giovanni Stilo, Antinisca Di Marco
2023-09-20T12:04:56Z
http://arxiv.org/abs/2309.11239v1
# Data-Driven Analysis of Gender Fairness in the Software Engineering Academic Landscape ###### Abstract Gender bias in education gained considerable relevance in the literature over the years. However, while the problem of gender bias in education has been widely addressed from a student perspective, it is still not fully analysed from an academic point of view. In this work, we study the problem of gender bias in academic promotions (i.e., from Researcher to Associate Professor and from Associate to Full Professor) in the informatics (INF) and software engineering (SE) Italian communities. In particular, we first conduct a literature review to assess how the problem of gender bias in academia has been addressed so far. Next, we describe a process to collect and preprocess the INF and SE data needed to analyse gender bias in Italian academic promotions. Subsequently, we apply a formal bias metric to these data to assess the amount of bias and look at its variation over time. From the conducted analysis, we observe how the SE community presents a higher bias in promotions to Associate Professors and a smaller bias in promotions to Full Professors compared to the overall INF community. Keywords: Gender bias, Academia, Italy, Informatics, Software Engineering. Footnote 1: This work is partially supported by European Union - NextGenerationEU - National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) - Project: “SoBigData.it - Strengthening the Italian RI for Social Mining and Big Data Analytics” - Prot. IR0000013 - Avviso n. 3264 del 28/12/2021, by “FAIR-EDU: Promote FAIRness in EDUcation institutions”, a project funded by the University of L’Aquila, 2022, and by COST Action CA19122 – European Network Balance in Informatics (EUGAIN). All the numerical simulations have been realized on the Linux HPC cluster Caliban of the High-Performance Computing Laboratory of the Department of Information Engineering, Computer Science and Mathematics (DISIM) at the University of L'Aquila. \({}^{**}\) These authors contributed equally to the paper \({}^{**}\) Corresponding Author ## 1 Introduction Nowadays, the problem of _gender bias_ has been widely considered and analysed in the literature under several contexts and domains, like health [27], justice [4], or education [23]. Concerning the latter, the problem of gender bias in education gained considerable relevance over the years, and several papers studied this issue from both a technical and sociological point of view [6, 22]. However, most works focus on gender bias in students' education, not considering other relevant contexts [5]. In this work, we analyze the issue of gender bias in education from the academic point of view, studying whether there is a gender bias in academic promotions (i.e., from Researcher to Associate Professor and from Associate to Full Professor) in the Italian academic context, for Italian informatics (INF) in general and software engineering (SE) in particular. In particular, we first perform a literature review to assess how the issue of gender bias in academia has been addressed so far. Next, we perform an empirical analysis of gender bias in academic promotions in the Italian informatics (INF) community. We first extract all the needed data from several open repositories and process them to make them suitable for the analysis. Then, by applying a formal bias metric, we show the trend of bias over the years, starting from 2018 to 2022.
Finally, we compare the overall trend with that of the software engineering (SE) Italian community alone, highlighting how the trend for the latter exhibits similar behaviour, albeit with considerably more bias in promotions from Researcher to Associate Professor and less bias in promotions from Associate to Full Professor, compared to the overall INF community. Hence, the main contributions of this work are the following: * We perform a literature review of the most relevant papers addressing the issue of gender bias in academia by also highlighting the main weaknesses of the current approaches (Section 2); * We describe a process to collect and preprocess data useful to assess the amount of gender bias in academic promotions in Italy (Section 3); * We depict the trend of gender bias in academic promotions in Italy over the years by relying on a formal bias metric, and we compare the trend of bias of the overall INF Italian community with the sole SE Italian community (Section 4). The paper concludes in Section 5, which describes some future works and wraps up the paper. ## 2 Gender Bias in Classic Academic Systems This section describes the literature review process, focused on those works that address the problem of gender bias in academia. The search process involved searching conference proceedings and journal papers on Google Scholar by relying on the search string shown in Listing 1.1.
```
allintitle: gender bias OR academic recruitment OR gender discrimination OR
women's faculty recruitment OR faculty equity OR career advancements OR
Italian universities OR selection processes
```
Listing 1.1: Search string
Among the results, we selected papers that studied and analysed gender bias in the context of Italian educational systems. Papers discussing practices and techniques utilised in foreign universities were also included to gain a broader perspective and compare different approaches and methods. We mainly focus on works related to the recruitment, promotion and productivity level of academic staff, i.e., full professors, associate professors and researchers. Articles about specific faculties or that address the gender bias problem in the general working world are excluded. This process yields 21 papers that have been carefully analysed to highlight these main features: the _context_ (i.e., the country where the study was conducted), the _process_ (i.e., recruitment, promotions or productivity) in which the gender bias has been studied, whether the data used are _public_ or not, the _analytical method_ employed (i.e., whether descriptive or inferential statistics are used to analyze the data), and the _year_ of the paper. Table 1 summarises such features for each paper. Note that papers with the same features have been grouped in the same row. Concerning the context, most of the papers focus on specific countries, while the rest of them are generic and unrelated to a particular academic system. In the table, we use the official national abbreviation to specify each country, while papers with unspecified countries are labeled with _UNK_. Concerning the process, most papers address the problem of gender bias either in _recruitment_ or _promotions_, while only two papers (i.e., [16, 3]) address the issue of gender bias in _productivity_. Gender bias in recruitment is mainly addressed by providing recommendations, practices, and strategies to minimize the impact of bias and reach gender equity in the recruitment process.
Instead, the problem of gender bias in academic promotions is mainly addressed by estimating the probability of promotion by looking at the number of female and male academicians across different career stages or focusing on women in university leadership. Finally, the problem of gender bias in productivity is addressed by investigating the causes that lead to lower productivity by women. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Paper** & **Context** & **Process** & **Source Data** & **Analytical Method** & **Year** \\ \hline **[**32**]** & AU & Prom. & Priv. & Descr. & 2000 \\ \hline **[**33**]** & U.K. & Prom. & Priv. & Inf. & 2001 \\ \hline **[**31**]** & U.S. & Recr./Prom. & Priv. & Inf./Descr. & 2002 \\ \hline **[**16**]** & U.S. & Prod. & Priv. & Inf./Descr. & 2005 \\ \hline **[**7**]** & NL & Recr. & Priv. & Descr. & 2006 \\ \hline **[**2**]** & IT & Prod. & Pub. & Descr. & 2009 \\ \hline **[**18**]** & _UNK_ & Resr./Prom. & Priv. & Inf./Descr. & 2010 \\ \hline **[**24**]** & IT & Prom. & Pub. & Inf./Descr. & 2011 \\ \hline **[**3, 25**]** & IT & Recr. & Pub. & Inf./Descr. & 2016, 2019 \\ \hline **[**12, 15**]** & IT & Prom. & Pub. & Descr. & 2017, 2021 \\ \hline **[**21**]** & IT & Prom. & Pub. & Inf. & 2018 \\ \hline **[**9, 28**]** & U.S. & Recr./Prom & Priv. & Descr. & 2020, 2019 \\ \hline **[**30**]** & U.S. & Recr. & Priv. & Descr. & 2019 \\ \hline **[**17**]** & IT & Recr./Priv. & Pub. & Descr. & 2020 \\ \hline **[**8**]** & IT & Prom. & Priv. & Inf./Descr. & 2021 \\ \hline **[**10**]** & IS,NO,SE & Recr. & Priv. & Inf. & 2021 \\ \hline **[**19**]** & DE,AT,CH & Prom. & Priv. & Inf./Descr. & 2022 \\ \hline **[**20**]** & _UNK_ & Prom. & Priv. & Descr. & 2022 \\ \hline \end{tabular} \end{table} Table 1: Summary of the Literature Review. Concerning the source data, public data comes mainly from institutional repositories like the _Ministero dell'Universita e della Ricerca (MIUR)_ (i.e., the Italian Ministry of University and Research) and the National Scientific Qualification website (for Italian works)[26, 11]. Private data were instead collected through different methods, for instance interviews [2, 21], questionnaires [3] and compilation of surveys [7, 8, 30, 30, 32]. Other papers collected data directly from internal private university databases. Concerning the analytical methods, papers using classical descriptive analysis typically measure the percentages of males and females across career stages and institutions, means, standard deviations or comparisons using t-tests between men and women. In addition to these indicators, cross-tables [18], frequency distributions and segregation indexes [17] were used. Papers that perform inferential statistical analysis use different regressions methods, such as ordinary least squares regressions, multiple logistic regressions, and multilevel logistic regressions. Works like [25] use quantitative analysis with the glass ceiling index and the glass door index to measure and compare the effects of gender practices, and [33] relies on a static discrete-choice model for rank attainment. From this review of the existing literature, it is clear how there is an interest in analysing the issue of gender bias in academia. However, some examined works are old, and the reported conclusions may be outdated. 
Moreover, we have seen a lack of analyses using formal metrics to measure bias, and none of the reported papers analyses the issue of gender bias in academic promotions inside the informatics community (and thereby software engineering). In this paper, we aim to overcome these lacks by formally analysing gender bias in academic promotions in the informatics (and software engineering) Italian communities. ## 3 Analysis Description This section presents the analysis conducted to evaluate the level of gender bias in the academic positions within the overall informatics (INF) and software engineering (SE) Italian communities. The informatics community is the conjunction of Areas 1 and 9 of the MIUR scientific areas classification. [1] We first report the dataset creation and filtering procedure. Next, we describe the performed experiment. ### Data Collection and Filtering Figure 1 reports the full data collection and filtering pipeline used to collect the datasets of the INF and SE Italian communities for our analysis1. In the figure, we report the different sources (Italian and international) where we gathered the needed information. Footnote 1: The source code will be released in the camera-ready version of the paper The first step of the pipeline is the dataset collection and aggregation. Specifically, data was gathered between 2015 and 2022 with the aim of identifying the following information: **Personal Data:** i.e., information such as age and gender. These data have been gathered from the MIUR website, which contains all the information about people employed in the Italian academia [26]. **Academic Career:** i.e., information such as the university and department of affiliation, career advancements, academic seniority, macro disciplinary area, scientific sub-sector they belong to, area of expertise, current academic appointment, academics managerial appointments, teaching activities, funded projects, committees, salaries, and sabbatical period. These data have been gathered from the MIUR and National Scientific Qualification (ASN) websites [11]. **Scientific Productivity:** i.e., information such as the list of publications, the total number of papers, total citations, the h-index, publication range, papers per year, citations per year, publication types, journal metrics, and research area. These data have been scraped from Scopus [13] and Google Scholar [29]. Note that not all the reported information is used in the following analysis, but we choose to gather them for future works. The data have then been aggregated into a single dataset \(D^{\prime}\) using the _name_, _surname_, _email_, and _affiliation_ as join keys. This aggregated dataset \(D^{\prime}\) was then thoroughly anonymized to protect the University employees' privacy. As a result, no references to names, surnames, or other sensitive or personal data are stored, as they are neither relevant nor valuable for computing bias metrics. This collected dataset can not be publicly released for legal reasons, however, it can be recreated by gathering the same data from the sources mentioned above. Starting from the anonymized dataset \(D^{\prime}\), we performed a set of filtering operations to obtain the final datasets that we used to compute bias metrics yearly. The filtering procedure is depicted in Figure 2. Since we are interested in the evolution of bias in academic promotions year by year, the anonymized dataset \(D^{\prime}\) was split according to a sliding time window of fixed size. 
In particular, we considered a sliding window of three years, starting from 2015. Hence, to gather metrics for 2019, with the sliding window size set to 3, we would slice \(D^{\prime}\) to obtain only the columns referencing data collected from 2016 to 2019. After this operation, we obtain a partially filtered dataset \(D^{\prime\prime}\) for each sliding window.
Figure 1: Data collection and filtering pipeline
The subsequent step was selecting only specific scientific areas from \(D^{\prime\prime}\). Because different domains have different promotion criteria, it would be incorrect to consider them all together. Our study only focused on Areas 1 and 9 of the MIUR scientific areas classification, which refer broadly to science, technology, engineering, and mathematics [1]. In this study, we refer to the conjunction of these two areas as the Informatics community. From this further filtering, we obtain a dataset \(D^{\prime\prime\prime}\). From \(D^{\prime\prime\prime}\), we perform two different branches of operations. In the first branch, \(D^{\prime\prime\prime}\) is split into two versions: one without records representing researchers (\(INF_{AF}\)) and one without Full Professors (\(INF_{RA}\)). In the second phase, \(D^{\prime\prime\prime}\) is refined by selecting individuals who work specifically in the SE field. To achieve this, we use Google Scholar to find individuals who have expressed interest in _software engineering_ or related topics such as _software architecture_, _model-driven engineering_, _software quality_, and _software testing_. The SE dataset is then divided into two sub-datasets as done above: one consisting of only researchers and associate professors (\(SE_{RA}\)), and the other consisting of only associate and full professors (\(SE_{AF}\)). As a result of the data pre-processing pipeline, four distinct datasets were created. Two of them are for the overall Italian INF community (\(INF_{RA}\) and \(INF_{AF}\)), while the other two are for the Italian SE community (\(SE_{RA}\) and \(SE_{AF}\)). Finally, we only preserved data for workers employed at an Italian university for the entire time window.
Figure 2: Filtering pipeline of the dataset.
### Analysis Setting Once the final yearly datasets \(INF_{RA}\), \(INF_{AF}\), \(SE_{RA}\), and \(SE_{AF}\) have been constructed, the experiments can be performed. As already mentioned, the experiment aims to measure the amount of gender bias in academic promotions and analyze its variation over the years. To calculate the amount of bias, we use the _Disparate Impact (DI)_ metric [14]. This metric compares the probability of having a _positive outcome_ for the _unprivileged_ and _privileged_ groups and is defined formally as: \[DI=\frac{P(Y=y_{p}|X=x_{unpriv})}{P(Y=y_{p}|X=x_{priv})} \tag{1}\] where \(Y\) is the label, \(y_{p}\) is the positive outcome, \(X\) is the sensitive variable, and \(x_{unpriv}\) and \(x_{priv}\) are the values identifying the unprivileged and privileged groups, respectively. The closer this metric is to one, the fairer the dataset. In our context, the label assigned to a person represents their position for that particular year. In the analysis between Researchers and Associate Professors, the positive label is _Associate Professor_, while in the analysis between Associate and Full Professors, it is _Full Professor_. The sensitive variable is _gender_, where _men_ and _women_ are the privileged and unprivileged groups, respectively.
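To make the metric concrete, the following is a minimal sketch of how the Disparate Impact of Eq. (1) could be computed for one yearly slice of one dataset. The column names (`gender`, `position`) and the example values are our own illustrative assumptions and do not reflect the schema of the actual anonymized dataset.

```python
# A minimal sketch (illustrative column names, not the actual dataset schema):
# compute the Disparate Impact of Eq. (1) for one yearly slice, e.g. the
# Researcher vs. Associate Professor analysis where the positive label is
# "Associate Professor" and women form the unprivileged group.
import pandas as pd

def disparate_impact(df: pd.DataFrame, positive_label: str) -> float:
    """DI = P(positive | unprivileged) / P(positive | privileged)."""
    women = df[df["gender"] == "F"]
    men = df[df["gender"] == "M"]
    p_unpriv = (women["position"] == positive_label).mean()
    p_priv = (men["position"] == positive_label).mean()
    return p_unpriv / p_priv

# Hypothetical yearly slice containing only Researchers and Associate Professors.
inf_ra_2019 = pd.DataFrame(
    {
        "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
        "position": [
            "Researcher", "Associate Professor", "Researcher",
            "Associate Professor", "Associate Professor", "Researcher",
            "Associate Professor", "Researcher",
        ],
    }
)

print(disparate_impact(inf_ra_2019, "Associate Professor"))  # closer to 1 => fairer
```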
Hence, the experiment is performed as follows: for each final yearly dataset (\(INF_{RA}\), \(INF_{AF}\), \(SE_{RA}\), and \(SE_{AF}\)) and for each year in the considered range (2018-2022), we compute the DI between the two subsets contained in the dataset (either Researchers and Associate Professors or Associate Professors and Full Professors). We also compute the cardinality of each subset per year. ## 4 Experimental Results In this section, we present and discuss the Experimental Results. Figure 3 shows the Disparate Impact (DI) (left y-axis) and set cardinalities (right y-axis) for each of the datasets above (\(INF_{RA}\), \(INF_{AF}\), \(SE_{RA}\), and \(SE_{AF}\)) on a yearly basis in the reference period (2018-2022). In the figure, the charts on the left side show results for the Informatics (INF) Community datasets (\(INF_{RA}\), \(INF_{AF}\)), while the ones on the right side show results for the Software Engineering (SE) Community (\(SE_{RA}\), \(SE_{AF}\)). Concerning the full set cardinalities (i.e., of both men and women), they exhibit the same trend across all datasets. Since we only consider people that were in the Italian academic system for the entire reference period, we do not consider researchers that were acquired later than 2018, so their cardinality is bound to decrease. The number of Full professors is rising in both the INF and SE communities, but the increase in the SE community is significantly larger. In 2022, there are more Full professors than Associate professors specifically in the SE subset. This suggests that promotions to Full professorship are occurring at a higher rate among professors in the field of SE compared to the INF community. Concerning the gender bias in promotions to Associate Professor (\(INF_{RA}\) and \(SE_{RA}\) in the figure), in both the Informatics and Software Engineering communities the trend of Disparate Impact (DI) appears to be on an upward trajectory. However, the SE community seems to suffer from a higher bias w.r.t. the overall INF community. The DI for the SE community starts from a value of 0.75 in 2018 to a value of 0.8 in 2022. In contrast, the DI of the INF community starts from a value of 0.9 in 2018 to a value of almost 1 in 2022, meaning a nearly complete absence of bias in academic promotions. In general, we observe how the amount of bias in the SE community is about 20% higher than in the overall INF community. In contrast, concerning bias in promotions to Full Professors (\(INF_{AF}\) and \(SE_{AF}\) in the figure), the SE community exhibits a much lower bias concerning the INF community. DI for the SE community starts from 0.7 in 2018, then reaches a peak of 0.95 in 2020, to a final value of almost 0.8 in 2022. This downtrend from 2020 to 2022 can be partially explained by the small set cardinality, which makes the DI more sensitive to small changes (i.e., additions or deletions) in the groups. Instead, the DI for the overall INF community presents a slight increase over the period, starting from a value of 0.63 in 2018 to a value of 0.65 in 2022. In this case, the amount of bias in the INF community ranges from 15 to 35% greater than in the SE community throughout the observed period. ## 5 Conclusion and Future Work In this paper, we have studied the issue of gender bias in academic promotions. First, we performed a literature review to observe how the literature has addressed this issue so far. 
Then, we formally analyzed gender bias in academic promotions in the informatics (INF) and software engineering (SE) Italian communities. From the analysis, we observed that gender bias has been decreasing over the years in both communities, even though the SE community has a higher trend in promoting professors from Associate to Full compared to the broader INF community. In the future, we plan to extend this analysis to other countries by identifying valuable data sources to retrieve all the needed information. Next, we plan to analyze the behaviour of a Machine Learning classifier trained on such data to predict the position of a person. In particular, we want to study how a classifier is subject to learning a possible gender bias in the data and how we can mitigate it by relying on proper fairness methods.
Figure 3: Year-by-year Disparate Impact and Set Cardinality for the Informatics Community (left column) and Software Engineering Community (right column).
2309.06745
VEATIC: Video-based Emotion and Affect Tracking in Context Dataset
Human affect recognition has been a significant topic in psychophysics and computer vision. However, the currently published datasets have many limitations. For example, most datasets contain frames that contain only information about facial expressions. Due to the limitations of previous datasets, it is very hard to either understand the mechanisms for affect recognition of humans or generalize well on common cases for computer vision models trained on those datasets. In this work, we introduce a brand new large dataset, the Video-based Emotion and Affect Tracking in Context Dataset (VEATIC), that can conquer the limitations of the previous datasets. VEATIC has 124 video clips from Hollywood movies, documentaries, and home videos with continuous valence and arousal ratings of each frame via real-time annotation. Along with the dataset, we propose a new computer vision task to infer the affect of the selected character via both context and character information in each video frame. Additionally, we propose a simple model to benchmark this new computer vision task. We also compare the performance of the pretrained model using our dataset with other similar datasets. Experiments show the competing results of our pretrained model via VEATIC, indicating the generalizability of VEATIC. Our dataset is available at https://veatic.github.io.
Zhihang Ren, Jefferson Ortega, Yifan Wang, Zhimin Chen, Yunhui Guo, Stella X. Yu, David Whitney
2023-09-13T06:31:35Z
http://arxiv.org/abs/2309.06745v3
# Veatic: Video-based Emotion and Affect Tracking in Context Dataset ###### Abstract Human affect recognition has been a significant topic in psychophysics and computer vision. However, the currently published datasets have many limitations. For example, most datasets contain frames that contain only information about facial expressions. Due to the limitations of previous datasets, it is very hard to either understand the mechanisms for affect recognition of humans or generalize well on common cases for computer vision models trained on those datasets. In this work, we introduce a brand new large dataset, the Video-based Emotion and Affect Tracking in Context Dataset (**VeatIC**), that can conquer the limitations of the previous datasets. VeatIC has \(124\) video clips from Hollywood movies, documentaries, and home videos with continuous valence and arousal ratings of each frame via real-time annotation. Along with the dataset, we propose a new computer vision task to infer the affect of the selected character via both context and character information in each video frame. Additionally, we propose a simple model to benchmark this new computer vision task. We also compare the performance of the pretrained model using our dataset with other similar datasets. Experiments show the competing results of our pretrained model via VeatIC, indicating the generalizability of VeatIC. Our dataset is available at [https://veatic.github.io](https://veatic.github.io). ## 1 Introduction Recognizing human affect is of vital importance in our daily life. We can infer people's feelings and predict their subsequent reactions based on their facial expressions, interactions with other people, and the context of the scene. It is an invaluable part of our communication. Thus, many studies are devoted to understanding the mechanism of affect recognition. With the emergence of Artificial Intelligence (AI), many studies have also proposed algorithms to automatically perceive and interpret human affect, with the potential implication that systems like robots and virtual humans may interact with people in a naturalistic way. When tasked with emotion recognition in the real world, humans have access to much more information than just facial expressions. Despite this, many studies that investigate emotion recognition often use static stimuli of facial expressions that are isolated from context, especially in assessments of psychological disorders [3, 18] and in computer vision models [60, 62]. Additionally, while previous studies continue to investigate the process by which humans perceive emotion, many of these studies fail to probe how emotion recognition is influenced by contextual factors like the visual scene, background information, body movements, other faces, and even our beliefs, desires, and conceptual processing [4, 34, 8, 42, 44]. Interestingly, visual contextual information has been found to be automatically and effortlessly integrated with facial expres Figure 1: Importance of context in emotion recognition. How does she feel? Look at the woman in picture (a). If you had to guess her emotion, you might say that she is sad or in grief. However, picture (b) reveals the context of the scene allowing us to correctly observe that she is very happy or excited.
2308.16657
Nature of the mixed-parity pairing of attractive fermions with spin-orbit coupling in optical lattice
The admixture of spin-singlet and spin-triplet pairing states in superconductors can be typically induced by breaking spatial inversion symmetry. Employing the {\it numerically exact} auxiliary-field Quantum Monte Carlo method, we study such mixed-parity pairing phenomena of attractive fermions with Rashba spin-orbit coupling (SOC) in two-dimensional optical lattice at finite temperature. We systematically demystify the evolution of the essential pairing structure in both singlet and triplet channels versus the temperature, fermion filling, SOC and interaction strengths, via computing the condensate fraction and pair wave function. Our numerical results reveal that the singlet channel dominates in the fermion pairing and the triplet pairing has relatively small contribution to the superfluidity for physically relevant parameters. In contrast to the singlet channel mainly consisting of the on-site Cooper pairs, the triplet pairing has plentiful patterns in real space with the largest contributions from several nearest neighbors. As the SOC strength increases, the pairing correlation is firstly enhanced and then suppressed for triplet pairing while it is simply weakened in the singlet channel. We have also obtained the Berezinskii-Kosterlitz-Thouless transition temperatures through the finite-size analysis of condensate fraction. Our results can serve as quantitative guide for future optical lattice experiments as well as accurate benchmarks for theories and other numerical methods.
Yu-Feng Song, Youjin Deng, Yuan-Yao He
2023-08-31T11:58:02Z
http://arxiv.org/abs/2308.16657v2
Demystify the mixed-parity pairing of attractive fermions with spin-orbit coupling in optical lattice ###### Abstract The admixture of spin-singlet and spin-triplet pairing states in superconductors can be typically induced by breaking spatial inversion symmetry. Employing the _numerically exact_ auxiliary-field Quantum Monte Carlo method, we study such mixed-parity pairing phenomena of attractive fermions with Rashba spin-orbit coupling (SOC) in two-dimensional optical lattice at finite temperature. We systematically demystify the evolution of the essential pairing structure in both singlet and triplet channels versus the temperature, fermion filling, SOC and interaction strengths, via computing the condensate fraction and pair wave function. Our numerical results reveal that the singlet channel dominates in the fermion pairing and the triplet pairing has relatively small contribution to the superfluidity for physically relevant parameters. In contrast to the singlet channel mainly consisted of the on-site Cooper pairs, the triplet pairing has plentiful patterns in real space with the largest contributions from several nearest neighbors. As the SOC strength increases, the pairing correlation is firstly enhanced and then suppressed for triplet pairing while it's simply weakened in singlet channel. We have also obtained the Berezinskii-Kosterlitz-Thouless transition temperatures through the finite-size analysis of condensate fraction. Our results can serve as quantitative guide for future optical lattice experiments as well as accurate benchmarks for theories and other numerical methods. ## I Introduction The fermion paring and corresponding superconductivity and superfluidity [1] are of great interest in condensed matter physics. The fundamental ingredient is the Cooper pair consisting of two spin-1/2 electrons [2]. Given the spatial inversion symmetry, the pair wave function can be decoupled into orbital and spin channels resulting in two states of Cooper pairs, even parity with spin-singlet and odd parity with spin-triplet [3]. Majority of known superconductors (SCs) fall into the spin-singlet case, such as simple metals [4] and high-\(T_{c}\) cuprates [5]. Nevertheless, the triplet paring has been observed or suggested to exist in far fewer realistic systems, e.g., superfluid \({}^{3}\)He [6], UPt\({}_{3}\)[7] and Sr\({}_{2}\)RuO\({}_{4}\)[7]. Without inversion symmetry, the parity conservation is broken and thus the mixing of singlet and triplet paring states can emerge [8; 9; 10; 11]. Such mixed-parity pairing state has been experimentally verified in various three-dimensional (3D) noncentrosymmetric SCs [12; 13; 14; 15; 16; 17], which induces intensive interests due to many exotic properties [10] including fertile superconducting gap structures [13; 18], anisotropic magnetic response [19; 20] and topological superconductivity [21]. The appearance of the mixed-parity pairing in noncentrosymmetric systems can be attributed to the arise of the antisymmetric spin-orbit coupling (SOC) [22], which has become one of the key elements for condensed matter physics [23]. For correlated fermion systems, SOC acts as another dimension and induces many exotic states of matter, including spintronics [24], topological phases [25] and unusual superconductivity [11]. Specifically, it typically breaks the spatial inversion symmetry and mixes the spin species, and thus renders the coexistence of spin-singlet even-parity and spin-triplet odd-parity pairing. 
Moreover, it was shown [14] that tunning the SOC strength can even change the dominant component of the mixed-parity pairing from the singlet in Li\({}_{2}\)Pd\({}_{3}\)B to the triplet in Li\({}_{2}\)Pt\({}_{3}\)B, as replacing the Pd atom by Pt atom. To date, most of the study for the SOC induced singlet-triplet mixed pairing phenomena concentrates on the 3D systems including the noncentrosymmetric SCs [11] and interacting Fermi gas [26; 27; 28]. In physically more relevant two-dimensional (2D) systems, the interplay between the reduced dimensionality and enhanced quantum fluctuations can induce fascinating and unique quantum phenomena [29; 30; 31]. A typical representative is the Berezinskii-Kosterlitz-Thouless (BKT) transition [32; 33; 34; 35] of superconductivity and superfluidity. Similar to the 3D analog, inclusion of SOC to 2D attractive fermions can also induce mixed-pairing pairing [8], which has been relatively much less studied. Experimentally, the recently elegant realization of synthetic SOC for fermions [36; 37] with ultracold atoms, especially in 2D optical lattice [38; 39; 40], substantively paves the way for exploring novel quantum phenomena related with SOC. Thus, a systematically theoretical study with high precision on the mixed-pairity pairing in 2D is highly demanded to shed light on problems closely related to ultracold atom experiments. For example, finding the best condition to observe the spin-triplet pairing in optical lattice, in comparison to the achieved singlet pairing [41], should be a useful guide for experiments. To date, most theoretical work on 2D attractive fermions with SOC falls into the Fermi gas and approximate theories [42; 43; 44; 45; 46]. Interesting results such as singlet and triplet contributions to the condensation [42] are presented in these studies, but still need careful verifications from unbiased approaches. Nevertheless, numerically exact calculations for such systems are rare. Auxiliary-field Quantum Monte Carlo (AFQMC) simulations have been performed for the ground state of 2D Fermi gas [47], and for the lattice system at finite temperatures [48] as well as its ground state [49]. The authors in Ref. [48] focused on the properties of BKT transition temperatures and anisotropic spin susceptibility without touching the pairing structure, which were limited to 12\(\times\)12 finite lattices. The pairing structure were discussed in Ref. [49] only for the half-filling case, for which the BKT transition disappears and thus it was of less interest to experiments. In this paper, we study the mixed-parity pairing of attractive fermions with Rashba SOC in 2D optical lattice, applying finite-temperature AFQMC algorithm [50; 51; 52; 53]. We mainly concentrate on the condensate fraction and pair wave functions to demystify the pairing structures of both singlet and triplet channels for physically relevant regimes of the temperature, fermion filling, SOC and interaction strengths. We also present the determination of BKT transition temperature from condensate fractions. The rest of the paper is organized as follows. In Sec. II, we introduce the lattice model that we use to describe the 2D attractive fermions with Rashba SOC in optical lattice, and the AFQMC method. In Sec. III, we present our numerical results, including the pairing structures, the pairing correlations and calculations of the BKT transition temperature. Finally, Sec. IV summarizes this work, and discusses its connections with the optical lattice experiments. 
## II Model and method We describe the 2D attractive fermions with Rashba SOC in optical lattice using the following square lattice model Hamiltonian [48; 49] as \[\begin{split}\hat{H}=&\sum_{\mathbf{k}\sigma} \varepsilon_{\mathbf{k}}c^{\dagger}_{\mathbf{k}\sigma}c_{\mathbf{k}\sigma}+ \sum_{\mathbf{k}}(\mathcal{L}_{\mathbf{k}}c^{\dagger}_{\mathbf{k}\downarrow}c _{\mathbf{k}\uparrow}+\text{H.c.})\\ &+U\sum_{\mathbf{i}}\left(\hat{n}_{\mathbf{i}\uparrow}\hat{n}_{ \mathbf{i}\downarrow}-\frac{\hat{n}_{\mathbf{i}\uparrow}+\hat{n}_{\mathbf{i} \downarrow}}{2}\right)+\mu\sum_{\mathbf{i}\sigma}\hat{n}_{\mathbf{i}\sigma}, \end{split} \tag{1}\] with \(\varepsilon_{\mathbf{k}}=-2t(\cos k_{x}+\cos k_{y})\), \(\mathcal{L}_{\mathbf{k}}=2\lambda(\sin k_{y}-i\sin k_{x})\), and \(\hat{n}_{\mathbf{i}\sigma}=c^{\dagger}_{\mathbf{i}\sigma}c_{\mathbf{i}\sigma}\) representing the density operator with spin \(\sigma=\uparrow,\downarrow\) on the lattice site \(\mathbf{i}=(i_{x},i_{y})\). The momentum \(k_{x}\) and \(k_{y}\) are defined in units of \(2\pi/L\) with the system size \(N_{s}=L^{2}\). We denote the fermion filling as \(n=N/N_{s}\) with \(N\) as the total number of fermions in the system. The nearest-neighbor hopping \(t\), on-site Coulomb interaction \(U\) (\(<0\)), the SOC strength \(\lambda\), and chemical potential \(\mu\) are model parameters. Within the above formulation, the system is at half filling with \(n=1\) for \(\mu=0\) due to the particle-hole symmetry [48], and it is hole doped for \(\mu>0\). Throughout this work, we set \(t\) as the energy scale, and we focus mostly on the doped systems with the fermion filling \(n<1\). The previous study [49] showed that the model in Eq. (1) has a supersolid ground state with coexisting charge and superfluid long-range orders at half filling. Away from this special point, the superfluidity survives for arbitrary filling with arbitrary interaction strength [47]. Since the SOC term breaks the spin SU(2) symmetry and results in two helical bands for noninteracting case, the corresponding superfluid state with interaction is composed of both spin-singlet and triplet Cooper pairs, whose pairing properties are the main content of this work. We then apply the finite-temperature AFQMC algorithm [50; 51; 52; 53] to numerically solve the lattice model in Eq. (1). It is free of fermion sign problem at arbitrary filling due to the time-reversal symmetry [54]. The scheme of the AFQMC method is first to decouple the two-body interactions into free fermions coupled with auxiliary fields and then to calculate the fermionic observables through importance sampling of the field configurations. Practically, the imaginary-time discretization of the inverse temperature as \(\beta=M\Delta\tau\), the symmetric Trotter-Suzuki decomposition \(e^{-\Delta\tau\hat{H}}=e^{-\Delta\tau\hat{H}_{0}/2}e^{-\Delta\tau\hat{H}_{I}} e^{-\Delta\tau\hat{H}_{0}/2}+\mathcal{O}[(\Delta\tau)^{3}]\) (with \(\hat{H}_{0}\) and \(\hat{H}_{I}\) as the free and interaction parts of the Hamiltonian), and the Hubbard-Stratonovich (HS) transformation are successively implemented. The discrete HS transformation with the spin-\(\hat{s}_{z}\) decomposition rather than the usual charge channel [51] is adopted for the attractive \(U\) interaction to suppress the fluctuations of pairing related observables. 
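As a quick illustration of the helical bands produced by the noninteracting part of Eq. (1), the short sketch below diagonalizes the 2x2 Bloch Hamiltonian on a discrete momentum grid. It is a minimal numerical aid written for this discussion, not part of the AFQMC implementation, and the grid size and parameter values are arbitrary choices.

```python
# Minimal sketch (not the AFQMC code): helical bands of the noninteracting part
# of Eq. (1), with epsilon_k = -2t(cos kx + cos ky) and L_k = 2*lambda*(sin ky - i sin kx),
# obtained by diagonalizing the 2x2 Bloch Hamiltonian on an L x L momentum grid.
import numpy as np

def helical_bands(L=20, t=1.0, lam=0.5, mu=0.5):
    ks = 2.0 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
    soc = 2.0 * lam * (np.sin(ky) - 1j * np.sin(kx))
    # Eigenvalues of [[eps + mu, conj(soc)], [soc, eps + mu]] are eps + mu -/+ |soc|.
    lower = eps + mu - np.abs(soc)
    upper = eps + mu + np.abs(soc)
    return lower, upper

lower, upper = helical_bands()
print("lower helical band range:", lower.min(), lower.max())
print("upper helical band range:", upper.min(), upper.max())
```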
Other algorithmic advances and techniques applied here include the Fast Fourier Transform (FFT) between the real and momentum space [53], the delayed version of local update [55], and the \(\tau\)-line type of global update [56], which together improve the efficiency of the numerical simulations. For further details of the AFQMC algorithm, we refer to the reviews in Ref. [57; 58]. ## III Numerical results In this section, we present the AFQMC simulation results of the lattice model in Eq. (1), including the pairing structure, the pairing correlation functions and the BKT transition. Our AFQMC calculations reach the linear system size \(L=20\) with the temperature as low as \(T/t=0.025\) to sufficiently access the superfluidity (quasi-long-range ordered or quasi-condensate) regime. We mainly concentrate on the pairing properties away from half filling, for which the charge density wave does not have long-range order (see details in Appendix A). The parameter \(\Delta\tau t=0.05\) is chosen mostly in this work, which has been tested to safely eliminate the Trotter error, except for the strong interactions where a smaller \(\Delta\tau\) is applied. Periodic boundary conditions in both directions are applied for all the calculations. ### Condensate fractions and pair wave functions The contributions of the spin-singlet and triplet channels to the fermion pairing can be quantified by the corresponding condensate fractions [42]. On the other hand, properties of the Cooper pairs, including their sizes and the fermion momentum, can be obtained from the pair wave functions [47, 49]. The computation of these quantities involves the following pairing matrix in momentum space [47, 49] as \[M(\mathbf{k},\ell;\mathbf{k}^{\prime},\ell^{\prime})=\langle\Delta_{\ell}^{\dagger}(\mathbf{k})\Delta_{\ell^{\prime}}(\mathbf{k}^{\prime})\rangle, \tag{2}\] with \(\ell=s\) or \(t_{\uparrow}\) or \(t_{\downarrow}\), and \(\Delta_{\ell}^{\dagger}(\mathbf{k})\) as spin-singlet and triplet pairing operators with _zero center-of-mass momentum_ as \[\Delta_{s}^{\dagger}(\mathbf{k})=\frac{1}{\sqrt{2}}(c_{\mathbf{k}\uparrow}^{\dagger}c_{-\mathbf{k}\downarrow}^{\dagger}-c_{\mathbf{k}\downarrow}^{\dagger}c_{-\mathbf{k}\uparrow}^{\dagger}) \tag{3}\] \[\Delta_{t_{\uparrow}}^{\dagger}(\mathbf{k})=c_{\mathbf{k}\uparrow}^{\dagger}c_{-\mathbf{k}\uparrow}^{\dagger}\qquad\Delta_{t_{\downarrow}}^{\dagger}(\mathbf{k})=c_{\mathbf{k}\downarrow}^{\dagger}c_{-\mathbf{k}\downarrow}^{\dagger}.\] Note that the third component of the triplet pairing vanishes by symmetry. In a finite system with \(N_{s}=L^{2}\) sites, the pairing matrix \(\mathbf{M}\) is a \(3N_{s}\times 3N_{s}\) matrix. Thanks to the FFT algorithm applied in our numerical calculations, we can compute the equal-time, momentum-space single-particle Green's function matrix \(\mathbf{G}=\{G_{\mathbf{k}\sigma,\mathbf{k}^{\prime}\sigma^{\prime}}=\langle c_{\mathbf{k}\sigma}c_{\mathbf{k}^{\prime}\sigma^{\prime}}^{\dagger}\rangle_{\tau}\}\), and thus directly measure the pairing matrix defined in Eq. (2) for a single auxiliary-field configuration via Wick decomposition. Then from the leading eigenvalue \(N_{c}\) of the pairing matrix, we can obtain the total condensate fraction as \(n_{c}=N_{c}/(N/2)\) [59, 60].
The corresponding eigenstate of \(N_{c}\) is the momentum-space pair wave function \(\boldsymbol{\Psi}_{c}\), which consists of the singlet and triplet components as \(\boldsymbol{\Psi}_{c}=(\boldsymbol{\Psi}_{c,s},\boldsymbol{\Psi}_{c,t_{\uparrow}},\boldsymbol{\Psi}_{c,t_{\downarrow}})^{\mathrm{T}}\), with every component as an \(N_{s}\)-dimensional vector. For the lattice model in Eq. (1), the two triplet channels are degenerate as \(\boldsymbol{\Psi}_{c,t_{\uparrow}}=\boldsymbol{\Psi}_{c,t_{\downarrow}}\) due to the spin-inversion symmetry. Thus, we define the overall triplet pair wave function as \(\boldsymbol{\Psi}_{c,t}=\sqrt{2}\boldsymbol{\Psi}_{c,t_{\uparrow}}\). Then within the normalized \(\boldsymbol{\Psi}_{c}\), we assign the condensate fractions of spin-singlet and triplet pairing as \(n_{c,s}=n_{c}\times(\boldsymbol{\Psi}_{c,s}\boldsymbol{\Psi}_{c,s}^{\mathrm{T}})\) and \(n_{c,t}=n_{c}\times(\boldsymbol{\Psi}_{c,t}\boldsymbol{\Psi}_{c,t}^{\mathrm{T}})\), respectively. Thus, the relation \(n_{c}=n_{c,s}+n_{c,t}\) obviously holds, with \(n_{c,s}/n_{c}\) and \(n_{c,t}/n_{c}\) as the contributions of the singlet and triplet channels to the pairing. The square \(|\boldsymbol{\Psi}_{c,\ell}(\mathbf{k})|^{2}\) (\(\ell=s,t\)) stands for the probability of fermions with momentum \(\mathbf{k}\) participating in the pairing. We can further obtain the corresponding real-space pair wave functions \(\psi_{c,s}(\mathbf{r})\) and \(\psi_{c,t}(\mathbf{r})\) by Fourier transform of \(\boldsymbol{\Psi}_{c,s}\) and \(\boldsymbol{\Psi}_{c,t}\). Similarly, \(|\psi_{s}(\mathbf{r})|^{2}\) and \(|\psi_{t}(\mathbf{r})|^{2}\) represent the probabilities of spin-singlet and triplet Cooper pairs with distance \(\mathbf{r}\) between the two fermions, and they actually reflect the size of the pairs. As a demonstration, we show the typical results of condensate fractions and pair wave functions for a specific group of parameters in Fig. 1.
Figure 1: Illustration of the condensate fractions and pair wave functions in momentum space. Plotted in panel (a) are the total, singlet and triplet condensate fractions versus the temperature. The inset is the corresponding fermion filling. Panel (b) presents the magnitudes of momentum-space pair wave functions \(|\Psi_{c}(\mathbf{k})|\) in both singlet and triplet channels for two temperatures \(T/t=0.025\) and \(0.30\). These calculations are performed for the \(L=20\) system with \(U/t=-4\), \(\lambda/t=0.5\) and \(\mu/t=0.5\).
With lowering temperature, both the spin-singlet and triplet condensate fractions monotonically increase from the high-temperature normal state to the low-temperature superfluid phase, and then saturate to the ground-state values, as indicated by the plateau achieved with the \(T/t\leq 0.06\) results. They also exhibit a rapid increase at a specific temperature, for which the BKT transition should be responsible (see Sec. III.4). Remarkably, the triplet channel has a rather small contribution to the total condensate fraction (less than 15% approaching \(T=0\)). As shown in Fig. 1(b), the momentum-space pair wave functions of both singlet and triplet channels have peaks around the Fermi surfaces of the two helical bands for the intermediate interaction \(U/t=-4\) at low temperature, which resembles the results without SOC [53; 61]. This is also consistent with the fundamental picture of BCS theory that the fermions around the Fermi surfaces dominate the pairing in the weakly interacting regime [2].
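The following is a minimal numerical sketch, written here for illustration, of how the total condensate fraction and its singlet/triplet decomposition described above could be extracted from a given pairing matrix \(\mathbf{M}\). The random Hermitian matrix used as input is only a stand-in for the Monte Carlo estimate of Eq. (2), not actual AFQMC data.

```python
# Minimal sketch (illustrative, not the AFQMC measurement code): given a
# 3*Ns x 3*Ns Hermitian pairing matrix M ordered as (singlet, triplet_up,
# triplet_down) blocks, extract the leading eigenpair and split the condensate
# fraction into singlet and triplet contributions, n_c = n_{c,s} + n_{c,t}.
import numpy as np

def condensate_fractions(M, n_fermions):
    ns = M.shape[0] // 3             # number of momentum points, Ns = L^2
    evals, evecs = np.linalg.eigh(M)
    N_c = evals[-1]                  # leading eigenvalue
    psi = evecs[:, -1]               # normalized momentum-space pair wave function
    n_c = N_c / (n_fermions / 2.0)   # total condensate fraction
    w_singlet = np.sum(np.abs(psi[:ns]) ** 2)
    w_triplet = np.sum(np.abs(psi[ns:]) ** 2)   # both degenerate triplet channels
    return n_c, n_c * w_singlet, n_c * w_triplet

# Stand-in for a measured pairing matrix on a small 4x4 lattice (Ns = 16).
rng = np.random.default_rng(0)
A = rng.normal(size=(48, 48)) + 1j * rng.normal(size=(48, 48))
M = A @ A.conj().T                  # Hermitian and positive semi-definite
n_c, n_cs, n_ct = condensate_fractions(M, n_fermions=12)
print(n_c, n_cs, n_ct)
```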
In contrast to these low-temperature features, the high-temperature results of the pair wave functions seem featureless, as the system is in the normal state. Note that the node at the \(\mathbf{\Gamma}\) point in the triplet pair wave function indicates its antisymmetry, while the singlet component is symmetric without a node. ### The mixed-parity pairing structure Based on the results and discussions in Sec. III.1, we then concentrate on the mixed-parity pairing structure for physically relevant parameter regimes, revealed by the numerical results of condensate fractions and pair wave functions. The tuning parameters, including the temperature, interaction strength, SOC and the chemical potential, are accounted for in our AFQMC simulations. The condensate fractions versus the temperature typically share similar behavior to the results shown in Fig. 1(a), with differences lying in the specific numbers and BKT transition temperatures. With varying interaction strength, SOC and chemical potential, we perform the AFQMC simulations with \(T/t=0.10\), at which most of the systems studied below fall into the superfluid phase and the condensate fractions are close to the corresponding \(T=0\) results [similar to Fig. 1(a)]. In Fig. 2, we present one of the key results of this work, the condensate fractions as functions of tuning parameters other than the temperature. First, with increasing on-site interaction, the system hosts the BCS-BEC crossover from extended Cooper pairs to tightly bound molecules [62]. Fig. 2(a) shows that the singlet condensate fraction simply increases during the crossover, while the triplet contribution has a peak around \(U/t=-4\) (for \(\lambda/t=0.5\)). This difference can be understood as follows. Turning on the interaction can first enhance the pairing in both channels as well as the condensate fractions. Beyond some intermediate \(U\), the interaction begins to frustrate the triplet pair formation while it continues to increase the singlet pairs, due to the nature of the attraction between fermions with unlike spins. The results suggest that the triplet contribution to the pairing is most significant [\(\sim 21\%\) as in Fig. 2(a)] in the intermediate interaction regime, whose specific value of \(U/t\) should depend on the SOC strength. These results are qualitatively consistent with those from the ground-state calculations of the 2D spin-orbit-coupled Fermi gas [47]. Then, with tuning of the SOC strength, the condensate fractions for \(U/t=-4\) are plotted in Fig. 2(b). The decrease of the singlet condensate fraction with \(\lambda/t\) can be explained by the enlarged bandwidth \(W(t,\lambda)\) [48] and the reduced effective interaction \(U/W\). However, the triplet condensate fraction is first enhanced by SOC, due to the fact that SOC is the essential source of triplet pairing in the presence of the Hubbard interaction. Then the effect from the reduced \(U/W\) sets in, and the competition results in a broad peak around \(\lambda/t=0.5\sim 1.0\). The largest contribution from the triplet channel to the pairing is \(\sim 30\%\) around \(\lambda/t=1.3\), where nevertheless the total condensate fraction is only \(0.043\). Finally, in Fig. 2(c), we show the numerical results versus the fermion filling (by tuning the chemical potential).
Figure 2: The condensate fractions \(n_{c}\), \(n_{c,s}\) and \(n_{c,t}\) versus various tuning parameters. (a) Tune interaction strength \(U/t\) with \(\lambda/t=0.5\) and \(\mu/t=0.5\); (b) tune SOC \(\lambda/t\) with \(U/t=-4\) and \(\mu/t=0.5\); (c) tune the fermion filling \(n\) by chemical potential \(\mu/t\) with \(U/t=-4\) and \(\lambda/t=0.5\). The insets in Panels (a) and (b) plot the results of the corresponding filling. These calculations are performed for the \(L=20\) system with temperature \(T/t=0.1\).
Both the singlet and total condensate fractions reach the maximum around \(n=0.80\), while the triplet one possesses a wide plateau regarding the filling. The triplet contribution saturates to its largest value \(\sim 20\%\) towards the low filling regime. As discussed above, for the simulation temperature \(T/t=0.10\), the system evolves from the normal state at half filling (with \(\mu=0\)) to the superfluid phase with increasing doping. Thus, the results in Fig. 2(c) might indicate that the maximal BKT transition temperature is achieved around the filling \(n=0.80\) [48]. Combining all the results in Fig. 2, we can conclude that the spin-singlet pairing always has the predominant contribution over the triplet channel to the mixed-parity pairing in the system. We then turn to the results of the pair wave functions. First, their evolutions versus the chemical potential in momentum space and real space are illustrated in Fig. 3 and Fig. 4, respectively. For half filling, our results are quantitatively consistent with the \(T=0\) results in Ref. [49]. Increasing the chemical potential results in a smaller fermion filling, and the corresponding noninteracting Fermi surfaces at \(T=0\) of the two helical bands (dashed lines in Fig. 3), which are determined from the corresponding fermion filling from the finite-T AFQMC calculation, shrink towards circles. It is clear that for the intermediate interaction \(U/t=-4\) the pair wave functions in both channels show sharp peaks in the vicinity of the Fermi surfaces, regardless of the filling. With increasing interaction, the results should gradually become smooth in the whole Brillouin zone (not shown) without apparent peaks [47], indicating the deviation from BCS theory. In contrast, the singlet and triplet pair wave functions in real space show a significant difference, as shown in Fig. 4. The localized peaks in the singlet pair wave function \(|\psi_{s}(\mathbf{r})|\) clearly show that the singlet pairing mainly has a local origin with on-site pairs. However, the Pauli principle prohibits such on-site triplet pair formation, resulting in a zero value at \(\mathbf{r}=\mathbf{0}\). Instead, the triplet pair wave function \(|\psi_{t}(\mathbf{r})|\) is more extended and has very rich patterns and evolutions along with decreasing fermion filling. Multi-peak structures appear in \(|\psi_{t}(\mathbf{r})|\), with the largest amplitude locations changing from the next-nearest-neighbor (NNN) sites at half filling, to the intermediate fourth-nearest-neighbor (\(4^{\text{th}}\)-NN) sites, and finally to the nearest-neighbor (NN) sites at low filling, as shown in Fig. 6(a). These finite-size triplet Cooper pairs within several NN sites can be explained by the real-space nature of the Rashba SOC, which is actually a NN spin-flip hopping. As a result, successive SOC hops can enhance the possibility of finding another fermion at neighboring sites with the same spin as the one located at the origin. Moreover, towards smaller filling, both \(|\psi_{s}(\mathbf{r})|\) and \(|\psi_{t}(\mathbf{r})|\) show more extended behaviors due to the enlarged wavelength \(\sim 2\pi/k_{F}\).
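The step that takes the momentum-space pair wave function to the real-space one used above can be sketched in a few lines. The stand-in antisymmetric component and the normalization below are our own illustrative choices, not the AFQMC data or code.

```python
# Minimal sketch (illustrative normalization, toy input): obtain the real-space
# pair wave function psi(r) from a momentum-space component Psi(k) on an L x L
# grid, and read off its amplitude at the on-site, NN and NNN separations.
import numpy as np

def to_real_space(psi_k_grid):
    """psi(r) = (1/Ns) * sum_k Psi(k) exp(i k.r), computed here via an inverse FFT."""
    psi_r = np.fft.ifft2(psi_k_grid)
    return psi_r / np.linalg.norm(psi_r)   # renormalize to compare amplitudes

L = 20
ks = 2.0 * np.pi * np.arange(L) / L
kx, ky = np.meshgrid(ks, ks, indexing="ij")
# A stand-in antisymmetric (odd-parity) triplet component, not AFQMC output.
psi_k = np.sin(kx) + 1j * np.sin(ky) + 0.5 * np.sin(kx + ky)

psi_r = to_real_space(psi_k)
print("on-site |psi(0,0)| =", abs(psi_r[0, 0]))   # vanishes for a triplet pair
print("NN      |psi(1,0)| =", abs(psi_r[1, 0]))
print("NNN     |psi(1,1)| =", abs(psi_r[1, 1]))
```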
As for the other tuning parameters, the pair wave functions in momentum space show similar behaviors to those illustrated in Fig. 3, and the real-space singlet pair wave functions \(|\psi_{s}(\mathbf{r})|\) are also dominated by the center peak, as in Fig. 4. Thus, we now concentrate on the triplet pair wave function in real space, \(|\psi_{t}(\mathbf{r})|\), shown in Fig. 5.
Figure 3: The singlet (top) and triplet (bottom) pair wave functions in momentum space, \(|\mathbf{\Psi}_{c,s}(\mathbf{k})|\) and \(|\mathbf{\Psi}_{c,t}(\mathbf{k})|\), versus chemical potential \(\mu/t\), with the corresponding fermion filling \(n\) shown on top of the plots. The error bars of \(n\) are in the fourth or fifth digits and are thus neglected. The noninteracting Fermi surfaces at \(T=0\) of the two helical bands are also plotted with the green and red dotted lines. These calculations are performed for the \(L=20\) system with \(T/t=0.10\) and \(U/t=-4\), \(\lambda/t=0.5\).
Figure 5: The triplet pair wave function in real space \(|\psi_{t}(\mathbf{r})|\) versus (top row) interaction strength \(-U/t=2,4,6,8\) with \(T/t=0.10,\lambda/t=0.5,\mu/t=0.5\); (middle row) the temperature \(T/t=0.025,0.05,0.14,0.20\) with \(U/t=-4,\lambda/t=0.5,\mu/t=0.5\); (bottom row) SOC strength \(\lambda/t=0.3,0.6,1.2,1.5\) with \(T/t=0.10,U/t=-4,\mu/t=0.5\). These calculations are performed for the \(L=20\) system.
With increasing interaction strength [top row of Fig. 5], \(|\psi_{t}(\mathbf{r})|\) gradually evolves from a rather extended pattern with multiple peaks along the diagonals to local peaks at NN lattice sites, which illustrates the BCS-BEC crossover behavior in the triplet pairing channel. With decreasing temperature [middle row of Fig. 5], the peaks in \(|\psi_{t}(\mathbf{r})|\) (located at the NNN and 4\({}^{\text{th}}\)-NN sites) simply become more significant and eventually stabilize, indicating entering the superfluid phase from the normal state. With increasing SOC strength, the \(T=0\) AFQMC simulations at half filling [49] showed a diamond pattern of \(|\psi_{t}(\mathbf{r})|\) with enhanced peak values at both the NNN and 3\({}^{\text{rd}}\)-NN sites. It behaves differently away from half filling [bottom row of Fig. 5]. As shown in Fig. 6(b), SOC first enhances all the finite-range triplet pairing for \(\lambda/t<0.75\), where the NNN and 4\({}^{\text{th}}\)-NN pairing play the leading role. The NNN component is then further promoted by SOC, and the NN and NNN pairing gradually become comparable towards large SOC, resulting instead in a square pattern as illustrated in Fig. 5. All the qualitative behaviors of the pair wave function results in Fig. 3, Fig. 4 and Fig. 5 do not change with the system size. ### The pairing correlation functions In Sec. III.2, the results in Fig. 2 clearly present an optimal SOC strength and fermion filling regime where the triplet condensate fraction reaches its maximum. In this section, we seek to understand this point from the perspective of the pairing correlation functions. We define the real-space singlet and triplet pairing operators as \[\begin{split}\hat{\Delta}_{s,\mathbf{i}}&=(c^{\dagger}_{\mathbf{i}\uparrow}c^{\dagger}_{\mathbf{i}\downarrow}+c_{\mathbf{i}\downarrow}c_{\mathbf{i}\uparrow})/2,\\ \hat{\Delta}_{t,\mathbf{i}}&=(c^{\dagger}_{\mathbf{i}\uparrow}c^{\dagger}_{\mathbf{i}+\mathbf{\delta}\uparrow}+c_{\mathbf{i}+\mathbf{\delta}\uparrow}c_{\mathbf{i}\uparrow})/2,\end{split} \tag{4}\] with \(s\) and \(t\) denoting singlet and triplet.
For the triplet, we concentrate on NN and NNN pairing with \(\mathbf{\delta}=(1,0)\) and \(\mathbf{\delta}=(1,1)\) denoting the corresponding lattice vectors. We then measure the real-space correlation functions \(P_{s}(\mathbf{r})=\langle\hat{\Delta}_{s,\mathbf{i}}\hat{\Delta}_{s,\mathbf{i}+\mathbf{r}}\rangle\) and \(P_{t}(\mathbf{r})=\langle\hat{\Delta}_{t,\mathbf{i}}\hat{\Delta}_{t,\mathbf{i}+\mathbf{r}}\rangle\), and the structure factors as their Fourier transforms \(S_{\ell}(\mathbf{q})=\sum_{\mathbf{r}}P_{\ell}(\mathbf{r})e^{i\mathbf{q}\cdot \mathbf{r}}\) with \(\ell=s\) or \(t\). To directly evaluate the pure interaction contribution, we have also obtained the vertex contributions to the pairing correlations and structure factors, \(\bar{P}_{\ell}(\mathbf{r})\) and \(\bar{S}_{\ell}(\mathbf{q})\), by subtracting the uncorrelated part [63]. In Fig. 7, we present the vertex contributions to the pairing correlation functions of the on-site singlet, NN and NNN triplet channels as a function of SOC strength. All the positive vertex contributions to the correlations in Fig. 7 reveal that the on-site attractive interaction enhances the singlet and triplet pairing correlations at distances \(L/4\leq r\leq L/2\) (with \(L=20\)). These results contribute more than 90% of the corresponding bare correlation functions (not shown). It is clear that SOC simply suppresses the singlet pairing correlation, while the NN and NNN triplet correlations show broadened peaks around \(\lambda/t=0.8\) and \(\lambda/t=1.0\). Moreover, the singlet correlation is stronger than the triplet ones by about two orders of magnitude, indicating the dominant role of the singlet channel. These results are in accordance with the behaviors of the corresponding condensate fractions shown in Fig. 2(b). The almost collapsed numerical data for different distances in Fig. 7 also highlight the superfluid phase of the system for the chosen parameters. Then, the vertex contributions of the pairing structure factors \(\bar{S}_{\ell}(\mathbf{q}=\mathbf{\Gamma})\) (with \(\ell=s,t\)) with increasing SOC strength are illustrated in Fig. 8. They show similar behaviors to the real-space correlation functions. The negative vertex of \(\bar{S}_{t}(\mathbf{\Gamma})\) with NN triplet for \(\lambda/t<0.3\) and \(\lambda/t>1.2\) means that the NN triplet pairing is not favored in these regimes. The growth of \(\bar{S}_{\ell}(\mathbf{\Gamma})\) in Fig. 8 for all three quantities (especially in the intermediate SOC regime) with increasing system size also suggests quasi-long-range pairing orders.

Figure 6: The amplitudes of the real-space triplet pair wave function \(|\psi_{c,t}(\mathbf{r})|\) with the distance \(r\) equal to NN, NNN, 3\({}^{\text{rd}}\)-NN and 4\({}^{\text{th}}\)-NN sites, with (a) tuning the fermion filling \(n\) and (b) varying the SOC strength. The other simulation parameters for Panels (a) and (b) are the same as Fig. 4 and the bottom row of Fig. 5, respectively. These calculations are performed for the \(L=20\) system.

Figure 7: Vertex contribution of on-site singlet (top), NN triplet (middle) and NNN triplet (bottom) pairing correlation functions \(\bar{P}_{\ell}(\mathbf{r})\) (with \(\ell=s,t\)) versus SOC strength. The correlations with distance \(r=5\sim 10\) (along the \(x\) axis) are plotted. These calculations are performed for the \(L=20\) system with \(T/t=0.10\) and \(U/t=-4\), \(\mu/t=0.5\).

Similarly, the results of condensate fractions versus the fermion filling in Fig.
2(c) can also be alternatively understood from the pairing correlations. Fig. 9 plots the vertex contributions of the pairing structure factors \(\bar{S}_{\ell}(\mathbf{\Gamma})\) versus fermion filling. The results of on-site singlet structure factor has the same nonmonotonic behavior as the singlet condensate fraction in Fig. 2(c). Instead, the NN and NNN triplet structure factors show more interesting signatures with different peak locations, revealing that the triplet channel is first governed by the NNN and then by the NN pairing from half filling to low filling regime (\(n<0.5\)). These validate the results of condensate fractions in Fig. 2(c) and pair wave functions in Fig. 3. Moreover, the wide plateau of the triplet condensate fraction in Fig. 2(c) can be explained by the accumulated results of the NN and NNN triplet pairing correlations in Fig. 9(b). As for the temperature and interaction strength, we have also obtained the vertex contributions of both singlet and triplet pairing correlations. In Appendix B, we have presented the results of vertex contributions \(\bar{P}_{\ell}(\mathbf{r})\) versus temperature. Figure 8: Vertex contribution of on-site singlet (top), NN triplet (middle) and NNN triplet (bottom) pairing structure factors \(\bar{S}_{\ell}(\mathbf{q}=\mathbf{\Gamma})\) (with \(\ell=s,t\)) versus SOC strength. These calculations are performed for \(L=16,18,20\) systems with \(T/t=0.10\) and \(U/t=-4\), \(\mu/t=0.5\). Figure 9: Vertex contribution of (a) on-site singlet, (b) NN triplet and NNN triplet pairing structure factors \(\bar{S}_{\ell}(\mathbf{\Gamma})\) versus fermion filling. These calculations are performed for \(L=20\) systems with \(T/t=0.10\) and \(U/t=-4\), \(\lambda/t=0.5\). ### BKT transition temperature from condensate fractions In previous studies of 2D attractive Hubbard model, the BKT transition temperature was usually determined by the finite-size scaling of the pairing structure factor or from the universal jump property of the superfluid density [48, 64, 65]. However, these quantities can become significantly small for low filling system, which makes the finite-size scaling even harder. In Ref. [48], the BKT transition temperatures for the same system as we study were calculated from superfluid density with systems up to \(L=12\). Such AFQMC simulations need to compute the superfluid density from dynamical current-current correlation functions, which definitely cost much more computational effort to reach high-precision results. Alternatively, numerical studies in 2D XY models solidly confirm that the finite-size BKT transition temperature \(T_{\rm BKT}(L)\) has a following form [66, 67] as \[T_{\rm BKT}(L)=T_{\rm BKT}(L=\infty)+\frac{a}{(\ln L+b)^{2}}, \tag{5}\] with \(a,b\) as coefficients related to the specific problem, and \(T_{\rm BKT}(L=\infty)\) as the final answer under thermodynamic limit. The second term in Eq. (5) containing the logarithm of linear system size \(L\) already indicates the strong finite-size effect. As a result, biased result of \(T_{\rm BKT}(L=\infty)\) might be obtained if only a small group of systems with not large enough sizes are accessed in the calculations. Based on Eq. (5), we could extrapolate the precise \(T_{\rm BKT}(L=\infty)\) from the finite-size \(T_{\rm BKT}(L)\) results. In the previous study of the 2D interacting Fermi gas without SOC [61], it was found that the first-order derivative of condensate fraction over the temperature shows a peak and its location can be identified as \(T_{\rm BKT}(L)\). 
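The extrapolation in Eq. (5) amounts to a three-parameter nonlinear fit. A minimal sketch is given below, assuming the finite-size estimates \(T_{\rm BKT}(L)\) have already been extracted; the numerical values are placeholders chosen only to illustrate the fit, not the data of this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def t_bkt_finite_size(L, t_inf, a, b):
    # Eq. (5): T_BKT(L) = T_BKT(inf) + a / (ln L + b)^2
    return t_inf + a / (np.log(L) + b) ** 2

# placeholder finite-size estimates with error bars (illustrative only)
L_vals = np.array([8, 10, 12, 14, 16, 18, 20], dtype=float)
T_vals = np.array([0.161, 0.155, 0.151, 0.148, 0.147, 0.145, 0.144])
T_errs = np.full_like(T_vals, 0.004)

popt, pcov = curve_fit(t_bkt_finite_size, L_vals, T_vals,
                       p0=[0.13, 0.05, -0.5], sigma=T_errs, absolute_sigma=True)
t_inf, a, b = popt
print(f"T_BKT(L=inf) = {t_inf:.3f} +/- {np.sqrt(pcov[0, 0]):.3f}")
```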
Such calculations do not involve dynamical measurements and are thus computationally much cheaper, so high-precision results for \(T_{\rm BKT}(L)\) can be obtained. A formula similar to Eq. (5) was also confirmed to describe the convergence of \(T_{\rm BKT}\) with the number of fermions for the 2D Fermi gas in Ref. [61]. Thus, in the following, we also concentrate on calculating \(T_{\rm BKT}(L)\) from the condensate fractions and reaching \(T_{\rm BKT}(L=\infty)\) using Eq. (5), for the system with SOC. In Fig. 10, we first demonstrate the determination of the BKT transition temperature from the spin-singlet pairing structure factor \(S_{s}(\mathbf{\Gamma})\) (defined in Sec. III.3) and from the total condensate fraction as a comparison. The correlation ratio for \(S_{s}(\mathbf{\Gamma})\) is defined as \(R_{\rm corr}=1-S_{s}(\mathbf{\Gamma}+\mathbf{q})/S_{s}(\mathbf{\Gamma})\) with \(\mathbf{q}\) as the smallest momentum on the lattice, i.e., \((2\pi/L,0)\) or \((0,2\pi/L)\). It resembles the Binder cumulant, which converges to unity in the ordered phase and vanishes in the disordered phase in the thermodynamic limit. Then the cross points of the finite-size \(R_{\rm corr}\) results can be approximately viewed as the transition temperature. As shown in Fig. 10(b), the cross points of \(R_{\rm corr}\) indeed move to lower temperature with increasing system size, but not in a well-defined manner. Instead, for the total condensate fraction in Fig. 10(c), we first perform a polynomial fitting to the numerical data, then compute its first-order derivative and take the location of its peak as \(T_{\rm BKT}(L)\) [shown in Fig. 10(d)], which avoids the step error involved in a numerical derivative. We have further calculated the error bars of \(T_{\rm BKT}(L)\) applying the standard bootstrapping technique. Finally, we use Eq. (5) to extrapolate the final result of the BKT transition temperature, \(T_{\rm BKT}(L=\infty)=0.135(4)\), as plotted in the inset of Fig. 10(d). The details of the bootstrapping calculations of \(T_{\rm BKT}(L)\) are presented in Appendix C. These results also indicate a large finite-size effect in \(R_{\rm corr}\), as the cross point of \(L=18\) and \(L=20\) is \(T/t=0.158\), which strongly deviates from \(T_{\rm BKT}(L=\infty)\). For the mixed-parity pairing we study, we also have the numerical data of the singlet and triplet condensate fractions. From them, we can separately extrapolate the BKT transition temperatures for the spin-singlet and triplet superfluidity as \(T_{\rm BKT}^{s}(L=\infty)\) and \(T_{\rm BKT}^{t}(L=\infty)\), which are expected to be the same. In Fig. 11, we illustrate the determination of \(T_{\rm BKT}\) from both the singlet and triplet condensate fractions. The procedure is exactly the same as that in Fig. 10(c) and (d), and the details can also be found in Appendix C. These calculations produce the final results \(T_{\rm BKT}^{s}(L=\infty)=0.135(4)\) and \(T_{\rm BKT}^{t}(L=\infty)=0.135(4)\). These results are indeed consistent, as expected, meaning the BKT transition for the quasi-long-range mixed-parity pairing order happens simultaneously in the singlet and triplet channels.

Figure 10: Determination of the BKT transition temperatures from the correlation ratio and the total condensate fraction. (a)(b) are the singlet pairing structure factor \(S_{s}(\mathbf{\Gamma})\) and the corresponding correlation ratio. The inset in panel (b) plots the cross points of the finite-size correlation ratios. (c)(d) are the total condensate fraction and its first-order derivative (after polynomial fitting).
The inset in panel (d) plots \(T_{\rm BKT}(L)\) after the best fitting using Eq. (5), reaching the final result as \(T_{\rm BKT}(L=\infty)/t=0.135(4)\). These calculations are performed for \(L=8\sim 20\) systems with \(U/t=-4,\lambda/t=0.5,\mu/t=0.5\). Based on the results in Fig. 10 and Fig. 11, we have obtained the BKT transition temperature \(T_{\rm BKT}(L=\infty)=0.135(4)\) for the parameter \(U/t=-4,\lambda/t=0.5,\mu/t=0.5\) (with fermion filling \(n=0.6795\) at the transition point). This result is consistent with the \(T_{\rm BKT}\) computed for filling \(n=0.7\) in Ref. [48]. We can then conclude that, similar to previous studies [61], it's also an efficient way to determine BKT transition temperature from (total, singlet and triplet) condensate fractions for attractive fermion systems with SOC. ## IV Summary and Discussion The mixed-parity pairing phenomena is theoretically a natural result for fermionic systems with broken inversion symmetry [8; 22], and it has been experimentally observed in various three-dimensional superconductors with SOC [11]. In addition, the experimental realization of SOC with an artificial gauge field in optical lattice by ultracold atoms [38; 39; 40] provides the opportunity to perform more systematic and deeper study of the mixed-parity pairing in a more controlled manner. Our AFQMC numerical results in this work can not only serve as quantitative guide for such 2D optical lattice experiments, but also present some new physical results on the essential pairing structure of the corresponding mixed-parity pairing. In summary, we have applied the numerically exact finite-temperature AFQMC method to study the pairing properties of attractive fermions with Rashba SOC in 2D optical lattice. We evaluate the contributions of the spin-singlet and triplet channels to the mixed-parity pairing. With the scanning of temperature, fermion filling, SOC and interaction strengths, we find that the singlet pairing plays a dominant role with relatively small triplet contribution in most relevant parameter regimes. From the pair wave functions, we find that, for intermediate interaction (\(U/t=-4\)), the singlet pairing mainly consists of local Cooper pairs while the triplet channel is rather extended with major contributions from several nearest neighbors. Especially, in low filling regime (\(n<0.5\)), the triplet pairing is dominated by NN fermion pairs, in contrast with the NNN ones around half filling. Via the vertex contribution of pairing correlations, we have shown that the triplet pairing is first enhanced and then suppressed with increasing SOC, and there exists an optimal SOC strength for observing the triplet pairing. Finally, we have demonstrated the computation of the BKT transition temperature from the finite-size results of total, singlet and triplet condensate fractions, suggesting it also as an efficient method for systems with SOC. Our numerical results will surely offer useful benchmarks for future optical lattice experiments as well as theories and other numerical methods. Our work also has implications for achieving the spin-triplet superconductivity and superfluidity. Considering the fact that the triplet pairing is only confirmed to exist in very rare systems, it might be a way out to pay more attention to the systems with mixed-parity pairing. 
Specifically, if one can control the triplet contribution to the pairing by tuning physical parameters (for example, the SOC strength) in such systems, we might access the special case in which the triplet channel dominates, similar to Li\({}_{2}\)Pt\({}_{3}\)B [14]. Unfortunately, our work shows it is very unlikely to realize such a special case for the system described by the lattice model in Eq. (1). Instead, there are actually other possibilities, such as further including the Dresselhaus SOC, and NN or NNN attractive interactions. The former was found to be useful in promoting the triplet contribution in the interacting Fermi gas within mean-field theory [42]. The latter is apparently supported by our numerical results, as the triplet pairing is mainly contributed by NN and NNN Cooper pairs. We leave these open possibilities to future work.

###### Acknowledgements.

Y.Y.H. acknowledges Peter Rosenberg and Shiwei Zhang for valuable discussions. This work was supported by the National Natural Science Foundation of China (under Grant No. 12047502, 12204377 and 12275263) and the Innovation Program for Quantum Science and Technology (under Grant No. 2021ZD0301900).

Figure 11: Determination of the BKT transition temperatures from the singlet and triplet condensate fractions. (a)(b) are the singlet condensate fraction and its first-order derivative (after polynomial fitting). (c)(d) are the triplet condensate fraction and its first-order derivative. The insets in panels (b) and (d) plot \(T_{\rm BKT}(L)\) after the best fitting using Eq. (5), reaching the final results \(T_{\rm BKT}^{s}(L=\infty)/t=0.135(4)\) and \(T_{\rm BKT}^{t}(L=\infty)/t=0.135(4)\), respectively. Simulation parameters are the same as Fig. 10.

## Appendix A Structure factor of the density-density correlation function

In Ref. [49], it was found that, at half filling, the long-range charge density wave (CDW) order with checkerboard pattern coexists with the pairing order in the ground state for the lattice model in Eq. (1). We have also checked this, and our numerical results suggest that the long-range CDW order should not exist away from half filling. We compute the density-density correlation function defined as \(D(\mathbf{r})=\frac{1}{4}(\langle\hat{n}_{\mathbf{i}}\hat{n}_{\mathbf{i}+ \mathbf{r}}\rangle-\langle\hat{n}_{\mathbf{i}}\rangle\langle\hat{n}_{\mathbf{ i}+\mathbf{r}}\rangle)\) (with \(\hat{n}_{\mathbf{i}}=\hat{n}_{\mathbf{i}\uparrow}+\hat{n}_{\mathbf{i}\downarrow}\)), and the corresponding momentum-space structure factor as \(S_{\mathrm{CDW}}(\mathbf{q})=\sum_{\mathbf{r}}D(\mathbf{r})e^{i\mathbf{q}\cdot \mathbf{r}}\). The leading component of \(S_{\mathrm{CDW}}(\mathbf{q})\) appears at the \(\mathbf{q}=\mathbf{M}=(\pi,\pi)\) point, consistent with the CDW order with the checkerboard pattern. In Fig. 12, we illustrate the results of the CDW structure factor \(S_{\mathrm{CDW}}(\mathbf{M})\) with various tuning parameters. First, upon doping by increasing the chemical potential, \(S_{\mathrm{CDW}}(\mathbf{M})\) immediately decreases from the half-filling result by approximately an order of magnitude for \(n=0.94\), which suggests the significant suppression of CDW order away from half filling. Second, the results with lowering temperature at \(\mu/t=0.5\) (around \(n=0.68\)) explicitly show that \(S_{\mathrm{CDW}}(\mathbf{M})\) first decreases, then reaches a minimum and gradually saturates towards \(T=0\), indicating the absence of long-range order.
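To make the definition above concrete, the following sketch evaluates \(S_{\mathrm{CDW}}(\mathbf{q})=\sum_{\mathbf{r}}D(\mathbf{r})e^{i\mathbf{q}\cdot\mathbf{r}}\) at \(\mathbf{q}=\mathbf{M}=(\pi,\pi)\) from a density-density correlation map given on an \(L\times L\) grid of displacements; the toy input array is purely illustrative and not the simulation data.

```python
import numpy as np

def cdw_structure_factor(D_r):
    """Return S_CDW(q) on the full momentum grid and its value at q = M = (pi, pi),
    given D(r) on an L x L grid of displacements."""
    L = D_r.shape[0]
    # at q = (pi, pi) the phase factor is (-1)^(x+y), identical for either FFT sign convention
    S_q = np.fft.fft2(D_r)
    return S_q, S_q[L // 2, L // 2].real

# toy checkerboard-correlated input (illustrative only)
L = 20
x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
D_toy = 0.01 * (-1.0) ** (x + y) * np.exp(-(x + y) / 5.0)
S_q, S_M = cdw_structure_factor(D_toy)
print("S_CDW(M) =", S_M)
```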
The results with varying SOC and interaction strengths show some enhancements of \(S_{\mathrm{CDW}}(\mathbf{M})\) in specific regimes, but its largest values are still much smaller than the half-filling results, which also suggests only short-range correlations.

## Appendix B Vertex contributions of the pairing correlation functions versus temperature

In Sec. III.3, we have shown the numerical results of vertex contributions for the pairing correlation functions with tuning SOC strength and chemical potential. Here, we present more results with varying temperature. In Fig. 13, we present the vertex of the real-space pairing correlations \(P_{\ell}(\mathbf{r})\) (with \(\ell=s,t\)) with the largest distance as \(r=\sqrt{2}L/2\) [as \(\mathbf{r}=(L/2,L/2)\)] on the lattice, for the on-site singlet, NN and NNN triplet channels versus the temperature, for several SOC strengths \(\lambda/t=0\sim 1.5\). All the results show an enhancement with decreasing temperature, and plateaus appear saturating to the \(T=0\) results, indicating the quasi-long-range pairing order in the low temperature regime. In this regime, the triplet correlations reach their maximum at \(\lambda/t=0.9\) for both NN and NNN pairing, consistent with the results shown in Fig. 7. Besides, these results also illustrate that the results at \(T/t=0.10\) are very close to the \(T=0\) correspondences.

Figure 12: The density-density structure factor at the \(\Gamma\) point for different quantities. The detailed parameters are shown in the figure. Compared with the half-filling case (\(\mu=0\)), the structure factor has an obvious decrease, which indicates that the "superconducting" order and CDW do not coexist in the doped system.

Figure 13: Vertex contribution of on-site singlet (top), NN triplet (middle) and NNN triplet (bottom) pairing correlation functions \(P_{\ell}(\mathbf{r})\) (with \(\ell=s,t\)) versus temperature, with \(r=\sqrt{2}L/2=10\sqrt{2}\) [as \(\mathbf{r}=(L/2,L/2)\)] as the largest distance. Results with several SOC strengths \(\lambda/t=0\sim 1.5\) are plotted. These calculations are performed for the \(L=20\) system with \(U/t=-4\), \(\mu/t=0.5\).

## Appendix C The determination of the BKT transition temperature \(T_{\rm BKT}(L)\) and \(T_{\rm BKT}(L=\infty)\)

In this section, we present the details for the determination of the BKT transition temperatures \(T_{\rm BKT}(L)\) and \(T_{\rm BKT}(L=\infty)\). Based on the numerical data of the condensate fraction \(\bar{n}_{c}(L,T)\) (including the total, singlet and triplet) and the corresponding standard error \(\sigma(L,T)\), we apply the bootstrapping technique by first generating a set of random data by \[n_{c,i}(L,T,q)=\bar{n}_{c}(L,T)+N(0,q\sigma(L,T)), \tag{10}\] where \(i\) denotes the \(i\)-th random data set, with \(q=1,2,3\) for different ranges of deviation, and \(N(0,q\sigma(L,T))\) stands for a Gaussian distribution with mean \(0\) and standard deviation \(q\sigma(L,T)\). This procedure follows standard Gaussian error analysis and can quickly generate a large number of \(n_{c,i}(L,T,q)\). Then we fit \(n_{c,i}(L,T,q)\) for every set of random data with a fourth-order polynomial of temperature around the transition point, compute the peak location of its first-order derivative, and take it as \(T_{\rm BKT}(i,L,q)\). With the full set of \(T_{\rm BKT}(i,L,q)\), one can perform data analysis and obtain \(\overline{T}_{\rm BKT}(L,q)\) with the standard deviation as its error bar.
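A minimal sketch of this procedure for a single system size is given below: resample the condensate-fraction curve with Gaussian noise as in Eq. (10), fit a fourth-order polynomial in temperature, and locate the peak of its first derivative. The input arrays are placeholders rather than the simulation data, and the number of bootstrap samples is reduced for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_t_bkt(T, nc_mean, nc_err, q=3, n_boot=10000):
    """Bootstrap estimate of T_BKT(L) from condensate-fraction data n_c(T):
    resample with Gaussian noise of width q*sigma (Eq. (10)), fit a 4th-order
    polynomial, and take the peak of |d n_c / dT| as the transition estimate."""
    T_fine = np.linspace(T.min(), T.max(), 2001)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        nc_sample = nc_mean + rng.normal(0.0, q * nc_err)
        coeffs = np.polyfit(T, nc_sample, deg=4)
        slope = np.polyval(np.polyder(coeffs), T_fine)
        estimates[i] = T_fine[np.argmax(np.abs(slope))]
    return estimates.mean(), estimates.std()

# placeholder data for one system size (illustrative only)
T = np.linspace(0.08, 0.20, 13)
nc_mean = 0.5 / (1.0 + np.exp((T - 0.14) / 0.01))   # smooth step as a stand-in for n_c(T)
nc_err = np.full_like(T, 0.005)
t_bkt, t_bkt_err = bootstrap_t_bkt(T, nc_mean, nc_err)
print(f"T_BKT(L) = {t_bkt:.4f} +/- {t_bkt_err:.4f}")
```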
Compared with the method of fitting only the original data \(n_{c}(L,T)\), this bootstrapping method can additionally provide a reasonable error bar for \(\overline{T}_{\rm BKT}(L,q)\). In order to show the process, we take \(L=18\) and \(20\) as examples. Fig. 14 shows the original data and the fitting polynomials of five random sets of data. It is shown that the fourth-order polynomials can capture the essential behavior of \(n_{c}\) around the transition point. By generating \(150000\) samples, Fig. 15 shows the histograms of results for \(\overline{T}_{\rm BKT}(L,q)\), which are fairly consistent with Gaussian distributions. It is also well illustrated that, for different \(q\), the average \(\overline{T}_{\rm BKT}(L,q)\) values are essentially identical for both system sizes. The only difference among the data sets generated with different Gaussian noise (different \(q\)) shows up in the standard deviations of these distributions. As expected, the distribution is wider (with larger standard deviation) for larger \(q\). Finally, to obtain \(T_{\rm BKT}\) in the thermodynamic limit, we perform fittings of \(T_{\rm BKT}(L,q)\) with the formula in Eq. (5). Fig. 16 shows the fitting results with different \(q\). The total condensate fraction as well as its two channels give similar \(T_{\rm BKT}\), as shown in Fig. 16. To achieve a confident estimate of \(T_{\rm BKT}(L=\infty)\), we adopt the results of \(q=3\) as the final values presented in Sec. III.4 of the main text.

Figure 16: BKT transition temperature \(T_{\rm BKT}(L,q)\) versus \((\ln L-0.7)^{-2}\) for the total, singlet and triplet condensate fractions. The fittings are based on the correction formula Eq. (5), where \(-0.7\) is determined from the fitting. The singlet and triplet channels give very similar results. Simulation parameters are the same as Fig. 14.

Figure 14: Condensate fraction versus temperature. The points are QMC data for system sizes \(L=18\) and \(20\), where the error bars denote the standard error. The curves are the fitting results of different random \(n_{c,i}(L,T,q)=\bar{n}_{c}(L,T)+N(0,q\sigma(L,T))\). For simplicity, we plot five random curves for each system. The parameters are \(U/t=-4\), \(\lambda/t=0.5\) and \(\mu/t=0.5\).

Figure 15: The distribution of \(T_{\rm BKT}(L,q)\) based on the bootstrapping calculations. We have generated \(150000\) random data sets for each system size and \(q\). It is well illustrated that the average values \(\overline{T}_{\rm BKT}(L)\) for different \(q\) are identical. Simulation parameters are the same as Fig. 14.
2309.13406
Statistically Adaptive Filtering for Low Signal Correction in X-ray Computed Tomography
Low x-ray dose is desirable in x-ray computed tomographic (CT) imaging due to health concerns. But low dose comes with a cost of low signal artifacts such as streaks and low frequency bias in the reconstruction. As a result, low signal correction is needed to help reduce artifacts while retaining relevant anatomical structures. Low signal can be encountered in cases where sufficient number of photons do not reach the detector to have confidence in the recorded data. X-ray photons, assumed to have Poisson distribution, have signal to noise ratio proportional to the dose, with poorer SNR in low signal areas. Electronic noise added by the data acquisition system further reduces the signal quality. In this paper we will demonstrate a technique to combat low signal artifacts through adaptive filtration. It entails statistics-based filtering on the uncorrected data, correcting the lower signal areas more aggressively than the high signal ones. We look at local averages to decide how aggressive the filtering should be, and local standard deviation to decide how much detail preservation to apply. Implementation consists of a pre-correction step i.e. local linear minimum mean-squared error correction, followed by a variance stabilizing transform, and finally adaptive bilateral filtering. The coefficients of the bilateral filter are computed using local statistics. Results show that improvements were made in terms of low frequency bias, streaks, local average and standard deviation, modulation transfer function and noise power spectrum.
Obaidullah Rahman, Ken D. Sauer, Charles A. Bouman, Roman Melnyk, Brian Nett
2023-09-23T15:30:39Z
http://arxiv.org/abs/2309.13406v1
# Statistically Adaptive Filtering for Low Signal Correction in X-ray Computed Tomography ###### Abstract Low x-ray dose is desirable in x-ray computed tomographic (CT) imaging due to health concerns. But low dose comes with a cost of low signal artifacts such as streaks and low frequency bias in the reconstruction. As a result, low signal correction is needed to help reduce artifacts while retaining relevant anatomical structures. Low signal can be encountered in cases where sufficient number of photons do not reach the detector to have confidence in the recorded data. X-ray photons, assumed to have Poisson distribution, have signal to noise ratio proportional to the dose, with poorer SNR in low signal areas. Electronic noise added by the data acquisition system further reduces the signal quality. In this paper we will demonstrate a technique to combat low signal artifacts through adaptive filtration. It entails statistics-based filtering on the uncorrected data, correcting the lower signal areas more aggressively than the high signal ones. We look at local averages to decide how aggressive the filtering should be, and local standard deviation to decide how much detail preservation to apply. Implementation consists of a pre-correction step i.e. local linear minimum mean-squared error correction, followed by a variance stabilizing transform, and finally adaptive bilateral filtering. The coefficients of the bilateral filter are computed using local statistics. Results show that improvements were made in terms of low frequency bias, streaks, local average and standard deviation, modulation transfer function and noise power spectrum. X-ray CT, statistics based filtering, bilateral filtering, low frequency bias Further author information: (Send correspondence to O.R.) O.R.: E-mail: [email protected], Telephone: 1 574 298 6896 K.D.S.: E-mail: [email protected], Telephone: 1 574 631 6999 C.A.B.: E-mail: [email protected] R.M.: E-mail: [email protected] B.N.: E-mail: [email protected] ## 1 Introduction Concerns over long-term health effects of x-ray exposure in CT has moved the medical community in the direction of minimal dosage in clinical settings [1].[2] As a result, the research motivation for CT dose reduction has grown lately under the industry-guiding principle of ALARA (as low as reasonably achievable). The easiest way to lower the radiation dose is to reduce the x-ray flux by reducing the tube current and shortening the exposure time. However, simply lowering the radiation dose will, especially in presence of high attenuating material or large patients, severely degrade the image quality and diagnostic capabilities. To address this problem, low signal correction (LSC) and postprocessing algorithms have become necessary. X-ray photon emission can be accurately modeled as a Poisson process, with SNR proportional to the x-ray intensity. Therefore at low intensities of detected photons, photon counting noise may overwhelm diagnostically important information. At the detector, the data acquisition system (DAS) adds electronic noise. This has negligible effect in high signal data, but further damages SNR in low signal areas, possibly driving registered counts below zero. Sinogram domain correction[3] is generally preferred over image domain to correct low signal artifacts because the low signal errors are more localized in the projection domain. 
Sinogram filtration includes techniques which could be as simple as a local averaging or Gaussian filtering, or could be treating with a custom-designed filter[4]. These filters adaptively correct the signal based on the signal or noise level. CT vendors have their own specific filters that are designed to suit their customer preferences. The goal of this research was to provide a robust, adaptive filtering method to more fully exploit local statistical variation in sinogram data than previous approaches. Some of the image quality metrics that we will look at are degree of bias and streak correction, modulation transfer function (MTF) profile, noise power spectrum (NPS), local sample average and standard deviation. ## 2 Method We are attempting to reduce streaks, reduce low frequency bias, and reduce non-uniform texture while maintaining good resolution in the reconstruction image. Counts data are approximately Poisson distributed and are independent, conditioned on the integral projections. A well chosen filtering action can reduce the noise using some kind of local spatially weighted averaging. Most LSC algorithms involve adaptive filters[3, 5], or non-linear sinogram filtering. Bilateral filters have also shown promise in correcting low signal artifacts[6]. We seek a solution that entails signal-adaptive filtering action in applying bilateral filters in the domain of a variance stabilizing transform. The CT imaging chain starts with scanning of patients and acquisition of photon counts. It is followed by correction of low signal artifacts, conversion of counts to integral projection estimates by the \(-\log()\) operator and, typically, FBP reconstruction and possibly image-domain post-processing. Our algorithm, adaptive filering low signal correction (AF LSC), deals with the low signal correction part of the chain and comprises the following components. #### 2.0.1 Local linear minimum mean-squared error (LLMMSE) correction Before the negative log step converting photon counts to integral projections, the counts need to be positive. In cases of near-complete photon starvation, large numbers of negative registered counts pose a unique problem. Simply forcing the negative data to non-negative values creates a bias in the data, so we wish to correct them with minimal disturbance of the local mean. The electronic noise in this case injects a large fraction of the variance. Since this noise is independent of the signal, its variance can be estimated before the scan. Its statistics can be captured by recording detector response with the x-ray beam turned off. It is assumed to be independent Gaussian[7][8]. The local, linear, minimum mean-squared error (LLMMSE) filter[9] removes a large fraction of negative data points through adaptive linear filtering. LLMMSE is a pre-correction step, and we limit it to only very low values of SNR. #### 2.0.2 Variance stabilizing transform (VST) Photon count levels in common clinical scans vary by several orders of magnitude, with large offsets possible in sinogram values in the distance of a few detectors in the presence of heavily attenuating materials. Because there exists a large number of well-designed algorithms in the literature to tackle constant-varaince Gaussian noise, a key step in our algorithm is transformation to approximately constant variance in the data exiting the LLMMSE step. The VST transforms a Poisson random variable to have a variance that does not depend on its mean[10]. 
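For concreteness, the two pre-correction steps just described can be sketched in a few lines, following the formulas spelled out in Algorithm 1 below. The threshold, local window, and the clipping of the local mean are our own illustrative choices, not the production values; `counts` is assumed to be a 3-D sinogram array (channel x row x view).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def llmmse_then_vst(counts, sigma_e, lam_th=15.0, window=(7, 5, 3)):
    """Sketch of the LLMMSE pre-correction followed by the variance
    stabilizing transform; lam_th and window are illustrative assumptions."""
    # local mean of the raw counts; clipping at zero is our own guard, not part of the algorithm
    lam_av = np.maximum(uniform_filter(np.asarray(counts, dtype=float), size=window), 0.0)
    eta = lam_av / (lam_av + sigma_e ** 2)          # LLMMSE blending weight
    lam_llmmse = np.where(counts <= lam_th,
                          eta * counts + (1.0 - eta) * lam_av,
                          counts)
    return 2.0 * np.sqrt(np.maximum(lam_llmmse, 0.0) + 3.0 / 8.0)   # VST: 2*sqrt(lambda + 3/8)
```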
Once the data has approximately constant variance, it is similar to Gaussian and therefore traditional Gaussian denoisers can be applied to it. In the paper[11] authors show how to perform the VST when the data is corrupted with Poisson-Gaussian noise. This work also provides a closed form unbiased inverse VST. #### 2.0.3 Bilateral filtering Some existing LSC methods choose among a fixed bank of filters based on the measured counts. This piecewise approach is somewhat restrictive, so we proposed an adaptive approach in which the filter parameters depend on the local signal and noise level. Bilateral filtering in the VST-transformed counts is the heart of our LSC algorithm. The bilateral transform has been shown to work well with images[12], and the authors in[6] have implemented it in projection space. The parameters the authors used in this case were fixed and not dependent on local statistics. Since the recorded counts can have a large dynamic range, choosing one set of parameters for the entire data set is a limitation in responding to serious low signal problems. We would like to do extremely aggressive correction in the counts at comparable levels to the standard deviation of the electronic noise, and less for higher counts. This is the principle innovation in our adaptive filtering. The authors in [13] have also implemented adaptive bilateral filtering but only the spatial term, and they compute the coefficients of the filter using a neural net. The bilateral filter outputs a weighted sum of a datum and its neighbors for each data point. Each weight is decided by the respective neighbor's spatial proximity to the current datapoint and the value of the neighboring measurement. The first part (i.e. spatial term) in the weight decreases as the spatial distance of the neighbor pixel w.r.t the center or current voxel, as in a conventional filter. The second part (intensity term) relaxes the degree of filtering in the case of large difference in intensity between the neighbor and the current pixel. If the distance and difference are high, the neighbor has decreased weight and filtering tends to be done separately on the two sides of an edge. #### 2.0.4 Inverse VST and positivity mapping After adaptive, bilateral filtering, we convert the data from the VST domain back to counts domain for subsequent processing. An unbiased VST inverse is implemented as explained in [11]. After the inverse VST, there could be a small number of zero values which need to be mapped to positive numbers before the negative log step. We effect this with a simple exponential mapping for any value below a set threshold. The algorithm in summarized in Algorithm 1. ``` Get raw counts from the reconstruction chain; \(\lambda\leftarrow\) raw counts \(\sigma_{e}\leftarrow\) standard dev. 
of electronic noise ; \(N\leftarrow\) number of data points in raw counts array; \(\lambda_{av}\leftarrow\) local average of raw counts; \(\eta=\frac{\lambda_{av}}{\lambda_{av}+\sigma_{e}^{2}}\); \(\lambda_{th}\leftarrow\) A scalar, counts below which undergo LLMMSE correction; \(\lambda^{\prime}_{th}\leftarrow\) A scalar, counts below which undergo positive mapping; if\(\lambda\leq\lambda_{th}\)then \(\lambda_{llmmse}=\eta\lambda+(1-\eta)\lambda_{av}\); end if \(\lambda_{vst}=2\sqrt{\lambda_{llmmse}+\frac{3}{8}}\); for\(i\gets 1\) to \(N\)do for\(j\in\Omega_{i}\)do \(W_{j}=e^{-\frac{|i-j|}{\sigma_{d}}}e^{-\frac{|\lambda_{st}^{(i)}-\lambda_{vst}^ {(j)}|}{\sigma_{r}}}\); end for \(\lambda_{bf}^{(i)}=\frac{\sum_{j\in\Omega_{i}}W_{j}\lambda_{vst}^{(j)}}{\sum_{j \in\Omega_{i}}W_{j}}\); end for \(\lambda_{ivst}=\frac{1}{4}(\lambda_{bf})^{2}+\frac{1}{4}\sqrt{\frac{3}{2}}( \lambda_{bf})^{-1}-\frac{11}{8}(\lambda_{bf})^{-2}+\frac{5}{8}\sqrt{\frac{3}{2 }}(\lambda_{bf})^{-3}-\frac{1}{8}\); \(\hat{\lambda}=\lambda^{\prime}_{th}e^{\frac{\lambda_{ivst}}{\lambda_{th}}-1} \qquad\forall\ \lambda_{ivst}<\lambda^{\prime}_{th}\); Plug \(\hat{\lambda}\) back into the reconstruction chain; ``` **Algorithm 1**Adaptive filtering (AF) LSC algorithm ## 3 Results To get \(\sigma_{d}\), and \(\sigma_{r}\), local average and standard deviation, in a sinogram window of \(7\times 5\times 3\) were computed for the array of received photon counts, \(\lambda\), which resulted in arrays \(\hat{\mu}\) and \(\hat{\sigma}\) respectively. For \(i^{th}\) datapoint, \(\sigma_{d}^{(i)}=K_{1}\frac{1}{\hat{\mu}^{(i)}}\), and \(\sigma_{r}^{(i)}=K_{2}\hat{\sigma}^{(i)}\), where \(K_{1}=400,\ K_{2}=5\). Bilateral filtering was performed in a 3-D window of \(Channel\times Row\times View=13\times 7\times 3\). For a practical comparison, we use an LSC algorithm similar to [14] which employs filtering based on lower and upper thresholds. The data points valued less than the lower threshold undergo full box-car filtering, and the Figure 1: A slice of reconstructed image (left) Uncorrected, (center) FT LSC, (right) AF LSC (a) An axial slice of clinical image reconstruction in the liver region (b) An axial slice of clinical image reconstruction in the shoulder region containing contrast (c) An axial slice of low signal phantom image reconstruction data points higher than the upper threshold undergo median filtering. This or its variants are typical examples of simple yet effective low signal correction techniques used commercially. We will label it simply "fixed threshold" (FT) LSC for results below. It is clear that streaks are reduced in the FT LSC as well as AF LSC images. Compared with FT LSC, in AF LSC local averages in the reconstruction images were preserved, with slight reduction in local variance. In Fig. 0(a), the liver region in AF LSC looks more uniform. In Fig. 0(b), we see a reduction in low frequency bias around and below the spine. MTF measurement was made in a region of interest containing a titanium wire in a low signal phantom. Measurements are rendered for the frequencies at which the response falls to 50%, 10% and 4% of maximum. It can be seen in Table 1 AF LSC receives improved MTF scores at each of these points. The noise power spectrum for AF LSC is flatter than that of FT LSC and uncorrected images, as seen in Fig. 2. This property is typically viewed as advantageous for image evaluation. From the NPS plot it can also be inferred that AF LSC has lower variance in liver region compared to FT LSC. 
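For completeness, here is a minimal one-dimensional sketch of the statistics-driven bilateral filtering step at the core of Algorithm 1, applied to VST-domain data. The constants \(K_1\) and \(K_2\) mirror the values quoted above, but the 1-D window, the edge handling, and the use of the same window for the local statistics are simplifications of the 3-D implementation rather than the production code.

```python
import numpy as np

def adaptive_bilateral_1d(lam_vst, lam_raw, half_width=6, K1=400.0, K2=5.0):
    """1-D sketch of the adaptive bilateral filter of Algorithm 1.
    Spatial and range widths are set per sample from local statistics of the
    raw counts: sigma_d = K1 / local_mean, sigma_r = K2 * local_std."""
    lam_vst = np.asarray(lam_vst, dtype=float)
    lam_raw = np.asarray(lam_raw, dtype=float)
    n = lam_vst.size
    out = np.empty(n, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        neigh_raw = lam_raw[lo:hi]
        sigma_d = K1 / max(neigh_raw.mean(), 1e-6)
        sigma_r = K2 * max(neigh_raw.std(), 1e-6)
        idx = np.arange(lo, hi)
        # W_j = exp(-|i-j|/sigma_d) * exp(-|vst_i - vst_j|/sigma_r), as in Algorithm 1
        w = (np.exp(-np.abs(idx - i) / sigma_d) *
             np.exp(-np.abs(lam_vst[lo:hi] - lam_vst[i]) / sigma_r))
        out[i] = np.sum(w * lam_vst[lo:hi]) / np.sum(w)
    return out
```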
## 4 Conclusions Adaptive filter LSC is a relatively simple technique, but is highly adaptive to signal and noise levels in CT sinograms with very low photon counts. In early evaluations, it appears to offer several improvements over FT methods: reduced streaks and low frequency bias, improved spectral properties of noise, more uniform texture, better resolution as measured by MTF score, and lower standard deviation within uniform ROIs.
2309.03615
Navigation Through Endoluminal Channels Using Q-Learning
In this paper, we present a novel approach to navigating endoluminal channels, specifically within the bronchial tubes, using Q-learning, a reinforcement learning algorithm. The proposed method involves training a Q-learning agent to navigate a simulated environment resembling bronchial tubes, with the ultimate goal of enabling the navigation of real bronchial tubes. We discuss the formulation of the problem, the simulation environment, the Q-learning algorithm, and the results of our experiments. Our results demonstrate the agent's ability to learn effective navigation strategies and reach predetermined goals within the simulated environment. This research contributes to the development of autonomous robotic systems for medical applications, particularly in challenging anatomical environments.
Oded Medina, Liora Kleinburd, Nir Shvalb
2023-09-07T10:17:37Z
http://arxiv.org/abs/2309.03615v1
# Navigation Through Endoluminal Channels Using Q-Learning ###### Abstract In this paper, we present a novel approach to navigating endoluminal channels, specifically within the bronchial tubes, using Q-learning, a reinforcement learning algorithm. The proposed method involves training a Q-learning agent to navigate a simulated environment resembling bronchial tubes, with the ultimate goal of enabling the navigation of real bronchial tubes. We discuss the formulation of the problem, the simulation environment, the Q-learning algorithm, and the results of our experiments. Our results demonstrate the agent's ability to learn effective navigation strategies and reach predetermined goals within the simulated environment. This research contributes to the development of autonomous robotic systems for medical applications, particularly in challenging anatomical environments. ## I Introduction Endoluminal navigation, particularly in anatomical channels such as the bronchial tubes, presents a unique set of challenges in the field of medical robotics. The intricate and dynamic nature of these channels requires precise and adaptable navigation techniques to ensure safe and effective procedures. Traditional manual navigation methods can be time-consuming and challenging, often necessitating highly skilled operators. Therefore, there is a growing interest in developing autonomous robotic systems that can navigate endoluminal channels with minimal human intervention. The goal of this research is to investigate the feasibility of using Q-learning, a well-established reinforcement learning algorithm, to enable autonomous navigation within endoluminal channels. Q-learning has shown success in various domains, such as game-playing and control tasks, making it a promising candidate for the complex task of navigating within anatomical structures. The use of Q-learning in medical robotics, particularly for endoluminal navigation, is relatively unexplored, and this paper aims to contribute to filling this gap. ### _Problem Statement_ Navigating through endoluminal channels, such as the bronchial tubes, requires addressing several key challenges. The channels are narrow, curved, and may contain branching pathways, making it difficult to maintain the correct trajectory. Additionally, depth perception can be limited, further complicating navigation. Autonomous navigation systems must account for real-time obstacles, potential collisions with channel walls, and the need to reach specific target locations. Notably, it's challenging to achieve precise navigation solely through local maneuvers. More often, it's evident that in order to turn or change direction, actions must be initiated well before reaching a bifurcation. This foresight is necessary to properly lean on the cavity walls and manage to take the turn effectively. ### _Motivation and Objectives_ The motivation for this research stems from the potential benefits of autonomous robotic navigation within endoluminal channels. By developing intelligent navigation algorithms, we can enhance the precision, efficiency, and safety of medical procedures performed within these channels. This is particularly relevant for applications such as bronchoscopy, where accurate navigation is critical for diagnosing and treating respiratory conditions. The main objectives of this research are as follows: * Develop a simulation environment that closely mimics the conditions of endoluminal channels, specifically bronchial tubes. 
* Implement a Q-learning algorithm to train an agent to autonomously navigate within the simulated environment. * Evaluate the agent's performance in terms of successful navigation and the strategies it learns. * Discuss the implications and potential applications of the proposed approach in the context of medical robotics. ### _Related Work_ Motion planning is a crucial field within robotics. For problems involving high-dimensional scenarios, such as flexible wires, it's often necessary to address the issue within the configuration space [1]. The chosen actions might be contingent upon the current configuration, and in instances where the configuration space is fully understood, it can be resolved analytically [2]. In the context of non-Markovian challenges, one might resort to heuristics, leveraging intuitive approaches and shortcuts to hasten the path-finding procedure [3]. In this study, we focus on a high-dimensional non-Markovian problem where the configuration space is unknown. Previous research in medical robotics and autonomous navigation has primarily focused on various imaging modalities, sensor integration, and path planning algorithms. Bron choscopy, a commonly used medical procedure involving the insertion of a bronchoscope into the airways, has seen advancements in imaging techniques and robotics-assisted navigation [4][5]. Imaging modalities such as fluoroscopy and monocular cameras have been explored to aid navigation and provide real-time feedback [6][7]. However, the development of autonomous navigation algorithms using reinforcement learning, specifically Q-learning, for bronchoscopy remains relatively unexplored. In the field of robotics, Q-learning has been applied to path planning, robotic control, and game-playing scenarios [8][9]. These applications showcase the adaptability and learning capabilities of Q-learning algorithms. However, the translation of Q-learning to the context of navigating complex and dynamic anatomical structures presents unique challenges and opportunities. ### _Contribution_ For the end described above we shall devise the following: * Formulation of the endoluminal navigation problem within bronchial tubes as a reinforcement learning task. * Development of a simulated environment that emulates the challenges of navigating within anatomical channels. * Implementation and evaluation of a Q-learning agent trained to navigate within the simulated environment. * Discussion of the agent's learned navigation strategies and their potential implications in medical robotics. The remainder of this paper is organized as follows: Section 2 provides an overview of reinforcement learning and Q-learning. Section 3 describes the simulation environment and the Q-learning algorithm implementation. Section 4 presents the experimental results and analysis. Section 5 discusses the significance of the results and suggests future research directions. Finally, Section 6 concludes the paper. ## II Reinforcement Learning and Q-Learning _Reinforcement Learning_ (RL) is a machine learning paradigm concerned with learning how an agent should take actions in an environment to maximize cumulative rewards. In RL, an agent interacts with an environment by taking actions and receiving feedback in the form of rewards. The agent's goal is to learn a policy that maps states to actions in a way that maximizes the expected sum of rewards over time [7]. 
At the heart of RL lies the Markov Decision Process (MDP), which provides a mathematical framework for modeling decision-making problems. An MDP is defined by a tuple \((S,A,P,R)\), where: * \(S\) is the set of possible states \(s\) in the environment. * \(A\) is the set of possible actions the agent can take. * \(P\) is the state transition probability function, \(P(s^{\prime}|s,a)\), which gives the probability of transitioning to state \(s^{\prime}\) from state \(s\) after taking action \(a\). * \(R\) is the reward function, \(R(s,a,s^{\prime})\), which gives the immediate reward received after transitioning to state \(s^{\prime}\) from state \(s\) by taking action \(a\). _Q-learning_ is a model-free reinforcement learning algorithm that aims to learn the optimal action-selection policy for an agent. The Q-learning algorithm iteratively updates estimates of the Q-values, which represent the expected cumulative reward for taking a specific action in a particular state. The Q-values are updated using the _Bellman equation_, which expresses the Q-value of a state-action pair in terms of the immediate reward and the maximum Q-value of the next state: \[Q(s,a)=R(s,a)+\gamma\cdot\max_{a^{\prime}}Q(s^{\prime},a^{\prime})\] where \(s\) is the current state, \(a\) is the chosen action, \(s^{\prime}\) is the next state, \(\gamma\) is the discount factor that balances immediate and future rewards, and \(a^{\prime}\) iterates over all possible actions in the next state. The Q-learning algorithm iteratively updates the Q-values using the following equation: \[Q(s,a)^{\prime}\gets Q(s,a)+\alpha\cdot[R(s,a)+\gamma\cdot\max_{a^{ \prime}}Q(s^{\prime},a^{\prime})-Q(s,a)]\] where \(\alpha\) is the learning rate, which determines the weight given to the new information compared to the existing Q-value. A challenge in reinforcement learning is balancing exploration and exploitation. Exploration involves trying new actions to gather information about the environment, while exploitation involves choosing the best-known actions based on the learned Q-values. To address this challenge, the epsilon-greedy algorithm is commonly used. The epsilon-greedy algorithm introduces an exploration factor \(\epsilon\) that determines the probability of taking a random action versus the best-known action. At each step, a random number is generated. If the random number is less than \(\epsilon\), the agent takes a random action; otherwise, it takes the action with the highest Q-value. In the context of endoluminal navigation, the state space is taken as the robot's position and orientation within the Fig. 1: Applying Q-learning to endoluminal autonomous navigation paradigm. anatomical channel, and the action space comprises the robot's possible movements (e.g., bending, advancing). The Q-values indicate the expected cumulative reward for taking specific actions in different states, guiding the agent's navigation decisions. ## III The mathematics model To investigate the feasibility of using Q-learning for endo-luminal navigation, we created a simulated environment that serves as a testbed for training and evaluating the Q-learning agent. ### _Simulation Environment_ The bronchial tubes are assumed to be two-dimensional and are represented as a series of connected segments, each with specific properties such as curvature and diameter. The endoluminal robot can bend and move within the bronchial tubes while navigating toward predefined goals. 
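Before turning to the mechanical model, the tabular Q-learning update and epsilon-greedy rule of Section II can be written compactly as follows. The state/action counts and hyper-parameter values are illustrative stand-ins for the discretized bronchial-tube environment, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 500, 3             # assumed discretization: bend CW, bend CCW, advance
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # illustrative hyper-parameters

def choose_action(state):
    """Epsilon-greedy action selection."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    """One tabular Q-learning step: Q <- Q + alpha * (r + gamma * max_a' Q(s', a') - Q)."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```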
We employ a model resembling an open-chain serial robot consisting of \(N\) connected links, each featuring a rotational spring at its joint with a spring constant \(k_{1}\) (see Figure 2). The robot therefore undergoes dynamic motion until it eventually gains a minimal energy configuration. To establish a proper robot behavior consider the configuration space \(\mathcal{C}\). A configuration \(c\in\mathcal{C}\) of the robot is the ordered set of its bending angles \(c=\{\theta_{i}\}_{i=1}^{N}\). Accordingly, the positions \((x_{i},y_{i})_{i=1}^{N}\) of each joint may be calculated as depicted in Figure 2. The energy of a free endoluminal robot is the sum of its spring energies \(k_{1}\theta_{i}^{2}/2\). When the robot applies force on the lumen in \(M\) positions, and therefore deforms the lumen in these positions, additional energy is added: \[E=\frac{1}{2}\sum_{i=1}^{N}k_{1}\theta_{i}^{2}+\frac{1}{2}\sum_{j=1}^{M}k_{2}( \Delta x_{j}^{2}+\Delta y_{j}^{2}) \tag{1}\] where \(\Delta x_{j}(\{\theta_{i}\}_{i=1}^{j})\) and \(\Delta y_{j}(\{\theta_{i}\}_{i=1}^{j})\) is the extent at which the \(j\)-th contact exceeds the original position of the lumen (i.e. deforms it). The associated lumen spring constant is considered constant \(k_{2}\) along the entire lumen. In every time step, the robot is provided with a relaxation stage at which it regains its minimal energy value. In other words, the robot is set to follow its energy gradient: \[\nabla E=(\frac{\partial E}{\partial\theta_{1}},\frac{\partial E}{\partial \theta_{2}},\dots,\frac{\partial E}{\partial\theta_{N}}) \tag{2}\] and so in the relaxation stage, one updates the configuration \(c^{\prime}\gets c+\varepsilon\nabla E\) until an energy plateau is reached. Here, \(\varepsilon\) is chosen small enough to account how interact is the configuration in the lumen. This procedure is then followed by bending of the robot tip and advancing (by holding its proximal end) into the lumen. The agent's inability to achieve some goals prompted us to investigate the energy gradient used to calculate bending angles. By reducing the frequency of energy gradient adjustments, we aimed to improve the agent's ability to navigate and bend within the bronchial tubes. Subsequent trials with the modified energy gradient showed improvements, emphasizing the importance of parameter tuning in achieving successful navigation. ### _Q-learning entities_ In the simulation, the _state space_ is defined by the agent's position and orientation within the bronchial tubes. The agent's position is determined by the \((x,y)\) coordinates of its distal end, while its orientation is defined by its angle of bending. This results in a six-dimensional state space. The _action space_ consists of possible movements that the agent can perform, such as bending and advancing. The agent can bend in either the clockwise (CW) or counterclockwise (CCW) direction, and it can also advance forward. The agent's bending angle is limited by the physical constraints of the bronchial tubes. The _set goal_ was to navigate to specific locations within the tubes. These goals were set as states within the environment, and the agent's objective was to learn how to reach these a predefined goal effectively. ## IV Methods and Results We conducted a set of \(50\) experiments in four different lumen setup generated by a set of five randomly generated parameters (see Figure 3) which define a lumen with a single bifurcation in the plane. 
The parameters define the lumen's diameters; the radius of curvature of the main lumen (can be positive or negative); the distance to the bifurcation; the radius Fig. 3: The five parameters used to define a bifurcation: (1) the lumens’ diameters; (2) the radius of curvature of the main lumen (can be positive or negative);(3) the distance to the bifurcation; (4) the radius of curvature of the bifurcating lumen (can be positive or negative); and (5) the bifurcation angle. Fig. 2: The endoluminal robot’s kinematic model. of curvature of the bifurcating lumen (can be positive or negative); and the bifurcation angle. The results of the Q-learning simulation indicated that the agent was able to learn effective navigation strategies within the bronchial tubes environment. The agent's success rate in reaching the predefined location was above \(70\%\) after completing the learning phase. ## V Discussion The robotic agent was trained to choose appropriate actions to reach the predefined goals by estimating the Q-values for state-action pairs. While the environment was discredited to facilitate learning, the agent still required thousands of sessions to effectively navigate. After the learning phase, the agent applied the epsilon greedy algorithm to balance exploration and exploitation, preventing it from getting stuck in infinite loops between states. The results indicated that the Q-learning agent could successfully learn navigation strategies in a complex environment. The results of our experiments have broader implications for the application of Q-learning in medical procedures and navigations. While the current study focused on endoluminal navigation, the insights gained can be extended to various medical scenarios that require navigation within complex and constrained environments. ## VI Conclusions The two-dimensional simulation environment deviates from the real-life conditions of navigating through three-dimensional bronchial tubes. As such, the results obtained from the simulation may not directly translate to real-world scenarios. Additionally, various parameters, such as the definition of terminal states and rewards, as well as the energy parameters, can impact the agent's behavior and performance. Further research is needed to explore the effects of these parameters and validate the results in more realistic settings. The next phase of our project entails a more rigorous validation of our approach through simulations involving real-life scenarios, such as the replication of lung, kidney, and gall bladder environments. These simulations will be constructed using CT scans as our foundational data. Achieving success in this phase will necessitate the adaptation of our algorithm and the training of our agent to maneuver within physical bronchial tubes. This endeavor introduces an additional layer of complexity as it calls for the incorporation of a 3D model, granting the system an extra degree of freedom. This enhanced model will need to account for various factors, including friction, forces applied to the luminal walls, and the unpredictable Fig. 4: A set of snapshots while maneuvering to the set goal through a given environment. Fig. 5: A set of snapshots while maneuvering to the set goal. conditions inherent to these biological environments. Investigating the feasibility of transfer learning to adapt navigation strategies learned in simulated environments to real-world scenarios could expedite the deployment of Q-learning-based navigation systems.
2309.14369
BSN: First Light Curve Study of the Low Mass Contact Binary V0610 Vir
Photometric data were used to perform the first light curve analysis of the V0610 Vir binary system. Observations and analysis were carried out within the Binary Systems of South and North (BSN) Project. We extracted times of minima from our observations and compiled those available in the literature, which were few. We therefore performed computations using the reference ephemeris and present a new ephemeris and O-C diagram with a linear fit. Light curve analysis was performed using the PHOEBE Python code and the Markov chain Monte Carlo (MCMC) approach. The assumption of a hot starspot was required due to the asymmetry in the light curve's maxima. The analysis shows that V0610 Vir is a contact binary system with a fillout factor of 0.085, a mass ratio of 0.998, and an inclination of 70.65 deg. The absolute parameters of the system were estimated based on the Gaia DR3 parallax method. The results show that the system is a low-mass contact binary with a total mass lower than 0.8(M_Sun). The locations of the stars are shown in the M-L and M-R diagrams.
Ailar Alizadehsabegh, František Lomoz, Atila Poro, Ata Narimani
2023-09-23T16:32:12Z
http://arxiv.org/abs/2309.14369v3
# First Light Curve Study of the Low Mass Contact Binary V0610 Vir ###### Abstract Photometric data were used to perform the first light curve analysis of the V0610 Vir binary system. We extracted times of minima from our observations and compiled those available in the literature, which were few in number. Therefore, we performed computations using the reference ephemeris and presented a new ephemeris and O-C diagram with a linear fit. Light curve analysis was performed using the PHOEBE Python code and the Markov chain Monte Carlo (MCMC) approach. The assumption of a cold spot was required due to the asymmetry in the light curve's maxima. The analysis shows that V0610 Vir is a contact binary system with a fillout factor of 0.085, a mass ratio of 0.998, and an inclination of \(70.65^{\circ}\). The absolute parameters of the system were estimated based on the Gaia DR3 parallax method. The results show that the system is a Low-Mass Contact Binary (LMCB) with a total mass lower than \(0.8(M_{\odot})\). The locations of the stars are shown in the \(M-L\) and \(M-R\) diagrams. techniques: photometric, stars: binaries: eclipsing, stars: individual (V0610 Vir) ## 1 Introduction The W Ursae Majoris (W UMa) binaries consist of two stars, typically of F, G, or K spectral type, whose Roche lobes are filled and which share a common envelope (Kjurkchieva et al., 2016; Zhang, Han, & Liu, 2016). The orbital period of W UMa-type systems is less than one day. Also, the light curves of these binary stars show two equal or almost equal minima, demonstrating that the effective surface temperatures of the components are close to each other (Li et al., 2018). Further investigation of contact systems is important since it can reveal many details about the evolution of stars. According to the AAVSO International Variable Star Index (VSX1) and ASAS-SN2 variable stars' catalogs, V0610 Vir is a W UMa-type binary system with an orbital period of \(0.3398754^{days}\) and \(0.3398768^{days}\), respectively. The coordinates of this system in the Simbad3 database are RA: \(11^{h}47^{m}05.887^{s}\) and Dec: \(+01^{\circ}14^{\prime}41.489^{\prime\prime}\) (J2000), with an apparent magnitude of \(V=13.30^{mag}\). Footnote 1: [https://www.aavso.org/vsx/](https://www.aavso.org/vsx/) Footnote 2: [http://asas-sn.osu.edu/variables](http://asas-sn.osu.edu/variables) Footnote 3: [http://simbad.u-strasbg.fr/simbad/](http://simbad.u-strasbg.fr/simbad/) The paper's structure is as follows: Section 2 explains the observations and data reduction; Section 3 is about extracting minima and obtaining a new ephemeris; and Section 4 is related to the light curve analysis. The technique used to estimate the absolute parameters is described in Section 5, and the conclusion is in Section 6. ## 2 Observation and Data Reduction The photometric observations of V0610 Vir were carried out in March 2020 with a Schmidt-Newton 254mm/1016mm telescope and a G2-8300 CCD camera at a private observatory in the Czech Republic (49.65 N, 14.41 E). During the observations, the average CCD temperature was \(-20^{\circ}C\). A \(V\)-band filter was used, and a total of 279 images were obtained. Each image has an exposure time of 90 seconds. Images were processed using MaxIm DL software, which included dark, bias, and flat-field corrections for basic data reduction. 
Figure 1 displays the comparison and check stars that were selected that were close to the target and had a suitable apparent magnitude in comparison to V0610 Vir. So, we considered a comparison star named UCAC4 459-049136 (11 46 55.388, +01 44 51.534) with an apparent magnitude of \(V=14.32\) and nine check stars including UCAC4 457-049442 (11 47 17.678, +01 20 02.553) with a \(V=13.20\) magnitude, UCAC4 458-049431 (11 46 59.259, +01 21 57.295) with a \(V=13.72\) magnitude, UCAC4 457-049428 (11 46 52.883, +01 21 12.000) with a \(V=13.29\) magnitude, UCAC4 457-049456 (11 47 48.705, +01 19 04.569) with a \(V=13.90\) magnitude, UCAC4 457-049455 (11 47 46.001, +01 18 14.696) with a \(V=14.51\) magnitude, UCAC4 457-049448 (11 47 29.787, +01 22 38.372) with a \(V=14.13\) magnitude, UCAC4 458-051054 (11 46 49.701, +01 25 18.116) with a \(V=14.17\) magnitude, UCAC4 458-051065 (11 47 07.913, +01 32 12.092) with a \(V=13.65\) magnitude and UCAC4 458-051064 (11 47 07.423, +01 33 47.439) with a \(V=15.37\) magnitude. The coordinates and apparent magnitudes of all the comparison and check stars were gathered from the ASAS-SN catalog. Finally, we used the AstroImageJ program to normalize the flux of all the data (Collins et al., 2017). ## 3 New Ephemeris In binary star systems, the O-C diagram is an important tool for finding new ephemeris. The O-C value represents the difference between the predicted and observed times of an eclipse in a binary system. The O-C diagram allows us to visualize the changes in the timing of eclipses over a longer period of time. By analyzing the trends of the O-C Figure 1: The V0610 Vir binary system, a comparison star, and check stars’ field-of-view. the presence of additional companions in the system. For this purpose, we extracted one primary and one secondary from our observations and collected them with ten other minima from the literature (Table 1). Also, we converted all of the times of minima to the Barycentric Julian Date in Barycentric Dynamical Time \((BJD_{TDB})\)4. Footnote 4: [https://astroutils.astronomy.osu.edu/time/](https://astroutils.astronomy.osu.edu/time/) To compute Epoch and O-C, we used a reference ephemeris with a time of minima of 2455291.81179(30) from Diethelm (2010) and an orbital period of 0.3398768\({}^{days}\) that we obtained from the ASAS-SN catalog. Therefore, according to the O-C diagram and considering that the number of observations for this system is limited and few minima are available for it, only a least-squares linear fit can be considered (Figure 2). Based on this information, the new ephemeris can be calculated as follows: \[Min.I(BJD_{TDB})=2455291.81304(13)+0.339875947(17)\times E \tag{1}\] ## 4 Light Curve Solution The PHOEBE 2.4.9 version and the MCMC method were used to analyze the light curve of the V0610 Vir system (2). We used the \(P-T_{1}\) relationship from the Poro et al. (2022a) study to calculate the effective temperature of the hotter star as the input (Equation 2). \[T_{1}=(6951.42^{+112.16}_{-112.68})P+(3426.01^{+44.12}_{-43.90}) \tag{2}\] Figure 2: O-C diagram with a linear fit for the V0610 Vir system. The gravity-darkening coefficients was determined \(g_{1}=g_{2}=0.32\)(Lucy, 1967) and the bolometric albedo was assumed to be \(A_{1}=A_{2}=0.5\)(Rucinski, 1969). Additionally, the stellar atmosphere was modeled using the Castelli & Kurucz (2004) method, and the limb darkening coefficients were employed in the PHOEBE as a free parameter. 
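As an aside, the epoch and O-C computation behind the new ephemeris of Section 3 (Equation 1) reduces to a few lines of Python. The sketch below is illustrative only: the times of minima are placeholders rather than the values in Table 1, and only the reference ephemeris quoted above is taken from the text.

```python
import numpy as np

# Reference ephemeris from the text (Diethelm 2010 minimum + ASAS-SN period)
T0_ref = 2455291.81179   # BJD_TDB
P_ref = 0.3398768        # days

# Placeholder times of minima (BJD_TDB); the real values are listed in Table 1
t_min = np.array([2455291.8118, 2457823.4561, 2458934.7890, 2459302.3456])

# Epoch number: nearest integer (secondary minima would fall at half-integers)
epoch = np.round((t_min - T0_ref) / P_ref)

# O-C residuals with respect to the reference ephemeris
oc = t_min - (T0_ref + P_ref * epoch)

# Linear least-squares fit O-C = dT0 + dP * E yields the corrected ephemeris
dP, dT0 = np.polyfit(epoch, oc, 1)
print(f"new T0 = {T0_ref + dT0:.5f} BJD_TDB, new P = {P_ref + dP:.9f} d")
```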
Due to the availability of photometric data, we used \(q\)-search to estimate the mass ratio. The obtained mass ratio was used as the MCMC process's initial parameter value. The maxima of the light curve were asymmetric (\(V_{max}1-V_{max}2\)) \(\neq 0\)(Tavakkoli et al., 2015). So, the light curve solution required the use of a cold starspot on the hotter star (O'Connell, 1951). Then, the theoretical fit was improved using PHOEBE's optimization tool. Moreover, taking into account a normal Gaussian distribution in the range of solutions for inclination, mass ratio, fillout factor, and effective temperatures, we estimated the values of the parameters together with their uncertainties using the MCMC approach based on the emcee package in PHOEBE (Hogg & Foreman-Mackey, 2018). We employed 96 walkers and 500 iterations for each walker in the MCMC processing. Table 2 contains the results of the light curve solution. The corner plots and final synthetic light curve are shown in Figure 3 and Figure 4, respectively. The component positions for the four phases of an orbital period are shown in Figure 5. The following equations were used to determine the luminosity and radius, and separation between the center of mass of the components (6, 7 and 8): \[M_{bol}-M_{bol_{\odot}}=-2.5log(\frac{L}{L_{\odot}}) \tag{6}\] \[R=(\frac{L}{4\pi\sigma T^{4}})^{1/2} \tag{7}\] \[a=\frac{R}{r_{mean}} \tag{8}\] Additionally, using the mass ratio determined by the outcomes of the light curve analysis, each component's mass was determined using the well-known Kepler's third law (Equation 9). Using Equation 11, the surface gravity was estimated. The estimated parameters using the Gaia parallax method are shown in Table 3. \[\frac{a^{3}}{G(M_{1}+M_{2})}=\frac{P^{2}}{4\pi^{2}} \tag{9}\] \[g=G_{\odot}(M/R^{2}) \tag{10}\] \begin{table} \begin{tabular}{c c} \hline \hline Parameter & Result \\ \hline \(T_{1}\) (K) & \(5811^{+(7)}_{-(5)}\) \\ \(T_{2}\) (K) & \(5440^{+(4)}_{-(9)}\) \\ \(q=M_{2}/M_{1}\) & \(0.998^{+(15)}_{-(9)}\) \\ \(\Omega_{1}=\Omega_{2}\) & \(3.70(5)\) \\ \(i^{\circ}\) & \(70.65^{+(12)}_{-(11)}\) \\ \(f\) & \(0.085^{+(9)}_{-(9)}\) \\ \(l_{1}/l_{tot}\) & \(0.580(2)\) \\ \(l_{2}/l_{tot}\) & \(0.420(2)\) \\ \(r_{1(mean)}\) & \(0.388(27)\) \\ \(r_{2(mean)}\) & \(0.388(26)\) \\ Phase shift & \(-0.015(1)\) \\ \hline \(Colatitude_{spot}(deg)\) & \(74(1)\) \\ \(Longitude_{spot}(deg)\) & \(348(2)\) \\ \(Radius_{spot}(deg)\) & \(27(1)\) \\ \(T_{spot}/T_{star}\) & \(1.09(1)\) \\ \hline \hline \end{tabular} \end{table} Table 2: Photometric solution of V0610 Vir. \begin{table} \begin{tabular}{c c c} \hline \hline Parameter & Hotter star & Cooler star \\ \hline \(M_{v}(mag.)\) & 5.457(72) & 5.807(70) \\ \(M_{bol}(mag.)\) & 5.384(72) & 5.654(70) \\ \(L(L_{\odot})\) & 0.553(35) & 0.431(27) \\ \(R(R_{\odot})\) & 0.735(21) & 0.741(21) \\ \(M(M_{\odot})\) & 0.400(44) & 0.399(49) \\ \(log(g)(cgs)\) & 4.307(20) & 4.300(26) \\ \(a(R_{\odot})\) & 1.902(71) & \\ \hline \hline \end{tabular} \end{table} Table 3: The absolute parameters of the V0610 Vir binary system. Figure 3: The observed and synthetic light curves of the system in the \(V\) filter. ## 6 Summary and Conclusion We observed the V0610 Vir binary system at an observatory in the Czech Republic. We extracted our observed minima in addition to collecting from the literature. Then, we determined the epoch and O-C values using the reference ephemeris. The O-C diagram shows that just a liner fit can be considered, and that is descending. 
Light curve analysis was performed using the latest available version of PHOEBE Python code together with the MCMC approach. Moreover, the Gaia parallax method was used to estimate the absolute parameters of the V0610 Vir system. According to the light curve analysis, the companion stars in this system have a temperature difference of 371 K. In contact systems, the maximum temperature difference between two stars is around 5%, which is consistent with our light curve analysis in this regard (Poro et al., 2021). Based on the temperatures of the stars, G3 and G8 are the spectral types of the hotter and cooler stars in this system, respectively (Eker et al., 2018). The evolution of V0610 Vir is depicted by the positions of each component on the logarithmic scaled Mass-Luminosity (\(M-L\)) and Mass-Radius (\(M-R\)) diagrams (Figure 6a and b). These diagrams show both the Terminal-Age Main Sequence (TAMS) and the Zero-Age Main Sequence (ZAMS). Due to their very close masses and radii, their position is next to each other and above TAMS. Figure 4: The corner plots of the system from the MCMC modeling. The orbital angular momentum of the system is \(51.173\pm 0.079\). This result is based on the following equation from the Eker et al. (2006) study: \[J_{0}=\frac{q}{(1+q)^{2}}\sqrt[3]{\frac{G^{2}}{2\pi}M^{5}P} \tag{11}\] where \(q\) is the mass ratio (\(M2/M1\)), \(M\) is the total mass of the system (\(M1+M2\)), \(P\) is the orbital period with the unit of the day, and \(G\) is the gravitational constant. The \(logJ_{0}-logM_{tot}\) diagram (Figure 6c) considers the V0610 Vir in a contact binary systems region. According to the short orbital period, light curve solution, and estimation of the absolute parameters of the V0610 Vir, it can be concluded that this system is an LMCB contact binary system. The orbital period variation trend in LMCB systems is usually on a decrease, which is consistent with the V0610 Vir system. It should be noted that the LMCB systems have formed a disc that has the potential to be a place for planet formation with an age considerably shorter than the age of host stars (Stkepieri & Gazeas, 2012). So, based on the position of this system's stars in Figure 6 diagrams and the low mass of the two stars, we suggest it for future observations and investigations. ## Acknowledgements This manuscript was prepared by the Binary Systems of South and North (BSN) project ([https://bsnp.info/](https://bsnp.info/)). We have made use of Gaia DR3 results. The Gaia mission is from the European Space Agency (ESA) ([http://cosmos.esa.int/gaia](http://cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC). Figure 5: Geometric structure of V0610 Vir with a cold spot on the hotter component. Figure 6: a) \(logM-logL\) diagram; b) \(logM-logR\) diagram; c) \(logJ_{0}-logM_{tot}\) diagram.
2309.03173
PDiscoNet: Semantically consistent part discovery for fine-grained recognition
Fine-grained classification often requires recognizing specific object parts, such as beak shape and wing patterns for birds. Encouraging a fine-grained classification model to first detect such parts and then using them to infer the class could help us gauge whether the model is indeed looking at the right details better than with interpretability methods that provide a single attribution map. We propose PDiscoNet to discover object parts by using only image-level class labels along with priors encouraging the parts to be: discriminative, compact, distinct from each other, equivariant to rigid transforms, and active in at least some of the images. In addition to using the appropriate losses to encode these priors, we propose to use part-dropout, where full part feature vectors are dropped at once to prevent a single part from dominating in the classification, and part feature vector modulation, which makes the information coming from each part distinct from the perspective of the classifier. Our results on CUB, CelebA, and PartImageNet show that the proposed method provides substantially better part discovery performance than previous methods while not requiring any additional hyper-parameter tuning and without penalizing the classification performance. The code is available at https://github.com/robertdvdk/part_detection.
Robert van der Klis, Stephan Alaniz, Massimiliano Mancini, Cassio F. Dantas, Dino Ienco, Zeynep Akata, Diego Marcos
2023-09-06T17:19:29Z
http://arxiv.org/abs/2309.03173v1
# PDiscoNet: Semantically consistent part discovery for fine-grained recognition ###### Abstract Fine-grained classification often requires recognizing specific object parts, such as beak shape and wing patterns for birds. Encouraging a fine-grained classification model to first detect such parts and then using them to infer the class could help us gauge whether the model is indeed looking at the right details better than with interpretability methods that provide a single attribution map. We propose PDiscoNet to discover object parts by using only image-level class labels along with priors encouraging the parts to be: discriminative, compact, distinct from each other, equivariant to rigid transforms, and active in at least some of the images. In addition to using the appropriate losses to encode these priors, we propose to use part-dropout, where full part feature vectors are dropped at once to prevent a single part from dominating in the classification, and part feature vector modulation, which makes the information coming from each part distinct from the perspective of the classifier. Our results on CUB, CelebA, and PartImageNet show that the proposed method provides substantially better part discovery performance than previous methods while not requiring any additional hyper-parameter tuning and without penalizing the classification performance. The code is available at [https://github.com/robertdvdk/part_detection](https://github.com/robertdvdk/part_detection) ## 1 Introduction Commonly used approaches to inspect a deep learning model's inner workings yield a saliency map that indicates which regions contributed the most to the output [5, 29]. If the model seems to focus on image regions that are known to be irrelevant, (e.g. the background or the wrong object), it becomes clear that the model has picked up on spurious correlations and cannot be trusted. This observation could then be used to improve future iterations of the model, for instance, by eliminating or compensating for the detected spurious correlations. However, this type of approach offers little information when the model provides an incorrect answer but the saliency map suggests that it is attending to the correct image regions. Other approaches aim at modifying the model architecture itself in order to ensure that the provided explanation actually reflects the decision process of the model [10, 6]. In particular, the saliency map explanation can be enriched by dividing it in multiple semantically interpretable parts, mimicking the traditional approaches of tackling fine-grained visual categorization (FGVC), in which image-level part annotations were leveraged [22] in order to help the model differentiate between similar classes by helping it focus on the relevant parts. In this manner, we have more information to judge the adequacy of the model's reasoning: even if the correct object is highlighted, we will be suspicious of the result if the part map that the model generally associated to the head of a bird seems to highlight the feet in one particular image. 
We thus posit that a model that classifies images based on just a few discriminative regions that are semantically consistent across images would be more interpretable than one which highlights the whole object, as one can immediately visualise the parts of the image that have been attended to and interpret their semantics across images. Figure 1: Our PDiscoNet extracts semantically consistent parts, without any part annotations, and reasons on these parts before combining the results into a final fine-grained classification output. By inspecting a few images and their corresponding detected parts, we can easily assign semantic meaning to each part (e.g., bird beak, vehicle wheel) and judge whether the correct parts are being detected in a new image. Even if the model correctly assigns high saliency to the object of interest, we will know to mistrust the result in case the discovered part semantics are not respected. This way of interpreting the models has an additional advantage over post-hoc methods in that we can be more certain that the model only uses information from the indicated regions. Such models have also been shown to be more robust; irrelevant parts are filtered out by only looking at the discovered discriminative regions, which can have a positive impact on generalization capability and thus robustness to occlusion [37] and adversarial attacks [30]. Discovering meaningful and discriminative parts using only image-level class labels requires the use of additional priors that encode our expectations on the characteristics of these parts, along with a model architecture that allows for these priors to be implemented. We design a model, based on a Convolutional Neural Network (CNN) backbone, which discovers discriminative parts of objects by being forced to use the discovered parts as a bottleneck for fine-grained classification. The fine-grained setting ensures a high level of similarity between classes, enabling the possibility of discovering semantic parts that are shared by multiple classes. In our part bottleneck, class logits are independently extracted from each of the discovered parts before being combined for the final classification; this, together with a dropout layer that affects whole parts at a time, ensures that all discovered parts are relevant to classification. ## 2 Related work **Fine-grained recognition** FGVC is a classification setting in which objects of multiple sub-classes of the same super-class are present, thus constituting a challenging task where subtle inter-class and large intra-class variation need to be simultaneously addressed [33]. Solving fine-grained tasks usually requires one to closely inspect the object for the tell-tale differences between closely related classes. Traditional methods exploit shared keypoints [21, 22], parts [16, 35], attributes [25, 24], or a pre-segmentation of the object of interest [4] in order to effectively discriminate between similar sub-classes, although deep learning approaches using large quantities of data have since also proved effective [23]. Our PDiscoNet belongs to a family of approaches that facilitate injecting some of the structure provided by part-based [14, 17] or attribute-based [13] reasoning via weakly-supervised learning without part or attribute annotations. **Interpretability via attribution maps** In saliency-based attribution methods, the goal is to highlight important regions of the image that are used by the network to form its decision. 
Examples include perturbation-based [27, 28], activation-based [20, 39], and gradient-based [29, 31] explanations methods. Despite their popularity, these model-agnostic methods often cannot guarantee that their explanations are faithful to the model [2]. In contrast, inherently interpretable models aim to directly expose the decision process of the network [7, 10]. In this work, we focus on incorporating interpretable components into the network architecture to reveal the learned structure transparently. Attention rollout [1, 9] is a popular way to understand whether attention modules can provide such explanations [19, 34]. However, deep transformer architectures model complex functions such that reliable interpretation is often limited to inspecting single self-attention layers [8, 3]. Based on this observation we employ a shallow attention structure into our network that allows to directly explain the attention maps with the correspondence to object parts. **Unsupervised part discovery** Some previous works discover parts by using image reconstruction [36, 11], where a landmark bottleneck is used to discover object parts. However, the model having no learning signal indicating which parts of the image may represent an object of interest limits the applicability of these approaches to cases where the objects of interest are either dominating the image and depicted in similar poses [36] or are endowed with foreground segmentation masks that can be used as an additional training signal [11]. On datasets where most parts are common to all images, pre-trained Vision Transformer [3] is typically able to find the parts of the most relevant object in a semantically consistent manner. However, it breaks down when the assumption that salient parts occur in almost all images does not hold, since parts tend to become polysemous in such a setting. Unlike these approaches, PDiscoNet is able to leverage the class labels, requiring no additional annotations in fine-grained classification datasets, to learn parts that are specific to similar classes, making them more semantically consistent and suitable for interpretation. **Weakly-supervised part discovery via FGVC** MA-CNN [38] and ProtoPNet [10] propose to directly enforce that the CNN activation maps develop a part-like behaviour, showing that an architecture with enhanced interpretability does not result in a loss of performance. However, their focus is more on downstream fine-grained classification than on evaluating the discovered parts. SCOPS [18], a model for part co-segmentation, puts more emphasis on the quality of the discovered parts by adding several losses on the part maps that encourage them to be compact and distinct, the latter via decorrelation of the learned part prototypes. It also encourages part maps to be equivariant under geometric transforms of the image. Taken together, these incentives ensure that the discovered parts are semantically consistent across images. This method assumes that all the parts should be active in every image of the dataset. Huang and Li[17] aim to solve this issue by encouraging the presence of each part across a batch of images to follow a beta distribution with manually defined parameters. Depending on the chosen parameters, this encourages a pre-defined proportion of images in a batch to display the part, while it is discouraged in the rest of the images in the batch. 
## 3 PDiscoNet Method We design an approach to discover \(K\)_discriminative parts_ that are relevant to a fine-grained classification task, based solely on the image-level class labels. Let \(\mathbf{X}\in\mathbb{R}^{3\times A\times B}\) denote an image in the dataset, and let \(y\in\{1,2,...,C\}\) be its corresponding label. Using a CNN base model \(f_{\theta}\) we obtain a feature tensor \(\mathbf{Z}=f_{\theta}(\mathbf{X})\) with \(\mathbf{Z}\in\mathbb{R}^{D\times H\times W}\). Following [18] and [17], from this tensor we compute \(K+1\) (\(K\) parts plus one background element) attention maps \(\mathbf{A}^{k}=[0,1]^{H\times W},\ k\in\{1,\dots,K+1\}\) by applying a negative squared Euclidean distance function between feature vectors \(\mathbf{z}_{ij}\) (with \(\mathbf{z}_{ij}\in\mathbb{R}^{D}\), \(i\in\{1,...,H\}\), \(j\in\{1,...,W\}\)) and \(K\) part prototypes \(\mathbf{p}^{k}\in\mathbb{R}^{D}\) in a \(1\times 1\) convolutional manner, followed by a softmax across the \(K+1\) channels: \[a_{ij}^{k}=\frac{\exp(-\|\mathbf{z}_{ij}-\mathbf{p}^{k}\|^{2})}{\sum_{k}\exp( -\|\mathbf{z}_{ij}-\mathbf{p}^{k}\|^{2})}, \tag{1}\] Each attention map is then used to compute its corresponding part vector \(\mathbf{v}^{k}\in\mathbb{R}^{D}\) by using the attention values to calculate a weighted average over the feature vectors in \(\mathbf{Z}\): \[\mathbf{v}^{k}=\frac{\sum_{i}\sum_{j}\mathbf{z}_{ij}a_{ij}^{k}}{HW} \tag{2}\] Each of these part vectors could then be used to obtain a vector of class scores \(\mathbf{s}^{k}\in\mathbb{R}^{C}\) calculated as \[\mathbf{s}^{k}=\mathbf{W}_{class}\mathbf{v}^{k}\] by applying the same linear classifier \(\mathbf{W}_{class}\in\mathbb{R}^{C\times D}\) to all part feature vectors, but we use the modification in Eq. (3). The scores are then averaged into a single score vector \(\mathbf{s}=\frac{1}{K}\sum_{k}\mathbf{s}^{k}\) on which a softmax is applied to obtain the final classification probabilities \(\mathbf{\hat{y}}\). **Part vector modulation** In the above formulation, all parts share the same classifier weights \(W_{\text{class}}\). This poses the problem that, from the perspective of the classifier, all parts are equivalent, meaning that the classifier could be encouraging all parts of the same object to result in the similar feature representation. The classifier can also profit from part misdetection, since a wrongly detected part would still provide a useful feature vector. Although it would, in principle, be possible to learn part-specific classifiers, this would not scale well to fine-grained classification scenarios where the classification head already contains the majority of learnable weights. As an alternative, we propose to keep a modulation vector \(\mathbf{m}_{k}\in\mathbb{R}^{D}\) per landmark that multiplies element-wise each part vector before classification: \[\mathbf{s}^{k}=W_{\text{class}}\cdot(\mathbf{m}_{k}\odot\mathbf{v}_{k}). \tag{3}\] **Part dropout** We would like the learned parts to be as discriminative as possible. In order to prevent the most discriminative parts (such as the head in birds) to discourage other parts from becoming discriminative by rendering them unnecessary, we propose to randomly drop out a proportion of all parts during training. This encourages the model to find a variety of discriminative parts. 
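The part-assignment and part-vector computations in Eqs. (1)-(3), together with part vector modulation and part dropout, can be summarized in a short PyTorch-style sketch. This is a reading aid, not the authors' code (which is linked in the abstract); the tensor shapes and the exact dropout mechanics are assumptions consistent with the description above.

```python
import torch
import torch.nn.functional as F

def pdisco_head(Z, prototypes, modulation, W_class, part_drop_p=0.3, training=True):
    """Z: (B, D, H, W) backbone features; prototypes: (K+1, D);
    modulation: (K, D); W_class: (C, D). Returns class scores and attention maps."""
    B, D, H, W = Z.shape
    z = Z.permute(0, 2, 3, 1).reshape(B, H * W, D)                     # (B, HW, D)
    # Eq. (1): softmax over negative squared distances to the K+1 prototypes
    d2 = torch.cdist(z, prototypes.unsqueeze(0).expand(B, -1, -1)) ** 2  # (B, HW, K+1)
    A = F.softmax(-d2, dim=-1)                                         # attention maps
    # Eq. (2): attention-weighted average feature per part (background map dropped)
    V = torch.einsum('bnk,bnd->bkd', A[..., :-1], z) / (H * W)         # (B, K, D)
    # Part dropout: zero out whole part vectors at once during training
    if training:
        keep = (torch.rand(B, V.shape[1], 1, device=V.device) > part_drop_p).float()
        V = V * keep
    # Eq. (3): per-part modulation, shared linear classifier, average over parts
    scores = (V * modulation) @ W_class.t()                            # (B, K, C)
    return scores.mean(dim=1), A.reshape(B, H, W, -1)
```

The sketch makes explicit why modulation is cheap: a single (K, D) matrix of learnable weights differentiates the parts from the classifier's perspective, instead of K separate (C, D) classification heads.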
**Loss functions** The main learning signal for our model comes from fine-grained classification, for which we use cross-entropy on the output classification probabilities \(\mathcal{L}_{\text{class}}(\mathbf{y},\mathbf{\hat{y}})\). Although this signal itself would suffice for the model to perform well in the classification task, it does not guarantee that the learned attention maps will be interpretable as parts. Figure 2: Diagram of the proposed method. The part discovery process is driven by the fine-grained classification loss and the losses applied to the part attention maps (red boxes). There are several desirable properties we wish to enforce in the learned parts. First, parts must be discriminative. This is taken care of by the classification as described previously. However, we also wish parts to be: **Compact** (\(\mathcal{L}_{\text{conc}}\)): We would like each detected part to consist of a compact and contiguous image region. **Distinct** (\(\mathcal{L}_{\text{orth}}\)): We want to avoid overlap between parts. This is encouraged by decorrelating part feature vectors. **Consistent** (\(\mathcal{L}_{\text{equiv}}\)): The same parts should be detected under translation, rotation or scaling of the image. This can be enforced via a loss that encourages the equivariance of the attention maps to random rigid transforms. **Present in the dataset** (\(\mathcal{L}_{\text{pres}}\)): All parts should be present in some of the images of the dataset. For this, we penalize the absence of a part across a whole batch during training. To enforce these priors, we use one loss function for each. Our concentration loss over the attention maps \(\mathbf{A}^{k}\) is: \[\mathcal{L}_{\text{conc}}=\frac{\sum_{k=1}^{K}\sigma_{v}^{2}(\mathbf{A}^{k})+ \sigma_{h}^{2}(\mathbf{A}^{k})}{K}, \tag{4}\] where \(\sigma_{v}\) and \(\sigma_{h}\) represent the vertical and horizontal spatial variance respectively. We calculate an orthogonality loss over the part vectors by applying the cosine distance between all pairs: \[\mathcal{L}_{\text{orth}}=\sum_{k}\sum_{l\neq k}\frac{\mathbf{v}^{k}\cdot \mathbf{v}^{l}}{\|\mathbf{v}^{k}\|\cdot\|\mathbf{v}^{l}\|}. \tag{5}\] Our equivariance loss creates a transformed image by applying a random rigid transformation \(T\) to the input image. We then pass both the original and the transformed image through the model and invert the transformation on the attention maps from the transformed ones. If \(A^{k}(\mathbf{X})\) is a function that returns the \(k^{th}\) attention map for image \(\mathbf{X}\), the equivariance loss is computed using the cosine distance between the attention maps from the original image and the transformed image: \[\mathcal{L}_{\text{equiv}}=1-\frac{1}{K}\sum_{k}\frac{\left\|A^{k}(\mathbf{X} )\odot T^{-1}(A^{k}(T(\mathbf{X})))\right\|}{\|A^{k}(\mathbf{X})\|\cdot\|A^{k }(T(\mathbf{X}))\|}. \tag{6}\] Lastly, a presence loss encourages each part to be present at least once per batch. Given a batch \(\{\mathbf{X}_{1},\dots,\mathbf{X}_{B}\}\): \[\mathcal{L}_{\text{pres}}=1-\frac{1}{K}\sum_{k}\max_{b,i,j}\text{avgpool}(a_{ ij}^{k}(\mathbf{X}_{b})), \tag{7}\] where \(\text{avgpool}()\) is a 2D average pooling with a small kernel size and a stride of 1. This operator is applied to prevent encouraging single-pixel attention maps. A weighted combination of these losses is used as the final loss. ## 4 Experiments We compare our method, for different values of \(K\), against the results obtained by the most closely related methods in the recent literature [17]. 
We also compare our method to a few other methods, among which the most recent method on part discovery [3], which is not aimed at fine-grained classification but showcases high quality part discovery by using self-supervised pretraining with a visual transformer architecture. **Datasets** Our aim is to perform part discovery with the only assumption being that we have image-level class labels where parts are shared by some of the classes, which is typically the case in FGVC tasks. In order to investigate this, we have chosen three datasets with a varying proportions of shared parts across images: a face image dataset where the vast majority of images display all relevant parts (_i.e._ facial landmarks), a bird species recognition dataset, where the assumption of the presence of all parts is limited due to the effects of pose and occlusion, and a more challenging dataset in which several fine-grained class categories (_e.g._ birds and cars) are mixed together, resulting in specific parts only being shared by a small subset of the images in the dataset. To assess the quality of the discovered parts, we have selected datasets for which semantic part annotations are available. CUB [32] contains 11,788 images of 200 bird species that include manual part annotations of 15 body parts. The images are split approximately in half for training and half for evaluating. During the development phase of this work we used a 90%-10% split of the training set of CUB in order to find a good set of hyperparameters for our model and used the same across all experiments on all datasets. CelebA [26] is a dataset of face images of 10,177 celebrities. We follow earlier approaches [18, 17] and use the unaligned training set of 45,609 images to train our models and use the 283 MAFL test images to evaluate the part detection, and the 5,379 images of the MAFL training set were used for training the keypoint regressor. We use identity classification as the downstream task. PartImageNet [15] consist of 158 classes split among a diverse set of categories (e.g., 10 species of fish, 14 of birds, 15 of snakes, 23 types of car). We train all models on 14,876 images of the train set, which is limited to 109 classes, and test on 1,664 images. **Evaluation metrics** The part annotations in CUB and CelebA are in the form of points, meant to represent part centroids in CUB and facial landmarks in CelebA. We first evaluate the quality of the part discovery methods by performing part location regression based on the centroids of the discovered parts. However, as noted by [11], keypoint regression may not be a good indicator of overall part quality. We therefore employ also the Normalized Mutual infor mation (NMI) and the Adjusted Rand Index (ARI), metrics commonly used for evaluating clustering quality. In PartImageNet the part annotations are in the form of semantic segmentation masks, from which we extract the centroids to compute NMI and ARI. Note that NMI and ARI are computed on the annotation/prediction correspondences across the whole datasets, meaning that they capture part semantic consistency (i.e. a perfect score can only be obtained if the same discovered part matches exactly with the same annotated part). In CUB and PartImageNet we report, in addition, the classification score on the same test set used for part quality evaluation. In the case of CelebA, the classes on the test set do not overlap with those in the training set. 
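Since NMI and ARI are computed on the annotation/prediction correspondences across the whole dataset, the evaluation reduces to two cluster-comparison calls once each annotated keypoint has been assigned the index of the discovered part it falls into. A minimal sketch using scikit-learn is shown below; the label arrays are placeholders, and the exact keypoint-to-part assignment rule is an assumption rather than the paper's protocol.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

# One entry per annotated (visible) keypoint over the whole test set:
#   gt_part[i]   : index of the annotated ground-truth part
#   pred_part[i] : index of the discovered part whose attention map covers it
gt_part = np.array([0, 0, 1, 2, 2, 3])      # placeholder values
pred_part = np.array([5, 5, 1, 7, 7, 7])    # placeholder values

nmi = 100 * normalized_mutual_info_score(gt_part, pred_part)
ari = 100 * adjusted_rand_score(gt_part, pred_part)
print(f"NMI = {nmi:.2f}, ARI = {ari:.2f}")
```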
**Implementation details** We trained all our models with Adam, with a starting learning rate of \(10^{-4}\) for the ResNet-101 backbone, \(10^{-3}\) for the new layers, and \(10^{-2}\) for the modulation vectors. We apply 5 reductions by 0.5 every 5 epochs for CUB and PartImageNet and every 3 for CelebA. The loss weights were all set to 1 except for \(\mathcal{L}_{\text{conc}}\), where a weight of 1000 was used because of its much lower magnitude. This setting was decided based on the results on the CUB validation set and kept constant on all experiments afterwards. For [17], we used \(\alpha=1\) on CUB and CelebA and \(\alpha=0.002\) on PartImageNet. ### Quantitative results The results in Table 1 show that, on CUB with \(K=4\) parts, our method already performs comparably to [11], with 9.12% keypoint regression error vs. 9.20% in [11], even though [11] use spatially explicit foreground masks at train time. Our method obtains better results than all other methods in all settings and on all metrics, improving over the second best method [3] from 50.57 to 56.87 NMI and 26.14 to 38.05 ARI for \(K=16\), all while improving the classification accuracy over [17] and a ResNet101 trained in the same setting. Interestingly, increasing the number of parts not only results in a substantial improvement on the part quality metrics, but also in classification accuracy, from 86.17% with \(K=4\) to 87.49% with \(K=16\), unlike for [17], with which the classification accuracy is reduced as the number of parts increases. On CelebA, our method obtains the best clustering scores on all settings, improving for \(K=4\) over [17] from 56.69 to 75.97 NMI and from 34.74 to 69.53 ARI, thus doubling the result of the best competing method. However, [17] does result in lower keypoint regression errors. We also obtain better keypoint regression errors than [3], 11.11% vs. 11.36%, although this method completely fails when evaluated in terms of the clustering metrics. As can be seen in Section 4.4, this is related to the fact that this method is task agnostic and focuses on elements not related to facial landmarks, such as clothing and hair, which are not as useful for locating facial landmarks. With a single part being assigned to the face, Dino ViT [3] obtains much lower clustering scores than the other methods. In the case of PartImageNet, a more challenging dataset in terms of class diversity, Table 1 shows that both our method and Dino ViT [3] are competitive, with our method taking the lead in terms of NMI: 41.49 with PDiscoNet vs. 37.81 with Dino ViT, and Dino ViT in terms of ARI: 14.17 with PDiscoNet and 16.50 with Dino ViT. The method by Huang and Li [17] fails to capture the diversity in terms of part semantics, resulting in very low NMI and ARI, 10.19 and 1.05, and much lower classification scores, with a maximum of 74.22% with \(K=4\), while our model reaches 89.00% with \(K=25\), close to the 90.81% obtained by ResNet101 in the same training settings. \begin{table} \begin{tabular}{l||c c c|c||c c c||c c|c} & \multicolumn{4}{c||}{**CUB**} & \multicolumn{4}{c||}{**CelebA**} & \multicolumn{4}{c}{**PartImageNet**} \\ & \multicolumn{1}{c}{Kp.\(\downarrow\)} & NMI\(\uparrow\) & ARI\(\uparrow\) & Class.\(\uparrow\) & Kp. reg. \(\downarrow\) & NMI\(\uparrow\) & ARI\(\uparrow\) & NMI\(\uparrow\) & ARI\(\uparrow\) & Class. 
\(\uparrow\) \\ \hline \hline Choudhury [11]\({}^{*}\) & 9.20 & 43.50 & 19.60 & - & - & - & - & - & - \\ \hline \hline SCOPS [18]\({}^{**}\) & 12.60 & 24.40 & 7.10 & - & 15.00 & - & - & - & - & - \\ DFF [12]\({}^{**}\) & - & 25.90 & 12.40 & - & 31.30 & - & - & - & - \\ \hline Dino [3]\({}^{**}\) (K=4) & - & 31.18 & 11.21 & - & 11.36 & 1.38 & 0.01 & 19.17 & 7.59 & - \\ Dino [3]\({}^{**}\) (K=8) & - & 47.21 & 19.76 & - & 10.74 & 1.12 & 0.01 & 31.46 & 14.16 & - \\ Dino [3]\({}^{**}\) (K=16) & - & 50.57 & 26.14 & - & - & 3.29 & 0.06 & 37.81 & **16.50** & - \\ \hline \hline Huang [17] (K=4) & 11.51 & 29.74 & 14.04 & 87.30 & 8.75 & 56.69 & 34.74 & 5.88 & 1.53 & 74.22 \\ Huang [17] (K=8) & 11.60 & 35.72 & 15.90 & 86.05 & 7.96 & 54.80 & 34.74 & 7.56 & 1.25 & 73.56 \\ Huang [17] (K=16) & 12.60 & 43.92 & 21.10 & 85.93 & **7.62** & 62.22 & 41.01 & 10.19 & 1.05 & 73.20 \\ \hline PDiscoNet (K=4) & 9.12 & 37.82 & 15.26 & 86.17 & 11.11 & 75.97 & 69.53 & 27.13 & 8.76 & 88.58 \\ PDiscoNet (K=8) & 8.52 & 50.08 & 26.96 & 86.72 & 9.82 & 62.61 & 51.89 & 32.41 & 10.69 & **89.00** \\ PDiscoNet (K=16) & **7.67** & **56.87** & **38.05** & **87.49** & 9.46 & **77.43** & **70.48** & **41.49** & 14.17 & 86.06 \\ \end{tabular} \end{table} Table 1: Part discovery results on CUB, CelebA and PartImageNet. * Methods use foreground masks in training. **Methods** do not use class supervision. In the case of PartImageNet the number of parts are \(K=[8,25,50]\) instead of \(K=[4,8,16]\) used in CUB and CelebA. A ResNet101 baseline trained in the same setting as our model results in accuracies of 85.35% on CUB and 90.81% on PartImageNet. ### Ablation studies The ablation results in Table 2 confirm that all ingredients in the method are important to obtain competitive results on part discovery. On CUB, \(\mathcal{L}_{\text{orth}}\), \(\mathcal{L}_{\text{equiviv}}\) and part dropout seem to individually contribute the most to part quality in terms of the clustering metrics, while \(\mathcal{L}_{\text{conc}}\) seems to play an important role to improve the keypoint regression results. On the other hand, \(\mathcal{L}_{\text{orth}}\) and part feature vector modulation are the elements with the highest impact on classification performance, which on CUB is positively impacted by all terms. \(\mathcal{L}_{\text{pres}}\) seems to only have a very marginal impact on both part discovery and classification performance on CUB. However, on PartImageNet it is the most important of all terms and the only one that does not hurt the classification accuracy. This is likely to stem from the very different distribution of parts in each dataset. On CUB, on the one hand, all parts are shared by all objects in the dataset, since they are all birds, and a majority of them is visible in all images. On PartImagenet, on the other hand, the different categories of classes do not naturally share the exact same parts (e.g. snakes, vehicles and birds). Forcing all parts to be present in each batch would prevent one single part prototype from dominating and becoming an object detector rather than a part detector, as happens in PartImageNet with the method of [17], as seen in Fig. 4. ### Sensitivity studies We have performed a part-dropout rate sensitivity analysis, shown in Table 3. In general, increasing the dropout rate improves the part discovery performance, with the best results obtained with the highest tested rate of 0.9, with NMI going up to 54.42 from 49.22 when a part dropout rate of 0.3 is used. 
Such a high rate means that every part needs to capture enough information to be able to classify the image, since very often all parts except one are dropped-out. This poses a very strict constraint on the part discovery process that prevents the appearance of spurious part prototypes. However, this tends to negatively impact the classification performance, which drops from 87.31% to 83.34%, probably due to the fact that, by trying to learn parts that are able to perform classification on their own, there is a lack of incentives for the model to learn the complementarities between parts. A value between 0.3 and 0.7 provides a good compromise between the two tasks. We also investigate the behaviour of our method with respect to the presence of noise at test time. The results in Table 4 show that the classification accuracy of our method is higher than the most closely related method [17] when the input images are subjected to Gaussian noise. Apart from the absolute accuracy of our method being higher, the percentual decrease between each increase in noise is lower, suggesting that an improved ability in part localization also carries advantages in terms of robustness to noise. ### Qualitative results In Figs. 3 and 4 we showcase the effect of increasing the number of parts for the compared methods on CUB (\(K=4\) and \(K=16\)) and PartImageNet (\(K=8\) and \(K=25\)). For the method in [17], we show the part assignment maps for all parts and for those with an assigned attention value higher than 0.1, in order to highlight only the image regions that contribute substantially towards the classification output. We can see that [3] and PDiscoNet are able to correctly assign most parts to the foreground objects in the shown examples, even with the increased number of parts (bottom three rows), with [3] resulting in the best adherence to object boundaries. [17], on the other hand, assigns only a few parts to the foreground objects, even when more parts are available. In the third column ([17] (all)), we see how the rest of parts are assigned to the background in ways that do \begin{table} \begin{tabular}{l|c c c c c c} Noise SD & 0.03 & 0.07 & 0.15 & 0.30 & 0.75 & 1.50 \\ \hline \hline Huang [17] & 84.2 & 82.1 & 78.2 & 71.4 & 50.5 & 22.1 \\ PDiscoNet & 85.6 & 85.6 & 82.2 & 77.9 & 63.3 & 38.2 \\ \end{tabular} \end{table} Table 4: Class. acc. with Gaussian noise on CUB with \(K=16\). \begin{table} \begin{tabular}{l|c c c|c||c c|c} & \multicolumn{5}{c||}{**CUB**} & \multicolumn{5}{c}{**PartImageNet**} \\ & Kp. \(\downarrow\) & NMI \(\uparrow\) & ARI \(\uparrow\) & Class. \(\uparrow\) & NMI \(\uparrow\) & ARI \(\uparrow\) & Class. \(\uparrow\) \\ \hline \hline Full model & **7.67** & **56.87** & **38.05** & **87.49** & **27.13** & **8.76** & 88.58 \\ \hline No \(\mathcal{L}_{\text{orth}}\) & 10.29 & 36.12 & 19.41 & 86.17 & 16.25 & 4.59 & 89.12 \\ No \(\mathcal{L}_{\text{equiviv}}\) & 10.31 & 40.22 & 21.32 & 86.60 & 17.55 & 4.22 & 89.90 \\ No \(\mathcal{L}_{\text{pres}}\) & 7.72 & 55.18 & 35.69 & 87.21 & 12.22 & 3.84 & 88.52 \\ No \(\mathcal{L}_{\text{conc}}\) & 8.58 & 52.44 & 32.17 & 86.77 & 19.71 & 7.39 & **90.32** \\ No modulation & 8.05 & 53.45 & 35.90 & 86.36 & 19.83 & 6.02 & 89.42 \\ No part dropout & 8.48 & 46.37 & 25.36 & 86.93 & 19.97 & 4.65 & 89.72 \\ \end{tabular} \end{table} Table 2: Ablation studies on CUB with \(K=16\) and PartImageNet with \(K=8\). not follow any of the objects in the image. 
When showing only parts that are actually attended to by the classifier ([17]\(>0.1\)), we confirm that only two or three parts are used in CUB (Fig. 3) with both \(K=4\) and \(K=16\). In the case of PartImageNet (Fig. 4), [17] assigns one single part to the foreground object with \(K=8\) and none with \(K=16\), while again both [3] and PDiscoNet tend to assign a diverse set of parts to the foreground. Also in this case we observe that Dino ViT [3] results in better boundary adherence than PDiscoNet, since we did not explicitly add any element towards this objective. In Figs. 5 to 6 we show the assignment maps for the three methods, with an attention threshold of 0.1 for [17], across ten images for each dataset in order to explore the semantic consistency of the discovered parts across a diverse set of examples. As can be seen in Fig. 6, [17] tends to find symmetric part assignment maps, while our method finds independent parts for the areas around each eye. This figure also explains why [3] fails in the clustering metrics: the whole face tends to be assigned to a single part, making all the facial landmarks indistinguishable from each other. This phenomenon showcases that, although the self-supervised approach of [3] provides remarkable results in terms of semantics and boundary adherence, it may also miss the relevant partitions due to not making use of the fine-grained recognition signal. This drawback of the Dino ViT approach is also visible in the CUB results in Fig. 5, where in some examples some parts either mix with background elements (first column) or are completely missed (last column), while our method is consistent across all samples. The method by Huang and Li [17] also displays a problem with mixing in background parts in CUB and, even more markedly, in PartImageNet. Fig. 3 shows that a majority of the available parts tend to be used on background areas that ultimately receive a low attention weight, leading to only two or three parts being used even in the case of \(K=16\). In Fig. 4 we can see that [17] assigns one single part to foreground objects with \(K=8\) and fails to assign any parts to foreground objects for \(K=25\), which explains the low part discovery and classification scores in Table 1. Both our method and [3] are generally able to identify the object of interest in PartImagenet (Figs. 4 and 7), with [3] often providing better boundary adherence, while our method tends to provide better semantic consistency. For instance, notice how our method uses the same part (in cyan) for the head of mammals and birds, while another one (in purple) is used for the Figure 4: Qualitative results on PartImageNet for our method, [17] w/ and w/o part map thresholding, and [3]. Top rows: all methods with \(k=8\). Bottom rows: \(K=25\). Figure 3: Qualitative results on CUB for our method, [17] w/ and w/o part map thresholding, and [3]. Top rows: all methods with \(k=4\). Bottom rows: \(K=16\). head of reptiles and amphibians. This better semantic consistency is further reinforced by Fig. 8, where we show the histograms of part presence per PartImageNet supercategory for Dino ViT [3] and our method with \(K=8\). We omit the results on [17] because only one part tended to be active in high attention areas (see Fig. 7). This figure confirms the notion that PDiscoNet discovers parts with strong semantic consiency. 
We can see that similar supercategories (such as _Aeroplane_ and _Boat_, or _Biped_ and _Quadruped_) tend to share the same parts, and parts tend to specialize on only a subset of supercategories. For instance, we note that the cyan part (number 5) is indeed mostly present in _Biped_, _Quadruped_ and _Bird_, while the orange part (number 7), is only present in _Fish_, _Reptile_ and _Snake_. On the other hand, all parts are almost equally shared by all supercategories in the case of Dino ViT, indicating that parts are less semantically consistent across the dataset and acquire multiple semantic interpretations. The quantitative and qualitative results indicate that recent methods for part discovery seem to be tailored to datasets with specific characteristics: Dino ViT [3] thrives with a diverse set of natural images belonging to different supercategories such as PartImageNet but fails to provide the sought after results on the more narrow CelebA, where the parts of interest are restricted to facial landmarks, and the opposite is true for [17]. Our proposed method, on the other hand, is able to extract semantically consistent parts on all tested datasets without the need for any dataset-specific adjustment, showing its potential for out-of-the-box application to datasets with different characteristics. Figure 5: Discovered part segmentation with \(K=4\) on CUB for [3] (top), [17] (middle) and our method (bottom). Figure 6: Discovered part segmentation with \(K=4\) on CelebA for [3] (top), [17] (middle) and our method (bottom). The ground truth facial landmarks appear as black dots. ## 5 Conclusion We propose a method for fine-grained visual categorization that uses part representations as an information bottleneck and thus learns to detect semantically consistent parts that are useful for that task. Our method requires no additional annotation effort and leverages the fine-grained class labels as the sole supervision signal. The quantitative and qualitative comparisons against recent part discovery methods shows that our approach improves upon the state-of-the-art in part localization and semantic consistency, with parts specializing in certain categories and consistently overlapping with the same semantic elements of the objects of interest, without sacrificing accuracy on the down-stream classification task. There are several directions in which more work is needed to improve this approach. The first relates to the fact that, by applying a mask to a high-level feature map in a deep model, we have no guarantee that only the underlying regions of the image influence the corresponding part feature representation. Information from the background or neighboring parts can leak into the feature representation of a part due to the large receptive field of most modern architectures, limiting the interpretability of the approach. In addition to this, our results show that PDiscoNet displays a lower level of contour adherence than a method trained with a very large dataset with self-supervision. This, in turn, could affect the interpretability of the part maps and allow background information to substantially affect the part feature representation. We hope that this approach will contribute towards making models for fine-grained visual categorization more interpretable by facilitating inspection of some aspects of the model's internal reasoning, thus allowing a much richer interaction between the model and its end users. 
## Acknowledgements This work was supported by the French National Research Agency under the Investments for the Future Program, referred as ANR-16-CONV-0004 (DigitAg), by the ERC (853489 - DEXIM), by the DFG (2064/1 - Project number 390727645), by the Tubingen AI Center (BMBF, FKZ: 01IS18039A), and by the MUR PNRR project FAIR - Future AI Research (PE00000013) funded by the NextGenerationEU. Figure 8: Histograms of part presence per PartImageNet supercategory for Dino ViT [3] (top) and our method (bottom), with \(K=8\). Same color code as the figure above. Figure 7: Discovered part segmentation with \(K=8\) on PartImageNet for [3] (top), [17] (middle) and our method (bottom).
2309.05502
Incentivising Demand Side Response through Discount Scheduling using Hybrid Quantum Optimization
Demand Side Response (DSR) is a strategy that enables consumers to actively participate in managing electricity demand. It aims to alleviate strain on the grid during high demand and promote a more balanced and efficient use of (renewable) electricity resources. We implement DSR through discount scheduling, which involves offering discrete price incentives to consumers to adjust their electricity consumption patterns to times when their local energy mix consists of more renewable energy. Since we tailor the discounts to individual customers' consumption, the Discount Scheduling Problem (DSP) becomes a large combinatorial optimization task. Consequently, we adopt a hybrid quantum computing approach, using D-Wave's Leap Hybrid Cloud. We benchmark Leap against Gurobi, a classical Mixed Integer optimizer, in terms of solution quality at fixed runtime and fairness in terms of discount allocation. Furthermore, we propose a large-scale decomposition algorithm/heuristic for the DSP, applied with either quantum or classical computers running the subroutines, which significantly reduces the problem size while maintaining solution quality. Using synthetic data generated from real-world data, we observe that the classical decomposition method obtains the best overall solution quality for problem sizes up to 3200 consumers, while the hybrid quantum approach provides more evenly distributed discounts across consumers.
David Bucher, Jonas Nüßlein, Corey O'Meara, Ivan Angelov, Benedikt Wimmer, Kumar Ghosh, Giorgio Cortiana, Claudia Linnhoff-Popien
2023-09-11T14:44:12Z
http://arxiv.org/abs/2309.05502v2
# Dynamic Price Incentivization for Carbon Emission Reduction using Quantum Optimization ###### Abstract Demand Side Response (DSR) is a strategy that enables consumers to actively participate in managing electricity demand. It aims to alleviate strain on the grid during high demand and promote a more balanced and efficient use of electricity resources. We implement DSR through discount scheduling, which involves offering discrete price incentives to consumers to adjust their electricity consumption patterns. Since we tailor the discounts to individual customers' consumption, the Discount Scheduling Problem (DSP) becomes a large combinatorial optimization task. Consequently, we adopt a hybrid quantum computing approach, using D-Wave's Leap Hybrid Cloud. We observe an indication that Leap performs better compared to Gurobi, a classical general-purpose optimizer, in our test setup. Furthermore, we propose a specialized decomposition algorithm for the DSP that significantly reduces the problem size, while maintaining an exceptional solution quality. We use a mix of synthetic data, generated based on real-world data, and real data to benchmark the performance of the different approaches. ## I Introduction The increasing demand on energy resources and the growing adaption of renewable electricity sources have prompted a search for innovative solutions to optimize energy consumption in order to reduce grid congestion and carbon emissions. Demand Side Response (DSR) [1] has emerged as a promising strategy that focuses on actively managing and adjusting energy consumption patterns in response to grid conditions. Multiple investigations on DSR exist in literature, describing its impact on smart grid technology [2], load scheduling [3], energy economics [4], optimal control and pricing schemes [5]. Price adjustment is a simple tool to steer consumer behavior. With the emergence of smart devices and the electrification of heating and transportation, the response to price incentives can progressively be automated. Typically, DSR is achieved by handing out a dynamic price to all customers simultaneously. Yet, the different usage patterns of the consumers may favor alternate dynamic price policies. Therefore, we set out to find individual price patterns on a per-customer basis such that an optimal load shift can be achieved. We call the distribution of discounts or penalties to specific customers the discount scheduling problem (DSP). The number of customers that have to be considered in such a problem, i.e., an urban power grid, can become prohibitively large to be solved by classical resources. In recent years, Quantum Computing (QC) has garnered significant attention as a potential game-changer in various domains, including optimization. Leveraging the principles of quantum mechanics, quantum optimization algorithms are hypothesized to solve complex optimization problems more efficiently than their classical counterparts. Besides gate-based universal quantum computing, Adiabatic Quantum Computing (AQC) has emerged, which can be shown in general to be equivalent to gate-based approaches [6]. A subset of AQC, Quantum Annealing (QA) [7, 8] has been widely adopted for solving optimization problems [9, 10]. With D-Wave being the industry leader in providing quantum annealing computing hardware, we use D-Wave's quantum annealer in this work to solve the DSP. 
The limited size of current quantum computing hardware forces us to utilize hybrid quantum computing approaches, like Leap, which is a Cloud service offered by D-Wave and is based on internal problem size reduction [11]. In this work, we additionally develop a customized hybrid approach that performs a problem-specific decomposition. The paper is structured as follows: After giving a concise Literature Review in Sec. II, we describe the problem formulation and mathematical modeling of the DSP in Sec. III. Since the algorithm should be applicable to large amounts of customers in realistic scenarios, Sec. IV motivates and develops a problem-specific decomposition approach for problem size reduction. This decomposition routine proves to be very effective, as the benchmarking of classical and quantum-enhanced solvers, based on various criteria, in Sec. V shows. Furthermore, we observe that Gurobi, as a mixed-integer classical solver, reaches a limit for larger problem sizes, while D-Wave's Leap Hybrid Quantum solver still outputs workable results. Nevertheless, the decomposition routine aided by a classical solver provides the overall best results. ## II Literature Review ### _Related Work_ Recently, quantum computing applications in the power and energy sector [12, 13, 14, 15] are gaining attention for the development of smart grid technology. Several important problems are addressed using quantum computing, for example power flow calculations [16, 17] or energy grid classification [18]. The traditional planning and scheduling tasks in power systems, such as the minimization of generation cost or the maximization of revenue from electricity generation, are generally formulated as combinatorial optimization problems, which are often NP-hard. Using quantum-inspired optimization algorithms is expected to outperform their classical counterparts [13, 19]. A wide range of optimization problems can be converted into quadratic unconstrained binary optimization (QUBO) problems [20], which can be efficiently solved with the Quantum Approximate Optimization Algorithm (QAOA) [21] using gate-based universal quantum computers or using D-Wave quantum annealers. In the literature, there exist multiple quantum computing approaches towards unit commitment [22, 23, 24, 25] and other mixed integer problems [26], using quantum-inspired ADMM [27] or Benders' decomposition methods [28]. Quantum annealing approaches are also used for community detection in electrical grids [29], peer-to-peer energy trading [30] or coalition structure optimization [31, 32]. As one of this work's main contributions is developing a problem-specific decomposition method to solve large instances of the DSP on currently available hardware, we give a brief overview of combinatorial problem decomposition algorithms in the context of quantum optimization here. Divide-and-conquer approaches have been used for various problem instances, such as the MaxClique problem [33, 34, 35, 36], Minimum Vertex Cover [36, 37], Community Detection [38] and MaxCut [39, 38]. They all combine the splitting of the problem into sub-problems using problem-related methods. In special cases, such as Ref. [39], quantum optimization is further utilized in recombining the solution because of the special \(\mathbb{Z}_{2}\) symmetry of MaxCut solutions. Quantum Local Search (QLS) [40] takes local sub-problems of a graph-based problem and iteratively improves a global solution vector. 
Although applicable to any graph-based problem, QLS has been specifically tested for the Maximum Independent Set problem. The recent emergence of distributed quantum computing has led to the development of decomposition algorithms that still allow for a limited amount of quantum information exchange between the optimization of the sub-problems [41, 42], which was successfully demonstrated for the Maximum Independent Set problem. Apart from problem-specific methods, general QUBO decomposition methods have been devised, like QBSolv [43]. Here, subsets of binary variables of the full QUBO are selected as sub-problems, which are solved on a quantum annealer, while in parallel, a classical heuristic optimizes the original problem. During the process, solutions to the sub-problems will incrementally improve the current solution state of the heuristic. ### _Introduction to Quantum Annealing_ Quantum annealing (QA) is a heuristic for solving combinatorial optimization problems, first proposed in 1998 by Kadowaki and Nishimori [9]. QA utilizes the adiabatic theorem to find the unknown ground state of an Ising Hamiltonian \(\mathcal{H}_{\text{Ising}}\), whose minimal energy state corresponds to the solution of a target problem. With \(\mathcal{H}_{\text{Init}}\) being the initial Hamiltonian, the annealing process can be described by the following dynamic Hamiltonian: \[\mathcal{H}(s)=A(s)\mathcal{H}_{\text{Init}}+B(s)\mathcal{H}_{ \text{Ising}} \tag{1}\] \[\mathcal{H}_{\text{Init}}=-\sum_{i}\sigma_{x}^{i}\] (2) \[\mathcal{H}_{\text{Ising}}=-\sum_{i}h_{i}\sigma_{z}^{i}-\sum_{i>j }J_{ij}\sigma_{z}^{i}\sigma_{z}^{j}, \tag{3}\] where \(\sigma_{x,z}^{i}\) are Pauli matrices operating on qubit \(i\), and \(h_{i}\) and \(J_{ij}\) are the qubit biases and coupling strengths, which encode the specific problem. \(A(s)\) and \(B(s)\) are known as the annealing schedule, with \(s\in[0,1]\). At \(s=0\), \(A(s)\gg B(s)\), while \(A(s)\ll B(s)\) for \(s=1\). As we increase \(s\) from 0 to 1, the system undergoes a gradual change from \(\mathcal{H}_{\text{Init}}\) to \(\mathcal{H}_{\text{Ising}}\). The adiabatic theorem of quantum mechanics states that if this evolution happens slowly enough and the system is initialized in the trivial ground state of \(\mathcal{H}_{\text{Init}}\), then the state will remain in the ground state of the momentary Hamiltonian [44]. Eventually, at \(s=1\), the state will be in the ground state of \(\mathcal{H}_{\text{Ising}}\). Finding the ground state of the Ising model is isomorphic to QUBO [20]; therefore, measuring the final state will reveal the solution to an NP-hard optimization task. In practice, quantum annealers sweep \(s\) faster than the adiabatic theorem requires, due to practical considerations. Nevertheless, experimental evidence suggests that, depending on the spin glass model, faster evolution times still output the solution with high probability [45]. Thus, measuring the output repeatedly will eventually find the correct solution. ## III Problem Formulation Since the distribution system operator (DSO) cannot yet directly control a customer's consumption at a given time, we have to take the indirect route via price incentives. We assume a customer is strictly economically motivated, i.e., they try to minimize their electricity costs. Thus, we aim to assign each consumer a custom price that dictates how much the respective consumer varies its load. 
Of course, the convenience of having access to electricity at all times is more important than saving on the cost, such that, in reality, a customer cannot vary his consumption arbitrarily at a certain time. However, with the emergence of electric vehicles (EV) with home charging and heat pumps, automatically varying the load comes in the realm of the possible. The given discounts (and penalties) serve as a protocol that tells a smart home appliance on the customer side when to use electricity and when not, e.g., start or stop charging the EV. The goal on the DSO side is to reduce its CO\({}_{2}\) emissions, which is linked to production cost through carbon pricing. We, therefore, aim to provide customers with individual dynamic tariffs in order to reduce the overall CO\({}_{2}\) emissions. However, we strive to avoid change in the total consumed energy by a single customer, since we only want to shift the time of consumption. Beneficially, reducing the CO\({}_{2}\) emissions leads to more consumption at times with lots of local renewable energy production. To formulate the problem, we discretize the optimization horizon in \(N_{t}\) steps and assign each customer \(c=1,\ldots,N_{c}\) a discount (or penalty) \(z_{c,t}\) at each timestep. Furthermore, we have the forecasted consumption data \(d_{c,t}\geq 0\,[\text{kWh}]\) for each customer and predicted grid CO\({}_{2}\) intensity \(I_{t}\,[\text{g}/\text{kWh}]\) of the power generation in the considered region. We start by introducing a suitable variable encoding for the discounts and define the main optimization objective afterward. Finally, we present grid and customer constraints, as well as lower-priority optimization objectives. ### _Discount Encoding_ The given discounts \(z_{c,t}\) are defined as discrete discount categories for two reasons. First, discrete discounts are easily represented through integer encoding as binary variables [20]. This makes it easy to translate the following formulation into QUBO form, which is needed to employ quantum optimization techniques. Secondly, discrete steps allow the user to change his behavior more distinctly. For instance, if a thousand customers would receive a tiny discount, we cannot expect all the thousand customers to increase their consumption by just a tiny bit. Instead, suppose we offer a moderate discount to only a few customers. In that case, we can expect those customers to adapt their load such that the overall demanded effect of increasing consumption by a small amount is achieved. Thus, limiting the discounts to a small set of categories is useful. In order to prepare the optimization formulation for quantum computation, we need a full binary encoding of the variables. We have two options for that: First, by assigning each discount category \(Z\) (\(z_{c,t}\in Z\)) its own binary variable. In a valid encoding, exactly one of the introduced binary variables has to be set to 1, while the others must be 0. This so-called _one-hot encoding_ has the advantage that we do not have to impose structure on \(Z\), and the effects of a given \(z_{c,t}\) can also be non-linear. Nevertheless, one-hot encoding requires an additional penalty term that enforces valid bitstrings as solutions. Based on our initial experiments, we choose _integer encoding_ as the preferred encoding since it provided better results as no invalid solutions are possible. Here, we discretize a range \([z_{\text{min}},z_{\text{max}}]\) into \(N_{k}\) linearly spaced categories. 
We choose a symmetric interval \(z_{\text{max}}=-z_{\text{min}}=z_{m}>0\). Therefore, \(Z=\{-z_{m}+i\Delta z\,|\,i=0,\ldots,N_{k}-1\}\), with \(\Delta z=\frac{2z_{m}}{N_{k}-1}\). A \(z_{k}<0\) refers to a discount, while \(z_{k}>0\) is a penalty. This range can subsequently be expressed using \(Q=\lfloor\log_{2}N_{k}+1\rfloor\) binary variables \(x_{c,t,k}\) for each discount \(z_{c,t}\) \[z_{c,t}=\Delta z\sum_{k=0}^{Q-1}w_{k}x_{c,t,k}-z_{m}, \tag{4}\] \[\text{with }w_{k}=\begin{cases}2^{k}&\text{if }k<Q-1\\ N_{k}-2^{Q-1}+1&\text{else}\end{cases}. \tag{5}\] Expressing the discounts this way, every bit combination \(x_{c,t,k}\) results in a valid encoding. Thus, there is no need for an additional penalty term in the objective. Furthermore, this encoding is more space efficient, allowing for an exponential number of categories to be represented with a linearly growing number of qubits. Although typically an advantage, the impact on the discussed problem is relatively small as the number of discount categories is chosen small. Given that the customer initially receives a flat tariff \(t_{0}\) per kWh, a discount \(z_{c,t}\) adjusts the momentary price and incentivizes the customer to alter the consumption according to the susceptibility \(\chi_{c}\geq 0\), \[\tilde{d}_{c,t}=(1-\chi_{c}z_{c,t})\,d_{c,t}. \tag{7}\] ### _Optimization Objective_ The central objective is to minimize the CO\({}_{2}\) load resulting from the altered consumption, \[R(z)=\sum_{c,t}\Delta I_{t}\,\tilde{d}_{c,t}=\sum_{c,t}\Delta I_{t}\left(1-\chi_{c}z_{c,t}\right)d_{c,t}, \tag{8}\] where \(z(x)\in Z^{N_{c}\times N_{t}}\) is the discount matrix that can be encoded through binary variables \(x\in\mathbb{B}^{N_{c}\times N_{t}\times Q}\). As we later include soft constraints into the objective formulation, we rescale and shift the objective (without changing the solution to the problem) by employing the constants \(R_{\text{min}}\) and \(R_{\text{max}}\) \[\text{obj}(z)=\frac{R(z)-R_{\text{min}}}{R_{\text{max}}-R_{\text{min}}}. \tag{9}\] Here, \(R_{\text{min}}\) serves as a lower bound to the CO\({}_{2}\) amount that can be reached through our optimization, and \(R_{\text{max}}=R(0)=\sum_{t}\Delta I_{t}D_{t}\) is the result of \(R(z)\) prior to optimization, i.e., \(z_{c,t}=0\). Therefore, \(\text{obj}(z)\in[0,1]\). Note that valid solutions with \(R(z)>R_{\text{max}}\) do exist, but since the optimization should end up in the regime \(R(z)<R_{\text{max}}\), i.e., below the naive baseline, we are satisfied with this scaling. A naive way to compute the lower bound \(R_{\text{min}}\) is to give maximal discounts (penalties) when the CO\({}_{2}\) intensity is low (high): \[R_{\text{min}}=\sum_{c,t}\Delta I_{t}[1-\chi_{c}\operatorname{sign}(\Delta I _{t})z_{m}]d_{c,t}. \tag{10}\] Of course, this lower-bound solution does not satisfy the constraints introduced in Sec. III-C. Therefore, it is substantially smaller than the best feasible solution. Nevertheless, in Sec. IV-A1, we present a method of finding a more accurate lower bound. ### _Constraints_ The following constraints must be satisfied for \(z\) to be an applicable discount matrix that can be forwarded to the customers. We distinguish between two kinds of constraints: hard constraints that must never be violated and soft constraints, where small violations are tolerated but are aimed to be minimized. This typically happens by adding the violation as a lower-priority term to the objective function. An additional scaling constant then dials in the strength of the violation penalty. #### Iii-C1 Consumption deviation constraint We do not want customers to change their total consumed energy over the optimization horizon, i.e. \[\delta_{c}=\frac{1}{D_{c}}\sum_{t}z_{c,t}d_{c,t}=0\quad\forall c\in\{1,\dots, N_{c}\}, \tag{11}\] where \(D_{c}=\sum_{t}d_{c,t}\) is the total energy usage of a certain customer. 
Yet, this strict equality can generally not be exactly achieved through discrete discounts \(z_{c,t}\) on continuous data \(d_{c,t}\), unless \(z_{c,t}=0\). Therefore, we impose a soft constraint \[\text{obj}(z)+\frac{\lambda}{N_{c}z_{m}^{2}}\sum_{c}\delta_{c}^{2} \tag{12}\] with a penalty factor \(\lambda\) that steers the importance of the constraint. Furthermore, the squared error is normalized to make its action problem size independent. #### Iii-C2 Global consumption deviation constraint Even though the per-customer consumption deviation is soft-constrained, the consumption deviation of all customers together may deviate. Globally, i.e., the combined view of all customers, we do not want any change in overall consumption \[\frac{1}{D}\sum_{c,t}z_{c,t}d_{c,t}\chi_{c}=\frac{1}{D}\sum_{c}D_{c}\delta_{c} =0. \tag{13}\] Normally, this equality can be held up to numerical precision when many customers are considered. #### Iii-C3 Power restriction constraint The momentary change in consumption of all customers should be bounded: \[p_{t}^{\text{low}}\leq\sum_{c}\chi_{c}z_{c,t}d_{c,t}\leq p_{t}^{\text{high}} \quad\forall t\in\{1,\dots,N_{t}\}. \tag{14}\] This should be hard-constrained since we want to allow a range of consumption deviation through scheduling. Physical grid limitations are the reason for the limited flexibility here. The values for \(p_{t}^{\text{high/low}}\) can be determined using power flow computations. Of course, the presented global restriction is a major simplification, but it suffices for an initial investigation of the problem. #### Iii-C4 Discount change penalty We want to give the customer long periods of the same discount instead of rapidly changing discounts in order to give them response time to the discount. Therefore, we penalize the changes of discounts between two consecutive timesteps: \[\text{obj}(z)+\frac{\tau}{4N_{c}(N_{t}-1)z_{m}^{2}}\sum_{c,t}(z_{c,t}-z_{c,t+1 })^{2}\,. \tag{15}\] The normalized penalty can be dialed in using the factor \(\tau\), which will be small. #### Iii-C5 Discount regularization We do not want to give out discounts that only have little effect on the objective. For example, suppose somebody does not consume any electricity at some timestep. In that case, a discount won't have an impact but may be given anyways since it does not make a difference. An \(L2\)-regularization ensures that discounts are only given if they benefit the overall goal to a certain extent \[\text{obj}(z)+\frac{\rho}{N_{c}N_{t}z_{m}^{2}}\sum_{c,t}z_{c,t}^{2}\,. \tag{16}\] Again, the penalty is normalized and a penalty factor \(\rho\) is used, which has low priority. ### _On customer savings_ Discount scheduling incentivizes customers to reduce their CO2 through discounts and penalties. Yet, the procedure should never happen to the customer's disadvantage, even if they do not change their behavior. The customer's cost change over the optimization horizon can be computed via the sum of momentary price differences through Eq. (6) and Eq. (7): \[\Delta p_{c} =\sum_{t}p_{c,t}-t_{0}\sum_{t}\tilde{d}_{c,t} \tag{17}\] \[=t_{0}\sum_{t}z_{c,t}(\tilde{d}_{c,t}-d_{c,t}). \tag{18}\] Note that we have used the sum over the changed consumption as the baseline for our comparison, since in any case \(\sum_{t}d_{c,t}\approx\sum_{t}\tilde{d}_{c,t}\) and we only want to compare the cost for the same amount of purchased electricity. Plugging in the altered consumption from Eq. (7), we get as a price change \[\Delta p_{c}=-t_{0}\chi_{c}\sum_{t}z_{c,t}^{2}d_{c,t}. 
\tag{19}\] As \(z_{c,t}^{2}\geq 0\) and it is assumed that \(\chi_{c}\geq 0\), the customer's price change is guaranteed to be \(\Delta p_{c}\leq 0\), so a customer will save money by complying to the incentives. The savings are exactly zero if the customer does not change his behavior at all. As the absolute price change quantity is dependent on the flat tariff and the total consumption of the customer, we will look at the relative savings \(s_{c}=-\Delta p_{c}/\sum_{t}t_{0}\tilde{d}_{c,t}\geq 0\) in the evaluation section. ## IV Problem Decomposition The number of integer variables needed to construct the discount matrix is \(N_{c}\times N_{t}\). Given a one-day optimization horizon with 15-minute timesteps, each customer requires 96 integer decision variables in the problem. However, as the number of customers will grow quite large1, the number of integers grows akin. Even worse, the number of qubits in the quantum formulation is scarce, and every integer must be encoded with \(\lfloor\log N_{k}+1\rfloor\) qubits. Thus, the move to a hybrid quantum-classical optimization scheme seems inevitable. Footnote 1: Typically, we want to consider more than 1000 customers. In this section, we propose a hybrid approach that is based on problem decomposition. Despite the drawback that decomposition increases solution bias, we find that we can manage the hard constraints of the DSP classically in a pre-processing step. This eliminates the need for a costly reformulation of inequality constraints with slack variables. Fig. 1 shows an overview of the steps taken for the decomposition. ### _Motivation_ #### Iv-A1 Global Solution Let us shift our perspective from the individual customer level to a global view, where all customers are regarded as a unified entity. We consider the overall consumption \(D_{t}=\sum_{c}d_{c,t}\) and the mutable consumption, i.e., the consumption weighted by the individual customer susceptibilities \(\widetilde{D}_{t}=\sum_{c}\chi_{c}d_{c,t}\). Furthermore, we can express the weighted average of all discounts given per customer--from now on called effective discount--as follows \[\zeta_{t}=\langle z_{c,t}\rangle_{c}=\frac{1}{\widetilde{D}_{t}}\sum_{c}\chi_ {c}d_{c,t}z_{c,t}\in[-z_{m},z_{m}]. \tag{20}\] Utilizing the formulation of the effective discount, we can transform the CO\({}_{2}\) reduction from Eq. (8) into \[R(\zeta)=\sum_{t}\Delta I_{t}\left(D_{t}-\widetilde{D}_{t}\zeta_{t}\right). \tag{21}\] The global consumption deviation constraint, Eq. (13), and the power restriction constraint, Eq. (14) can be expressed solely in terms of the effective discount. Therefore, we represent the global version of the DSP as a linear program \[\begin{array}{ll}\text{minimize}&R(\zeta)\\ \text{subject to}&p_{t}^{\text{low}}\leq\widetilde{D}_{t}\zeta_{t}\leq p_{t}^{ \text{high}}\quad\forall t\in 1,\dots,N_{t}\\ &\sum_{t}\widetilde{D}_{t}\zeta_{t}=0.\end{array} \tag{22}\] This formulation disregards any per-customer constraints that are still part of the DSP. Nevertheless, it is a useful tool to estimate how much CO\({}_{2}\) reduction is maximally possible, with all the hard constraints in place. In fact, the solution \(\zeta_{t}^{*}\) is guaranteed to give an optimal lower bound \(R(\zeta^{*})\) \[R(\zeta^{*})\leq R(z)\quad\forall z\in\mathcal{Z}, \tag{23}\] where \(\mathcal{Z}\) is the set of feasible discount matrix configurations. The global DSP consists of only \(N_{t}\) continuous variables. 
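For illustration, the global DSP (22) can be solved with any off-the-shelf LP routine; the sketch below uses SciPy's `linprog` on synthetic placeholder data (the arrays, the power band, and the value of \(z_{m}\) are assumptions, not the paper's inputs). Dropping the constant \(\sum_{t}\Delta I_{t}D_{t}\), minimizing \(R(\zeta)\) amounts to minimizing \(-\sum_{t}\Delta I_{t}\widetilde{D}_{t}\zeta_{t}\).

```python
# Sketch: solving the global DSP (22) as a linear program with SciPy.
# Data are synthetic placeholders; in the paper they come from consumption forecasts.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_t = 8
delta_I = rng.normal(size=n_t)              # CO2 intensity term per timestep (assumed centered)
D_mut = rng.uniform(50, 100, size=n_t)      # alterable consumption D~_t
z_m = 0.5
p_high = 0.1 * D_mut.mean() * np.ones(n_t)  # illustrative power-restriction band
p_low = -p_high

# minimize -sum_t delta_I[t] * D_mut[t] * zeta[t]  (constant part of R dropped)
c = -delta_I * D_mut
A_ub = np.vstack([np.diag(D_mut), -np.diag(D_mut)])  # D~_t zeta_t <= p_high and >= p_low
b_ub = np.concatenate([p_high, -p_low])
A_eq = D_mut.reshape(1, -1)                          # sum_t D~_t zeta_t = 0
b_eq = np.array([0.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(-z_m, z_m)] * n_t)
zeta_star = res.x                                    # optimal effective discounts
print(res.status, zeta_star)
```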
Thus, it can be quickly and efficiently solved using standard procedures like the Simplex method [46]. Given an optimal effective discount, \(\zeta_{t}^{*}\), we can utilize Eq. (20) to optimize the integers \(z_{c,t}\) for the individual customers per timestep. Additionally, we need to include the soft constraints from the DSP in the subsequent optimization. However, doing so would yield an optimization problem of the same size as the original problem. Nonetheless, the following section reveals that we can achieve a satisfactory approximation of a continuous target value by considering only a limited number of customers. As a result, we can divide the customers into smaller groups or chunks and optimize each chunk separately. #### Iv-A2 Representational Power In this section, we motivate that Eq. (20) can be fulfilled for any arbitrary \(\zeta_{t}\) with sufficient accuracy given a small constant number of customers. We will focus on a discount range \(\zeta_{t}\in[-1/2,1/2]\) and five discrete discounts \(z_{c}\in\{-1/2,-1/4,0,1/4,1/2\}\). From the generated consumption data, see Sec. V, we take a random set of customers and compute \[\min_{z_{c}}\frac{1}{\zeta}\left|\frac{1}{D}\sum_{c}d_{c,t}z_{c}-\zeta\right| \tag{24}\] for all available time steps. Fig. 2 shows the result with different numbers of customers. The average over all timesteps is plotted, and the error bands indicate a 95% confidence interval. It is evident that even with only ten customers, the relative error remains consistently below 1%. As more customers are added, the error decreases significantly, reaching a negligible level. Therefore, we contend that by maintaining a small, constant number of customers within a chunk (e.g., 20-50 customers), it is possible to obtain a reliable approximation of an effective discount while still considering the per-customer soft-constraints from the DSP.

Fig. 2: The relative approximation error for different values of \(\zeta\), averaged over multiple timesteps.

Fig. 1: Overview of the decomposition routine. The problem is split into sub-problems. Solutions can influence the following sub-problems via sequential updating. Finally, sub-solutions are gathered into a full solution, and a post-processing step is employed that improves the solution quality greedily while also making sure the power restriction constraint is satisfied.

### _The Full Decomposition Routine_ Let us now assemble the pieces into a full hybrid routine for decomposition, as seen in Fig. 1. The process begins with solving the global DSP (22), followed by dividing customers into chunks. We sort the customers by total consumption and split them into groups of size \(m\), s.t. the largest customers are arranged in the first chunk, etc. We argue that it is better to have customers with comparable consumption in one chunk because they can counteract each other better than, e.g., one industrial customer and 20 single households. For each chunk, we define sub-problems based on chunk-specific effective discounts, which are introduced in Sec. IV-B1. These sub-problems can be transformed into QUBO format and solved on a QC. Since we can solve the sub-problems sequentially, we can enhance the results by incorporating the errors from prior optimizations into the subsequent sub-problems in Sec. IV-B2. Eventually, all the chunks are collected, and a final post-processing step shown in Sec. IV-B3 is applied to ensure that no constraints are violated. #### Iv-B1 Chunk Problems The customers are partitioned into \(M=N_{c}/m\) mutually exclusive chunks \(C_{j}\), s.t. \(\bigcup_{j}C_{j}=\{1,\ldots,N_{c}\}\) and \(C_{i}\cap C_{j}=\emptyset\,\forall i\neq j\). 
Note that we require and expect the chunk size to be chosen s.t. \(N_{c}\mod m=0\). Most likely, the consumption deviation per chunk does not vanish, \[\sum_{c\in C_{j}}\sum_{t}\chi_{c}d_{c,t}\zeta_{t}^{*}\neq 0\quad\forall j, \tag{25}\] which, by default, violates the consumption deviation soft-constraint (12). Thus, the first goal is to define chunk effective discounts \(\xi_{t}^{j}\) with the following properties: \[\sum_{t}\widetilde{D}_{t}^{j}\xi_{t}^{j}=0\quad\forall j, \tag{26}\] \[\frac{1}{\widetilde{D}_{t}}\sum_{j=1}^{M}\widetilde{D}_{t}^{j} \xi_{t}^{j}=\zeta_{t}^{*}\quad\forall t, \tag{27}\] where we define an alterable consumption for one chunk \(\widetilde{D}_{t}^{j}=\sum_{c\in C_{j}}\chi_{c}d_{c,t}\), similar to the definition of the total alterable consumption. We define the chunk-effective discount as follows \[\xi_{t}^{j}=\zeta_{t}^{*}-\frac{\alpha_{t}}{\widetilde{D}_{t}^{j}}\sum_{t^{ \prime}}\widetilde{D}_{t^{\prime}}^{j}\zeta_{t^{\prime}}, \tag{28}\] where \(\alpha_{t}\) are arbitrarily chosen constants, s.t. \(\sum_{t}\alpha_{t}=1\). The conditions (26) and (27) are satisfied with this definition. The values \(\alpha_{t}\) are chosen constant, \(\alpha_{t}=1/N_{t}\), but we have to make sure that \(\xi_{t}^{j}\in[-z_{m},z_{m}]\,\forall t,j\). If this is not possible for one timestep \(t\), we have to dial back that \(\alpha_{t}\) while equally increasing the remaining \(\alpha\)s. The optimization objective is to approximate the following equality with the chunk effective discount as well as possible \[\widetilde{D}_{t}^{j}\xi_{t}^{j}=\sum_{c\in C_{j}}d_{c,t}\chi_{c}z_{c,t}\quad \forall t\in\{1,\ldots,N_{t}\}. \tag{29}\] The objective can be reformulated as a least squares error problem \[z_{c,t}^{*}=\operatorname*{arg\,min}_{z_{c,t}}\frac{1}{N_{t}z_{m}^{2}}\sum_{t }\left(\xi_{t}-\frac{1}{\widetilde{D}_{t}^{j}}\sum_{c\in C_{j}}d_{c,t}\chi_{c }z_{c,t}\right)^{2} \tag{30}\] and is directly in QUBO form after the binary representation of the discounts has been plugged in. The previously discussed constraints and regularizations--consumption deviation (12), discount regularization (16) and discount change penalty (15)--can be carried over to this optimization problem. #### Iv-B2 Sequential updating When the sub-problems are solved in sequence, the error between the truly achieved effective discount and the demanded one can be carried over into the next optimization to be corrected. Before optimizing chunk \(j\), \(\xi_{t}^{j}\) can be adapted as follows \[\xi_{t}^{j}\leftarrow\xi_{t}^{j}+\frac{1}{\widetilde{D}_{t}^{j}}\sum_{i=1}^{j -1}\left(\widetilde{D}_{t}^{i}\xi_{t}^{i}-\sum_{c\in C_{i}}z_{c,t}^{*}d_{c,t} \right).\] Doing so will significantly improve the overall accuracy of the method. Of course, one has to ensure that the altered \(\xi\)s do not exceed the bounds \([-z_{m},z_{m}]\). #### Iv-B3 Post-processing Finally, we describe a post-processing scheme that refines the result and ensures that the power restriction constraint (14) holds. Algorithm 1 describes the greedy improvement of the solution. Conceptually, it is quite simple: For each timestep, we extract those customers whose discounts can be increased or decreased while also improving the consumption deviation penalty (12). Then we try all combinations between one increase and one decrease and investigate how the effective discount behaves. 
If \(\zeta_{t}^{*}\) is negative, we want the real effective discount to be as close as possible but at least larger than \(\zeta_{t}^{*}\). If it is positive, the other way around. Doing so always satisfies constraint (14). We find the combination that matches the requirements best and update the respective discounts if it achieves an improvement. Otherwise, the timestep is skipped. Since all possible combinations of up and down moves have to be considered, the complexity of the algorithm scales at worst with \(\mathcal{O}(N_{t}N_{c}^{2}/4)\). Nevertheless, limiting the possible moves to at most \(r\) empirically provides sufficient accuracy. This then reduces the complexity to \(\mathcal{O}(N_{t}N_{c}+N_{t}r^{2})\).

```
Data:   z_{c,t}, r ∈ N            # r dials the accuracy/runtime trade-off
Result: z_{c,t}
Δz ← 2 z_m / (N_k − 1)            # Discount step
Δ_{c,t} ← χ_c d_{c,t} Δz          # Possible deviations
δ_c ← Σ_t χ_c d_{c,t} z_{c,t}     # Consumption deviation
for t ∈ {1, …, N_t} do
    p ← Σ_c χ_c d_{c,t} z_{c,t}                              # Power deviation
    ε ← ζ*_t D~_t − p                                        # Error from demanded value
    C↑ ← {c = 1,…,N_c | z_{c,t} < z_m,  δ_c < −Δ_{c,t}/2}    # Increasable customers
    C↓ ← {c = 1,…,N_c | z_{c,t} > −z_m, δ_c >  Δ_{c,t}/2}    # Decreasable customers
    C↑ ← limit(C↑, r);  C↓ ← limit(C↓, r)                    # reduce sizes to at most r
    X_{c,c'} ← sign(ζ*_t)(ε − [Δ_{c,t} − Δ_{c',t}])          # all increase/decrease pairs
    C_2 ← {(c↑, c↓) ∈ C↑ × C↓ | X_{c↑,c↓} > 0}               # feasible moves
    c↑, c↓ ← argmin_{(c↑,c↓) ∈ C_2} X_{c↑,c↓}                # best move
    if X_{c↑,c↓} > sign(ζ*_t) ε then continue                # no improvement
    z_{c↑,t} ← z_{c↑,t} + Δz;    z_{c↓,t} ← z_{c↓,t} − Δz    # update solution
    δ_{c↑} ← δ_{c↑} + Δ_{c↑,t};  δ_{c↓} ← δ_{c↓} − Δ_{c↓,t}  # update consumption deviation
end for
```

**Algorithm 1** The post-processing algorithm

## V Experiments & Results ### _Experimental Setup_ To benchmark the performance of solving the DSP, we consider out-of-the-box solvers, as well as our developed decomposition method, and evaluate the results on a set of metrics that best represent the different goals described in the DSP formulation. #### V-A1 Investigated Solvers An overview of the considered solvers and settings can be found in Table I. As a state-of-the-art purely classical baseline, we used Gurobi2 [47]. This was compared to D-Wave's LeapHybridCQM solver [11] (called just Leap in the following), which is a quantum-classical hybrid algorithm that uses classical algorithms to optimize the problem whilst using quantum computers to solve suitable sub-tasks. This has the benefit of solving larger problems than possible directly on current quantum hardware while also supporting more sophisticated optimization models that include hard constraints. Leap is accessed through D-Wave's Cloud service. 
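For reference, a toy DSP-like model can be handed to the Leap hybrid CQM solver along the following lines. The data, variable labels, objective scaling, and the single hard constraint shown are illustrative assumptions rather than the model actually used in this work, and running the commented-out sampler lines requires a D-Wave Leap account and API token.

```python
# Sketch: a toy discount-scheduling model as a dimod ConstrainedQuadraticModel (CQM).
# All numbers and labels are illustrative placeholders.
import dimod
# from dwave.system import LeapHybridCQMSampler   # requires Leap API access

d = {("c1", 1): 2.0, ("c1", 2): 1.0, ("c2", 1): 1.5, ("c2", 2): 2.5}   # toy consumption
delta_I = {1: 0.8, 2: -0.6}                                            # toy CO2 intensity

cqm = dimod.ConstrainedQuadraticModel()
# One integer category index per (customer, timestep); 5 categories -> index 0..4,
# with index 2 corresponding to a zero discount.
z = {(c, t): dimod.Integer(f"z_{c}_{t}", lower_bound=0, upper_bound=4) for (c, t) in d}

# Objective: CO2-weighted consumption change induced by the discounts (illustrative).
cqm.set_objective(sum(-delta_I[t] * d[c, t] * (z[c, t] - 2) for (c, t) in d))

# Per-customer consumption-deviation constraint (hard here, soft in the paper).
for c in ("c1", "c2"):
    cqm.add_constraint(sum(d[c, t] * (z[c, t] - 2) for t in (1, 2)) == 0,
                       label=f"deviation_{c}")

# sampleset = LeapHybridCQMSampler().sample_cqm(cqm, time_limit=5)
# print(sampleset.filter(lambda row: row.is_feasible).first)
```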
These two out-of-the-box solvers are compared against our own problem-specific decomposition routine introduced in Sec. IV, subsequently called Decomp-Gurobi, Decomp-Leap or Decomp-QPU, depending on the method considered for solving the chunk problems (30). QPU refers to direct access to the D-Wave's Quantum Annealing processor Advantage 4.1 [48]. Whenever a decomposition solver is followed by an integer, it refers to the chunk size \(m\). The post-processing algorithm is turned on and a cut-off value at \(r=500\) is chosen. Footnote 2: All experiments with Gurobi were conducted on an M1 MacBook Pro (2020) with Gurobi Version 9.0 To ensure a fair comparison, we gave each solver a time limit of \(0.1\,\mathrm{s}\times N_{c}\). The timeout needs scaling with the problem size since the problem difficulty grows considerably with the problem size. Nevertheless, we observed that Leap tends to overrun the set timeout, which is the reason that we first run Leap with the linear growing timeout and then run the remaining solvers with the timeout set to Leaps runtime. Since the Decomposition solver consists of multiple sub-solver calls, we set the timeout for each sub-solver as the whole timeout divided by the number of chunks, i.e., a timeout of \(0.1\,\mathrm{s}\times m\). #### V-A2 Metrics Because we are considering an optimization task with multiple goals involved, it does not suffice only to consider the objective value of our model as a performance metric. Instead, we simultaneously investigate multiple metrics: * _CO\({}_{2}\) reduction_: First and foremost, the CO\({}_{2}\) reduction is the central goal of the DSP, hence it is also the main metric that is investigated. We compute the relative CO\({}_{2}\) reduction error by making use of the solution to the global DSP. Therefore, \[\frac{R(\zeta^{*})-R(z)}{R(\zeta^{*})},\] (31) is a positive quantity and tells us how good the optimization has performed, in comparison to a theoretical maximal reduction. * _Energy_: The energy, or objective, of the optimization problem consists of the rescaled CO\({}_{2}\) reduction with the penalties added. For easier comparison, we again utilize the relative energy error for investigating a solver's performance. The baseline is taken from the best possible CO\({}_{2}\) reduction; all penalties are set to zero. This is a guaranteed lower bound to the energy. * _Consumption deviation standard deviation_: We expect the consumption deviations for each customer to be centered around zero since the problem is constrained to have a zero total consumption deviation. Therefore, the spread of \(\delta_{c}\) around the zero may be a good measure to judge, whether a result produces satisfactory little consumption deviations. That is, we measure the standard deviation of the consumption deviations \(\delta_{c}\). * _Average discount changes_: Since we want to reduce the changes between two discount categories as much as possible, we measure the average discount changes. \[\frac{1}{N_{c}(N_{t}-1)}\sum_{c}\sum_{t=1}^{N_{t}-1}(1-\delta_{z_{c,t},z_{c,t+ 1}}).\] (32) Here, \(\delta\) refers to the Kronecker-Delta. * _Average relative cost savings_: Not a quantity that is optimized for, but very interesting for the DSO, is to measure the relative cost savings per customer, as defined in Sec. III-D. To obtain a single indicator of the performance, we evaluate the mean \(\langle s_{c}\rangle_{c}\) of the relative savings. 
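The metrics above are straightforward to evaluate from a discount matrix. A small sketch is given below; the array shapes and names, the flat tariff value, and the use of \(R(z)=\sum_{c,t}\Delta I_{t}(1-\chi_{c}z_{c,t})d_{c,t}\) (the per-customer form of Eq. (21)) are assumptions of the sketch, not code from the paper.

```python
# Sketch: evaluating the solution-quality metrics described above.
# z, d are (N_c, N_t) arrays; chi is (N_c,); delta_I is (N_t,). Names are assumed.
import numpy as np

def metrics(z, d, chi, delta_I, t0=0.3, R_global=None):
    out = {}
    R = np.sum(delta_I * (1.0 - chi[:, None] * z) * d)          # CO2 load of the solution
    if R_global is not None:                                     # R(zeta*) from the global DSP
        out["rel_co2_error"] = (R_global - R) / R_global         # Eq. (31)
    delta_c = np.sum(z * d, axis=1) / np.sum(d, axis=1)          # per-customer deviation, Eq. (11)
    out["deviation_std"] = delta_c.std()
    out["avg_discount_changes"] = np.mean(z[:, 1:] != z[:, :-1]) # Eq. (32)
    d_tilde = (1.0 - chi[:, None] * z) * d                       # altered consumption
    savings = t0 * chi[:, None] * z**2 * d                       # -Delta p_c per Eq. (19)
    out["mean_rel_savings"] = np.mean(savings.sum(axis=1) / (t0 * d_tilde.sum(axis=1)))
    return out
```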
#### Iv-A3 Data Generation To facilitate the benchmarking of the DSP, we first require realistic data for the consumption of individual customers. For that, we take standard load profiles of residential and industrial customers, to which we add noise. Furthermore, we randomly shift the load profiles by small amounts in time. Additionally, they get scaled according to various numbers of inhabitants of a household. The number of inhabitants is taken from a residential area in Munich. Moreover, we include photovoltaic (PV) electricity generation into the mix, by estimating the potential based on roof data and simulating the production from historic solar irradiance data. The PV production screens the customer's consumption. Grid infeed, i.e., if more PV is generated than consumed, is not specially considered. The grid CO\({}_{2}\) intensity is taken from the real-world data in Munich3. Footnote 3: The data is provided by E.ON’s App for monitoring local CO\({}_{2}\) intensities: [https://www.bayernwerk.de/de/fuer-zruhause/oekoheld.html](https://www.bayernwerk.de/de/fuer-zruhause/oekoheld.html) #### Iv-A4 Parameters If we want to solve the DSP for a given data frame, consisting of the consumption of \(N_{c}\) customers at \(N_{t}\) time steps, we still need to fix a set of open variables and parameters. In a real-world scenario, the customer susceptibilities \(\chi_{c}\) would be measured from the individual customer's behavior. However, as it only acts as a proportionality constant, we turn their effect off and set them all to one. Next, we use five discount categories, with a 50% discount maximally. That, in turn refers to the following valid discounts \(z_{c,t}\in\{\,-1/2,-1/4,0,1/4,1/2\,\}\). As a consequence, a discount of, e.g., 50% would result in an increase in the customer's consumption by 50%. The power deviation bounds \(p_{t}^{\mathrm{high/low}}\) are set to a constant 10% of the average total consumption. Due to our goal not being an overly accurate representation of the real world and more the analysis of the quantum solver, we prefer the constant values here. In practice, however, those values may be derived from real-world grid constraints that can be inferred through power-flow calculations. Finally, the remaining penalty parameters are fixed by analyzing a small-scale example with Gurobi and dialing in the strengths of the penalties, such that they have a reasonable effect for the Gurobi result. It is important to note that a comprehensive examination of the solver's response to parameter settings is beyond the scope of the current investigation. An overview of all parameter settings is given in Table II. ### _Example with 100 Customers_ Let us first take a look at how the optimization result of the different solvers looks in detail before we only focus on the previously discussed metrics. To this end, we take 100 random customers out of the full dataset consisting of over 16,000 simulated customers. The 76 timesteps reach from 1 am to 7 pm for January 13, 2023. We analyze the solutions of four solvers, Gurobi, Leap and two \(m=50\) decomposition methods with the same solvers as the sub-routine. The results for the discount matrices \(z_{c,t}\) can be seen in Fig. 3, while their overall effect on the consumption is displayed in Fig. 4. Although the particular solutions differ quite a lot, the effective result stays similar, regarding the CO\({}_{2}\) reduction. Especially the difference between Gurobi and the other solvers is notable. 
Apart from the global action of optimization, we are also interested in how the optimization performs per customer. In Fig. 5, one can see how the final relative consumption changes are distributed. Furthermore, Fig. 6 visualizes the distribution of cost savings to the customers. Lastly, it remains important to note that the results for the Leap solvers vary throughout multiple runs. Here only a single run has been picked that is characteristic of the behavior of these solvers. Furthermore, no investigation towards direct QPU access has been made, since the space requirements for a single customer are already 76 integer variables, i.e., 228 binary variables. The problem after gathering multiple customers in a chunk is, hence, not embeddable in the QPU, since we are facing quite dense connectivity in the QUBO. For a reduced problem size, we perform investigations in Sec. V-F.

Fig. 3: The discount matrices \(z_{c,t}\) found by the investigated solvers for \(N_{c}=100\). Blue indicates a discount and red corresponds to a penalty. White means no discount given at all. Despite their effects on the overall consumption (see Fig. 4) being the same, the discount matrices differ a lot from each other. It is apparent that Gurobi hands out the discounts more greedily than Leap, indicating a bigger impact of the regularization. Nevertheless, a similar pattern is observable in the last three solutions.

Fig. 4: The effect of the DSP solution for problem size \(N_{c}=100\). The plot shows the aggregated consumption with and without (Base) discounts in place, as well as the grid CO\({}_{2}\) intensity. The solutions of all solvers produce a similar effective consumption change, as already predicted in Sec. IV. Times with high CO\({}_{2}\) emissions produce an effective decrease in consumption and vice versa, just as expected.

Fig. 5: Histogram of the relative consumption deviation. One can see that both Gurobi solvers have relatively little spread and are well centered around zero. The Leap solvers, on the other hand, possess a large spread and are additionally shifted away from zero. The shift away from zero reduces in larger problem instances.

Fig. 6: A cumulative distribution plot of the relative savings of the customers. In the Decomp solvers, the two chunks can be well distinguished. Gurobi, because it is greedy in handing out discounts, only distributes savings to relatively few customers. On the other hand, Leap distributes similar discounts to all customers. Remember: We do not optimize for this metric. This is just an observation of the different strategies and can be interpreted as a measure of the fairness of the optimization algorithms with respect to different customers.

### _Scaling Analysis_ To test the performance of different solvers, we created test instances using the generated data described above with \(N_{c}\) ranging from 25 to 3200 customers and considering the full 76 timesteps. Our problem instances, therefore, consist of 1,900 to 243,200 integer variables4. To account for the stochasticity of the results from the quantum solvers, we run the quantum solvers multiple times. However, we need to limit ourselves to three executions of the quantum routines because of cost considerations. Footnote 4: All solvers can handle integers directly, so we do not perform the encoding explicitly, but supply the solver with the full information. The results for the discussed metrics are visualized in Fig. 7. Each plot shows a singular metric against the problem size for the considered solvers. Focusing on the relative CO\({}_{2}\) reduction, it is evident that a crossover in performance between Gurobi and Leap happens between 100 and 200 customers. After that size, Gurobi is not able to find converged results in the given time limit. This is not a directly fair comparison, since Gurobi is run on a local machine whilst the Leap hybrid solver is run on a proprietary D-Wave cloud architecture. However, we argue that the pattern generalizes, i.e., the point where Gurobi doesn't reach satisfactory results anymore shifts to the right but is eventually reached. Leap starts off with a relatively weak performance in the small problem sizes but decreases its energy almost continuously. Yet the decomposition routines greatly outperform the general-purpose solvers. We can see steadily decreasing relative CO\({}_{2}\) reduction errors, which can indicate that the total error stays fixed but becomes vanishingly small in comparison to the total quantity. The relative energy error loosely shows the same picture, but there we can see that the solvers utilizing Leap approach a similar level, which can be explained through the dominating penalty terms of the other objectives, where the decomposition does not interfere anymore. Decomp-Gurobi returns a very good and constant optimization objective. Moving on to the per-customer constraints, we notice that the Gurobi-based solvers outperform the quantum-enhanced routines (at least where they converge). This is likely due to Gurobi being better at handling smaller energy changes in the optimization objective. However, it is also important to note that, as apparent from the discount matrices in Fig. 3, Gurobi gives many customers not even a single discount. Hence, they do not receive any discount changes or consumption deviations, which reduces the average measure. Furthermore, by investigating customer savings, it becomes clear that, on average, Gurobi does provide discounts of well below 5% before becoming extremely generous. The latter arises from the discount matrix being almost completely filled with \(z_{c,t}=\pm z_{m}\), and the customer savings are dependent on a weighted average over \(z_{c,t}^{2}=\sum_{t}z_{m}^{2}=\mathrm{const.}\), remember (19). Therefore, \(s_{c}=z_{m}^{2}=1/4\) in the investigated case. Furthermore, which is also visible in Fig. 3, the more generous customer savings of Decomp-Leap can be explained the same way. To conclude this analysis, we remark that Gurobi struggles with its solving strategy for large problem sizes, which indicates a potential advantage of the quantum-enhanced solver here. Yet, the biased domain-specific decomposition routine provides even better results, especially with the classical solver underneath. We argue that, since the decomposition-based solvers work so well, the possible space of good solutions is rather big, which makes this problem a better fit for heuristic-based solvers than for exact mathematical solvers like Gurobi. ### _Chunk Size Effect_ After we have seen that the decomposition solver provides satisfying results both with Gurobi and Leap employed as sub-solver, we are interested in what impact the chunk size has on the result. For that, we only inspect Decomp-Gurobi with different chunk sizes \(m=5,10,25,50\) and focus on a reduced problem size frame up until \(N_{c}=800\). We have seen that the problem complexity does not grow linearly with the problem size. 
Thus, we give a more generous timeout of \(0.5\,\mathrm{s}\times m\) in this investigation in order to isolate the effects of the decomposition routine from the solver performance5. The global effect, i.e., how much CO\({}_{2}\) was reduced, did not differ between the chunk sizes. They all performed equally well. The constant sequential updating of the objective also helps a lot with finding the best CO\({}_{2}\) reduction, even with five-customer chunks. Fig. 8 shows the per-customer metrics that are optimized for. Here, we can observe a clear tendency that larger chunks result in better per-customer metrics, i.e., less consumption deviation per customer and fewer overall discount changes.

Fig. 7: The investigated metrics for different problem sizes and different solvers. The runtime of all solvers has been set to be equal for a certain problem size, but grows with \(N_{c}\). The top row shows the global metrics, which tell the most about how the solver performed. The relative energy error is the central objective that we try to minimize, while the energy error gives an overview of the performance with regard to all optimization targets. The second row shows the per-customer metrics, which are optimized for in the problem formulation. That is the standard deviation of all customer consumption deviations and the average discount changes, which we both want to be small. The last row displays the average relative savings and the deviation from the \(0.1\,\mathrm{s}\times N_{c}\) runtime. The error bands indicate the maximum and minimum of the three runs. The spike in relative runtime at 25 customers arises from the minimum Leap timeout of 5 s. As one can see, the other solvers follow the time Leap took well.

Fig. 8: Per-customer metrics evaluated with different chunk sizes in the decomposition. As expected, the metrics improve as the chunk sizes get larger since more flexibility remains in the chunk. The global metric, i.e., the CO\({}_{2}\) reduction, performs equally well for all chunk sizes.

### _Fairness Analysis_ The goal of this section is to investigate how the solvers strategically distribute the discounts to the target customers. This is done by investigating how the relative savings \(s_{c}\) are distributed between individual customers. Fig. 6 and Fig. 9 show two cumulative distribution plots of the results from the 100 and 800 customer problem sizes. Except for Gurobi, which does not converge, the observable patterns of the solvers are similar. Leap produces a fair savings distribution, which means that all customers experience the same savings. In the cumulative plots, that means a straight vertical line. The more that line is skewed, the more the cost savings differ between the customers. In Fig. 6, the splitting in half of the decomposition can be observed quite remarkably. Resolving the 16 individual chunks in Fig. 9 is no longer possible. However, a kink in Decomp-Leap can be observed, which means that about 70% of the customers save a similar and relatively large amount, while fewer savings are distributed to a smaller group. Decomp-Gurobi reveals a straight but shallow curve, which means that customers will receive savings between 0% and 20% with almost equal likelihood. ### _Direct QPU-Access with Decomposition_ A quantum annealing processor, such as D-Wave's Advantage 4.1, suffers from limited connectivity between the physical qubits. 
However, for our QUBO sub-problems (30), we can analytically compute the number of couplings for a single qubit as follows: \[N_{k}\left(N_{t}-1\right)+N_{k}\left(N_{c}-1\right)+N_{k}-1. \tag{33}\] This term is derived by inspecting the terms in the QUBO formula and observing that we either have couplings within all customers at a single timestep or couplings within all timesteps of a single customer. For the first case, one qubit is connected to all \(N_{k}\) qubits of the other \(N_{c}-1\) customers and to \(N_{k}-1\) qubits of the same customer. Analogously for the second case, but the \(N_{k}-1\) connections within the timestep have already been covered in the first case. The derived quantity grows with the problem size, but the number of couplings per qubit of D-Wave's Pegasus graph is a constant 15 [48]. Thus, physical qubits have to be chained together into logical qubits in order to allow for higher connectivity. Finding the best such mapping, the so-called embedding, is itself an NP-hard optimization problem, for which we utilize D-Wave's heuristic MinorMiner. Fig. 10 shows the computed embeddings for the sub-problem QUBOs with different problem sizes. It is apparent that we are limited to very small problem sizes. Since we do not want too few customers in a chunk, to preserve flexibility, we settle at a reasonable middle ground of chunk size six and \(12\) timesteps. We interpolate the original data to 12 timesteps and use various (multiples of 6) customer sizes to compare the performance of Decomp-QPU against the other solvers. For each sub-problem, we take 100 readings from the QPU. We cannot directly steer the timeout in this case. Thus, we chose to first run Decomp-QPU and then set the timeout of the remaining solvers to exactly that time. However, Leap has a minimum runtime of 5 s, which is the reason why we only include Leap in the cases where the Decomp-QPU time is more than 5 s, which is the case from \(N_{c}=480\) onwards. Again, we perform the analysis for different problem sizes, reaching from 60 to 1920 customers, or 720 to 23,040 integer variables. The sub-problems comprise 72 integer variables, resulting in 216 binary variables in the QUBO formulation. In contrast to the previous analysis, we additionally investigate Simulated Annealing (SA) as a sub-problem QUBO solver in this instance. Due to the larger problem sizes of the previous sub-problems, the SA routine could not return results within the runtime boundaries we had set. However, with the smaller problem sizes, this was not a problem.

Fig. 9: A cumulative distribution plot of the relative savings of the customers at \(N_{c}=800\). As discussed earlier, Gurobi does not converge anymore, which causes savings of around 25%. Leap produces fair discounts, similar to Fig. 6. The other two solvers produce more complex, unfair savings distributions.

Fig. 10: Embeddable sub-problem size for the D-Wave Advantage 4.1 QPU. The left-hand matrix shows how many physical qubits are needed when a sub-problem with \(N_{c}\) customers and \(N_{t}\) timesteps is embedded. A white field indicates that no embedding has been found. The right-hand plot shows the maximal chain length for the found embedding, i.e., how many qubits are maximally connected to form one logical qubit. All embeddings were found using D-Wave's MinorMiner package.

Fig. 11 displays the results of these experiments. The previously discussed solvers (Gurobi, Decomp-Gurobi, Decomp-Leap) exhibited similar patterns to the investigation done for the larger problem sizes (Fig. 7). 
Therefore, we only focus on the QPU- and SA-based decomposition routines and Leap. The two Decomp solvers exhibit similarly good performance judging by the CO\({}_{2}\) reduction. Interestingly, the per-customer metrics, and therefore also the dominating factor in the energy, remain at a nearly constant level across the problem sizes. Curiously, SA as the sub-solver performs better in regard to per-customer metrics than the QPU does, leading to a gap in the energy. Leap exhibits similar performance to our previous experiments. Most notable, though, is that Decomp-QPU seems to perform slightly better than Leap regarding the optimization objective (and much better in regard to the CO\({}_{2}\) reduction). That means that our developed hybrid quantum routine does seem to outperform the general-purpose Leap. ## VI Conclusion Our investigation into the feasibility of current quantum computing techniques for DSR began by developing a mathematical formulation utilizing discount scheduling to shift grid load to more appropriate times. Our formulation involves providing discretized discounts to multiple customers at different times to incentivize a change in consumption while ensuring all customers receive the same amount of electricity. Our central goal is to reduce overall CO\({}_{2}\) emissions while maintaining grid stability and customer well-being, leading to a constrained quadratic integer optimization problem. Upon close inspection of the problem, we developed a custom decomposition algorithm that compartmentalizes the problem into customer chunks. These sub-problems involve unconstrained integer optimization and can be effectively addressed on quantum computers if encoded correctly. Moreover, since the problems are solved sequentially, we incorporated the accumulated errors into the subsequent optimization problems. Lastly, we developed a post-processing algorithm that further refines the solution. In the end, we benchmarked the performance of a classical general-purpose solver against D-Wave's Leap hybrid quantum-classical solver and our customized decomposition method with various (quantum or classical) sub-solvers employed. We found that, with a timeout increasing linearly in the problem size, the classical solver fails to produce acceptable results beyond a certain problem size, while the quantum-enhanced Leap continues to provide adequate results. This indicates a potential advantage of solving this particular problem using Leap over the purely classical counterpart, Gurobi. Nonetheless, the decomposition method with the classical solver as sub-solver delivered the best results over the range of problem sizes we investigated. Furthermore, using quantum or simulated annealing for the QUBO sub-problems has resulted in good performance. We found that decomposition paired with quantum annealing returned slightly better energies than Leap. We remark that pairing the decomposition method with Leap and large chunk sizes might be a promising pathway for utilizing the quantum-enhanced method for huge instances of this problem. This statement requires further experiments, but we reason that large sub-problems may be difficult to solve within the time constraints using Gurobi, whereas Leap could still find results. Furthermore, the response of the solvers to different problem parameter settings is a topic for future work. 
Lastly, making the grid constraints more realistic, instead of the reasonable but arbitrarily chosen constant band used here, is a topic for future research. ## Acknowledgements The authors acknowledge funding from the German Federal Ministry of Education and Research under the funding program "Förderprogramm Quantentechnologien - von den Grundlagen zum Markt" (funding program quantum technologies--from basic research to market), project Q-Grid, 13N16177.
2310.00519
Finite element analysis of a generalized Robin boundary value problem in curved domains based on the extension method
A theoretical analysis of the finite element method for a generalized Robin boundary value problem, which involves a second-order differential operator on the boundary, is presented. If $\Omega$ is a general smooth domain with a curved boundary, we need to introduce an approximate domain $\Omega_h$ and to address issues owing to the domain perturbation $\Omega \neq \Omega_h$. In contrast to the transformation approach used in existing studies, we employ the extension approach, which is easier to handle in practical computation, in order to construct a numerical scheme. Assuming that approximate domains and function spaces are given by isoparametric finite elements of order $k$, we prove the optimal rate of convergence in the $H^1$- and $L^2$-norms. A numerical example is given for the piecewise linear case $k = 1$.
Takahito Kashiwabara
2023-09-30T22:43:44Z
http://arxiv.org/abs/2310.00519v1
Finite element analysis of a generalized Robin boundary value problem in curved domains based on the extension method ###### Abstract. A theoretical analysis of the finite element method for a generalized Robin boundary value problem, which involves a second-order differential operator on the boundary, is presented. If \(\Omega\) is a general smooth domain with a curved boundary, we need to introduce an approximate domain \(\Omega_{h}\) and to address issues owing to the domain perturbation \(\Omega\neq\Omega_{h}\). In contrast to the transformation approach used in existing studies, we employ the extension approach, which is easier to handle in practical computation, in order to construct a numerical scheme. Assuming that approximate domains and function spaces are given by isoparametric finite elements of order \(k\), we prove the optimal rate of convergence in the \(H^{1}\)- and \(L^{2}\)-norms. A numerical example is given for the piecewise linear case \(k=1\). Key words and phrases: Finite element method; Generalized Robin boundary condition; Domain perturbation error; Extension method; Local coordinate representation; \(H^{1}\)-stable interpolation on boundary 2020 Mathematics Subject Classification: Primary: 65N30. This work was supported by a Grant-in-Aid for Early-Career Scientists (No. 20K14357) of the Japan Society for the Promotion of Science (JSPS) ## 1. Introduction The generalized Robin boundary value problem for the Poisson equation introduced in [12] is described by \[-\Delta u=f\quad\text{in}\quad\Omega, \tag{1.1}\] \[\frac{\partial u}{\partial\boldsymbol{n}}+u-\Delta_{\Gamma}u=\tau\quad\text{on}\quad\Gamma:=\partial\Omega, \tag{1.2}\] where \(\Omega\subset\mathbb{R}^{d}\) is a smooth domain, \(\boldsymbol{n}\) is the outer unit normal to \(\Gamma\), and \(\Delta_{\Gamma}\) stands for the Laplace-Beltrami operator defined on \(\Gamma\). Since elliptic equations in the bulk domain and on the surface are coupled through the normal derivative, it can be regarded as one of the typical models of coupled bulk-surface PDEs, cf. [10]. It is also related to problems with dynamic boundary conditions [15] or to reduced-order models for fluid-structure interaction problems [7]. Throughout this paper, we exploit the standard notation of the Sobolev spaces in the domain and on the boundary, that is, \(W^{m,p}(\Omega)\) and \(W^{m,p}(\Gamma)\) (written as \(H^{m}(\Omega)\) and \(H^{m}(\Gamma)\) if \(p=2\)), together with the non-standard ones \[H^{m}(\Omega;\Gamma):=\{v\in H^{m}(\Omega)\ |\ v|_{\Gamma}\in H^{m}(\Gamma)\},\quad\|v\|_{H^{m}(\Omega;\Gamma)}:=\|v\|_{H^{m}(\Omega)}+\|v\|_{H^{m}(\Gamma)}.\] According to [12, Section 3.1], the weak formulation for (1.1)-(1.2) consists in finding \(u\in H^{1}(\Omega;\Gamma)\) such that \[(\nabla u,\nabla v)_{\Omega}+(u,v)_{\Gamma}+(\nabla_{\Gamma}u,\nabla_{\Gamma}v)_{\Gamma}=(f,v)_{\Omega}+(\tau,v)_{\Gamma}\qquad\forall v\in H^{1}(\Omega;\Gamma), \tag{1.3}\] where \((\cdot,\cdot)_{\Omega}\) and \((\cdot,\cdot)_{\Gamma}\) denote the \(L^{2}(\Omega)\)- and \(L^{2}(\Gamma)\)-inner products respectively, and \(\nabla_{\Gamma}\) stands for the surface gradient along \(\Gamma\). 
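As a deliberately simplified illustration of the weak formulation (1.3) (not the isoparametric setting analyzed below), the following sketch assembles and solves the corresponding discrete problem with legacy FEniCS (dolfin) on a polygonal stand-in for the domain, realizing the surface gradient on boundary facets through the tangential projection \(\nabla_{\Gamma}w=\nabla w-(\nabla w\cdot\boldsymbol{n})\boldsymbol{n}\); the mesh, the data, and the helper name `tangential_grad` are our own illustrative choices.

```python
# Hedged sketch: weak form (1.3) with P1 elements in legacy FEniCS.
# The boundary Laplace-Beltrami term is realized via the tangential
# projection of the gradient on exterior facets.
from dolfin import *

mesh = UnitSquareMesh(32, 32)            # polygonal stand-in for Omega_h
V = FunctionSpace(mesh, "P", 1)
u, v = TrialFunction(V), TestFunction(V)
n = FacetNormal(mesh)

def tangential_grad(w):
    # surface (tangential) gradient on the boundary facets
    return grad(w) - dot(grad(w), n) * n

f = Constant(1.0)                        # placeholder data (extensions of f and tau)
tau = Constant(0.0)

a = (inner(grad(u), grad(v)) * dx
     + u * v * ds
     + inner(tangential_grad(u), tangential_grad(v)) * ds)
L = f * v * dx + tau * v * ds

uh = Function(V)
solve(a == L, uh)                        # no Dirichlet data: the form is coercive
```

On a curved domain one would instead build \(\Omega_{h}\) from isoparametric elements as described in Section 2, but the structure of the discrete bilinear form is the same.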
It is shown in [12] that this problem admits the following regularity structure for some constant \(C>0\): \[\|u\|_{H^{m}(\Omega;\Gamma)}\leq C(\|f\|_{H^{m-2}(\Omega)}+\|\tau\|_{H^{m-2}( \Gamma)})\qquad(m=2,3,\dots).\] Moreover, the standard finite element analysis is shown to be applicable, provided that either \(\Omega\) is a polyhedral domain and (1.2) is imposed on a whole edge or face in \(\Gamma\), or \(\Omega\) is smooth and can be exactly represented in the framework of the isogeometric analysis. For a more general smooth domain, a feasible setting is to exploit the \(\mathbb{P}_{k}\)-isoparametric finite element method, in which \(\Gamma=\partial\Omega\) is approximated by piecewise polynomial (of degree \(k\)) boundary \(\Gamma_{h}=\partial\Omega_{h}\). Because the approximate domain \(\Omega_{h}\) does not agree with \(\Omega\), its theoretical analysis requires estimation of errors owing to the discrepancy of the two domains, i.e., the domain perturbation. Such an error analysis is presented by [15] in a time-dependent case for \(k=1\) and by [9] for \(k\geq 1\), based on the _transformation method_. The name comes from the fact that they introduce a bijection \(L_{h}:\Omega_{h}\to\Omega\) and "lift" a function \(v:\Omega_{h}\to\mathbb{R}\) to \(v\circ L_{h}^{-1}:\Omega\to\mathbb{R}\) defined in \(\Omega\), thus transforming all functions so that they are defined in the original domain \(\Omega\). In this setting, the finite element scheme reads, with a suitable choice of the finite element space \(V_{h}\subset H^{1}(\Omega_{h};\Gamma_{h})\): find \(u_{h}\in V_{h}\) such that \[(\nabla u_{h},\nabla v_{h})_{\Omega_{h}}+(u_{h},v_{h})_{\Gamma_{h}}+(\nabla_ {\Gamma_{h}}u_{h},\nabla_{\Gamma_{h}}v_{h})_{\Gamma_{h}}=(f^{-l},v_{h})_{ \Omega_{h}}+(\tau^{-l},v_{h})_{\Gamma_{h}}\qquad\forall v_{h}\in V_{h}, \tag{1.4}\] where \(f^{-l}=f\circ L_{h}\) and \(\tau^{-l}=\tau\circ L_{h}\) mean the inverse lifts of \(f\) and \(\tau\) respectively. Then the error between the approximate and exact solutions are defined as \(u-u_{h}^{l}\) on \(\Omega\) with \(u_{h}^{l}:=u_{h}\circ L_{h}^{-1}\). It is theoretically proved by [16] and [2] that such a transformation \(L_{h}\) indeed exists. However, from the viewpoint of practical computation, it does not seem easy to construct \(L_{h}\) for general domains in a concrete way. Therefore, it is non-trivial to numerically compute \(f^{-l},\tau^{-l}\), and \(u-u_{h}^{l}\). There is a more classical and direct approach to treat the situation \(\Omega\neq\Omega_{h}\), which we call the _extension method_ (see e.g. [5, Section 4.5] and [1]; a more recent result is found in [4]). Namely, we extend \(f\) and \(\tau\) to some \(\tilde{f}\) and \(\tilde{\tau}\) which are defined in \(\mathbb{R}^{d}\), preserving their smoothness (this can be justified by the Sobolev extension theorem or the trace theorem). Then the numerical scheme reads: find \(u_{h}\in V_{h}\) such that \[(\nabla u_{h},\nabla v_{h})_{\Omega_{h}}+(u_{h},v_{h})_{\Gamma_{h}}+(\nabla_ {\Gamma_{h}}u_{h},\nabla_{\Gamma_{h}}v_{h})_{\Gamma_{h}}=(\tilde{f},v_{h})_{ \Omega_{h}}+(\tilde{\tau},v_{h})_{\Gamma_{h}}\qquad\forall v_{h}\in V_{h},\] and the error is defined as \(\tilde{u}-u_{h}\) in the approximate domain \(\Omega_{h}\). If \(f\) and \(\tau\) are given as entire functions, which is often the case in practical computation, then no special treatment for them is needed. 
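To make the remark about entire data concrete, here is a small manufactured-solution sketch (our own example, with the unit disk playing the role of \(\Omega\)): for a globally smooth \(u\), the data \(f=-\Delta u\) and \(\tau=\partial u/\partial\boldsymbol{n}+u-\Delta_{\Gamma}u\) are computed symbolically, using that on the unit circle \(\partial u/\partial\boldsymbol{n}=\nabla u\cdot(x,y)\) and \(\Delta_{\Gamma}\) reduces to \(d^{2}/d\theta^{2}\).

```python
# Hedged sketch: manufactured data (f, tau) for the unit disk.
import sympy as sp

x, y, th = sp.symbols("x y theta", real=True)
u = x**2 * y + sp.cos(y)                       # globally smooth manufactured solution

f = -(sp.diff(u, x, 2) + sp.diff(u, y, 2))     # f = -Laplace(u), defined in all of R^2

on_circle = {x: sp.cos(th), y: sp.sin(th)}     # Gamma = unit circle
u_G = u.subs(on_circle)
du_dn = (sp.diff(u, x) * x + sp.diff(u, y) * y).subs(on_circle)  # normal derivative
lap_G = sp.diff(u_G, th, 2)                    # Laplace-Beltrami on the unit circle

tau = sp.simplify(du_dn + u_G - lap_G)         # tau = du/dn + u - Delta_Gamma(u)
print(sp.expand(f))
print(tau)
```

Since \(u\), \(f\), and \(\tau\) are defined everywhere, the exact solution serves as its own extension, and both the scheme and the error can be evaluated directly on the computational domain.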
Moreover, when computing errors numerically for verification purposes, it is usual to calculate \(\tilde{u}-u_{h}\) in the computational domain \(\Omega_{h}\) rather than \(u-u_{h}^{l}\) in \(\Omega\) simply because the former is easier to deal with. In view of these situations, we aim to justify the use of the extension method for problem (1.1)-(1.2) in the present paper. Considering \(\Omega_{h}\) which approximates \(\Omega\) by the \(\mathbb{P}_{k}\)-isoparametric elements, we establish in Section 4 the following error estimates as the main result: \[\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}\leq O(h^{k}),\qquad\| \tilde{u}-u_{h}\|_{L^{2}(\Omega_{h};\Gamma_{h})}\leq O(h^{k+1}).\] They do not follow from the results of [15] or [9] directly since we need to estimate errors caused from a transformation that are absent in the transformation method. In addition, there is a completely non-trivial point that is specific to the boundary condition (1.2): even if \(u\in H^{2}(\Omega)\) with \(u|_{\Gamma}\in H^{2}(\Gamma)\), we may have only \(\tilde{u}|_{\Gamma_{h}}\in H^{3/2}(\Gamma_{h})\), which could cause loss in the rate of convergence on the boundary. To overcome this technical difficulty, a delicate analysis of interpolation errors on \(\Gamma_{h}\), including the use of the Scott-Zhang interpolation operator on the boundary, is necessary as presented in Section 3. There is another delicate point when comparing a quantity defined in \(\Gamma_{h}\) with that in \(\Gamma\). For simplicity in the explanation, let \(\Gamma_{h}\) be given as a piecewise linear (\(k=1\)) approximation to \(\Gamma\). If \(d=2\) and every node (vertex) of \(\Gamma_{h}\) lies exactly on \(\Gamma\), then the orthogonal projection \(\boldsymbol{p}:\Gamma\to\Gamma_{h}\) is bijective and it is reasonable to set a local coordinate along each boundary element \(S\in\mathcal{S}_{h}\) (see Subsection 2.2 for the notation). Namely, \(S\) and \(\boldsymbol{p}^{-1}(S)\) are represented as graphs \((y_{1},0)\) and \((y_{1},\varphi(y_{1}))\) respectively with a local coordinate \((y_{1},y_{2})\). However, if nodes do not belong to \(\Gamma\), then \(\boldsymbol{p}\) is no longer injective (see Figure 1). Furthermore, for \(d\geq 3\) the same situation necessarily occurs--no matter if boundary nodes are in \(\Gamma\) or not--since \(\partial S\) (its dimension is \(\geq 1\)) is not exactly contained in \(\Gamma\). Consequently, it is inconsistent in general to assume the following simultaneously: 1. each \(S\in\mathcal{S}_{h}\) has one-to-one correspondence to some subset \(\Gamma_{S}\subset\Gamma\); 2. both \(S\) and \(\Gamma_{S}\) admit graph representations in some rotated cartesian coordinate, whose domains of definition are the same; 3. \(\Gamma=\bigcup_{S\in\mathcal{S}_{h}}\Gamma_{S}\) is a disjoint union, that is, \(\{\Gamma_{S}\}_{S\in\mathcal{S}_{h}}\) forms an exact partition of \(\Gamma\). We remark that this inconsistency is sometimes overlooked in literature considering \(\Omega\neq\Omega_{h}\). To address the issue, we utilize the orthogonal projection \(\mathbf{\pi}:\Gamma_{h}\to\Gamma\) (its precise definition is given in Subsection 2.3) instead of \(\mathbf{p}\). This map is bijective as long as \(\Gamma_{h}\) is close enough to \(\Gamma\), so that properties (i) and (iii) hold with \(\Gamma_{S}=\mathbf{\pi}(S)\). 
Then we set a local coordinate along \(\mathbf{\pi}(S)\) and parametrize \(S\) through \(\mathbf{\pi}\) with the same domain as in Figure 1, avoiding the inconsistency above (we do not rely on a graph representation of \(S\) in evaluating surface integrals etc.). Finally, in Appendix C, considering the so-called natural extension of \(u_{h}\) to \(\Omega\) denoted by \(\bar{u}_{h}\), we also prove that \(u-\bar{u}_{h}\) converges to \(0\) at the optimal rate in \(H^{1}(\Omega;\Gamma)\) and \(L^{2}(\Omega;\Gamma)\) (actually there is some abuse of notation here; see Remark C.1). This result may be regarded as an extension of [17, Section 4.2.3], which discussed a Dirichlet problem for \(d=2\), to a more general setting. Whereas it is of interest mainly from the mathematical point of view, it justifies calculating errors in approximate domains \(\Omega_{h},\Gamma_{h}\) based on extensions to estimate the rate of convergence in the original domains \(\Omega,\Gamma\). ## 2. Approximation and perturbation of domains ### Assumptions on \(\Omega\) Let \(\Omega\subset\mathbb{R}^{d}\,(d\geq 2)\) be a bounded domain of \(C^{k+1,1}\)-class \((k\geq 1)\), with \(\Gamma:=\partial\Omega\). Then there exist a system of local coordinates \(\{(U_{r},\mathbf{y}_{r},\varphi_{r})\}_{r=1}^{M}\) such that \(\{U_{r}\}_{r=1}^{M}\) forms an open covering of \(\Gamma\), \(\mathbf{y}_{r}={}^{t}(y_{r1},\ldots,y_{rd-1},y_{rd})={}^{t}(\mathbf{y}_{r}^{\prime},y_ {rd})\) is a rotated coordinate of \(\mathbf{x}\), and \(\varphi_{r}:\Delta_{r}\to\mathbb{R}\) gives a graph representation \(\mathbf{\Phi}_{r}(\mathbf{y}_{r}^{\prime}):={}^{t}(\mathbf{y}_{r}^{\prime},\varphi_{r}( \mathbf{y}_{r}^{\prime}))\) of \(\Gamma\cap U_{r}\), where \(\Delta_{r}\) is an open cube in \(\mathbb{R}^{N-1}\). Because \(C^{k,1}(\Delta_{r})=W^{k+1,\infty}(\Delta_{r})\), we may assume that \[\|(\nabla^{\prime})^{m}\varphi_{r}\|_{L^{\infty}(\Delta^{\prime})}\leq C\quad (m=0,\ldots,k+1,\;r=1,\ldots,M)\] for some constant \(C>0\), where \(\nabla^{\prime}\) means the gradient with respect to \(\mathbf{y}_{r}^{\prime}\). We also introduce a notion of tubular neighborhoods \(\Gamma(\delta):=\{x\in\mathbb{R}^{N}\,:\,\mathrm{dist}(x,\Gamma)\leq\delta\}\). It is known that (see [11, Section 14.6]) there exists \(\delta_{0}>0\), which depends on the \(C^{1,1}\)-regularity of \(\Omega\), such that each \(\mathbf{x}\in\Gamma(\delta_{0})\) admits a unique representation \[\mathbf{x}=\bar{\mathbf{x}}+t\mathbf{n}(\bar{\mathbf{x}}),\qquad\bar{\mathbf{x}}\in\Gamma,\,t\in[ -\delta_{0},\delta_{0}].\] We denote the maps \(\Gamma(\delta_{0})\to\Gamma\); \(\mathbf{x}\mapsto\bar{\mathbf{x}}\) and \(\Gamma(\delta_{0})\to\mathbb{R}\); \(\mathbf{x}\mapsto t\) by \(\mathbf{\pi}(\mathbf{x})\) and \(d(\mathbf{x})\), respectively (actually, \(\mathbf{\pi}\) is an orthogonal projection to \(\Gamma\) and \(d\) agrees with the signed-distance function). The regularity of \(\Omega\) is transferred to that of \(\mathbf{\pi}\), \(d\), and \(\mathbf{n}\) (cf. [8, Section 7.8]). In particular, \(\mathbf{n}\in\mathbf{C}^{k,1}(\Gamma)\). ### Assumptions on approximate domains We make the following assumptions (H1)-(H8) on finite element partitions and approximate domains. First we introduce a regular family of triangulations \(\{\vec{\mathcal{T}}_{h}\}_{h\downarrow 0}\) of _straight \(d\)-simplices_ and define the set of nodes corresponding to the standard \(\mathbb{P}_{k}\)-finite element. 1. 
Every \(T\in\vec{\mathcal{T}}_{h}\) is affine-equivalent to the standard closed simplex \(\hat{T}\) of \(\mathbb{R}^{d}\), via the isomorphism \(\mathbf{\tilde{F}}_{T}(\hat{\mathbf{x}})=B_{T}\hat{\mathbf{x}}+\mathbf{b}_{T}\). The set \(\vec{\mathcal{T}}_{h}\) is mutually disjoint, that is, the intersection of every two different elements is either empty or agrees with their common face of dimension \(\leq d-1\). Figure 1. \(\Gamma\) and \(\Gamma_{h}\) for \(d=2\) and \(k=1\). Left: if \(\partial S\not\subset\Gamma\), \(\mathbf{p}\) is not injective (in the red part) and property (iii) fails to hold. Right: \(\mathbf{\pi}(S)\) and \(S\) are parametrized over the common domain \(S^{\prime}\). The representation of \(\mathbf{\pi}(S)\) is a graph but that of \(S\) is not. * \(\{\tilde{\mathcal{T}}_{h}\}_{h\downarrow 0}\) is regular in the sense that \[h_{T}\leq C\rho_{T}\quad(\forall h>0,\,\forall T\in\mathcal{T}_{h}),\] where \(h_{T}\) and \(\rho_{T}\) stand for the diameter of the smallest ball containing \(T\) and that of the largest ball contained \(T\), respectively. * We let \(\hat{\Sigma}_{k}=\{\hat{\boldsymbol{a}}_{i}\}_{i=1}^{N_{k}}\) denote the nodes in \(\hat{T}\) of the continuous \(\mathbb{P}_{k}\)-finite element (see e.g. [5, Section 2.2]). The nodal basis functions \(\hat{\phi}_{i}\in\mathbb{P}_{k}(\hat{T})\), also known as the shape functions, are then defined by \(\hat{\phi}_{i}(\hat{\boldsymbol{a}}_{j})=\delta_{ij}\) (the Kronecker delta) for \(i,j=1,\ldots,N_{k}\). **Remark 2.1**.: If \(\hat{T}\) is chosen as the standard \(d\)-simplex, i.e., \(\hat{T}=\{(\hat{x}_{1},\ldots,\hat{x}_{d})\in\mathbb{R}^{d}\mid x_{1}\geq 0, \ldots,x_{d}\geq 0,\hat{x}_{1}+\cdots+\hat{x}_{d}\leq 1\}\), then the standard position of the nodes for the \(\mathbb{P}_{k}\)-finite element is specified as \(\hat{\Sigma}_{k}=\{(\hat{i}_{1}/k,\ldots,\hat{i}_{d}/k)\in\hat{T}\mid\hat{i}_ {1},\ldots,\hat{i}_{d}\in\mathbb{N}_{\geq 0}\}\). We now introduce a partition into \(\mathbb{P}_{k}\)-isoparametric finite elements, denoted by \(\mathcal{T}_{h}\), from \(\tilde{\mathcal{T}}_{h}\), which results in approximate domains \(\Omega_{h}\). We assume that \(\Omega_{h}\) is a perturbation of a polyhedral domain. * For \(\hat{T}\in\tilde{\mathcal{T}}_{h}\) we define a parametric map \(\boldsymbol{F}\in[\mathbb{P}_{k}(\hat{T})]^{d}\) by \[\boldsymbol{F}(\hat{\boldsymbol{x}})=\sum_{i=1}^{N_{k}}\boldsymbol{a}_{i}\hat {\phi}_{i}(\hat{\boldsymbol{x}}),\] where the "mapped nodes" \(\boldsymbol{a}_{i}\in\mathbb{R}^{d}\,(i=1,\ldots,N_{k})\) satisfy \[|\boldsymbol{a}_{i}-\boldsymbol{F}_{\hat{T}}(\hat{\boldsymbol{a}}_{i})|\leq Ch _{\hat{T}}^{2}.\] If \(h_{\hat{T}}\) is small such \(\boldsymbol{F}\) becomes diffeomorphic on \(\hat{T}\) (see [6, Theorem 3]), and we set \(T:=\boldsymbol{F}(\hat{T})\). For convenience in the notation, henceforth we write \(\boldsymbol{F}\) as \(\boldsymbol{F}_{T}\), \(\tilde{\boldsymbol{F}}_{\hat{T}}\) as \(\tilde{\boldsymbol{F}}_{T}\), and \(h_{\hat{T}}\) as \(h_{T}\). * The partition \(\mathcal{T}_{h}\) is defined as the set of \(T\) constructed above. We define \(\Omega_{h}\) to be the interior of the union of \(\mathcal{T}_{h}\); in particular, \(\overline{\Omega}_{h}=\bigcup_{T\in\mathcal{T}_{h}}T\). 
* \(\{\mathcal{T}_{h}\}_{h\downarrow 0}\) is regular of order \(k\) in the sense of [2, Definition 3.2], that is, \[\left\|\nabla_{\hat{\boldsymbol{x}}}^{m}\boldsymbol{F}_{T}\right\|_{L^{\infty} (\hat{T})}\leq C\|B_{T}\|_{\mathcal{L}(\mathbb{R}^{d},\mathbb{R}^{d})}^{m} \leq Ch_{T}^{m}\qquad(T\in\mathcal{T}_{h},\quad m=2,\ldots,k+1),\] where \(C\) is independent of \(h\) (if \(m=k+1\) the left-hand side is obviously \(0\)). **Remark 2.2**.: (i) Throughout this paper, we assume without special emphasis that \(h\) is sufficiently small; especially that \(h\leq 1\). (ii) (H6) automatically holds if \(\boldsymbol{F}_{T}\) is an \(O(h^{k})\)-perturbation of \(\tilde{\boldsymbol{F}}_{T}\) (see [6, p. 239]). It is a reasonable assumption for \(k=2\), but is not compatible with (H8) below for \(k\geq 3\), which is why we presume (H6) independently. (iii) [16] presented a procedure to construct \(\mathcal{T}_{h}\) satisfying (H4)-(H6) for general \(d\) and \(k\), which is done inductively on \(k\). In order to get, e.g., cubic isoparametric partitions with regularity of order \(3\), one needs to know a quadratic partition of order \(2\) in advance. Then, a kind of perturbation is added to the quadratic map to satisfy the condition of order \(3\) (see [16, eq. (22)]). (iv) As a result of (H4)-(H6), for \(T\in\mathcal{T}_{h}\) we have (see [6, Theorems 3 and 4] and [16, Theorem 1]): \[\left\|\nabla_{\hat{\boldsymbol{x}}}\boldsymbol{F}_{T}\right\|_{L^{\infty}( \hat{T})}\leq C\|B_{T}\|_{\mathcal{L}(\mathbb{R}^{d},\mathbb{R}^{d})}\leq Ch _{T},\] \[C_{1}h_{T}^{d}\leq|\det(\nabla_{\hat{\boldsymbol{x}}}\boldsymbol{F}_{T})|\leq C _{2}h_{T}^{d},\] \[\|\nabla_{\boldsymbol{x}}^{m}\boldsymbol{F}_{T}^{-1}\|_{L^{\infty}(T)}\leq C\| B_{T}^{-1}\|_{\mathcal{L}(\mathbb{R}^{d},\mathbb{R}^{d})}^{m}\leq Ch_{T}^{-m} \quad(m=1,\ldots,k+1).\] We next introduce descriptions on boundary meshes. Setting \(\Gamma_{h}:=\partial\Omega_{h}\), we define the boundary mesh \(\mathcal{S}_{h}\) inherited from \(\mathcal{T}_{h}\) by \[\mathcal{S}_{h}=\{S\subset\Gamma_{h}\mid S=\boldsymbol{F}_{T}(\hat{S})\text{ for some }T\in\mathcal{T}_{h},\text{ where }\hat{S}\subset\partial\hat{T}\text{ is a }(d-1)\text{-face of }\hat{T}\}.\] Then we have \(\Gamma_{h}=\bigcup_{S\in\mathcal{T}_{h}}S\) (disjoint union). Each boundary element \(S\in\mathcal{S}_{h}\) admits a unique \(T\in\mathcal{T}_{h}\) such that \(S\subset\partial T\), which is denoted by \(T_{S}\). We let \(\boldsymbol{b}_{r}:U_{r}\to\mathbb{R}^{d-1};t(\boldsymbol{y}_{r}^{\prime},y_{rd}) \mapsto\boldsymbol{y}_{r}^{\prime}\) denote the projection to the base set. Let us now assume that \(\Omega\) is approximated by \(\Omega_{h}\) in the following sense. * \(\Gamma_{h}\) is covered by \(\{U_{r}\}_{r=1}^{M}\), and each portion \(\Gamma_{h}\cap U_{r}\) is represented as a graph \((\boldsymbol{y}_{r}^{\prime},\varphi_{rh}(\boldsymbol{y}_{r}^{\prime}))\), where \(\varphi_{rh}\) is a continuous function defined in \(\overline{\Delta_{r}}\). Moreover, each \(S\in\mathcal{S}_{h}\) is contained in some \(U_{r}\). We fix such \(r\) and agree to omit the subscript \(r\) for simplicity when there is no fear of confusion. * The restriction of \(\varphi_{rh}\) to \(\boldsymbol{b}_{r}(S)\) for each \(S\in\mathcal{S}_{h}\) is a polynomial function of degree \(\leq k\). 
Moreover, \(\varphi_{rh}\) approximates \(\varphi_{r}\) as accurately as a general \(\mathbb{P}_{k}\)-interpolation does; namely, we assume that (2.1) \[\|\varphi_{r}-\varphi_{rh}\|_{L^{\infty}(\boldsymbol{b}_{r}(S))} \leq Ch_{S}^{k+1}=:\delta_{S},\] (2.2) \[\|(\nabla^{\prime})^{m}(\varphi_{r}-\varphi_{rh})\|_{L^{\infty}( \boldsymbol{b}_{r}(S))} \leq Ch_{S}^{k+1-m}\qquad(m=1,\ldots,k+1),\] where the boundary mesh size is defined as \(h_{S}:=h_{T_{S}}\). These assumptions essentially imply that the local coordinate system for \(\Omega\) is compatible with \(\{\Omega_{h}\}_{h;0}\) and that \(\Gamma_{h}\) is a piecewise \(\mathbb{P}_{k}\) interpolation of \(\Gamma\). Setting \(\delta:=\max_{S\in\mathcal{S}_{h}}\delta_{S}\), we have \(\operatorname{dist}(\Gamma,\Gamma_{h})\leq\delta<\delta_{0}\) if \(h\) is sufficiently small, so that \(\boldsymbol{\pi}\) is well-defined on \(\Gamma_{h}\). ### Local coordinates for \(\Gamma\) and \(\Gamma_{h}\) In [14, Proposition 8.1], we proved that \(\boldsymbol{\pi}|_{\Gamma_{h}}\) gives a homeomorphism (and element-wisely a diffeomorphism) between \(\Gamma\) and \(\Gamma_{h}\) provided \(h\) is sufficiently small, taking advantage of the fact that \(\Gamma_{h}\) can be regarded as a \(\mathbb{P}_{k}\)-interpolation of \(\Gamma\) (there we assumed \(k=1\), but the method can be easily adapted to general \(k\geq 1\)). If we write its inverse map \(\boldsymbol{\pi}^{*}:\Gamma\to\Gamma_{h}\) as \(\boldsymbol{\pi}^{*}(\boldsymbol{x})=\bar{\boldsymbol{x}}+t^{*}(\bar{ \boldsymbol{x}})\boldsymbol{n}(\bar{\boldsymbol{x}})\), then \(t^{*}\) satisfies (cf. [14, Proposition 8.2]) \[\|t^{*}\|_{L^{\infty}(\Gamma)}\leq\delta,\qquad\|\nabla_{\Gamma}^{m}t^{*}\|_ {L^{\infty}(\Gamma)}\leq Ch^{k+1-m}\quad(m=1,\ldots,k+1), \tag{2.3}\] corresponding to (2.1) and (2.2). Here, \(\nabla_{\Gamma}\) means the surface gradient along \(\Gamma\) and the constant depends only on the \(C^{1,1}\)-regularity of \(\Omega\). This in particular implies that \(\Omega_{h}\triangle\Omega:=(\Omega_{h}\setminus\Omega)\cup(\Omega\setminus \Omega_{h})\) and \(\Gamma_{h}\cup\Gamma\) are contained in \(\Gamma(\delta)\). We refer to \(\Omega_{h}\triangle\Omega\), \(\Gamma(\delta)\) and their subsets as _boundary-skin layers_ or more simply as _boundary skins_. For \(S\in\mathcal{S}_{h}\), we may assume that \(S\cup\boldsymbol{\pi}(S)\) is contained in some local coordinate neighborhood \(U_{r}\). As announced in (H7) above, we will omit the subscript \(r\) in the subsequent argument. We define \[S^{\prime}:=\boldsymbol{b}(\boldsymbol{\pi}(S))\quad\text{(note that it differs from $\boldsymbol{b}(S)$)}\] to be the common domain of parameterizations of \(\boldsymbol{\pi}(S)\subset\Gamma\) and \(S\subset\Gamma_{h}\). In fact, \(\boldsymbol{\Phi}:S^{\prime}\to\boldsymbol{\pi}(S)\) and \(\boldsymbol{\Phi}_{h}:=\boldsymbol{\pi}^{*}\circ\boldsymbol{\Phi}:S^{\prime}\to S\) constitute smooth (at least \(C^{k,1}\)) bijections. We then obtain \(\boldsymbol{\pi}^{*}(\boldsymbol{\Phi}(\boldsymbol{z}^{\prime}))=\boldsymbol{ \Phi}(\boldsymbol{z}^{\prime})+t^{*}(\boldsymbol{\Phi}(\boldsymbol{z}^{ \prime}))\boldsymbol{n}(\boldsymbol{\Phi}(\boldsymbol{z}^{\prime}))\) for \(\boldsymbol{z}^{\prime}\in S^{\prime}\) and \[\|t^{*}\circ\boldsymbol{\Phi}\|_{L^{\infty}(S^{\prime})}\leq\delta_{S}, \qquad\|(\nabla^{\prime})^{m}(t^{*}\circ\boldsymbol{\Phi})\|_{L^{\infty}(S^{ \prime})}\leq Ch_{S}^{k+1-m}\quad(m=1,\ldots,k+1),\] which are localized versions of (2.3). 
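As a quick numerical sanity check of the interpolation-type bounds (2.1) and (2.3) (our own illustration with an arbitrarily chosen circular boundary piece), one can interpolate the local graph \(\varphi(y')=\sqrt{1-y'^{2}}\) by piecewise polynomials of degree \(k\) on elements of size \(h\) and observe the decay of the maximal deviation between \(\Gamma\) and \(\Gamma_{h}\).

```python
# Hedged sketch: observed rate of max|phi - phi_h| for piecewise P_k interpolation
# of a circular boundary graph; the rate should be about k+1.
import numpy as np

def phi(s):
    return np.sqrt(1.0 - s**2)

def max_interp_error(h, k):
    err = 0.0
    for a in np.arange(-0.5, 0.5, h):
        nodes = np.linspace(a, a + h, k + 1)         # Lagrange nodes on one element
        coeffs = np.polyfit(nodes, phi(nodes), k)    # interpolating polynomial
        s = np.linspace(a, a + h, 50)
        err = max(err, np.max(np.abs(phi(s) - np.polyval(coeffs, s))))
    return err

for k in (1, 2):
    errs = [max_interp_error(h, k) for h in (0.1, 0.05, 0.025)]
    rates = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
    print(f"k = {k}: errors = {errs}, observed rates = {rates}")
```

The observed rates should approach \(2\) and \(3\) for \(k=1,2\), consistent with \(\delta_{S}=Ch_{S}^{k+1}\).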
Let us represent integrals associated with \(S\) in terms of the local coordinates introduced above. First, surface integrals along \(\boldsymbol{\pi}(S)\) and \(S\) are expressed as \[\int_{\boldsymbol{\pi}(S)}v\,d\gamma=\int_{S^{\prime}}v(\boldsymbol{\Phi}( \boldsymbol{y}^{\prime}))\sqrt{\det G(\boldsymbol{y}^{\prime})}\,d\boldsymbol{ y}^{\prime},\qquad\int_{S}v\,d\gamma_{h}=\int_{S^{\prime}}v(\boldsymbol{ \Phi}_{h}(\boldsymbol{y}^{\prime}))\sqrt{\det G_{h}(\boldsymbol{y}^{\prime})}\,d \boldsymbol{y}^{\prime},\] where \(G\) and \(G_{h}\) denote the Riemannian metric tensors obtained from the parameterizations \(\boldsymbol{\Phi}\) and \(\boldsymbol{\Phi}_{h}\), respectively. Namely, for tangent vectors \(\boldsymbol{g}_{\alpha}:=\frac{\partial\boldsymbol{\Phi}}{\partial z_{ \alpha}}\) and \(\boldsymbol{g}_{h,\alpha}:=\frac{\partial\boldsymbol{\Phi}_{h}}{\partial z_{ \alpha}}\) (\(\alpha=1,\ldots,d-1\)), the components of and \(G\) and \(G_{h}\), which are \((d-1)\times(d-1)\) matrices, are given by \[G_{\alpha\beta}=\boldsymbol{g}_{\alpha}\cdot\boldsymbol{g}_{\beta},\qquad G_{h, \alpha\beta}=\boldsymbol{g}_{h,\alpha}\cdot\boldsymbol{g}_{h,\beta}.\] The contravariant components of the metric tensors and the contravariant vectors on \(\Gamma\) are defined as \[G^{\alpha\beta}=(G^{-1})_{\alpha\beta},\qquad\boldsymbol{g}^{\alpha}=\sum_{ \beta=1}^{d-1}G^{\alpha\beta}\boldsymbol{g}_{\beta},\] together with their counterparts \(G_{h}^{\alpha,\beta}\) and \(\boldsymbol{g}_{h}^{\alpha}\) on \(\Gamma_{h}\). Then the surface gradients along \(\Gamma\) and \(\Gamma_{h}\) can be represented in the local coordinate as (see [12, Lemma 2.1]) \[\nabla_{\Gamma}=\sum_{\alpha=1}^{d-1}\boldsymbol{g}^{\alpha}\frac{\partial}{ \partial z_{\alpha}},\qquad\nabla_{\Gamma_{h}}=\sum_{\alpha=1}^{d-1}\boldsymbol{g }_{h}^{\alpha}\frac{\partial}{\partial z_{\alpha}}. \tag{2.4}\] In the same way as we did in [14, Theorem 8.1], we can show \(\|\mathbf{g}_{\alpha}-\mathbf{g}_{h,\alpha}\|_{L^{\infty}(S^{\prime})}\leq Ch_{S}^{k}\) and \(\|G_{\alpha\beta}-G_{h,\alpha\beta}\|_{L^{\infty}(S^{\prime})}\leq C\delta_{S}\). We then have \(\|G^{\alpha\beta}-G_{h}^{\alpha\beta}\|_{L^{\infty}(S^{\prime})}\leq C\delta_{S}\), because \[G_{h}^{-1}-G^{-1}=G^{-1}\underbrace{(G_{h}-G)}_{=O(\delta_{S})}G_{h}^{-1}.\] Note that the stability of \(G_{h}^{-1}\) follows from the representation \(G_{h}=G(I+G^{-1}X)\), with \(X=G_{h}-G\) denoting a perturbation, together with a Neumann series argument. As a result, one also gets an error estimate for contravariant vectors, i.e., \(\|\mathbf{g}^{\alpha}-\mathbf{g}_{h}^{\alpha}\|_{L^{\infty}(S^{\prime})}\leq Ch_{S}^{k}\). Derivative estimates for metric tensors and vectors can be derived as well for \(m=1,\ldots,k\): \[\begin{split}\|G_{\alpha\beta}-G_{h,\alpha\beta}\|_{W^{m,\infty} (S^{\prime})}&\leq Ch_{S}^{k-m},\qquad\|G^{\alpha\beta}-G_{h}^{ \alpha\beta}\|_{W^{m,\infty}(S^{\prime})}\leq Ch_{S}^{k-m},\\ \|\mathbf{g}_{\alpha}-\mathbf{g}_{h,\alpha}\|_{W^{m,\infty}(S^{\prime})}& \leq Ch_{S}^{k-m},\qquad\|\mathbf{g}^{\alpha}-\mathbf{g}^{h,\alpha}\|_{W^{m, \infty}(S^{\prime})}\leq Ch_{S}^{k-m}.\end{split} \tag{2.5}\] Next, let \(\mathbf{\pi}(S,\delta):=\{\bar{\mathbf{x}}+t\mathbf{n}(\bar{\mathbf{x}})\mid\bar{\mathbf{x}}\in \mathbf{\pi}(S),\;-\delta\leq t\leq\delta\}\) be a tubular neighborhood with the base \(\mathbf{\pi}(S)\), and consider volume integrals over \(\mathbf{\pi}(S,\delta)\). 
To this end we introduce a one-to-one transformation \(\mathbf{\Psi}:S^{\prime}\times[-\delta,\delta]\to\mathbf{\pi}(S,\delta)\) by \[\mathbf{x}=\mathbf{\Psi}(\mathbf{z}^{\prime},t):=\mathbf{\Phi}(\mathbf{z}^{\prime})+t\mathbf{n}(\mathbf{ \Phi}(\mathbf{z}^{\prime}))\Longleftrightarrow\mathbf{z}^{\prime}=\mathbf{b}(\mathbf{\pi}( \mathbf{x})),\;t=d(\mathbf{x}),\] where we recall that \(\mathbf{b}:\mathbb{R}^{d}\to\mathbb{R}^{d-1}\) is the projection. Then, by change of variables, we obtain \[\int_{\mathbf{\pi}(S,\delta)}v(\mathbf{x})\,d\mathbf{x}=\int_{S^{\prime}\times[-\delta, \delta]}v(\mathbf{\Psi}(\mathbf{z}^{\prime},t))|\det J(z^{\prime},t)|\,d\mathbf{z}^{ \prime}dt,\] where \(J:=\nabla_{(\mathbf{z}^{\prime},t)}\mathbf{\Psi}\) denotes the Jacobi matrix of \(\mathbf{\Psi}\). In the formulas above, \(\det G\), \(\det G_{h}\), and \(\det J\) can be bounded, from above and below, by positive constants depending on the \(C^{1,1}\)-regularity of \(\Omega\), provided \(h\) is sufficiently small. In particular, we obtain the following equivalence estimates: \[C_{1}\int_{\mathbf{\pi}(S)}|v|\,d\gamma\leq\int_{S^{\prime}}|v\circ \mathbf{\Phi}|\,d\mathbf{z}^{\prime}\leq C_{2}\int_{\mathbf{\pi}(S)}|v|\,d\gamma, \tag{2.7}\] \[C_{1}\int_{S}|v|\,d\gamma_{h}\leq\int_{S^{\prime}}|v\circ\mathbf{ \Phi}_{h}|\,d\mathbf{z}^{\prime}\leq C_{2}\int_{S}|v|\,d\gamma_{h},\] (2.8) \[C_{1}\int_{\mathbf{\pi}(S,\delta)}|v|\,d\mathbf{x}\leq\int_{S^{\prime} \times[-\delta,\delta]}|v\circ\mathbf{\Psi}|\,d\mathbf{z}^{\prime}dt\leq C_{2}\int_{ \mathbf{\pi}(S,\delta)}|v|\,d\mathbf{x}. \tag{2.6}\] We remark that the width \(\delta\) in (2.8) may be replaced with arbitrary \(\delta^{\prime}\in[\delta_{S},\delta]\). We also state an equivalence relation between \(W^{m,p}(\Gamma)\) and \(W^{m,p}(\Gamma_{h})\) when the transformation \(\mathbf{\pi}\) is involved. **Lemma 2.1**.: _Let \(m=0,\ldots,k+1\) and \(1\leq p\leq\infty\). For \(S\in\mathcal{S}_{h}\) and \(v\in W^{m,p}(\mathbf{\pi}(S))\), we have_ \[C_{1}\|v\|_{L^{p}(\mathbf{\pi}(S))} \leq\|v\circ\mathbf{\pi}\|_{L^{p}(\mathcal{S})}\leq C_{2}\|v\|_{L^{p} (\mathbf{\pi}(S))}, \tag{2.10}\] \[C_{1}\|\nabla_{\Gamma}v\|_{L^{p}(\mathbf{\pi}(S))} \leq\|\nabla_{\Gamma_{h}}(v\circ\mathbf{\pi})\|_{L^{p}(S)}\leq C_{2} \|\nabla_{\Gamma}v\|_{L^{p}(\mathbf{\pi}(S))},\] (2.11) \[C_{1}\|v\|_{W^{m,p}(\mathbf{\pi}(S))} \leq\|v\circ\mathbf{\pi}\|_{W^{m,p}(S)}\leq C_{2}\|v\|_{W^{m,p}(\mathbf{ \pi}(S))}\quad(m\geq 2). \tag{2.9}\] Proof.: Estimate (2.9) follows from (2.6) and (2.7) combined with \(\mathbf{\Phi}_{h}=\mathbf{\pi}^{*}\circ\mathbf{\Phi}\Longleftrightarrow\mathbf{\pi}\circ\mathbf{ \Phi}_{h}=\mathbf{\Phi}\). 
To obtain derivative estimates (2.10) and (2.11), it suffices to notice that we can invert (2.4) as \[\frac{\partial}{\partial z_{\alpha}}=\sum_{\beta=1}^{d-1}G_{\alpha\beta}(\mathbf{g} ^{\beta}\cdot\nabla_{\mathbf{\pi}(S)}),\qquad\frac{\partial}{\partial z_{\alpha}}= \sum_{\beta=1}^{d-1}G_{h,\alpha\beta}(\mathbf{g}^{\beta}_{h}\cdot\nabla_{S}),\] and that the derivatives of \(G_{h,\alpha\beta},G^{\alpha\beta}_{h},\mathbf{g}_{h,\alpha},\mathbf{g}^{\alpha}_{h}\) up to the \(k\)-th order are bounded independently of \(h\) in \(L^{\infty}(S^{\prime})\), due to (2.5) and \(h_{S}\leq 1\) ### Estimates for domain perturbation errors We recall the following boundary-skin estimates for \(S\in\mathcal{S}_{h}\), \(1\leq p\leq\infty\), and \(v\in W^{1,p}(\Omega\cup\Gamma(\delta))\) (note that \(\Omega\cup\Gamma(\delta)\supset\Omega\cup\Omega_{h}\)): \[\left|\int_{\mathbf{\pi}(S)}v\,d\gamma-\int_{S}v\circ\mathbf{\pi}\,d \gamma_{h}\right|\leq C\delta_{S}\|v\|_{L^{1}(\mathbf{\pi}(S))}, \tag{2.13}\] \[\|v\|_{L^{p}(\mathbf{\pi}(S,\delta^{\prime}))}\leq C(\delta^{\prime 1 }p\|v\|_{L^{p}(\mathbf{\pi}(S))}+\delta^{\prime}\|\nabla v\|_{L^{p}(\mathbf{\pi}(S, \delta^{\prime}))})\quad(\delta^{\prime}\in[\delta_{S},\delta]),\] (2.14) \[\|v-v\circ\mathbf{\pi}\|_{L^{p}(S)}\leq C\delta_{S}^{1-1/p}\|\nabla v \|_{L^{p}(\mathbf{\pi}(S,\delta_{S}))}. \tag{2.12}\] The proofs are given in [14, Theorems 8.1-8.3] for the case \(k=1\), which can be extended to \(k\geq 2\) without essential difficulty. As a version of (2.12)-(2.14), we also have \[\left|\int_{\mathbf{\pi}(S)}v\circ\mathbf{\pi}^{*}\,d\gamma-\int_{S}v\,d \gamma_{h}\right| \leq C\delta_{S}\|v\|_{L^{1}(S)},\] \[\|v\|_{L^{p}(\mathbf{\pi}(S,\delta))} \leq C(\delta_{S}^{1/p}\|v\|_{L^{p}(S)}+\delta_{S}\|\nabla v\|_{L ^{p}(\mathbf{\pi}(S,\delta))}),\] \[\|v\circ\mathbf{\pi}^{*}-v\|_{L^{p}(\mathbf{\pi}(S))} \leq C\delta_{S}^{1-1/p}\|\nabla v\|_{L^{p}(\mathbf{\pi}(S,\delta))}. \tag{2.15}\] Adding up these for \(S\in\mathcal{S}_{h}\) yields corresponding global estimates on \(\Gamma\) or \(\Gamma(\delta)\). The following estimate limited to \(\Omega_{h}\setminus\Omega\), rather than the whole boundary skin \(\Gamma(\delta)\), also holds: \[\|v\|_{L^{p}(\Omega_{h}\setminus\Omega)}\leq C(\delta^{1/p}\|v\|_{L^{p}(\Gamma _{h})}+\delta\|\nabla v\|_{L^{p}(\Omega_{h}\setminus\Omega)}), \tag{2.16}\] which is proved in [13, Lemma A.1]. Finally, denoting by \(\mathbf{n}_{h}\) the outward unit normal to \(\Gamma_{h}\), we notice that its error compared with \(\mathbf{n}\) is estimated as (see [14, Lemma 9.1]) \[\|\mathbf{n}\circ\mathbf{\pi}-\mathbf{n}_{h}\|_{L^{\infty}(S)}\leq Ch_{S}^{k}. \tag{2.17}\] We now state a version of (2.14) which involves the surface gradient. The proof will be given in Appendix A. **Lemma 2.2**.: _Let \(S\in\mathcal{S}_{h}\) and \(v\in W^{2,p}(\Omega\cup\Gamma(\delta))\) for \(1\leq p\leq\infty\). Then we have_ \[\|\nabla_{\Gamma_{h}}(v-v\circ\mathbf{\pi})\|_{L^{p}(S)} \leq Ch_{S}^{k}\|\nabla v\|_{L^{p}(S)}+C\delta_{S}^{1-1/p}\| \nabla^{2}v\|_{L^{p}(\mathbf{\pi}(S,\delta_{S}))}, \tag{2.19}\] \[\|\nabla_{\Gamma_{h}}(v-v\circ\mathbf{\pi})\|_{L^{p}(S)} \leq Ch_{S}^{k}\|\nabla v\|_{L^{p}(\mathbf{\pi}(S))}+C\delta_{S}^{1-1/p}\| \nabla^{2}v\|_{L^{p}(\mathbf{\pi}(S,\delta_{S}))}. \tag{2.18}\] **Corollary 2.1**.: _Let \(m=0,1\) and assume that \(v\in H^{2}(\Omega\cup\Gamma(\delta))\) if \(k=1\) and that \(v\in H^{3}(\Omega\cup\Gamma(\delta))\) if \(k\geq 2\). 
Then we have_ \[\|v-v\circ\mathbf{\pi}\|_{H^{m}(\Gamma_{h})}\leq Ch^{k+1-m}\|v\|_{H^{\min\{k+1,3\} }(\Omega\cup\Gamma(\delta))}.\] Proof.: By virtue of (2.13) and (2.14) (more precisely, their global versions) we have \[\|v-v\circ\mathbf{\pi}\|_{L^{2}(\Gamma_{h})}\leq C\delta^{1/2}\|\nabla v\|_{L^{2} (\Gamma(\delta))}\leq C\delta^{1/2}(\delta^{1/2}\|\nabla v\|_{L^{2}(\Gamma)}+ \delta\|\nabla^{2}v\|_{L^{2}(\Gamma(\delta))})\leq C\delta\|v\|_{H^{2}(\Omega \cup\Gamma(\delta))}.\] Similarly, we see from (2.19) that \[\|\nabla_{\Gamma_{h}}(v-v\circ\mathbf{\pi})\|_{L^{2}(\Gamma_{h})} \leq Ch^{k}(\|\nabla v\|_{L^{2}(\Gamma)}+\delta^{1/2}\|\nabla^{2}v \|_{L^{2}(\Gamma(\delta))})\] \[\leq\begin{cases}Ch\|v\|_{H^{2}(\Omega)}+Ch\|\nabla^{2}v\|_{L^{2} (\Omega\cup\Gamma(\delta))}&(k=1)\\ Ch^{k}\|v\|_{H^{2}(\Omega)}+C\delta^{1/2}(\delta^{1/2}\|\nabla^{2}v\|_{L^{2}( \Gamma)}+\delta\|\nabla^{3}v\|_{L^{2}(\Gamma(\delta))})&(k\geq 2)\end{cases}\] \[\leq\begin{cases}Ch\|v\|_{H^{2}(\Omega\cup\Gamma(\delta))}&(k=1) \\ Ch^{k}\|v\|_{H^{3}(\Omega\cup\Gamma(\delta))}&(k\geq 2),\end{cases}\] where we have used \(\delta=Ch^{k+1}\) and \(h\leq 1\). Below several lemmas are introduced to address errors related with the \(L^{2}\)-inner product on surfaces. **Lemma 2.3**.: _For \(u,v\in H^{2}(\Omega\cup\Gamma(\delta))\) we have_ \[|(u,v)_{\Gamma_{h}}-(u,v)_{\Gamma}|\leq C\delta\|u\|_{H^{2}(\Omega\cup\Gamma( \delta))}\|v\|_{H^{2}(\Omega\cup\Gamma(\delta))}.\] Proof.: Observe that \[(u,v)_{\Gamma_{h}}-(u,v)_{\Gamma}=(u-u\circ\mathbf{\pi},v)_{\Gamma_{h}}+\left[(u\circ \mathbf{\pi},v)_{\Gamma_{h}}-(u,v\circ\mathbf{\pi}^{*})_{\Gamma}\right]+(u,v\circ\mathbf{ \pi}^{*}-v)_{\Gamma}.\] The first term in the right-hand side is bounded by \(C\delta\|\tilde{u}\|_{H^{2}(\Omega\cup\Gamma(\delta))}\|v\|_{L^{2}(\Gamma_{h})}\) due to Corollary 2.1. The third term can be treated similarly. From (2.12) and (2.9) the second term is bounded by \[C\delta\|u(v\circ\mathbf{\pi}^{*})\|_{L^{1}(\Gamma)}\leq C\delta\|u\|_{L^{2}( \Gamma)}\|v\circ\mathbf{\pi}^{*}\|_{L^{2}(\Gamma)}\leq C\delta\|u\|_{L^{2}(\Gamma )}\|v\|_{L^{2}(\Gamma_{h})}.\] Using trace inequalities on \(\Gamma\) and \(\Gamma_{h}\), we arrive at the desired estimate. **Lemma 2.4**.: _For \(u\in H^{2}(\Gamma)\) and \(v\in H^{1}(\Gamma_{h})\) we have_ \[\left|((\Delta_{\Gamma}u)\circ\mathbf{\pi},v)_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}(u \circ\mathbf{\pi}),\nabla_{\Gamma_{h}}v)_{\Gamma_{h}}\right|\leq C\delta(\|u\|_{H ^{2}(\Gamma)}\|v\|_{L^{2}(\Gamma_{h})}+\|\nabla_{\Gamma}u\|_{L^{2}(\Gamma)}\| \nabla_{\Gamma_{h}}v\|_{L^{2}(\Gamma_{h})}).\] Proof.: Using an integration-by-parts formula on \(\Gamma\), we decompose the left-hand side as \[((\Delta_{\Gamma}u)\circ\mathbf{\pi},v)_{\Gamma_{h}}+(\nabla_{\Gamma_ {h}}(u\circ\mathbf{\pi}),\nabla_{\Gamma_{h}}v)_{\Gamma_{h}}\] \[=\left[((\Delta_{\Gamma}u)\circ\mathbf{\pi},v)_{\Gamma_{h}}-(\Delta_{ \Gamma}u,v\circ\mathbf{\pi}^{*})_{\Gamma}\right]+\left[-(\nabla_{\Gamma}u,\nabla_ {\Gamma}(v\circ\mathbf{\pi}^{*}))_{\Gamma}+(\nabla_{\Gamma_{h}}(u\circ\mathbf{\pi}), \nabla_{\Gamma_{h}}v)_{\Gamma_{h}}\right]\] \[=:I_{1}+I_{2}.\] By (2.12) and (2.9), \(|I_{1}|\leq C\delta\|(\Delta_{\Gamma}u)\circ\mathbf{\pi}\|_{L^{2}(\Gamma_{h})}\|v \|_{L^{2}(\Gamma_{h})}\leq Ch^{2}\|u\|_{H^{2}(\Gamma)}\|v\|_{L^{2}(\Gamma_{h})}\). 
For \(I_{2}\), we represent the surface integrals on \(S\) and \(\mathbf{\pi}(S)\) based on the local coordinate as follows: \[\int_{S}\nabla_{\Gamma_{h}}(u\circ\mathbf{\pi})\cdot\nabla_{\Gamma_{ h}}v\,d\gamma_{h} =\int_{S^{\prime}}\sum_{\alpha,\beta}\partial_{\alpha}(u\circ\mathbf{ \Phi})\partial_{\beta}(v\circ\mathbf{\Phi}_{h})\,G_{h}^{\alpha\beta}\sqrt{\det G_{ h}}\,dz^{\prime},\] \[\int_{\mathbf{\pi}(S)}\nabla_{\Gamma}u\cdot\nabla_{\Gamma}(v\circ\bm {\pi}^{*})\,d\gamma =\int_{S^{\prime}}\sum_{\alpha,\beta}\partial_{\alpha}(u\circ\mathbf{ \Phi})\partial_{\beta}(v\circ\mathbf{\Phi}_{h})\,G^{\alpha\beta}\sqrt{\det G}\,dz^ {\prime}.\] Since \(\|G-G_{h}\|_{L^{\infty}(S^{\prime})}\leq C\delta_{S}\), their difference is estimated by \[C\delta_{S}\|\nabla_{\mathbf{\pi}^{\prime}}(u\circ\mathbf{\Phi})\|_{L^{2}(S^{\prime}) }\|\nabla_{\mathbf{\pi}^{\prime}}(v\circ\mathbf{\Phi}_{h})\|_{L^{2}(S^{\prime})}\leq C \delta_{S}\|\nabla_{\Gamma}u\|_{L^{2}(\mathbf{\pi}(S))}\|\nabla_{\Gamma_{h}}v\|_{L ^{2}(S)}.\] Adding this up for \(S\in\mathcal{S}_{h}\) gives \(|I_{2}|\leq C\delta\|\nabla_{\Gamma}u\|_{L^{2}(\Gamma)}\|\nabla_{\Gamma_{h}}v \|_{L^{2}(\Gamma_{h})}\), and this completes the proof. **Remark 2.3**.: (i) Since \(\Gamma_{h}\) itself is not \(C^{1,1}\)-smooth globally, \((-\Delta_{\Gamma_{h}}u,v)=(\nabla_{\Gamma_{h}}u,\nabla_{\Gamma_{h}}v)\) does not hold in general (see [12, Lemma 3.1]). (ii) An argument similar to the proof above shows, for \(u,v\in H^{1}(\Gamma)\), \[\left|(\nabla_{\Gamma_{h}}(u\circ\mathbf{\pi}),\nabla_{\Gamma_{h}}(v\circ\mathbf{\pi}) )_{\Gamma_{h}}-(\nabla_{\Gamma}u,\nabla_{\Gamma}v)_{\Gamma}\right|\leq C\delta \|\nabla_{\Gamma}u\|_{L^{2}(\Gamma)}\|\nabla_{\Gamma}v\|_{L^{2}(\Gamma)}. \tag{2.20}\] **Lemma 2.5**.: _Let \(u\in H^{2}(\Omega\cup\Gamma(\delta))\) and \(v\in H^{2}(\Gamma)\). Then we have_ \[\left|(\nabla_{\Gamma_{h}}(u-u\circ\mathbf{\pi}),\nabla_{\Gamma_{h}}(v\circ\mathbf{\pi} ))_{\Gamma_{h}}\right|\leq C\delta\|u\|_{H^{2}(\Omega\cup\Gamma(\delta))}\|v \|_{H^{2}(\Gamma)}.\] Proof.: By (2.20), \[\left|(\nabla_{\Gamma_{h}}(u-u\circ\mathbf{\pi}),\nabla_{\Gamma_{h}}(v\circ\mathbf{\pi} ))_{\Gamma_{h}}-(\nabla_{\Gamma}(u\circ\mathbf{\pi}^{*}-u),\nabla_{\Gamma}v)_{ \Gamma}\right|\leq C\delta(\|u\|_{H^{1}(\Gamma_{h})}+\|u\|_{H^{1}(\Gamma)})\|v \|_{H^{1}(\Gamma)}.\] Next we observe that \[|(\nabla_{\Gamma}(u\circ\mathbf{\pi}^{*}-u),\nabla_{\Gamma}v)_{\Gamma}| =|(u\circ\mathbf{\pi}^{*}-u,\Delta_{\Gamma}v)_{\Gamma}|\leq\|u\circ\mathbf{ \pi}^{*}-u\|_{L^{2}(\Gamma)}\|v\|_{H^{2}(\Gamma)}\] \[\leq C\|u-u\circ\mathbf{\pi}\|_{L^{2}(\Gamma_{h})}\|v\|_{H^{2}(\Gamma)}.\] This combined with the boundary-skin estimate \[\|u-u\circ\mathbf{\pi}\|_{L^{2}(\Gamma_{h})}\leq C\delta^{1/2}\|\nabla u\|_{L^{2}( \Gamma(\delta))}\leq C\delta^{1/2}(\delta^{1/2}\|\nabla u\|_{H^{1}(\Gamma)}+ \delta\|\nabla^{2}u\|_{L^{2}(\Gamma(\delta))}),\] with the trace theorem in \(\Omega\), and with \(\delta\leq 1\), yields the desired estimate. ## 3. Finite element approximation ### Finite element spaces We introduce the global nodes of \(\mathcal{T}_{h}\) by \[\mathcal{N}_{h}=\{\boldsymbol{F}_{T}(\hat{\boldsymbol{a}}_{i})\in\overline{ \Omega}_{h}\mid T\in\mathcal{T}_{h},\;i=1,\ldots,N_{k}\}.\] The interior and boundary nodes are denoted by \(\hat{\mathcal{N}}_{h}=\mathcal{N}_{h}\cap\operatorname{int}\Omega_{h}\) and \(\mathcal{N}_{h}^{\partial}=\mathcal{N}_{h}\cap\Gamma_{h}\), respectively. 
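Before turning to the nodal basis functions, here is a minimal sketch (node positions and helper names are our own, purely illustrative choices) of how a single quadratic (\(k=2\)) isoparametric map \(\boldsymbol{F}_{T}(\hat{\boldsymbol{x}})=\sum_{i}\boldsymbol{a}_{i}\hat{\phi}_{i}(\hat{\boldsymbol{x}})\) from (H4) contributes its mapped nodes \(\boldsymbol{F}_{T}(\hat{\boldsymbol{a}}_{i})=\boldsymbol{a}_{i}\) to \(\mathcal{N}_{h}\), with one edge midpoint pushed onto a circular arc as in a boundary element.

```python
# Hedged sketch: one P2 isoparametric element map F_T and its mapped nodes.
import numpy as np

# reference P2 nodes: 3 vertices and 3 edge midpoints of the standard simplex
ref_nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                      [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])

def p2_shape(xh):
    """P2 shape functions on the standard triangle, ordered like ref_nodes."""
    xi, eta = xh
    lam = np.array([1.0 - xi - eta, xi, eta])          # barycentric coordinates
    vert = lam * (2.0 * lam - 1.0)                     # vertex shape functions
    edge = 4.0 * np.array([lam[0] * lam[1], lam[1] * lam[2], lam[2] * lam[0]])
    return np.concatenate([vert, edge])

# mapped nodes: two vertices on the unit circle, the connecting edge midpoint
# pushed onto the arc (an O(h^2) perturbation of the straight midpoint)
a = np.array([[1.0, 0.0], [0.0, 1.0], [0.2, 0.2],
              [np.cos(np.pi / 4), np.sin(np.pi / 4)], [0.1, 0.6], [0.6, 0.1]])

def F_T(xh):
    return p2_shape(xh) @ a

# the nodal property phi_i(a_j) = delta_ij gives F_T(reference node i) = mapped
# node i, i.e. exactly the points this element contributes to N_h
assert np.allclose(np.array([F_T(xh) for xh in ref_nodes]), a)
```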
We next define the global nodal basis functions \(\phi_{\boldsymbol{p}}\left(\boldsymbol{p}\in\mathcal{N}_{h}\right)\) by \[\phi_{\boldsymbol{p}}|_{T}=\begin{cases}0&\text{ if }\boldsymbol{p}\notin T, \\ \hat{\phi}_{i}\circ\boldsymbol{F}_{T}^{-1}&\text{ if }\boldsymbol{p}\in T\text{ and } \boldsymbol{p}=\boldsymbol{F}_{T}(\hat{\boldsymbol{a}}_{i})\text{ with }\hat{\boldsymbol{a}}_{i}\in\Sigma_{k}, \end{cases}\quad(\forall T\in\mathcal{T}_{h})\] which becomes continuous in \(\overline{\Omega}_{h}\) thanks to the assumption on \(\hat{\Sigma}_{k}\). Then \(\phi_{\boldsymbol{p}}(\boldsymbol{q})=1\) if \(\boldsymbol{p}=\boldsymbol{q}\) and \(\phi_{\boldsymbol{p}}(\boldsymbol{q})=0\) otherwise, for \(\boldsymbol{p},\boldsymbol{q}\in\mathcal{N}_{h}\). We now set the \(\mathbb{P}_{k}\)-isoparametric finite element spaces by \[V_{h}=\operatorname{span}\{\phi_{\boldsymbol{p}}\}_{\boldsymbol{p}\in \mathcal{N}_{h}}=\{v_{h}\in C(\overline{\Omega}_{h})\mid v_{h}\circ\boldsymbol {F}_{T}\in\mathbb{P}_{k}(\hat{T})\;(\forall T\in\mathcal{T}_{h})\}.\] We see that \(V_{h}\subset H^{1}(\Omega_{h};\Gamma_{h})\). In particular, the restriction of \(v_{h}\in V_{h}\) to \(\Gamma_{h}\) is represented by \(\mathbb{P}_{k}\)-isoparametric finite element bases defined on \(\Gamma_{h}\), that is, \[(v_{h}\circ\boldsymbol{F}_{T_{S}})|_{\hat{S}}\in\mathbb{P}_{k}(\hat{S})\quad( \forall S\in\mathcal{S}_{h}),\] where \(\hat{S}:=\boldsymbol{F}_{T_{S}}^{-1}(S)\) denotes the pullback of the face \(S\) in the reference coordinate (recall that \(T_{S}\) is the element in \(\mathcal{T}_{h}\) that contains \(S\)). Noticing the chain rules \(\nabla_{\boldsymbol{x}}=(\nabla_{\boldsymbol{x}}\boldsymbol{F}_{T}^{-1}) \nabla_{\hat{\boldsymbol{x}}}\), \(\nabla_{\hat{\boldsymbol{x}}}=(\nabla_{\boldsymbol{x}}\boldsymbol{F}_{T}) \nabla_{\boldsymbol{x}}\) and the estimates given in Remark 2.2(v), we obtain the following estimates concerning the transformation between \(\hat{T}\) and \(T\): **Proposition 3.1**.: _For \(T\in\mathcal{T}_{h}\) and \(v\in H^{m}(T)\) we have_ \[\|\nabla_{\boldsymbol{x}}^{m}v\|_{L^{2}(T)}\leq Ch_{T}^{-m+d/2}\|\hat{v}\|_{H^ {m}(\hat{T})},\qquad\|\nabla_{\hat{\boldsymbol{x}}}^{m}\hat{v}\|_{L^{2}(\hat{ T})}\leq Ch_{T}^{m-d/2}\|v\|_{H^{m}(T)},\] _where \(\hat{v}:=v\circ\boldsymbol{F}_{T}\in H^{m}(\hat{T})\)._ In particular, if \(T\in\mathcal{T}_{h}\), \(\boldsymbol{p}\in\mathcal{N}_{h}\cap T\), and \(\boldsymbol{p}=\boldsymbol{F}_{T}(\hat{\boldsymbol{a}}_{i})\), then \[\|\nabla_{\boldsymbol{x}}^{m}\phi_{\boldsymbol{p}}\|_{L^{2}(T)}\leq Ch_{T}^{- m+d/2}\Big{(}\sum_{l=0}^{m}\|\nabla_{\hat{\boldsymbol{x}}}^{l}\hat{\phi}_{ \boldsymbol{p}}\|_{L^{2}(\hat{T})}^{2}\Big{)}^{1/2}\leq Ch_{T}^{-m+d/2},\] where the quantities depending only on the reference element \(\hat{T}\) have been combined into the generic constant. To get an analogous estimate on the boundary \(\Gamma_{h}\), we let \(S\) be a curved \((d-1)\)-face of \(T\in\mathcal{T}_{h}\), i.e., \(S=\boldsymbol{F}_{T}(\hat{S})\) where \(\tilde{S}\) is a \((d-1)\)-face of \(\hat{T}\). 
Then \(\tilde{S}\) is contained in some hyperplane \(\hat{x}_{d}=\hat{\boldsymbol{a}}_{\hat{S}}^{\prime}\cdot\hat{\boldsymbol{x}}^ {\prime}+\hat{\boldsymbol{b}}_{\hat{S}}\), and we get the following parametrization of \(S\): \[\boldsymbol{F}_{S}:\hat{S}^{\prime}\to S;\quad\hat{\boldsymbol{x}}^{\prime} \mapsto\boldsymbol{F}_{T}(\hat{\boldsymbol{x}}^{\prime},\hat{\boldsymbol{a}}_{ \hat{S}}^{\prime}\cdot\hat{\boldsymbol{x}}^{\prime}+\hat{\boldsymbol{b}}_{ \hat{S}})=:\boldsymbol{F}_{T}\circ\boldsymbol{\Phi}_{\hat{S}}(\hat{\boldsymbol{x} }^{\prime}),\] where \(\hat{S}^{\prime}\) is the projected image of \(\hat{S}\) to the plane \(\{x_{d}=0\}\). A similar parametrization can be obtained for the straight \((d-1)\)-simplex \(\tilde{\boldsymbol{F}}_{T}(\tilde{S})=:\tilde{S}\), which is denoted by \(\tilde{\boldsymbol{F}}_{S}\) and is affine. We see that the covariant and contravariant vectors \(\tilde{\boldsymbol{g}}_{\alpha},\tilde{\boldsymbol{g}}^{\alpha}\), and the covariant and contravariant components of metric tensors \(\tilde{G}_{\alpha\beta},\tilde{G}^{\alpha\beta}\) with respect to \(\tilde{S}\) satisfies, for \(\alpha,\beta=1,\ldots,d-1\), \[|\tilde{\boldsymbol{g}}_{\alpha}|\leq Ch_{S},\quad|\tilde{\boldsymbol {g}}^{\alpha}|\leq Ch_{S}^{-1},\] \[C_{1}h_{S}^{d-1}\leq\sqrt{\det\tilde{G}}=\frac{\operatorname{ meas}_{d-1}(\tilde{S})}{\operatorname{meas}_{d-1}(\tilde{S})}\leq C_{2}h_{S}^{d-1},\quad| \tilde{G}_{\alpha\beta}|\leq Ch_{S}^{2},\quad|\tilde{G}^{\alpha\beta}|\leq Ch_{S} ^{-2},\] where \(h_{S}:=h_{T}\) and the regularity of the meshes has been used. These vectors and components can also be defined for the curved simplex \(S\), which are denoted by \(\bar{\boldsymbol{g}}_{\alpha},\bar{\boldsymbol{g}}^{\alpha},\tilde{G}_{\alpha \beta},\bar{G}^{\alpha\beta}\). Because \(\boldsymbol{F}_{S}\) is a perturbation of \(\tilde{\boldsymbol{F}}_{S}\), they satisfy the following estimates. **Proposition 3.2**.: _(i) Let \(m=0,\ldots,k\), and \(\alpha,\beta=1,\ldots,d-1\). Then, for \(S\in\mathcal{S}_{h}\) we have_ \[\|\nabla_{\boldsymbol{\hat{x}}^{\prime}}^{m}\bar{\boldsymbol{g}}_{ \alpha}\|_{L^{\infty}(\hat{S}^{\prime})}\leq Ch_{S}^{m+1},\quad\|\nabla_{ \boldsymbol{\hat{x}}^{\prime}}^{m}\bar{\boldsymbol{g}}^{\alpha}\|_{L^{\infty}( \hat{S}^{\prime})}\leq Ch_{S}^{m-1},\] \[\|\nabla_{\boldsymbol{\hat{x}}^{\prime}}^{m}\bar{G}_{\alpha\beta} \|_{L^{\infty}(\hat{S}^{\prime})}\leq Ch_{S}^{m+2},\quad\|\nabla_{\boldsymbol {\hat{x}}^{\prime}}^{m}\bar{G}^{\alpha\beta}\|_{L^{\infty}(\hat{S}^{\prime})} \leq Ch_{S}^{m-2},\] \[C_{1}h_{S}^{(d-1)}\leq\sqrt{\det\bar{G}}\leq C_{2}h_{S}^{(d-1)}.\] _(ii) For \(v\in H^{m}(S)\) we have_ \[\|\nabla_{S}^{m}v\|_{L^{2}(S)}\leq Ch_{S}^{-m+(d-1)/2}\|v\circ\boldsymbol{F}_{ S}\|_{H^{m}(\hat{S}^{\prime})},\qquad\|\nabla_{\boldsymbol{\hat{x}}^{\prime}}^{m} (v\circ\boldsymbol{F}_{S})\|_{L^{2}(\hat{S}^{\prime})}\leq Ch_{S}^{m-(d-1)/2} \|v\|_{H^{m}(S)}.\] Proof.: (i) First let \(m=0\). Since \(\bar{\boldsymbol{g}}_{\alpha}=(\frac{\partial\boldsymbol{F}_{T}}{\partial \dot{\alpha}_{\alpha}}+\dot{a}^{\prime}_{S\alpha}\frac{\partial\boldsymbol{F}_ {T}}{\partial\dot{x}_{d}})|_{\boldsymbol{\Phi}_{S}}\), we have \(\|\bar{\boldsymbol{g}}_{\alpha}\|_{L^{\infty}(\hat{S}^{\prime})}\leq Ch_{S}\), so that \(\|\bar{G}_{\alpha\beta}\|_{L^{\infty}(\hat{S}^{\prime})}\leq Ch_{S}^{2}\). 
By assumption (H4), we also get \(\|\bar{\boldsymbol{g}}_{\alpha}-\bar{\boldsymbol{g}}_{\alpha}\|_{L^{\infty}( \hat{S}^{\prime})}\leq Ch_{S}^{2}\) and \(\|\bar{G}_{\alpha\beta}-\bar{G}_{\alpha\beta}\|_{L^{\infty}(\hat{S}^{\prime}) }\leq Ch_{S}^{3}\), which allows us to bound \(\det\bar{G}\) from above and below. This combined with the formula \(\bar{G}^{-1}=(\det\bar{G})^{-1}\operatorname{Cof}\bar{G}\) yields \(\|\bar{G}^{\alpha\beta}\|_{L^{\infty}(\hat{S}^{\prime})}\leq Ch_{S}^{-2}\), and, consequently, \(\|\bar{\boldsymbol{g}}^{\alpha}\|_{L^{\infty}(\hat{S}^{\prime})}\leq Ch_{S}^{-1}\). The case \(m\geq 1\) can be addressed by induction using assumption (H6). (ii) The first inequality is a result of \(\nabla_{S}=\sum_{\alpha=1}^{d-1}\bar{\boldsymbol{g}}^{\alpha}\frac{\partial} {\partial\dot{x}_{\alpha}}\) and (i). To show the second inequality, its inverted formula \[\frac{\partial}{\partial\dot{x}_{\alpha}}=\sum_{\beta=1}^{d-1}\bar{G}_{\alpha \beta}(\bar{\boldsymbol{g}}^{\beta}\cdot\nabla_{S})\] is useful. We also notice the following for the case \(m\geq 2\): even when \(\nabla_{S}\) is acted on \(\bar{G}_{\alpha\beta},\bar{\boldsymbol{g}}^{\beta}\), or on their derivatives rather than on \(v\), the \(L^{\infty}\)-bounds of them--in terms of the order of \(h_{S}\)--are the same as in the case where all the derivatives are applied to \(v\). For example, \[\|\nabla_{S}\,\bar{G}_{\alpha\beta}\|_{L^{\infty}(\hat{S}^{\prime})}=\Big{\|} \sum_{\alpha=1}^{d-1}\bar{\boldsymbol{g}}^{\alpha}\frac{\partial\bar{G}_{ \alpha\beta}}{\partial\dot{x}_{\alpha}}\Big{\|}_{L^{\infty}(\hat{S}^{\prime}) }\leq Ch_{S}^{-1}\times Ch_{S}^{3}=Ch_{S}^{2},\] which can be compared with \(\|\bar{G}_{\alpha\beta}\|_{L^{\infty}(\hat{S}^{\prime})}\leq Ch_{S}^{2}\). Therefore, \[\|\nabla_{\boldsymbol{\hat{x}}^{\prime}}^{m}(v\circ\boldsymbol{F}_ {S})\|_{L^{2}(\hat{S}^{\prime})} \leq C(h_{S}^{2}h_{S}^{-1})^{m}h_{S}^{(1-d)/2}\bigg{[}\sum_{l=0}^{ k+1}\int_{\hat{S}^{\prime}}\Big{|}\Big{(}\sum_{\alpha=1}^{d-1}\bar{\boldsymbol{g}}^{ \alpha}\frac{\partial}{\partial\dot{x}_{\alpha}}\Big{)}^{l}(v\circ\boldsymbol {F}_{S})\Big{|}^{2}\sqrt{\det\bar{G}}\,d\hat{\boldsymbol{x}}^{\prime}\bigg{]}^{ 1/2}\] \[=Ch_{S}^{m-(d-1)/2}\|v\|_{H^{k+1}(S)},\] which is the desired estimate. In particular, if \(\boldsymbol{p}\in\mathcal{N}_{h}\cap S\) and \(\boldsymbol{p}=\boldsymbol{F}_{T}(\hat{\boldsymbol{a}}_{i})\), we obtain \[\|\nabla_{S}^{m}\phi_{\boldsymbol{p}}\|_{L^{2}(S)}\leq Ch_{S}^{-m+(d-1)/2}. \tag{3.1}\] ### Scott-Zhang interpolation operator We need the interpolation operator \(\mathcal{I}_{h}\) introduced by [18], which is well-defined and stable in \(H^{1}(\Omega_{h})\). We show that it is also stable in \(H^{1}(\Gamma_{h})\) on the boundary. To each node \(\boldsymbol{p}\in\mathcal{N}_{h}\) we assign \(\sigma_{\boldsymbol{p}}\), which is either a \(d\)-curved simplex or \((d-1)\)-curved simplex, in the following way: * If \(\boldsymbol{p}\in\hat{\mathcal{N}}_{h}\), we set \(\sigma_{\boldsymbol{p}}\) to be one of the elements \(T\in\mathcal{T}_{h}\) containing \(\boldsymbol{p}\). * If \(\boldsymbol{p}\in\mathcal{N}_{h}^{0}\), we set \(\sigma_{\boldsymbol{p}}\) to be one of the boundary elements \(S\in\mathcal{S}_{h}\) containing \(\boldsymbol{p}\). For each \(\boldsymbol{p}\in\mathcal{N}_{h}\), we see that \(V_{h}|_{\sigma_{\boldsymbol{p}}}\) (the restrictions to \(\sigma_{\boldsymbol{p}}\) of the functions in \(V_{h}\)) is a finite dimensional subspace of the Hilbert space \(L^{2}(\sigma_{\boldsymbol{p}})\). 
We denote by \(\psi_{\boldsymbol{q}}\) the dual basis function corresponding to \(\phi_{\boldsymbol{p}}\) with respect to \(L^{2}(\sigma_{\boldsymbol{p}})\), that is, \(\{\psi_{\boldsymbol{q}}\}_{\boldsymbol{q}\in\mathcal{N}_{h}}\subset V_{h}\) is determined by \[(\phi_{\boldsymbol{p}},\psi_{\boldsymbol{q}})_{L^{2}(\sigma_{\boldsymbol{p}})}= \begin{cases}1&\text{if }\boldsymbol{p}=\boldsymbol{q},\\ 0&\text{otherwise},\end{cases}\qquad\forall\boldsymbol{p}\in\mathcal{N}_{h}.\] The support of \(\psi_{\mathbf{p}}\) is contained in a "macro element" of \(\sigma_{\mathbf{p}}\). In fact, depending on the cases \(\sigma_{\mathbf{p}}=T\in\mathcal{T}_{h}\) and \(\sigma_{\mathbf{p}}=S\in\mathcal{S}_{h}\), it holds that \[\operatorname{supp}\psi_{\mathbf{p}}\subset M_{T} :=\bigcup\mathcal{T}_{h}(T),\quad\mathcal{T}_{h}(T):=\{T_{1}\in \mathcal{T}_{h}\mid T_{1}\cap T\neq\emptyset\},\] \[\operatorname{supp}\psi_{\mathbf{p}}\subset M_{S} :=\bigcup\mathcal{S}_{h}(S),\quad\mathcal{S}_{h}(S):=\{S_{1}\in \mathcal{S}_{h}\mid S_{1}\cap S\neq\emptyset\}.\] Now we define \(\mathcal{I}_{h}:H^{1}(\Omega_{h})\to V_{h}\) by \[\mathcal{I}_{h}v=\sum_{\mathbf{p}\in\mathcal{N}_{h}}(v,\psi_{\mathbf{p}})_{L^{2}( \sigma_{\mathbf{p}})}\phi_{\mathbf{p}}.\] By direct computation one can check \(\mathcal{I}_{h}v_{h}=v_{h}\) for \(v_{h}\in V_{h}\). This invariance indeed holds at local level as shown in the lemma below. To establish it, we first notice that \(\mathcal{I}_{h}v\) in \(T\in\mathcal{T}_{h}\) (resp. in \(S\in\mathcal{S}_{h}\)) is completely determined by \(v\) in \(M_{T}\) (resp. in \(M_{S}\)), which allows us to exploit the notation \((\mathcal{I}_{h}v)|_{T}\) for \(v\in H^{1}(M_{T})\) (resp. \((\mathcal{I}_{h}v)|_{S}\) for \(v\in H^{1}(M_{S})\)). **Remark 3.1**.: The choices of \(\{\sigma_{\mathbf{p}}\}_{\mathbf{p}\in\mathcal{N}_{h}}\) and \(\{\psi_{\mathbf{p}}\}_{\mathbf{p}\in\mathcal{N}_{h}}\) are not unique. Although the definition of \(\mathcal{I}_{h}\) are dependent on those choices, the norm estimates below only depends on the shape-regularity constant and on a reference element. **Lemma 3.1**.: _Let \(\mathbf{p}\in\mathcal{N}_{h}\) and \(v\in H^{1}(\Omega_{h})\)._ _(i) If \(\sigma_{\mathbf{p}}=T\in\mathcal{T}_{h}\), then_ \[\|\psi_{\mathbf{p}}\|_{L^{\infty}(T)}\leq Ch_{T}^{-d}.\] _Moreover, if \(v\circ\mathbf{F}_{T_{1}}\in\mathbb{P}_{k}(\hat{T})\) for \(T_{1}\in\mathcal{T}_{h}(T)\), then \((\mathcal{I}_{h}v)|_{T}=v|_{T}\)._ _(ii) If \(\sigma_{\mathbf{p}}=S\in\mathcal{S}_{h}\), then_ \[\|\psi_{\mathbf{p}}\|_{L^{\infty}(S)}\leq Ch_{S}^{1-d}. \tag{3.2}\] _Moreover, if \(v\circ\mathbf{F}_{S_{1}}\in\mathbb{P}_{k}(\hat{S}^{\prime})\) for \(S_{1}\in\mathcal{S}_{h}(S)\), then \((\mathcal{I}_{h}v)|_{S}=v|_{S}\)._ Proof.: We consider only case (ii); case (i) can be treated similarly. We can represent \(\psi_{\mathbf{p}}\) as \[\psi_{\mathbf{p}}=\sum_{\mathbf{q}\in\mathcal{N}_{h}\cap M_{S}}C_{\mathbf{pq}}\phi_{\mathbf{q}},\] where \(C=(C_{\mathbf{pq}})\) is the inverse matrix of \(A=((\phi_{\mathbf{p}},\phi_{\mathbf{q}})_{L^{2}(S)})\) (its dimension is supposed to be \(D\)). Note that each component of \(A\) is bounded by \(Ch_{S}^{d-1}\) and that \(\det A\geq Ch_{S}^{D(d-1)}\). Therefore, each component of \(C=(\det A)^{-1}\operatorname{Cof}A\) is bounded by \(Ch_{S}^{(1-d)}\). This combined with \(\|\phi_{\mathbf{q}}\|_{L^{\infty}(S)}\leq C\) proves (3.2). 
To show the second statement, observe that \[(\mathcal{I}_{h}v)|_{S}=\sum_{\mathbf{q}\in\mathcal{N}_{h}}(v,\psi_{\mathbf{q}})_{L^{2 }(\sigma_{\mathbf{q}})}\phi_{\mathbf{q}}|_{S}. \tag{3.3}\] However, \(\phi_{\mathbf{q}}|_{S}\) is non-zero only if \(\mathbf{q}\in S\), in which case \(\sigma_{\mathbf{q}}\in\mathcal{S}_{h}(S)\). Therefore, \(v|_{\sigma_{\mathbf{q}}}\) is represented as a linear combination of \(\phi_{\mathbf{s}}|_{\sigma_{\mathbf{q}}}\left(\mathbf{s}\in\mathcal{N}_{h}\cap\sigma_{\mathbf{q }}\right)\). This implies that (3.3) agrees with \(v|_{S}\). Let us establish the stability of \(\mathcal{I}_{h}\), which is divided into two lemmas and is proved in Appendix B. **Lemma 3.2**.: _Let \(v\in H^{1}(\Omega_{h};\Gamma_{h})\), \(T\in\mathcal{T}_{h}\), and \(S\in\mathcal{S}_{h}\). Then for \(m=0,1\) we have_ \[\|\nabla^{m}(\mathcal{I}_{h}v)\|_{L^{2}(T)}\leq C\sum_{l=0}^{1}h_{T}^{l-m}\sum_ {T_{1}\in\mathcal{T}_{h}(T)}\|\nabla^{l}v\|_{L^{2}(T_{1})}, \tag{3.4}\] \[\|\nabla^{m}_{S}(\mathcal{I}_{h}v)\|_{L^{2}(S)}\leq C\sum_{l=0}^ {1}h_{S}^{l-m}\sum_{S_{1}\in\mathcal{S}_{h}(S)}\|\nabla^{l}_{S_{1}}v\|_{L^{2}(S _{1})},\] _where \(\mathcal{T}_{h}(T)=\{T_{1}\in\mathcal{T}_{h}\mid T_{1}\cap T\neq\emptyset\}\) and \(\mathcal{S}_{h}(S)=\{S_{1}\in\mathcal{S}_{h}\mid S_{1}\cap S\neq\emptyset\}\)._ **Lemma 3.3**.: _Under the same assumptions as in Lemma 3.2, we have_ \[\|v-\mathcal{I}_{h}v\|_{H^{m}(T)} \leq Ch_{T}^{1-m}\sum_{T_{1}\in\mathcal{T}_{h}(T)}\|v\|_{H^{1}(T_{1 })},\] \[\|v-\mathcal{I}_{h}v\|_{H^{m}(S)} \leq Ch_{S}^{1-m}\sum_{S_{1}\in\mathcal{S}_{h}(S)}\|v\|_{H^{1}(S_{ 1})}. \tag{3.5}\] Adding up (3.5) for \(S\in\mathcal{S}_{h}\) immediately leads to a global estimate (note that the regularity of the meshes implies \(\sup_{S\in\mathcal{S}_{h}}\#\mathcal{S}_{h}(S)\leq C\)). Together with an estimate in \(\Omega_{h}\), which can be obtained in a similar manner, we state it as follows: **Corollary 3.1**.: _Let \(m=0,1\) and \(v\in H^{1}(\Omega_{h};\Gamma_{h})\). Then_ \[\|v-\mathcal{I}_{h}v\|_{H^{m}(\Omega_{h})}\leq Ch^{1-m}\|v\|_{H^{1}(\Omega_{h })},\qquad\|v-\mathcal{I}_{h}v\|_{H^{m}(\Gamma_{h})}\leq Ch^{1-m}\|v\|_{H^{1}( \Gamma_{h})}.\] ### Interpolation error estimates First we recall the definition of the Lagrange interpolation operator and its estimates. Define \(\mathcal{I}_{h}^{L}:C(\overline{\Omega}_{h})\to V_{h}\) by \[\mathcal{I}_{h}^{L}v=\sum_{\mathbf{p}\in\mathcal{N}_{h}}v(\mathbf{p})\phi_{\mathbf{p}}.\] We allow the notation \((\mathcal{I}_{h}^{L}v)|_{T}\) if \(v\in C(T)\), \(T\in\mathcal{T}_{h}\), and \((\mathcal{I}_{h}^{L}v)|_{S}\) if \(v\in C(S)\), \(S\in\mathcal{S}_{h}\). **Proposition 3.3**.: _Let \(T\in\mathcal{T}_{h}\) and \(S\in\mathcal{S}_{h}\). Assume \(k+1>d/2\), so that \(H^{k+1}(T)\hookrightarrow C(T)\) and \(H^{k+1}(S)\hookrightarrow C(S)\) hold. Then, for \(0\leq m\leq k+1\) we have_ \[\|\nabla^{m}(v-\mathcal{I}_{h}^{L}v)\|_{L^{2}(T)} \leq Ch_{T}^{k+1-m}\|v\|_{H^{k+1}(T)} \forall v\in H^{k+1}(T), \tag{3.7}\] \[\|\nabla_{S}^{m}(v-\mathcal{I}_{h}^{L}v)\|_{L^{2}(S)} \leq Ch_{S}^{k+1-m}\|v\|_{H^{k+1}(S)} \forall v\in H^{k+1}(S). \tag{3.6}\] Proof.: By the Bramble-Hilbert theorem it holds that \[\|\nabla_{\mathbf{\dot{x}}^{\prime}}^{l}[v\circ\mathbf{F}_{S}-(\mathcal{I}_{h}^{L}v) \circ\mathbf{F}_{S})]\|_{L^{2}(\dot{S}^{\prime})}\leq C\|\nabla_{\mathbf{\dot{x}}^{ \prime}}^{k+1}(v\circ\mathbf{F}_{S})\|_{L^{2}(\dot{S}^{\prime})}\quad(l=0,\dots,m),\] where the constant \(C\) depends only on \(\hat{S}^{\prime}\). This combined with Proposition 3.2(ii) yields (3.7). 
Estimate (3.6) is obtained similarly (or one can refer to [6, Theorem 5]). **Remark 3.2**.: (i) Adding up (3.6) for \(T\in\mathcal{T}_{h}\) leads to the global estimate \[\|v-\mathcal{I}_{h}^{L}v\|_{H^{m}(\Omega_{h})}\leq Ch^{k+1-m}\|v\|_{H^{k+1}( \Omega_{h})}\qquad\forall v\in H^{k+1}(\Omega_{h})\quad(m=0,1). \tag{3.8}\] (ii) A corresponding global estimate on \(\Gamma_{h}\) also holds; however, it is not useful for our purpose. To explain the reason, let us suppose \(v\in H^{m}(\Omega;\Gamma)\) and extend it to some \(\tilde{v}\in H^{m}(\mathbb{R}^{d})\). Since we expect only \(\tilde{v}|_{\Gamma_{h}}\in H^{m-1/2}(\Gamma_{h})\) by the trace theorem, the direct interpolation \(\mathcal{I}_{h}^{L}\tilde{v}\) may not have a good convergence property. To overcome this technical difficulty, we consider \(\mathcal{I}_{h}^{L}(\tilde{v}\circ\mathbf{\pi})\) instead in the theorem below, taking advantage of the fact that \(v\circ\mathbf{\pi}\) is element-wisely as smooth on \(\Gamma_{h}\) as \(v\) is on \(\Gamma\). **Theorem 3.1**.: _Let \(k+1>d/2\) and \(m=0,1\). For \(v\in H^{k+1}(\Omega\cup\Gamma(\delta))\) satisfying \(v|_{\Gamma}\in H^{k+1}(\Gamma)\) we have_ \[\|v-\mathcal{I}_{h}v\|_{H^{m}(\Omega_{h};\Gamma_{h})}\leq Ch^{k+1-m}(\|v\|_{H^{ k+1}(\Omega\cup\Gamma(\delta))}+\|v\|_{H^{k+1}(\Gamma)}).\] Proof.: Let \(\mathcal{I}\) denote the identity operator. Since \(\mathcal{I}_{h}\mathcal{I}_{h}^{L}=\mathcal{I}_{h}^{L}\), one gets \(\mathcal{I}-\mathcal{I}_{h}=(\mathcal{I}-\mathcal{I}_{h})(\mathcal{I}-\mathcal{ I}_{h}^{L})\). Then it follows from Corollary 3.1 and (3.8) that \[\|v-\mathcal{I}_{h}v\|_{H^{m}(\Omega_{h})}=\|(\mathcal{I}-\mathcal{I}_{h})(v- \mathcal{I}_{h}^{L}v)\|_{H^{m}(\Omega_{h})}\leq Ch^{1-m}\|v-\mathcal{I}_{h}^{L} v\|_{H^{1}(\Omega_{h})}\leq Ch^{k+1-m}\|v\|_{H^{k+1}(\Omega_{h})}.\] To consider the boundary estimate, observe that \[v-\mathcal{I}_{h}v=(\mathcal{I}-\mathcal{I}_{h})(v-v\circ\mathbf{\pi})+(\mathcal{I} -\mathcal{I}_{h})(\mathcal{I}-\mathcal{I}_{h}^{L})(v\circ\mathbf{\pi})=:J_{1}+J_{2}.\] By Corollaries 3.1 and 2.1, \[\|J_{1}\|_{H^{m}(\Gamma_{h})}\leq Ch^{1-m}\|v-v\circ\mathbf{\pi}\|_{H^{1}(\Gamma_{h} )}\leq Ch^{k+1-m}\|v\|_{H^{\min}(k+1,3)}(\Omega\cup\Gamma(\delta)).\] From Corollary 3.1, (3.7), and (2.11) we obtain \[\|J_{2}\|_{H^{m}(\Gamma_{h})} \leq Ch^{1-m}\|v\circ\boldsymbol{\pi}-\mathcal{I}_{h}^{L}(v\circ \boldsymbol{\pi})\|_{H^{1}(\Gamma_{h})}\leq Ch^{k+1-m}\Big{(}\sum_{S\in\mathcal{ S}_{h}}\|v\circ\boldsymbol{\pi}\|_{H^{k+1}(S)}^{2}\Big{)}^{1/2}\] \[\leq Ch^{k+1-m}\Big{(}\sum_{S\in\mathcal{S}_{h}}\|v\|_{H^{k+1}( \boldsymbol{\pi}(S))}^{2}\Big{)}^{1/2}=Ch^{k+1-m}\|v\|_{H^{k+1}(\Gamma)},\] where we have used Lemma 2.1. Combining the estimates above proves the theorem. ## 4. Error estimates in an approximate domain We continue to denote by \(k\geq 1\) the order of the isoparametric finite element approximation throughout this and next sections. ### Finite element scheme based on extensions We recall that the weak formulation for (1.1)-(1.2) is given by (1.3). In order to define its finite element approximation, one needs counterparts to \(f\) and \(\tau\) given in \(\Omega_{h}\) and \(\Gamma_{h}\) respectively. For this we will exploit extensions that preserves the smoothness as mentioned in Introduction. Namely, if \(f\in H^{k-1}(\Omega)\), one can choose some \(\tilde{f}\in H^{k-1}(\mathbb{R}^{d})\) such that \(\|\tilde{f}\|_{H^{k-1}(\mathbb{R}^{d})}\leq C\|f\|_{H^{k-1}(\Omega)}\). 
For \(\tau\), we assume \(\tau\in H^{k-1/2}(\Gamma)\) so that it admits an extension \(\tilde{\tau}\in H^{k}(\mathbb{R}^{d})\) such that \(\|\tilde{\tau}\|_{H^{k}(\mathbb{R}^{d})}\leq C\|\tau\|_{H^{k-1/2}(\Gamma)}\) (the extension operator \(\tilde{\cdot}\) has different meanings for \(f\) and \(\tau\), but this should cause no confusion). The resulting discrete problem is to find \(u_{h}\in V_{h}\) such that \[a_{h}(u_{h},v_{h}):=(\nabla u_{h},\nabla v_{h})_{\Omega_{h}}+(u_{h},v_{h})_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}u_{h},\nabla_{\Gamma_{h}}v_{h})_{\Gamma_{h}}=(\tilde{f},v_{h})_{\Omega_{h}}+(\tilde{\tau},v_{h})_{\Gamma_{h}}\qquad\forall v_{h}\in V_{h}. \tag{4.1}\] Because the bilinear form \(a_{h}\) is uniformly coercive in \(V_{h}\), i.e., \(a_{h}(v_{h},v_{h})\geq C\|v_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}^{2}\) for all \(v_{h}\in V_{h}\) with \(C\) independent of \(h\), the existence and uniqueness of a solution \(u_{h}\) are an immediate consequence of the Lax-Milgram theorem.

### \(H^{1}\)-error estimate

We define the residual functionals for \(v\in H^{1}(\Omega_{h};\Gamma_{h})\) by \[R_{u}^{1}(v) :=(-\Delta\tilde{u}-\tilde{f},v)_{\Omega_{h}\setminus\Omega}+(\partial_{n_{h}}\tilde{u}-(\partial_{n}u)\circ\boldsymbol{\pi},v)_{\Gamma_{h}}+(\tilde{u}-u\circ\boldsymbol{\pi},v)_{\Gamma_{h}}+(\tau\circ\boldsymbol{\pi}-\tilde{\tau},v)_{\Gamma_{h}},\] \[R_{u}^{2}(v) :=\big{[}((\Delta_{\Gamma}u)\circ\boldsymbol{\pi},v)_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}(u\circ\boldsymbol{\pi}),\nabla_{\Gamma_{h}}v)_{\Gamma_{h}}\big{]}+(\nabla_{\Gamma_{h}}(\tilde{u}-u\circ\boldsymbol{\pi}),\nabla_{\Gamma_{h}}v)_{\Gamma_{h}},\] \[R_{u}(v) :=R_{u}^{1}(v)+R_{u}^{2}(v), \tag{4.2}\] which vanish completely if we formally assume \(\Omega_{h}=\Omega\) (in that case \(\Gamma_{h}=\Gamma\) and \(\boldsymbol{\pi}\) is the identity, so each difference term is zero and the bracketed part of \(R_{u}^{2}\) cancels after integration by parts on \(\Gamma\)). The residual terms above are therefore regarded as representing the domain perturbation. Let us state the consistency error estimate, in other words, a Galerkin orthogonality relation with domain perturbation terms.

**Proposition 4.1**.: _Assume that \(f\in H^{k-1}(\Omega)\), \(\tau\in H^{k-1/2}(\Gamma)\) if \(k=1,2\), and that \(f\in H^{1}(\Omega)\), \(\tau\in H^{3/2}(\Gamma)\) if \(k\geq 3\). Let \(u\) and \(u_{h}\) be the solutions of (1.3) and (4.1) respectively. Then we have_ \[a_{h}(\tilde{u}-u_{h},v_{h})=R_{u}(v_{h})\qquad\forall v_{h}\in V_{h}. \tag{4.3}\] _Moreover, the following estimate holds:_ \[|R_{u}(v)|\leq Ch^{k}(\|f\|_{H^{\min\{k-1,1\}}(\Omega)}+\|\tau\|_{H^{\min\{k-1/2,3/2\}}(\Gamma)})\|v\|_{H^{1}(\Omega_{h};\Gamma_{h})}\qquad\forall v\in H^{1}(\Omega_{h};\Gamma_{h}).
\tag{4.4}\]

Proof.: Equation (4.3) results from a direct computation as follows: \[a_{h}(\tilde{u}-u_{h},v_{h}) =(\nabla(\tilde{u}-u_{h}),\nabla v_{h})_{\Omega_{h}}+(\tilde{u}-u_{h},v_{h})_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}(\tilde{u}-u_{h}),\nabla_{\Gamma_{h}}v_{h})_{\Gamma_{h}}\] \[=(-\Delta\tilde{u},v_{h})_{\Omega_{h}}+(\partial_{n_{h}}\tilde{u}+\tilde{u},v_{h})_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}\tilde{u},\nabla_{\Gamma_{h}}v_{h})_{\Gamma_{h}}-(\tilde{f},v_{h})_{\Omega_{h}}-(\tilde{\tau},v_{h})_{\Gamma_{h}}\] \[=(-\Delta\tilde{u}-\tilde{f},v_{h})_{\Omega_{h}\setminus\Omega}+(\partial_{n_{h}}\tilde{u}-(\partial_{n}u)\circ\boldsymbol{\pi},v_{h})_{\Gamma_{h}}+(\tilde{u}-u\circ\boldsymbol{\pi},v_{h})_{\Gamma_{h}}+(\tau\circ\boldsymbol{\pi}-\tilde{\tau},v_{h})_{\Gamma_{h}}\] \[\qquad+((\Delta_{\Gamma}u)\circ\boldsymbol{\pi},v_{h})_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}(u\circ\boldsymbol{\pi}),\nabla_{\Gamma_{h}}v_{h})_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}(\tilde{u}-u\circ\boldsymbol{\pi}),\nabla_{\Gamma_{h}}v_{h})_{\Gamma_{h}}\] \[=R_{u}^{1}(v_{h})+R_{u}^{2}(v_{h})=R_{u}(v_{h}).\] Let \(C_{f,\tau}\) denote a generic constant times \(\|f\|_{H^{\min\{k-1,1\}}(\Omega)}+\|\tau\|_{H^{\min\{k-1/2,3/2\}}(\Gamma)}\). We will make use of the regularity structure \(\|u\|_{H^{k+1}(\Omega;\Gamma)}\leq C(\|f\|_{H^{k-1}(\Omega)}+\|\tau\|_{H^{k-1}(\Gamma)})\) and the stability of extensions without further emphasis. Applying the boundary-skin estimate (2.16), we obtain \[|(-\Delta\tilde{u}-\tilde{f},v)_{\Omega_{h}\setminus\Omega}| \leq\begin{cases}C(\|\Delta\tilde{u}\|_{L^{2}(\Omega_{h})}+\|\tilde{f}\|_{L^{2}(\Omega_{h})})\cdot C\delta^{1/2}\|v\|_{H^{1}(\Omega_{h})}&(k=1)\\ C\delta^{1/2}(\|\tilde{u}\|_{H^{3}(\Omega_{h})}+\|\tilde{f}\|_{H^{1}(\Omega_{h})})\cdot C\delta^{1/2}\|v\|_{H^{1}(\Omega_{h})}&(k\geq 2)\end{cases}\] \[\leq C_{f,\tau}h^{k}\|v\|_{H^{1}(\Omega_{h})},\] where we have used \(\delta=Ch^{k+1}\) and \(h\leq 1\). The second term of \(R^{1}_{u}(v)\) is estimated as \[|(\partial_{n_{h}}\tilde{u}-(\partial_{n}u)\circ\mathbf{\pi},v)_{\Gamma_{h}}| =\big|\big(\nabla\tilde{u}\cdot(\mathbf{n}_{h}-\mathbf{n}\circ\mathbf{\pi}),v\big)_{\Gamma_{h}}+\big((\nabla\tilde{u}-(\nabla u)\circ\mathbf{\pi})\cdot\mathbf{n}\circ\mathbf{\pi},v\big)_{\Gamma_{h}}\big|\] \[\leq C(h^{k}\|\nabla\tilde{u}\|_{L^{2}(\Gamma_{h})}+\delta^{1/2}\|\nabla^{2}\tilde{u}\|_{L^{2}(\Gamma(\delta))})\|v\|_{L^{2}(\Gamma_{h})}\] \[\leq\begin{cases}C(h^{k}\|\tilde{u}\|_{H^{2}(\Omega_{h})}+\delta^{1/2}\|\tilde{u}\|_{H^{2}(\Gamma(\delta))})\|v\|_{H^{1}(\Omega_{h})}&(k=1)\\ C(h^{k}\|\tilde{u}\|_{H^{2}(\Omega_{h})}+\delta\|\tilde{u}\|_{H^{3}(\Omega\cup\Gamma(\delta))})\|v\|_{H^{1}(\Omega_{h})}&(k\geq 2)\end{cases}\] \[\leq C_{f,\tau}h^{k}\|v\|_{H^{1}(\Omega_{h})},\] as a result of (2.17), (2.14), and (2.13). Similarly, the third term of \(R^{1}_{u}(v)\) is bounded by \[C\delta^{1/2}\|\nabla\tilde{u}\|_{L^{2}(\Gamma(\delta))}\|v\|_{L^{2}(\Gamma_{h})}\leq C_{f,\tau}h^{k}\|v\|_{H^{1}(\Omega_{h})}.\] For the fourth term of \(R^{1}_{u}(v)\), we need the regularity assumption \(\tau\in H^{1/2}(\Gamma)\) for \(k=1\) and \(\tau\in H^{3/2}(\Gamma)\) for \(k\geq 2\) to ensure \(\tilde{\tau}\in H^{1}(\mathbb{R}^{d})\) and \(\tilde{\tau}\in H^{2}(\mathbb{R}^{d})\), respectively.
Then \(|(\tau\circ\mathbf{\pi}-\tilde{\tau},v)_{\Gamma_{h}}|\) is bounded by \[C\delta^{1/2}\|\nabla\tilde{\tau}\|_{L^{2}(\Gamma(\delta))}\|v\|_{L^{2}(\Gamma_{h})} \leq\begin{cases}C\delta^{1/2}\|\nabla\tilde{\tau}\|_{L^{2}(\Gamma(\delta))}\|v\|_{H^{1}(\Omega_{h})}&(k=1)\\ C\delta\|\tilde{\tau}\|_{H^{2}(\Omega\cup\Gamma(\delta))}\|v\|_{H^{1}(\Omega_{h})}&(k\geq 2)\end{cases}\] \[\leq C_{f,\tau}h^{k}\|v\|_{H^{1}(\Omega_{h})}.\] For \(R^{2}_{u}(v)\), we apply Lemma 2.4 and Corollary 2.1 to obtain \[\big|((\Delta_{\Gamma}u)\circ\mathbf{\pi},v)_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}(u\circ\mathbf{\pi}),\nabla_{\Gamma_{h}}v)_{\Gamma_{h}}\big| \leq C\delta(\|u\|_{H^{2}(\Gamma)}\|v\|_{L^{2}(\Gamma_{h})}+\|\nabla_{\Gamma}u\|_{L^{2}(\Gamma)}\|\nabla_{\Gamma_{h}}v\|_{L^{2}(\Gamma_{h})})\] \[\leq C_{f,\tau}h^{k}\|v\|_{H^{1}(\Gamma_{h})},\] \[\big|(\nabla_{\Gamma_{h}}(\tilde{u}-u\circ\mathbf{\pi}),\nabla_{\Gamma_{h}}v)_{\Gamma_{h}}\big| \leq Ch^{k}\|\tilde{u}\|_{H^{\min\{k+1,3\}}(\Omega\cup\Gamma(\delta))}\|\nabla_{\Gamma_{h}}v\|_{L^{2}(\Gamma_{h})}\leq C_{f,\tau}h^{k}\|v\|_{H^{1}(\Gamma_{h})}.\] Combining all the estimates above concludes (4.4).

**Remark 4.1**.: If the transformation \(\tau\circ\mathbf{\pi}\) instead of the extension \(\tilde{\tau}\) is employed in the FE scheme (4.1), then assuming just \(\tau\in H^{k-1}(\Gamma)\) is sufficient to get \[|R_{u}(v)|\leq Ch^{k}(\|f\|_{H^{\min\{k-1,1\}}(\Omega)}+\|\tau\|_{H^{\min\{k-1,1\}}(\Gamma)})\|v\|_{H^{1}(\Omega_{h};\Gamma_{h})},\] because the term involving \(\tau\) in (4.2) disappears.

We are ready to state the \(H^{1}\)-error estimate.

**Theorem 4.1**.: _Let \(k+1>d/2\). Assume that \(f\in L^{2}(\Omega)\), \(\tau\in H^{1/2}(\Gamma)\) for \(k=1\), that \(f\in H^{1}(\Omega)\), \(\tau\in H^{3/2}(\Gamma)\) for \(k=2\), and that \(f\in H^{k-1}(\Omega)\), \(\tau\in H^{k-1}(\Gamma)\) for \(k\geq 3\). Then we have_ \[\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}\leq Ch^{k}(\|f\|_{H^{k-1}(\Omega)}+\|\tau\|_{H^{\max\{k-1,\min\{k-1/2,3/2\}\}}(\Gamma)}),\] _where \(u\) and \(u_{h}\) are the solutions of (1.3) and (4.1) respectively._

Proof.: To save space we introduce the notation \(C_{f,\tau}:=C(\|f\|_{H^{k-1}(\Omega)}+\|\tau\|_{H^{\max\{k-1,\min\{k-1/2,3/2\}\}}(\Gamma)})\). It follows from the uniform coercivity of \(a_{h}\) and (4.3) that \[C\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}^{2}\leq a_{h}(\tilde{u}-u_{h},\tilde{u}-u_{h})=a_{h}(\tilde{u}-u_{h},\tilde{u}-\mathcal{I}_{h}\tilde{u})+R_{u}(\mathcal{I}_{h}\tilde{u}-u_{h}).\] In view of Theorem 3.1, the first term on the right-hand side is bounded by \[\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}\|\tilde{u}-\mathcal{I}_{h}\tilde{u}\|_{H^{1}(\Omega_{h};\Gamma_{h})} \leq Ch^{k}(\|\tilde{u}\|_{H^{k+1}(\Omega\cup\Gamma(\delta))}+\|u\|_{H^{k+1}(\Gamma)})\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}\] \[\leq C_{f,\tau}h^{k}\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}\] as a result of the regularity of \(u\) and the stability of extensions.
Estimate (4.4) applied to \(R_{u}(\mathcal{I}_{h}\tilde{u}-u_{h})\), combined again with Theorem 3.1, bounds the second term by \[C_{f,\tau}h^{k}\|\mathcal{I}_{h}\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}\leq C_{f,\tau}h^{k}\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}+(C_{f,\tau}h^{k})^{2}.\] Consequently, \[C\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}^{2}\leq C_{f,\tau}h^{k}\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}+(C_{f,\tau}h^{k})^{2},\] which after an absorption argument proves the theorem.

### \(L^{2}\)-error estimate

Let \(\varphi\in L^{2}(\Omega_{h}),\psi\in L^{2}(\Gamma_{h})\) be arbitrary such that \(\|\varphi\|_{L^{2}(\Omega_{h})}=\|\psi\|_{L^{2}(\Gamma_{h})}=1\). We define \(w\in H^{2}(\Omega;\Gamma)\) as the solution of the following dual problem: \[-\Delta w=\varphi\quad\text{in}\ \ \Omega,\qquad\tfrac{\partial w}{\partial n}+w-\Delta_{\Gamma}w=\psi\circ\boldsymbol{\pi}^{*}\quad\text{on}\ \ \Gamma, \tag{4.5}\] where \(\varphi\) is extended to \(\mathbb{R}^{d}\setminus\Omega_{h}\) by \(0\). For \(v\in H^{1}(\Omega_{h};\Gamma_{h})\) we define residual functionals w.r.t. \(w\) by \[R_{w}^{1}(v) :=(v,-\Delta\tilde{w}-\varphi)_{\Omega_{h}\setminus\Omega}+(v,\partial_{n_{h}}\tilde{w}-(\partial_{n}w)\circ\pi)_{\Gamma_{h}}+(v,\tilde{w}-w\circ\boldsymbol{\pi})_{\Gamma_{h}},\] \[R_{w}^{2}(v) :=\big{[}(v,(\Delta_{\Gamma}w)\circ\boldsymbol{\pi})_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}v,\nabla_{\Gamma_{h}}(w\circ\boldsymbol{\pi}))_{\Gamma_{h}}\big{]}+(\nabla_{\Gamma_{h}}v,\nabla_{\Gamma_{h}}(\tilde{w}-w\circ\boldsymbol{\pi}))_{\Gamma_{h}},\] \[R_{w}(v) :=R_{w}^{1}(v)+R_{w}^{2}(v).\]

**Lemma 4.1**.: _Let \(k\geq 1\), \(v\in H^{1}(\Omega_{h};\Gamma_{h})\), and \(w\) be as above. Then we have_ \[(v,\varphi)_{\Omega_{h}}+(v,\psi)_{\Gamma_{h}}=a_{h}(v,\tilde{w})-R_{w}(v). \tag{4.6}\] _Moreover, the following estimate holds:_ \[|R_{w}(v)|\leq Ch\|w\|_{H^{2}(\Omega;\Gamma)}\|v\|_{H^{1}(\Omega_{h};\Gamma_{h})}. \tag{4.7}\]

Proof.: A direct computation shows \[a_{h}(v,\tilde{w}) =(\nabla v,\nabla\tilde{w})_{\Omega_{h}}+(v,\tilde{w})_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}v,\nabla_{\Gamma_{h}}\tilde{w})_{\Gamma_{h}}\] \[=(v,-\Delta\tilde{w})_{\Omega_{h}}+(v,\partial_{n_{h}}\tilde{w})_{\Gamma_{h}}+(v,\tilde{w})_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}v,\nabla_{\Gamma_{h}}\tilde{w})_{\Gamma_{h}}\] \[=[(v,\varphi)_{\Omega_{h}}+(v,-\Delta\tilde{w}-\varphi)_{\Omega_{h}\setminus\Omega}]+(v,\partial_{n_{h}}\tilde{w}-(\partial_{n}w)\circ\pi)_{\Gamma_{h}}\] \[\qquad+(v,\tilde{w}-w\circ\boldsymbol{\pi})_{\Gamma_{h}}+(v,(\Delta_{\Gamma}w)\circ\boldsymbol{\pi})_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}v,\nabla_{\Gamma_{h}}\tilde{w})_{\Gamma_{h}}+(v,\psi)_{\Gamma_{h}}\] \[=(v,\varphi)_{\Omega_{h}}+(v,\psi)_{\Gamma_{h}}+R_{w}^{1}(v)+R_{w}^{2}(v),\] which is (4.6). Estimate (4.7) is obtained in almost the same manner as (4.4) for \(k=1\). The only difference is that no domain perturbation term involving \(\psi\) appears this time (cf. Remark 4.1).

Next we show that \(R_{u}(v)\) admits another equivalent representation if \(v\in H^{2}(\Omega\cup\Gamma(\delta))\) and \(v|_{\Gamma}\in H^{2}(\Gamma)\). We make use of the integration by parts formula \[(\Delta u,v)^{\prime}_{\Omega_{h}\triangle\Omega}+(\nabla u,\nabla v)^{\prime}_{\Omega_{h}\triangle\Omega}=(\partial_{n_{h}}u,v)_{\Gamma_{h}}-(\partial_{n}u,v)_{\Gamma}, \tag{4.8}\] where \((u,v)^{\prime}_{\Omega_{h}\triangle\Omega}:=(u,v)_{\Omega_{h}\setminus\Omega}-(u,v)_{\Omega\setminus\Omega_{h}}\).
**Proposition 4.2**.: _Let \(k\geq 1\), \(f\in H^{1}(\Omega)\), \(\tau\in H^{3/2}(\Gamma)\), and let \(u\in H^{\min\{k+1,3\}}(\Omega;\Gamma)\) be the solution of (1.3). Then, for \(v\in H^{2}(\Omega\cup\Gamma(\delta))\) we have_ \[R_{u}(v)=-(\tilde{f},v)^{\prime}_{\Omega_{h}\triangle\Omega}+(\tilde{u}-\tilde{\tau},v)^{\prime}_{\Gamma_{h}\cup\Gamma}+(\nabla\tilde{u},\nabla v)^{\prime}_{\Omega_{h}\triangle\Omega}+(\nabla_{\Gamma_{h}}\tilde{u},\nabla_{\Gamma_{h}}v)_{\Gamma_{h}}-(\nabla_{\Gamma}u,\nabla_{\Gamma}v)_{\Gamma}, \tag{4.9}\] _where \((u,v)^{\prime}_{\Gamma_{h}\cup\Gamma}:=(u,v)_{\Gamma_{h}}-(u,v)_{\Gamma}\). If in addition \(v|_{\Gamma}\in H^{2}(\Gamma)\), the following estimate holds:_ \[|R_{u}(v)|\leq C\delta(\|f\|_{H^{1}(\Omega)}+\|\tau\|_{H^{3/2}(\Gamma)})(\|v\|_{H^{2}(\Omega\cup\Gamma(\delta))}+\|v\|_{H^{2}(\Gamma)}). \tag{4.10}\]

Proof.: Since \(-\Delta u=f\) in \(\Omega\) and \(-\partial_{n}u-u+\tau+\Delta_{\Gamma}u=0\) on \(\Gamma\), it follows from (4.8) that \[R_{u}(v) =(-\Delta\tilde{u}-\tilde{f},v)^{\prime}_{\Omega_{h}\triangle\Omega}+(\partial_{n_{h}}\tilde{u},v)_{\Gamma_{h}}+(\tilde{u}-\tilde{\tau},v)_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}\tilde{u},\nabla_{\Gamma_{h}}v)_{\Gamma_{h}}\] \[=-(\tilde{f},v)^{\prime}_{\Omega_{h}\triangle\Omega}+(\nabla\tilde{u},\nabla v)^{\prime}_{\Omega_{h}\triangle\Omega}+(\partial_{n}u,v)_{\Gamma}+(\tilde{u}-\tilde{\tau},v)_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}\tilde{u},\nabla_{\Gamma_{h}}v)_{\Gamma_{h}}\] \[=-(\tilde{f},v)^{\prime}_{\Omega_{h}\triangle\Omega}+(\nabla\tilde{u},\nabla v)^{\prime}_{\Omega_{h}\triangle\Omega}+(-u+\tau+\Delta_{\Gamma}u,v)_{\Gamma}+(\tilde{u}-\tilde{\tau},v)_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}\tilde{u},\nabla_{\Gamma_{h}}v)_{\Gamma_{h}},\] which after the integration by parts on \(\Gamma\) yields (4.9).
By the boundary-skin estimates, the regularity structure \(\|u\|_{H^{2}(\Omega;\Gamma)}\leq C(\|f\|_{L^{2}(\Omega)}+\|\tau\|_{L^{2}(\Gamma)})\), and the stability of extensions, the first three terms on the right-hand side of (4.9) are bounded as follows: \[|(\tilde{f},v)^{\prime}_{\Omega_{h}\triangle\Omega}| \leq\|\tilde{f}\|_{L^{2}(\Gamma(\delta))}\|v\|_{L^{2}(\Gamma(\delta))}\leq C\delta\|f\|_{H^{1}(\Omega)}\|v\|_{H^{1}(\Omega\cup\Gamma(\delta))},\] \[|(\tilde{u}-\tilde{\tau},v)^{\prime}_{\Gamma_{h}\cup\Gamma}| \leq C\delta\|\tilde{u}-\tilde{\tau}\|_{H^{2}(\Omega\cup\Gamma(\delta))}\|v\|_{H^{2}(\Omega\cup\Gamma(\delta))}\leq C\delta(\|f\|_{L^{2}(\Omega)}+\|\tau\|_{H^{3/2}(\Gamma)})\|v\|_{H^{2}(\Omega\cup\Gamma(\delta))},\] \[|(\nabla\tilde{u},\nabla v)^{\prime}_{\Omega_{h}\triangle\Omega}| \leq C\delta\|\nabla\tilde{u}\|_{H^{1}(\Omega\cup\Gamma(\delta))}\|\nabla v\|_{H^{1}(\Omega\cup\Gamma(\delta))}\leq C\delta(\|f\|_{L^{2}(\Omega)}+\|\tau\|_{L^{2}(\Gamma)})\|v\|_{H^{2}(\Omega\cup\Gamma(\delta))}.\] For the fourth and fifth terms of (4.9), we start from the obvious equality \[(\nabla_{\Gamma_{h}}\tilde{u},\nabla_{\Gamma_{h}}v)_{\Gamma_{h}}-(\nabla_{\Gamma}u,\nabla_{\Gamma}v)_{\Gamma}\] \[=(\nabla_{\Gamma_{h}}(\tilde{u}-u\circ\boldsymbol{\pi}),\nabla_{\Gamma_{h}}(v-v\circ\boldsymbol{\pi}))_{\Gamma_{h}}+(\nabla_{\Gamma_{h}}(u\circ\boldsymbol{\pi}),\nabla_{\Gamma_{h}}(v-v\circ\boldsymbol{\pi}))_{\Gamma_{h}}\] \[\qquad+(\nabla_{\Gamma_{h}}(\tilde{u}-u\circ\boldsymbol{\pi}),\nabla_{\Gamma_{h}}(v\circ\boldsymbol{\pi}))_{\Gamma_{h}}+\big{[}(\nabla_{\Gamma_{h}}(u\circ\boldsymbol{\pi}),\nabla_{\Gamma_{h}}(v\circ\boldsymbol{\pi}))_{\Gamma_{h}}-(\nabla_{\Gamma}u,\nabla_{\Gamma}v)_{\Gamma}\big{]}\] \[=:I_{1}+I_{2}+I_{3}+I_{4}.\] By Corollary 2.1, \(|I_{1}|\leq Ch^{2k}\|\tilde{u}\|_{H^{\min\{k+1,3\}}(\Omega\cup\Gamma(\delta))}\|v\|_{H^{\min\{k+1,3\}}(\Omega\cup\Gamma(\delta))}\) (note that \(h^{2k}\leq C\delta\)). From Lemma 2.5 we have \[|I_{2}|\leq C\delta\|u\|_{H^{2}(\Gamma)}\|v\|_{H^{2}(\Omega\cup\Gamma(\delta))},\qquad|I_{3}|\leq C\delta\|\tilde{u}\|_{H^{2}(\Omega\cup\Gamma(\delta))}\|v\|_{H^{2}(\Gamma)}.\] Finally, \(|I_{4}|\leq C\delta\|u\|_{H^{1}(\Gamma)}\|v\|_{H^{1}(\Gamma)}\) by (2.20). Combining the estimates above concludes (4.10).

**Remark 4.2**.: We need \(f\in H^{1}(\Omega)\) and \(\tau\in H^{3/2}(\Gamma)\) even for \(k=1\).

We are now in a position to state the \(L^{2}\)-error estimate in \(\Omega_{h}\) and on \(\Gamma_{h}\).

**Theorem 4.2**.: _Let \(k+1>d/2\). Assume that \(f\in H^{1}(\Omega)\), \(\tau\in H^{3/2}(\Gamma)\) for \(k=1,2\) and that \(f\in H^{k-1}(\Omega)\), \(\tau\in H^{k-1}(\Gamma)\) for \(k\geq 3\). Then we have_ \[\|\tilde{u}-u_{h}\|_{L^{2}(\Omega_{h};\Gamma_{h})}\leq C_{f,\tau}h^{k+1},\] _where \(C_{f,\tau}:=C(\|f\|_{H^{\max\{k-1,1\}}(\Omega)}+\|\tau\|_{H^{\max\{k-1,3/2\}}(\Gamma)})\)._

Proof.: We consider the solution \(w\) of (4.5) obtained from the following choices of \(\varphi\) and \(\psi\): \[\varphi=\frac{\tilde{u}-u_{h}}{\|\tilde{u}-u_{h}\|_{L^{2}(\Omega_{h})}},\qquad\psi=\frac{\tilde{u}-u_{h}}{\|\tilde{u}-u_{h}\|_{L^{2}(\Gamma_{h})}}.\] Then, taking \(v=\tilde{u}-u_{h}\) in (4.6) and using (4.3), we obtain \[\|\tilde{u}-u_{h}\|_{L^{2}(\Omega_{h};\Gamma_{h})} =a_{h}(\tilde{u}-u_{h},\tilde{w})-R_{w}(\tilde{u}-u_{h})\] \[=a_{h}(\tilde{u}-u_{h},\tilde{w}-w_{h})-R_{u}(\tilde{w}-w_{h})-R_{w}(\tilde{u}-u_{h})+R_{u}(\tilde{w}),\] where we set \(w_{h}:=\mathcal{I}_{h}\tilde{w}\).
Since \(\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}\leq C_{f,\tau}h^{k}\) by Theorem 4.1 and \(\|w\|_{H^{2}(\Omega;\Gamma)}\leq C\), we find from Theorem 3.1 and the residual estimates (4.4), (4.7), and (4.10) that \[|a_{h}(\tilde{u}-u_{h},\tilde{w}-w_{h})| \leq C\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}\|\tilde{w}-w_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}\leq C_{f,\tau}h^{k+1},\] \[|R_{u}(\tilde{w}-w_{h})-R_{w}(\tilde{u}-u_{h})| \leq C_{f,\tau}h^{k}\|\tilde{w}-w_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}+Ch\|\tilde{u}-u_{h}\|_{H^{1}(\Omega_{h};\Gamma_{h})}\leq C_{f,\tau}h^{k+1},\] \[|R_{u}(\tilde{w})| \leq C_{f,\tau}\delta(\|\tilde{w}\|_{H^{2}(\Omega\cup\Gamma(\delta))}+\|w\|_{H^{2}(\Gamma)})\leq C_{f,\tau}h^{k+1},\] where the stability of extensions has been used. This proves the theorem.

## 5. Numerical example

Let \(\Omega=\{(x,y)\in\mathbb{R}^{2}\mid x^{2}+y^{2}<1\}\) be the unit disk (thus \(\Gamma\) is the unit circle) and set the exact solution to be \[u(x,y)=10x^{2}y.\] With the linear finite element method, i.e., \(k=1\), we compute approximate solutions using the software FreeFEM. The surface gradient \(\nabla_{\Gamma_{h}}u_{h}\) is computed by \[\nabla_{\Gamma_{h}}u_{h}=(I-\boldsymbol{n}_{h}\otimes\boldsymbol{n}_{h})\nabla u_{h}\quad\text{on}\ \ \Gamma_{h}.\] The errors are computed by interpolating the exact solution into the quadratic finite element spaces. The results are reported in Table 1, where \(N\) denotes the number of nodes on the boundary. We see that the \(H^{1}(\Omega_{h};\Gamma_{h})\)- and \(L^{2}(\Omega_{h};\Gamma_{h})\)-errors behave as \(O(h)\) and \(O(h^{2})\) respectively, which is consistent with the theoretical results established in Theorems 4.1 and 4.2.
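For illustration, a minimal FreeFEM sketch of the discrete problem (4.1) for this test case is given below. It is only a sketch under stated assumptions: the boundary resolution `nb`, the variable names, and the way the errors are combined are illustrative choices, not the settings actually used for Table 1. The data are derived from the exact solution, namely \(f=-\Delta u=-20y\) and, on the unit circle, \(\tau=\partial_{n}u+u-\Delta_{\Gamma}u=130x^{2}y-20y\); these closed-form expressions are used directly as the extensions \(\tilde{f}\) and \(\tilde{\tau}\). The tangential derivative along \(\Gamma_{h}\) is implemented through the tangent \((-N.y,N.x)\), which realizes \((I-\boldsymbol{n}_{h}\otimes\boldsymbol{n}_{h})\nabla\) in two dimensions.

```
// Minimal FreeFEM sketch of the discrete problem (4.1) for the test case of
// Section 5 (illustrative assumptions: boundary resolution nb and variable
// names are not the exact settings used for Table 1).
border C(t=0, 2*pi){ x = cos(t); y = sin(t); label = 1; }
int nb = 64;                         // number of boundary nodes N (assumed)
mesh Th = buildmesh(C(nb));          // polygonal approximation Omega_h (k = 1)

fespace Vh(Th, P1);
Vh uh, vh;

func uex    = 10*x^2*y;              // exact solution
func fext   = -20*y;                 // f = -Laplace(u), used as the extension f~
func tauext = 130*x^2*y - 20*y;      // tau = d_n u + u - Lap_Gamma u on the unit
                                     // circle, used as the extension tau~

// tangential derivative along Gamma_h; tangent vector is (-N.y, N.x)
macro dS(w) (-N.y*dx(w) + N.x*dy(w)) //

// discrete problem (4.1)
solve ventcel(uh, vh)
  = int2d(Th)( dx(uh)*dx(vh) + dy(uh)*dy(vh) )
  + int1d(Th, 1)( uh*vh + dS(uh)*dS(vh) )
  - int2d(Th)( fext*vh )
  - int1d(Th, 1)( tauext*vh );

// errors against the P2 interpolant of the exact solution, as described above
fespace Wh(Th, P2);
Wh uI = uex;
real eL2O = sqrt( int2d(Th)( (uh-uI)^2 ) );
real eH1O = sqrt( int2d(Th)( (uh-uI)^2 + (dx(uh)-dx(uI))^2 + (dy(uh)-dy(uI))^2 ) );
real eL2G = sqrt( int1d(Th, 1)( (uh-uI)^2 ) );
real eH1G = sqrt( int1d(Th, 1)( (uh-uI)^2 + (dS(uh)-dS(uI))^2 ) );
cout << "L2 errors (domain, boundary): " << eL2O << " " << eL2G << endl;
cout << "H1 errors (domain, boundary): " << eH1O << " " << eH1G << endl;
```

Doubling `nb` repeatedly and tabulating these quantities against \(h\) should reproduce the \(O(h)\) and \(O(h^{2})\) behaviour reported in Table 1; the combined \(H^{1}(\Omega_{h};\Gamma_{h})\) and \(L^{2}(\Omega_{h};\Gamma_{h})\) errors are obtained by combining the domain and boundary contributions.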